Upcoming Severe Heatwaves Add to China's Mounting Problems

Already reeling from the ongoing novel coronavirus (2019-nCoV) outbreak, China's population of 1.4 billion people is also threatened by more severe and frequent heatwaves generated by climate change. This new threat to public health could come to dwarf the dangers posed by viral outbreaks and other calamities, contend two new studies by the University of Reading, the University of Edinburgh, the UK Met Office and several Chinese institutions. The studies found that extreme daytime heat, as well as extreme rainfall, is set to become more common in China as humans continue to emit large amounts of greenhouse gases (GHGs) into the atmosphere. The studies were published by the American Meteorological Society.

Dr. Buwen Dong, a study co-author and a climate scientist at the University of Reading and the National Centre for Atmospheric Science, said the Chinese are already suffering from more frequent extreme heat, and this will only become more common in future due to climate change. "It is particularly concerning to see high night-time temperatures becoming a growing threat," Dr. Dong said. "This gives no respite to people struggling to cope with searing daytime heat and can lead to deadly heatstroke, particularly for vulnerable people. Better strategies for adapting and coping with rising temperatures are vital to save lives." One of the studies estimated that 30-day spells of deadly overnight heat, similar to the one that killed and hospitalized many people in northeast China in 2018, have already gone from being one-in-500-year events to one-in-60-year events since pre-industrial times.
The studies examined how common these hot conditions in northeast China and wet conditions in central western China have become, and will become in future, due to anthropogenic (human-induced) climate change. Researchers studied almost 50 million daily temperature records captured at 2,400 weather stations across China between 1961 and 2018, along with data from other sources. They found that climate change has made rainfall more likely to occur in severe bursts in central western China: based on climate models, researchers calculated that extreme rainfall events have become 1.5 times more likely since pre-industrial times. On the other hand, the likelihood of persistent heavy rainfall has fallen by 47 percent. "The current health emergency in China is sadly causing many deaths, and this report shows how climate change could also cause serious health emergencies in the region in the future," said Prof. Elizabeth Robinson, an environmental economist at the University of Reading who was not part of the studies. "A hotter climate will have a severe impact on global health, with the kinds of extreme temperatures that hospitalized record numbers of people in China in 2018 likely to become more frequent in the future. Outdoor workers, older and young people, and those with pre-existing health conditions are likely to be most at risk."
Street style: machine learning takes to the streets

Converting places into numbers

Streets surround us, but, at first glance, there is no systematic data on what they look like. There are records of property transactions, land use and buildings protected for their architectural or historic significance (listed buildings and conservation areas), but none of these directly capture an area's appearance. Buildings' appearance is, though, part of planning decisions about the built environment. There is also growing recognition of the importance of areas having a sense of place, but it is not always clear what this means or how to measure it. [1] Recent advances in machine learning, together with data on the built environment from Google Street View, allow us to analyse streets' appearance directly and may help improve our understanding of places. A growing number of studies are analysing streets' appearance, or people's perceptions of them, more directly: research has used Street View images and machine learning to test whether the appearance of streets relates to people's perception of their safety, and computer vision to identify characteristics of places that people associate with beauty. [2] There has also been work on whether it is possible to identify housing style from estate agent photographs. [3]

To give a simple example of this kind of approach, here we see if we can use machine learning to identify a style of architecture in the built environment. The style we are looking for is one of the UK's most distinctive: Georgian architecture. It is mainly associated with the Georgian period (1714 to 1830) and appears in many towns and cities. Historic Georgian buildings are often protected in the planning regime and, as they are relatively numerous, account for a large number of listed buildings. Modern, Neo-Georgian, town houses are built to this day. [4] The style is of policy interest due to its combination of high density and perceived popularity.
For example, the density of Georgian terraces in London is about 160 dwellings per hectare, compared with the central London density of 78 dwellings per hectare. [5] It has been argued by Create Streets that this kind of street-based, high-density development is one way to increase housing supply that avoids some of the issues with more high-rise development. [6]

To try to detect the style we use deep learning, one of the developments behind the current excitement around Artificial Intelligence (AI). Computer vision has made substantial progress in recent years using this approach. Deep learning is based on an idea that is over 70 years old: neural networks. Neural networks have many applications in pattern recognition for images, speech and text. To "train" a neural network to identify images, the numbers that represent a digitised image are used as input to the network. These pass through a series of layers of weights, which are applied to the numbers representing the picture, and the network then calculates the probability that an image contains the particular thing it has been trained to detect. Adding more layers of weights (deepening the neural network), along with other refinements in network design and processing, has resulted in significant improvements in networks' performance in image recognition and other tasks. [7] That computers can now potentially identify styles, which are central to many creative domains such as art, design, fashion and music, points to one of the ways AI is likely to affect the creative industries. [8]

Machines hunting for buildings

To identify these kinds of buildings we use a pre-trained neural network called Inception V3 (shown below). [9] The network is built using TensorFlow, an open-source framework from Google. It is not a physical network, but an algorithm implemented on the computer as software. The network has already been trained to recognise a large number of images.
It has many layers through which the photographic data flows (in the diagram below the blocks correspond to layers, and images move left to right through the network). Each layer acts as a filter that implicitly detects different aspects of images, e.g. high-level structure as opposed to fine detail. The right-hand side outputs the probabilities that the network estimates an image contains the things it has been trained to identify, and provides the network with feedback on how successful it has been.

The Inception-v3 neural network

The Street View images we are using are 299 pixels by 299 pixels, with each pixel having 3 colours (red, green and blue). Each colour is represented by a number corresponding to its intensity, so each image consists of 299 × 299 × 3 = 268,203 numbers. Although this might sound like a lot of data for a single picture, Street View photos have relatively low resolution: by contrast, a photograph from a phone can have several million pixels. At a high level, the network is trained to recognise objects by showing it (inputting the numbers that represent the picture) a set of images drawn from the different target groups we are interested in it recognising. These numbers pass through the weighted layers of the network, which ultimately calculates the probability that the image contains the objects we are trying to detect. The network's weights are adjusted to maximise the likelihood that, when shown an image from a given category, the network outputs a high probability for that category. In this case, the network has already been trained to recognise a large number of existing objects, e.g. different kinds of dogs, cars and so on. [10] As a result its weights implicitly embody a rich understanding of a variety of shapes and textures from the training process it has undergone.
We benefit from this by keeping all but one of the layers of weights the same and retraining only the final layer (the layer just before the network splits on the right-hand side), a process known as transfer learning. [11] Training the full network would be computationally intensive and require more data, as it involves estimating the much larger number of parameters of the full network rather than just the final layer.

The images that the network was trained on: Georgian housing in London

The network was trained to recognise Georgian architecture by showing it images of the style. These were collected by sampling Street View in London (in particular the locations of listed buildings and conservation areas) and selecting those that I considered had the style of Georgian townhouses. The pictures below show a selection of this data. Georgian townhouses are typically brick-built and have three or more storeys. The windows are usually taller on the first two floors, with smaller windows on the top floors. The doorway often has an arched window over the door (known as a fanlight), and ground-floor windows can also be arched. The style is not completely uniform; for example, the ground floor sometimes has a plaster layer over the brickwork (known as stucco). As the style is not exclusive to the Georgian period, when we refer to Georgian buildings we in practice mean buildings in the Georgian style, rather than the historic period. In buildings subsequent to the period the plaster can cover the entire frontage, the so-called Regency style, but these were not included in the training sample.
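The transfer-learning step described above, keeping the pre-trained layers fixed and retraining only the final layer, can be sketched roughly as follows. This is a hedged illustration using the Keras API rather than the original TensorFlow retraining script; `weights=None` is used only so the sketch builds without downloading the pre-trained ImageNet weights, which a real run would keep.

```python
import tensorflow as tf

# Load Inception V3 without its final classification layer.
# A real run would pass weights="imagenet" to reuse the pre-trained
# filters; None is used here so the example builds offline.
base = tf.keras.applications.InceptionV3(
    weights=None, include_top=False,
    input_shape=(299, 299, 3))  # 299 x 299 x 3 = 268,203 numbers per image
base.trainable = False  # keep the earlier layers fixed

# Replace only the final layer: Georgian vs not-Georgian.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

Only the small final layer's parameters are then estimated from the Georgian/random training images, which is why far less data and compute are needed than training the full network.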
Photo grid of locations sampled on Google Street View in Liverpool

How the model does at identifying the style of architecture

The training is undertaken by feeding in pictures (those identified as Georgian and a random selection of Street Views) and adjusting the network weights in the final layer to maximise the probability that the network classifies each image correctly as a random street view or one containing Georgian architecture. The Georgian sample contained 713 images and the random sample 3,610. Part of the sample (20%) was retained as a test sample to check that the training had worked. This is because it is possible, as networks have many internal parameters, to calibrate them to fit a training set well; the network may simply have fit the idiosyncrasies of the particular set of pictures it was trained on and so be ineffective when facing completely new data it has not seen before. In classifying pictures (identifying architectural styles in this instance) there are two ways the network can get it right and two ways it can get it wrong. Ideally the network identifies the building style when it is present (true positive) or correctly identifies the style's absence (true negative). However, it can also report the style when it is not there (false positive) or say the style is not there when it is (false negative). One way the success of a model is measured is a metric called accuracy: the number of true positives and true negatives divided by the total number of objects classified. [12] When the classifier makes only correct decisions, i.e. all results are true positives or true negatives, the accuracy is 1; when there are only false positives and false negatives, i.e. the model is always wrong, the accuracy is 0. On the test sample an accuracy of 96.7% was achieved after training.
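The accuracy metric can be written down directly; this is a minimal sketch and the counts are purely illustrative.

```python
# Accuracy as defined above:
# (true positives + true negatives) / everything classified.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Only correct decisions -> accuracy of 1.
print(accuracy(tp=7, tn=3, fp=0, fn=0))  # 1.0
# Only wrong decisions -> accuracy of 0.
print(accuracy(tp=0, tn=0, fp=7, fn=3))  # 0.0
# A half-right classifier sits in between.
print(accuracy(tp=1, tn=1, fp=1, fn=1))  # 0.5
```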
This isn't quite as good as it sounds: if we simply assumed that a picture never contained Georgian architecture, on the grounds that most buildings are not in that style, we would get an accuracy of 85% in the sample. The result does at least indicate some ability to identify Georgian buildings, but it is far from perfect and would certainly not be appropriate where there are serious consequences to getting things wrong, as with self-driving cars. Looking at some examples from outside the training and test data gives a visual sense of how the network is doing. The network outputs a probability that the image contains Georgian architecture, where 1 is complete certainty that the image is a Georgian townhouse and 0 is complete certainty that it is not. As urban environments contain many things other than housing, e.g. office buildings, green spaces, roads, scaffolding and building sites, vehicles or just empty space, there is a spectrum of probabilities; a random building looks more Georgian than a bush or empty space.

Identifying Georgian housing in Liverpool

To test whether the network can recognise Georgian housing, we show it street images from somewhere it has seen no photographs of during training. Liverpool is known for its historic buildings (it is a World Heritage Site) and contains a large stock of Georgian houses. The area corresponding to the Liverpool unitary authority was randomly sampled for 35,000 photographs on Street View. The picture below shows the area sampled; each point is a Street View image, with the solid light areas being those where no Street View images were available.

Map of Liverpool

The figure below shows a spectrum of pictures from Liverpool, starting with Georgian architecture and ending with lower-density development, and what the network considers to be the probability of their containing Georgian architecture.
Each level corresponds to pictures drawn from different bands of probability of containing the Georgian style: 0-0.1, 0.1-0.2 and so on. The pictures are not representative of the Liverpool street views overall, as most views were given a low probability of containing the style. The first two buildings, which have a probability of more than 95%, correspond to the style we are looking for. The photographs with very low likelihoods of containing the style corresponded to views of lower-density housing or open and green spaces. The detection of the style is likely to be partly driven by the size of the building as much as its appearance, i.e. Georgian townhouses are bigger and more densely packed than suburban houses, but it is also true that these characteristics are an integral, although not exclusive, part of the style.

Grid of photos of houses

The figure below plots the locations of the photographs identified as containing a Georgian building with more than 95% probability (on the left) against the locations of listed buildings in Liverpool (on the right). The main concentrations of buildings identified by the network broadly correspond to the area with the highest number of listed buildings in Liverpool.

Liverpool Georgian building identification

One would not expect exact identification of isolated or low-density examples of the style, as there are more buildings in Liverpool than in our collection of Street Views: a query for buildings in Liverpool run on OpenStreetMap returns over 100,000 buildings, which is itself likely to understate the true number. We compare the density of historic buildings with the density of photographs that the network considers to have a high probability of being Georgian in the figure below.
On the right-hand side we have separated out the density of historic buildings into two types: listed buildings built in the Georgian period (1714-1830) and those from the neighbouring Victorian period (1837-1901). The part of Liverpool known as the Georgian Quarter (shown in white outline below), which is also the area with the highest density of historic Georgian buildings in the city, falls within the area assessed to have a high probability of being Georgian. There is another area, to the north west of the Georgian Quarter, identified as having a high probability of Georgian-style buildings but which does not contain large numbers of Georgian buildings. It does, though, contain many Victorian buildings, and inspection of the images suggests that the network is mistakenly classifying a significant number of buildings that are not in the Georgian style but often have certain period features, such as rounded window arches or leaded windows. It is also likely that the higher density is playing a role.

Distribution of areas identified as having a high probability of being Georgian and with high levels of Georgian and Victorian architecture.

The slide show below gives some examples of buildings that the network correctly and incorrectly classified as Georgian with high probability. This simple example has shown that we can use machine learning to identify areas with high concentrations of Georgian architecture using a network trained on data from a different location. However, there are a significant number of false positives, where buildings that are not Georgian but have features associated with the style, such as leaded windows or rounded arches, are identified as having a high probability of being Georgian. These false positives lie largely in areas with many buildings from the neighbouring Victorian period, as opposed to newer buildings.
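The 95% cut-off used to map high-confidence detections is just a filter over the network's output probabilities. A hypothetical sketch follows; the image identifiers and scores are made up for illustration, including a Victorian false positive of the kind discussed above.

```python
# Hypothetical per-image probabilities as the network might output them.
scores = {
    "img_0001": 0.98,  # Georgian terrace
    "img_0002": 0.12,  # suburban semi
    "img_0003": 0.96,  # Victorian building with arched windows (false positive)
    "img_0004": 0.04,  # green space
}

# Keep only the views the network is very confident contain the style.
high_confidence = {name: p for name, p in scores.items() if p > 0.95}
print(sorted(high_confidence))  # ['img_0001', 'img_0003']
```

Mapping the retained identifiers back to their Street View coordinates produces the density plots compared above.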
It should be possible to improve the performance of the network by using a larger sample of training data (our sample is fairly small), making finer distinctions of architectural style (perhaps from someone whose primary qualification isn't economics), using more sophisticated techniques to sample Street View to generate more consistent photographs, and by optimising the network. Alternatively one could pre-process the images more directly before the analysis. This is implicitly what the pre-trained earlier layers of the network are doing, but one could use a more specialised approach, extracting specific features of the image relating to the architectural style, or retraining more layers of the network. Another extension would be to use GIS, land use and Land Registry data to inform the analysis, by supplementing the data with information on when transactions relating to the land on which a property sits were undertaken. [13] Also, by having the classifier allocate pictures to only two classes, Georgian or non-Georgian, we are not controlling for other styles in a particularly sophisticated way, and indeed there is variation within the style (such as plaster on the ground floor, which can substantially affect its appearance). The analysis could be extended to other styles, for example by sampling conservation areas to provide examples of specific styles that can be used to train the network to identify those styles elsewhere.

Implications of being able to understand streets' appearance better

Although the analysis has been only partially successful in identifying the architectural style, the approach adopted is not that sophisticated, and it is plausible that it could be elaborated to give more accurate findings. As more work is done on understanding streets' appearance directly, there may be some wider implications.
Measuring the built environment better

The extent to which local authorities have systematic records of what the built environment looks like outside conservation areas is limited. In principle this sort of approach could be used to develop a systematic measure of streets' appearance, supplementing existing quantitative information from Geographic Information Systems (GIS), for example on land use. In a different context, the Office for National Statistics (ONS) has worked on assessing the extent to which it is possible to identify caravan parks from aerial imagery, as caravan parks' dwelling numbers are poorly recorded. [14] Mapping the built environment is not just about the kinds of buildings. Buildings falling into disuse or disrepair can have a profound effect on the ability to maintain them and on surrounding areas. In the context of historic buildings, Historic England keeps a register of listed buildings and conservation areas that are at risk, but as there are 500,000 listed buildings in England it is not practical to monitor them all directly, so only the highest grades of listed buildings are systematically examined. Assessing whether buildings are at risk is not an easy problem to address through visual imagery (satellite or Street View), as buildings and structures at risk will not always show clear visible external signs of being at risk, and it is almost certainly harder than identifying a specific style. But in future it may provide one way to address the issue, as techniques improve and more historical data becomes available (it is now possible to look at Street View images over time).

Understanding what people like about streets

There is evidence that consistency of style is appreciated by people: conservation areas, which tend to have a consistent style, tend to have higher house prices after adjusting for other factors that affect prices.
[15] Understanding more directly what people like about streets opens up the way to systematically design streets so as to improve people's experience. People can be shown many images of streets in a kind of visual survey and indicate which ones they like. Then, rather than getting the network to identify the architecture directly, the network can be trained to recognise places that people like and identify their characteristics. This has already been done in work by the MIT Media Lab on whether people thought a street in the US was safe, and by researchers at Warwick Business School who have analysed beauty in outdoor places in the UK. [16] As we learn more through these sorts of studies, it should be possible to take a more informed approach to new developments.

Combining crowdsourcing and machine learning for wider good

One of the ways machine learning may help us understand the built environment is by using it to complement crowdsourcing techniques. There are already well-established spatial crowdsourcing initiatives such as OpenStreetMap. More recently, in 2017 the Colouring London initiative launched a web platform to collect information on every London building via crowdsourcing; the data being collected include building age, use, type, size, designation status and rebuild history. The company Mapillary uses crowdsourcing to collect images of streets, which it then analyses with machine learning to find things like the locations of street signs. A challenge in training machine learning algorithms is that generating a large enough sample of appropriate training data can be time-consuming (it took several days to collect and then process the relatively small training sample used here). One way to address this is to get people to classify pictures as a by-product of another activity.
This is what is often happening when website captchas ask us to identify cars or street signs in photographs, or to pick out letters from a tangled group. This process of human classification effectively labels the data, generating a training set that machine learning algorithms can then learn from to, for example, help cars navigate streets. It is also implicitly happening when we click on web pages, indicating content that we like and are more interested in; this in turn trains algorithms to show us content that keeps us on sites for longer. A number of the examples referred to here use crowdsourcing to collect their data, and recently the Ordnance Survey used its employees to label images to help classify roofs from satellite imagery. [17] Using volunteers to crowdsource training sets in areas there is interest in measuring, and then scaling up their efforts with machine learning, may open up the potential to use artificial intelligence to leverage collective intelligence for the wider good. [18]

The author would like to thank Hasan Bakhshi, Alex Bishop, Luca Bonavita, Jyldyz Djumalieva, Eliza Easton, Joel Klinger, Duncan McCallum, Adala Leeson, Antonio Lima, Henry Owen-John, Cath Sleeman, Kostas Stathoulopoulos and Nyangala Zolho. All errors are my own. The mapping code is available online; if you want to use the network and photos please get in touch.

1. Brown, R., Hanna, K. and Holdsworth, R. (2017), 'Making good - shaping places for people', Centre for London.
2. Naik, N., Philipoom, J., Raskar, R. and Hidalgo, C. (2014), 'Streetscore - Predicting the Perceived Safety of One Million Streetscapes', CVPR Workshop on Web-scale Vision and Social Media. Quercia, D., O'Hare, N. and Cramer, C. (2014), 'Aesthetic capital: what makes London look beautiful, quiet, and happy?', Proceedings of CSCW '14.
3. Pesto, C. (2017), 'Classifying U.S. Houses by Architectural Style Using Convolutional Neural Networks'.
4. Clemoes, C.
(2014), 'Houses as Money: The Georgian Townhouse in London'.
5. Ibid., p. 11.
6. Boys Smith, N. and Morton, A., 'Create Streets', Policy Exchange.
7. Dahl, G., Sainath, T. and Hinton, G. (2013), 'Improving DNNs for LVCSR using rectified linear units and dropout', ICASSP.
8. Elgammal, A., Mazzone, M., Liu, B., Kim, D. and Elhoseiny, M. (2018), 'The Shape of Art History in the Eyes of the Machine', arXiv:1801.07729. Vittayakorn, S., Yamaguchi, K., Berg, A. and Berg, T., 'Runway to Realway: Visual Analysis of Fashion'. Van den Oord, A., Dieleman, S. et al., 'WaveNet: A Generative Model for Raw Audio'.
9. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J. and Wojna, Z. (2015), 'Rethinking the Inception Architecture for Computer Vision', arXiv:1512.00567.
10. Inception V3 was trained on the ImageNet Large Scale Visual Recognition Challenge, a standard challenge in computer vision, in which the classifier tries to identify an image as belonging to one of 1,000 classes.
11. TensorFlow (2018), 'How to Retrain an Image Classifier for New Categories'.
12. Mathematically, accuracy corresponds to the sum of true negatives and true positives divided by the sum of true negatives, true positives, false positives and false negatives.
13. The Land Registry has information on land transactions, but does not itself hold information on the age of buildings on the land, unless it is implicit, i.e. when the developer sells the land.
14. ONS methodology working paper series number 15 - 'Feasibility study: Caravan parks recognition in aerial imagery'.
15. Ahlfeldt, G., Holman, N. and Wendland, G. (2012), 'An assessment of the effects of conservation areas on values', English Heritage.
16. Seresinhe, C.I., Preis, T. and Moat, H.S. (2017), 'Using deep learning to quantify the beauty of outdoor places', Royal Society Open Science.
17. Orlowski, A. (2017), 'UK's map maker Ordnance Survey plays with robo roof detector', The Register.
18. Mulgan, G. and Baeck, P. (2018),
'Developing a new Centre for Collective Intelligence Design', Nesta. Mulgan, G. (2018), 'AI is for good, but is it for real', Nesta.

John Davies, Principal Data Scientist, Data Analytics Practice
Physics Lens

Non-Uniform Vertical Circular Motion

Using a chain of rubber bands, I swung a ball around in a vertical loop. This demonstration shows how the tension in an elastic band changes with the position of the ball, as indicated by the length of the elastic band.

Securing the elastic band to the ball with a shoelace

When the ball of mass $m$ is at the bottom of the loop, the centripetal force is given by the difference between the tension $T_{bottom}$ and the weight $mg$, where $T_{bottom}$ varies depending on the speed of the ball $v_{bottom}$ and the radius of curvature $r_{bottom}$:

$T_{bottom} - mg = \dfrac{mv_{bottom}^2}{r_{bottom}}$

When the ball is at the top of the path,

$T_{top} + mg = \dfrac{mv_{top}^2}{r_{top}}$

As the weight acts in the same direction as the tension when the ball is at the top, a smaller tension is needed from the elastic band to maintain the centripetal force. Therefore, $T_{bottom} > T_{top}$.

The GeoGebra app below shows a simpler version of a vertical loop: a circular path with a fixed radius $r$. Consider a ball sliding around a smooth circular loop. The normal contact force varies such that

$N_{bottom} = \dfrac{mv_{bottom}^2}{r} + mg$

$N_{top} = \dfrac{mv_{top}^2}{r} - mg$

It can be shown that the minimum height from which the ball must be released in order for it to complete the loop without losing contact with the frictionless circular track is 2.5 times the radius. If we also consider the rotational kinetic energy required for the ball to roll, the required initial height rises to 2.7 times the radius, as shown in the video below.

Many thanks to Dr Darren Tan for his input. Do check out his EJSS simulation of a mass-spring motion in a vertical plane, which comes with many more features, such as the ability to vary the initial velocity of the mass, graphs showing the variation of energy and velocity, and an option for mass-string motion as well.
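The 2.5 and 2.7 radius figures follow from energy conservation together with the condition that the contact force just vanishes at the top of the loop; a sketch of the derivation:

```latex
% Minimum speed at the top: set N_{top} = 0 in
% N_{top} = \dfrac{mv_{top}^2}{r} - mg, giving
% mg = \dfrac{mv_{top}^2}{r} \implies v_{top}^2 = gr.
%
% Conserving energy from release height h down to the top of the loop
% (height 2r above the bottom), for a sliding ball:
% mgh = mg(2r) + \dfrac{1}{2}mv_{top}^2
%     = 2mgr + \dfrac{1}{2}mgr \implies h = 2.5r.
%
% For a rolling solid sphere the rotational kinetic energy
% \dfrac{1}{2}I\omega^2 = \dfrac{1}{5}mv^2 (with I = \dfrac{2}{5}mr_{ball}^2
% and v = \omega r_{ball}) must also be supplied, so
% mgh = 2mgr + \dfrac{7}{10}mv_{top}^2
%     = 2mgr + \dfrac{7}{10}mgr \implies h = 2.7r.
```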
rlang (version 0.1) quasiquotation: Quasiquotation of an expression Quasiquotation is the mechanism that makes it possible to program flexibly with tidyeval grammars like dplyr. It is enabled in all tidyeval functions, the most fundamental of which are quo() and expr(). Quasiquotation is the combination of quoting an expression while allowing immediate evaluation (unquoting) of part of that expression. We provide both syntactic operators and functional forms for unquoting. • UQ() and the !! operator unquote their argument. It gets evaluated immediately in the surrounding context. • UQE() is like UQ() but retrieves the expression of quosureish objects. It is a shortcut for !! get_expr(x). Use this with care: it is potentially unsafe to discard the environment of the quosure. • UQS() and the !!! operators unquote and splice their argument. The argument should evaluate to a vector or an expression. Each component of the vector is embedded as its own argument in the surrounding call. If the vector is named, the names are used as argument names. An expression to unquote. Formally, quo() and expr() are quasiquote functions, UQ() is the unquote operator, and UQS() is the unquote splice operator. These terms have a rich history in Lisp languages, and live on in modern languages like Julia and Racket. # Quasiquotation functions act like base::quote() # In addition, they support unquoting: expr(foo(UQ(1 + 2))) expr(foo(!! 1 + 2)) quo(foo(!! 1 + 2)) # The !! operator is a handy syntactic shortcut for unquoting with # UQ(). However you need to be a bit careful with operator # precedence. All arithmetic and comparison operators bind more # tightly than `!`: quo(1 + !! (1 + 2 + 3) + 10) # For this reason you should always wrap the unquoted expression # with parentheses when operators are involved: quo(1 + (!! 1 + 2 + 3) + 10) # Or you can use the explicit unquote function: quo(1 + UQ(1 + 2 + 3) + 10) # Use !!! 
or UQS() if you want to add multiple arguments to a function. The argument must evaluate to a list:
args <- list(1:10, na.rm = TRUE)
quo(mean(UQS(args)))

# You can combine the two:
var <- quote(xyz)
quo(mean(UQ(var), UQS(extra_args)))

# Unquoting is especially useful for transforming successively a
# captured expression:
quo <- quo(foo(bar))
quo <- quo(inner(!! quo, arg1))
quo <- quo(outer(!! quo, !!! syms(letters[1:3])))

# Since we are building the expression in the same environment, you
# can also start with raw expressions and create a quosure in the
# very last step to record the dynamic environment:
expr <- expr(foo(bar))
expr <- expr(inner(!! expr, arg1))
quo <- quo(outer(!! expr, !!! syms(letters[1:3])))
Hypercalcemia is excessively high calcium levels in the blood (“hyper” = high, “calcemia” = calcium in the blood). A review of cancer-related hypercalcemia found that rates varied by tumor type, being highest in multiple myeloma (7.5–10.2%) and lowest in prostate cancer (1.4–2.1%). Other complications include irregular heartbeats and osteoporosis. You may not need immediate treatment if you have a mild case of hypercalcemia, depending on the cause. Some medications, particularly diuretics, can produce hypercalcemia. If the hypercalcemia is due to an overactive parathyroid gland, your doctor can consider several options: close monitoring of the calcium level; referral to surgery to have the overactive gland(s) removed; or starting a medication such as cinacalcet (Sensipar®), which is used to manage hypercalcemia. Your doctor can use blood tests to check the calcium level in your blood.12,13,14 However, these reductions are small (<10%) and transient (usually persisting up to 72 to 96 hours) due to the tachyphylaxis noted with this medication. Hypercalcemia means there is too much calcium in the blood. High calcium levels can affect your bones, and hypercalcemia can also cause neurological symptoms, such as depression, memory loss, and irritability. Last medically reviewed on July 27, 2017. The normal range is 2.1–2.6 mmol/L (8.8–10.7 mg/dL, 4.3–5.2 mEq/L), with levels greater than 2.6 mmol/L defined as hypercalcemia. Treating a high calcium level helps relieve your symptoms. Quitting smoking can only help your health. If your doctor finds a high calcium level, they’ll order more tests to find out the cause of your condition. This is especially important if you have cancer that affects your bones. Left untreated, a high calcium level can cause severe problems, like kidney failure, and it can even be life-threatening.
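The two unit systems quoted for the normal range (mmol/L and mg/dL) are related through the molar mass of calcium, about 40.08 g/mol. A quick illustrative sketch of the conversion — the function name is mine, not from any clinical source:

```python
CA_MOLAR_MASS = 40.08  # g/mol for calcium, i.e. mg per mmol

def mg_dl_to_mmol_l(mg_dl):
    """Convert a serum calcium level from mg/dL to mmol/L.

    Multiplying mg/dL by 10 gives mg/L; dividing by the molar mass
    (40.08 mg/mmol) then gives mmol/L.
    """
    return mg_dl * 10 / CA_MOLAR_MASS

# The quoted normal range of 8.8-10.7 mg/dL maps onto roughly 2.2-2.67 mmol/L,
# consistent with the 2.1-2.6 mmol/L range and the 2.6 mmol/L cutoff above.
print(round(mg_dl_to_mmol_l(8.8), 2))   # ~2.2
print(round(mg_dl_to_mmol_l(10.7), 2))  # ~2.67
```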
Cancers that more commonly cause high calcium levels in your blood include gastrointestinal (digestive system) cancers. MRI scans, which produce detailed images of your body’s organs and other structures, may also be used. Different people have different reactions. Some people who have the disorder have no symptoms, especially when the condition is mild. Hypercalcemia is considered mild if the total serum calcium level is between 10.5 and 12 mg per dL (2.63 and 3 mmol per L).5 Levels higher than 14 mg per dL (3.5 mmol per L) can be life threatening. There are several possible causes of this condition. The parathyroid glands are four small glands located behind the thyroid gland in the neck. PTH helps the body control how much calcium comes into the blood stream from the intestines, kidneys, and bones. Blood tests, such as those drawn for an annual physical exam, today routinely check calcium levels. Hypercalcaemia occurs in an estimated 10–20 per cent of all patients with cancer1 and is associated with a poor prognosis. If left untreated, hypercalcemia can cause a variety of persistent symptoms and can lead to other health problems, including osteoporosis and kidney stones. This might make it easier for you to deal with your cancer treatments. The severity of your symptoms does not depend on how high your calcium level is. Symptoms of a high calcium level often develop slowly. Treatment may include taking medicine called steroids. High doses of certain over-the-counter products are the third most common cause of hypercalcemia in the United States. In these cases, the health care team will discuss whether to treat hypercalcemia.
You may also have blood tests to check how well your kidneys are working. Normally, PTH increases when the calcium level in your blood falls and decreases when your calcium level rises. Hypercalcemia may be the result of parathyroid or adrenal gland disorders, or kidney disease. Even mildly elevated levels of calcium can lead to kidney stones and kidney damage over time. Call a doctor if you have diarrhea or you throw up for a long time and can… Treatment for side effects is an important part of cancer care. Hypercalcaemia, also spelled hypercalcemia, is a high calcium (Ca2+) level in the blood serum. Hypercalcemia develops in 10%–20% of adults with cancer, but it rarely develops in children. Your doctor can do a blood test to learn if you have a high calcium level. Hypercalcemia is a condition in which the calcium level in your blood is above normal. The amount of fluid taken in and eliminated must be carefully monitored. There are many reasons for an elevated blood calcium level. Calcium is also important in blood clotting and bone health. For people with advanced cancer, high calcium levels can occur when they are approaching the last weeks of life. Urine tests that measure calcium, protein, and other substances can also be helpful. It is a serious condition. Signs and symptoms of hypercalcemia are minor in most patients, but as calcium levels increase, symptoms become more pronounced. In the setting of a calcium increase in a person with normal regulatory mechanisms, hypercalcemia suppresses the secretion of PTH. Like any operation, there can be complications, so talk with your doctor about the risks and benefits, what to expect during the recovery, and how long you’ll be in the hospital.
Talk to your doctor first to find out what types of exercises are safe for you. When you feel better, it is easier to continue your cancer treatment. According to one study, mortality is … If a person is receiving care for the last days of their life, it might not be appropriate to carry out any investigations or give treatment. You can do your part to help protect your kidneys and bones from damage due to hypercalcemia by making healthy lifestyle choices. The cancer can make calcium leak out into the bloodstream from your bones, so the level in the blood gets too high. Around 10%–30% of people with cancer may get hypercalcemia. There are several proactive steps you can take. Thus the severity of hypercalcemia is related to how long you have calcium levels that are high, not how high it has become. High blood calcium levels are many times caused by a small benign tumor in one of … People with bone tumors are at an increased risk of developing hypercalcemia. These can help stop bone from breaking down. Hypercalcemia can be toxic to all body tissues, but major deleterious effects are on the kidneys, nervous system, and cardiovascular system. Hypercalcemia can cause kidney problems, such as kidney stones and kidney failure. Inactivity: Rarely, people who are immobilized, such as those who are paralyzed, or people who must remain in bed for a long time, develop hypercalcemia because calcium in bone is released into the blood when bones do not bear weight for long periods of time. Too much calcium in your blood can weaken your bones, create kidney stones, and interfere with how your heart and brain work. Hypercalcemia is usually a result of overactive parathyroid glands. Assessment of a person with unexplained hypercalcaemia includes asking about clinical features, co-morbidities, family history, and drug treatments.
Cancer: Cells in kidney, lung, and ovary cancers may secrete large amounts of a protein that, like parathyroid hormone, increases the calcium level in blood. Talk to your doctor regularly to stay informed and ask questions. Thus, some doctors believe that anyone with a high blood calcium and a low urine calcium mu… When this occurs it’s a medical emergency. If you have cancer, your doctor will discuss treatment options with you to help you determine the best ways to treat hypercalcemia. Other causes include being on bed rest for a long time. When you were in the hospital, you were given fluids through an IV and drugs to help lower the calcium level in your blood. Other conditions are also associated with hypercalcemia. The goal of treatment is to return your calcium level to normal. Normally, your blood contains only a small amount. Your body can also make calcitonin from the thyroid gland when your calcium level gets too high. When blood contains too much calcium, the condition is called hypercalcemia. You might not have any noticeable symptoms if you have mild hypercalcemia. Consuming extremely high amounts of calcium in the diet can also contribute to hypercalcemia. Although it is most commonly a result of overactive parathyroid glands, it can also be a result of an unbalanced diet, too much vitamin D, dehydration, certain medications, a sedentary lifestyle, and some medical conditions (including cancer, tuberculosis, and sarcoidosis). High calcium can affect the electrical system of the heart, causing abnormal heart rhythms. Hypercalcemia is a state in which there is simply too much calcium in the body. Your doctor can determine the best treatment for you. Hypercalcemia can occur due to other medical conditions. Because FHH is a genetic disease, the definitive way to diagnose it is with genetic testing (discussed below). Calcium is essential for the normal function of organs, cells, muscles, and nerves.
Levels above or below this range are relatively ineffective at further stimulating or suppressing PTH and rely on direct … Hypercalcemia is the most common metabolic complication associated with malignancy and is associated with a poor prognosis; treatment is important for symptom palliation. If it’s appropriate to treat and the person has symptoms or their level of calcium is very high, they will need to be admitted to a hospice or hospital for IV fluids and bisphosphonate treatment. This hormone functions by reducing calcium release from your bones and increasing calcium secretion from your kidneys. When it develops in people with cancer, it may be called hypercalcemia of malignancy (HCM). Talk with your doctor about the risks and benefits of taking such medications. Hypercalcemia complications develop over time. Hypercalcemia is a condition of having a higher than normal level of calcium in the blood. If you have cancer, you may have had treatment for that, as well. When you are healthy, your body controls the level of calcium in your blood. Hypercalcemia is a condition in which levels of calcium in the body are elevated above what is considered normal. When you have more calcium in your blood than normal, doctors call it "hypercalcemia." However, too much of it can cause problems. This usually leads to mild cases of hypercalcemia. Up to 30% of all people with cancer will develop a high calcium level as a side effect. Other causes of hypercalcemia can be life-threatening. You might be able to get relief from symptoms through intravenous fluids and medications like bisphosphonates. In people with chronic kidney disease, the effects of dehydration are greater.
Diuretics, also commonly called water pills, are medications that help increase the amount of water and salt that’s lost from the body. Reasons for the hypercalcemia may include cancer that started in the bone, or cancer that has spread to the bone. Additionally, mildly elevated calcium produces different effects than severely elevated calcium.2 Symptoms of mild hypercalcemia include fatigue. Some medications cause hypercalcemia, such as alkaline antacids, diethylstilbestrol (DES), long … Calcium supplements can help you build strong bones. It slows down bone loss. This type of treatment is called supportive care or palliative care. Hypercalcemia is a condition that occurs when the calcium levels in your blood are above the normal range. Symptoms of severe hypercalcemia include: … Check with your doctor before taking any medication, including over-the-counter supplements. If surgery isn’t an option for you, your doctor may recommend a medication called cinacalcet (Sensipar). What precautions should I take if I think I may be at risk for hypercalcemia? Most of the calcium in your body is in your bones. Blood and urine tests can help your doctor diagnose hyperparathyroidism and other conditions. Hypercalcemia can also cause confusion or dementia, since calcium helps keep your nervous system functioning properly. However, the severity greatly depends on your kidney function.
A high calcium level can be treated, and it is important to talk with your doctor if you have symptoms. You and your family should know these serious symptoms. The main thing all individuals with hypercalcemia have is an excessive amount of calcium in the blood. However, you will need to monitor its progress. Hypercalcemia of malignancy has many causes. Most often, the cause is a problem with the parathyroid glands and the hormone they produce. It helps form bones and teeth, and it also helps your muscles, nerves, and brain work correctly. They also help your bones take more calcium from your food. Make sure you drink plenty of water. Hypercalcemia is an increased level of calcium in the blood and occurs in 10% of patients with advanced cancers and up to 40% with breast and lung cancers.
Seemed to be sinking down through infinite depths in the darkness, Darkness of slumber and death, for ever sinking and sinking. Then through those realms of shade, in multiplied reverberations, 'Gabriel ! O my beloved !' and died away into silence. Then he beheld, in a dream, once more the home of his childhood. … have spoken. Vainly he strove to rise ; and Evangeline, kneeling beside him, Kissed his dying lips, and laid his head on her bosom. Sweet was the light of his eyes ; but it suddenly sank into … pp. 156 – 160. Such is a faint outline of this simple and beautiful story. From the historical sketch we have given, our readers will perceive that none of the incidents are improbable. Such separations and such life-long seekings were among the consequences of the enforced exile of the Acadians. The poem is constructed with more art and skill than any of Mr. Longfellow's previous writings. The opening and closing lines balance each other with admirable effect; and the contrast between the scenes described in the first part and the more gorgeous passages in the second, while both are purely American, enough so to satisfy the most fanatical prater about Americanism in literature, gives a delightful variety to the narrative. There is one peculiarity about this poem, which has excited a good deal of comment, and some complaint, - its rhythmical structure. The dactylic hexameter has been repeatedly attempted in English, but not often with much success. In point of fact, the measure is as different from the old classical hexameter of the Latin and Greek, as the modern languages differ from the ancient; but it has an analogous effect. The ancient hexameter runs back into the mythical times; its first appearance was in the oldest temples of the gods.
The elements of this rhythmical movement were probably brought from the East into Greece at the same time with the elements of the language itself, and formed a part of the musical character which entered so deeply into the original constitution of the language. The Orientals had an indefinite rhythm, a species of chant, in their more elaborate recitation. It is found in the movement of the Hebrew Psalms, and is even now preserved in the modulated tones of the Arabian story-tellers. With their fine artistic sense, the Greeks subjected this Oriental rhythmical element to definite laws; just as their exquisite feeling of the beauty of proportion substituted, for the irregular architecture of the East, the symmetry of the Hellenic orders. The Greek hexameter bears the same analogy to a Hebrew or Sanscrit rhythm that the Parthenon bears to the temples of the hundred-gated Thebes or of Ellora. All the ancient rhythms are founded on quantity, and inseparably connected with music. Each metrical foot had its fixed musical time, from which there was no departure ; but the music was subordinate to the meaning, and was intended simply to heighten and embellish it. The poems of Homer were chanted by the rhapsodists, but never in such a way as to conceal the quality of the verse. Undoubtedly this fact restrained and limited the range of musical composition at first; and the inventive genius of the music-poets endeavoured to break loose from such fetters, by putting together those complicated and curiously interwoven rhythms by which Greek lyrical poetry was distinguished. Many of these it is impossible now to read in such a way as to produce, to modern ears, any rhythmical effect at all. In the choral songs of the tragedies, we find long passages of whose rhythmical effect we can form no conception, except by supposing them set to very elaborate musical composition, with times corresponding to the syllabic quantities.
In comparing ancient and modern poetical rhythms, we shall form quite erroneous opinions, unless we bear this essential fact constantly in mind, - that of the former, quantity, and consequently musical time, is the foundation ; of the latter, accent, and consequently a delivery more or less approaching the conversational, is the basis. Music with us is so far divorced from language, as to form a separate and independent art ; and when the two are combined, music is the predominating element in the composition, while language is treated in the most arbitrary manner, a syllable being lengthened or shortened, not according to any fixed time in the language, but wholly to suit the musical exigencies of the composer. Who ever hears the words of an opera, or cares for them, if he does ? Who ever catches a particle of verbal sense in the midst of the tumult of instrumentation in an oratorio ? We are led astray sometimes, by applying to modern languages the terms which are properly applied to the ancient. Thus we speak of long and short vowels, when in point of fact any vowel may be made long or short ad libitum. The letter o, for example, is said to be long in note, but short in not. Now the difference here is not a difference of quantity, but of quality; the latter may be prolonged as well as the former. The idea of applying quantity to modern versification is wholly fallacious ; and this accounts for the failure of many early English poets who attempted to write in the ancient measures. "Why, a' God's name," asks Spenser, "may not we, as the Greeks, have the kingdom of our own language, and measure our accents by the sound, reserving the quantity to the verse ?" The answer to which is, that the Greeks had a fixed musical element in common speech, which we have not, and poets cannot create it. But even in the Greek, this musical enunciation seems to have gradually yielded to the every-day uses of language, as life became more diversified and practical.
The dialogue portions of the drama, which had originally been trochaic, and therefore an approximation to the dactylic, finally and universally became iambic, because, as Aristotle asserts, the iambic is the natural rhythm of conversation. In this fact we see an approach to the character of modern rhythm. In our language, therefore, the basis of poetical rhythm is accent; and the conversational and discursive character of our Anglo-Saxon runs naturally into the iambic accent. Those rhythms, therefore, which are most analogous to the iambic, are most congenial to our language. The heroic couplet, and blank verse, and the dialogue of the poetical drama, are all iambic rhythms of five accents. Next come the anapæstic rhythms, which are only an expansion of the iambic. Dactylic rhythms are the anapæstic reversed; they are less natural, because they begin with an accent like the trochee, and are an extension of the trochaic. The dactylic hexameter in English is a rhythm of six accents, of which the prevailing foot is the accented dactyl, and the last always a trochee or spondee. As a general rule, the last but one should be an accented dactyl, that is, an accented syllable followed by two unaccented ones. Again, as in the English anapæstic rhythms the iambic may take the place of the anapæst, as in the line, "And mor|tals the sweets of forget|fulness prove," so in dactylic rhythms the trochee often takes the place of the dactyl ; as, "Rang out the hour of nine, the | village | curfew and straightway." There are two difficulties in the way of writing English hexameters. First, that which we have already intimated, the necessity of putting some force upon the conversational iambic rhythm, which is natural to the language ; secondly, the numerous monosyllables, which make the proper arrangement of the cæsuras no easy task, besides increasing the difficulty of always commencing with an accent.
All these remarks apply with nearly equal force to other modern languages, and particularly to those which are of Northern origin and akin to the English. Notwithstanding these difficulties, the measure has been often attempted, especially in German, where the works of Klopstock, Voss, Goethe, and Schiller have almost naturalized it. It has been tried in Swedish with most success by the great national poet, Tegnér, whose early studies as professor of Greek probably led him to adopt this classical rhythm. There is one peculiarity in the Swedish, and some other Northern dialects, which facilitates the beginning with an accent, — namely, the position of the article after the noun, as a suffix. The modern hexameter, like the ancient, has been successfully used for minute delineation and picturesque narrative. Its length and variety enable the poet to add a thousand touches which he would be obliged to omit in the ordinary rhythms. The many naïve passages in Homer, the homely painting of common objects in daily life, the exquisite pictures in which his genius loves to indulge whenever a simile gives him an opportunity of presenting them in detail, no less than the sublime and terrible scenes in nature, and the uproar of battle, where shield closes with shield and spear rings against spear, owe their vividness to the facilities afforded by the dactylic hexameter. Let the harnessing of old Priam's chariot, in the twenty-fourth Iliad, be compared with the putting of the horses to the wagon by Hermann, in that delicious poem of Hermann and Dorothea, and the reader will not fail to see with what unerring instinct Goethe felt and used the capabilities of the hexameter. In Evangeline, Mr. Longfellow has managed the hexameter with wonderful skill. The homely features of Acadian life are painted with Homeric simplicity, while the luxuriance of a Southern climate is magnificently described with equal fidelity and minuteness of finish.
The subject is eminently fitted for this treatment; and Mr. Longfellow's extraordinary command over the rhythmical resources of language has enabled him to handle it certainly with as perfect a mastery of the dactylic hexameter as any one has ever acquired in our language. Of the other beauties of the poem we have scarcely left ourselves space to say a word ; but we cannot help calling our readers' attention to the exquisite character of Evangeline herself. As her virtues are unfolded by the patience and religious trust with which she passes through her pilgrimage of toil and disappointment, she becomes invested with a beauty as of angels. Her last years are made to harmonize the discords of a life of sorrow and endurance. The closing scenes, though informed with the deepest pathos, inspire us with sadness, it is true, but at the same time leave behind a calm feeling that the highest aim of her existence has been attained. With these few remarks we proceed to select a few passages. Here is a lovely picture :
March 29 Interpersonal Neurobiology Mercedes Newton Core Spirit member since Dec 24, 2020 Theory of Interpersonal Neurobiology This method explores the effect that therapy has on the brain and how brain mechanisms are directly impacted by life experiences. In the past, experts believed that neurological growth stopped no later than early adulthood. Neuroplasticity demonstrates that the formation of new neurons and neurological links continues throughout people’s entire lives. This relatively new information supports the theory of interpersonal neurobiology and offers evidence of its validity and efficacy. By understanding how these neurological links are affected, and similarly, how they affect the body, mind, and spirit as a whole, clinicians can better assist clients to rebuild and reconnect these links to achieve a healthier internal balance. Healing Meditation and New Neuronal Pathways Clinical and medical tests have shown that the healing powers of meditation and awareness directly affect the physical body in relation to the creation of new neuronal pathways. Meditation forces people to quiet their mind and go within their bodies in order to gain a sense of awareness. As a result, people become enlightened to thoughts, ideas, and behaviors that were previously hidden. Through proper technique, these new discoveries can be integrated into people’s minds and inner wisdom. Interpersonal neurobiology states that these new patterns will have a physical, physiological, and emotional effect regardless of the age at which they are discovered. With every new idea, attitude, behavior, or piece of knowledge people obtain, they are physically changing and influencing the construct of their brains. Training to Practice Interpersonal Neurobiology Training for IPNB is offered through various colleges, universities and training centers. Courses address the educational, clinical, and practical applications.
Students are taught how to use the techniques with clients, as well as in their own lives. The goal of IPNB training is to teach clinicians how to take the elements of neuroscience and translate them into approaches and techniques that can be used in the therapeutic setting. Courses are often led by experts in the IPNB field, and address case scenarios, research, and future clinical implications. IPNB is based on the workings of the brain, and the therapeutic process of IPNB involves gaining an understanding of implicit and explicit processes, as well as left- and right-brain processes. Training classes are offered for college credits, continued education credits, or for certificates of completion. by Good Therapy
When people are given the same data, how is it that they come up with inconsistent results? Bias. In Decidere's world, this happens when one member of a business weighs one criterion as important while another member weighs a different one as important. In situations like this, Decidere can show the "bias" of the person, not just the results produced by that bias. While it sounds like "bad business," it's really just a byproduct of unexplored individual preferences. It is also due to the lack of consistency and process among board members when they evaluate data, which can leave many wondering how they arrived at different conclusions from the same review. Incorporating Decidere into business practices where large amounts of data are required helps determine what is important to individual decision makers. Exploring this newfound information can be a significant advantage in situations where unbiased recommendations are necessary.
Here's Why Diana Was A Princess, But Kate Isn't: When Prince William assumes the throne, Kate Middleton will sit alongside him as his Queen. Even though William's mother was known as Princess Diana, Kate is known as the Duchess of Cambridge. Why is that the case? When Kate Middleton married Prince William in 2011, she became a royal family member. The pair, who met at university and had been dating for several years, now have three children: Prince George, Princess Charlotte, and Prince Louis, and live at Kensington Palace. Kate Middleton did not obtain the title of Princess after marrying Prince William, which is why many people are curious about how William's mother, Princess Diana, came to have the title. Royal family experts have cleared up these misunderstandings. Kate isn't the only one in the family of princes and princesses who doesn't hold the title of Princess. Zara and Lady Louise, for example, were born Royals but never obtained the title of Princess. Kate may not be a princess, but that does not mean she is not a royal family member. Her Royal Highness, The Duchess Of Cambridge, is her official title, which she obtained from the Queen. Kate, moreover, holds more than one title: Princess, Duchess, countess, and baroness are titles given to women in the royal family. Kate will never be a princess, whatever her status, because the title of Princess is typically kept for the sovereign monarch's biological descendants. Kate's daughter Charlotte will be able to use it even after she marries. Princess Anne, Princess Beatrice, Princess Eugenie, and Princess Charlotte are some of the other women in the Royal family who are closely related to the Queen; still, their status depends on where they are in the line of succession. Even though Diana, Princess of Wales, was not a blood relative of the Queen, many Royal family pundits were quick to point out that she was close to the Queen.
However, royal family specialists argue that Diana never officially received the title of Princess, but instead became Her Royal Highness The Princess Of Wales, or in short, Diana, Princess of Wales, after marrying Prince Charles on July 29, 1981. This means that, although she had 'princess' in her official title, Diana, whose maiden name was Lady Diana Frances Spencer, was not technically a princess because, unlike with Princess Charlotte, the title did not come before her name. If you are born with the title of Princess, it appears in front of your name; but if you marry a Prince and gain the title of Princess, it appears after your name. Despite this, the public referred to Diana as Princess Diana, albeit informally, since they adored her and considered her the Princess of their hearts. Prince William was born as His Royal Highness Prince William Of Wales, holding the title of Prince by birth, and was given the title of Duke of Cambridge by the Queen on the day of his wedding. Because Kate is a Duchess by marriage until William becomes King, she has the title of Duchess, not Princess. If we put it all together, Diana indirectly became a princess by marriage, and the only other way to gain the title legitimately is through birth, as little Charlotte did. The children of Prince Harry and Meghan Markle, on the other hand, did not automatically obtain the titles of Prince and Princess, since they are subject to a different Royal rule. Their children could have taken the courtesy titles of lord or lady from Prince Philip's Mountbatten-Windsor line, but Harry and Meghan declined, leaving their children without a title. The Queen decreed that the offspring of the direct heirs to the throne be given the titles of Princess and Prince, but that they must be male-line descendants, not female-line descendants.
As a result, this is not the case for Zara and Peter Phillips. They do not have royal titles because they are Princess Anne's children, although the children of the Queen's sons, Charles, Edward, and Andrew, do. Why, then, are George, Charlotte, and Louis referred to as princes and princesses? Because the Queen changed her mind in 2012 and granted those titles to all of William and Kate's children. When Prince Charles ascends the throne, Harry and Meghan's children, as grandchildren of the monarch, will receive those titles.
Computer Programming The Computer Programming and Analysis Associate in Science (A.S.) degree program at Valencia College is a two-year program that prepares you to go directly into a specialized career within the information technology industry. Students apply critical thinking to the completion of projects and case studies associated with the computer programming technology field and continue to develop their programming logic skills. Computer Programming Degrees & Careers A degree in Computer Programming from SCTCC gives graduates flexibility to work almost anywhere, and graduates are prepared for a range of careers. This program also provides students with a solid foundation to make the transition to a Bachelor's degree program in Computer Programming and Information Technology. Software is a collection of code or computer programs installed onto your hardware. Computer programmers write and test code that allows computer applications and software programs to function properly. Programmers are also focused and patient, since they may be tasked with writing line after line of code for long periods of time or conducting several tests to properly evaluate the quality and performance of a program. Introduction To Computer Programming Those who work in computer programming appear to have a certain set of traits that benefit them in their careers. Applications programmers write programs to handle a specific job, such as a program to track inventory within an organization. Using the development of video games, students learn the basic concepts of programming and the fundamentals of the Java programming language.
Elective courses allow students to enhance their skill sets with advanced programming and Web development. Programmers normally work alone, but sometimes work with other computer specialists on large projects. Career Rankings, Salary, Reviews And Advice Computer programmers write programs in a variety of computer languages, such as C++ and Java. Most computer programmers have a bachelor's degree; however, some employers hire workers who have other degrees or experience in specific programming languages. Art or Music: If you want to get involved in computer graphics, visual design, or audio and video programming, it's a good idea to supplement your computer skills with knowledge of art and music. Computer software engineers, who are very experienced programmers, design and implement complex programs from scratch. The typical computer programmer job description includes the fundamental tasks of writing, updating, testing, and documenting source code based on plans that have been created by software engineers or developers.
If you are reading this, you might be experiencing skin discomfort and wondering what the reason is. In this article we will explain how to recognise sensitive skin, what the most common factors causing sensitivity are, how to deal with it and which products you should be using. What is sensitive skin? Sensitive skin is a common term used in the skincare industry. You may find it on product labels or in beauty headlines. However, sensitive skin isn't actually a clinical term. Dermatologists often hear from their clients that they have a sensitive skin type, but sensitivity can be a one-off or recurring, and it depends on many internal and external factors. If your skin is continuously sensitive, it may be due to underlying conditions such as eczema, psoriasis, rosacea or contact dermatitis. Consulting a dermatologist is necessary to treat these conditions effectively. How to recognise sensitive skin? Sensitive skin tends to react negatively to a variety of factors and is easily irritated. You may experience differing degrees of redness, a painful burning or stinging sensation, itching, blistering, rashes, scaling or skin breakouts. This can happen on some areas of the skin or overall. Why do most people experience skin sensitivity? The skin is guarded by a protective outer layer called the epidermis, also referred to as the lipid (fat) barrier. This barrier is often compared to mortar, which, in the construction world, is the cement, fine sand and lime used as a binding material when building a brick or stone wall. Similarly, your skin cells are the bricks and stones and the mortar is the lipid grid binding and protecting your cells. A healthy, well-functioning epidermis must have a good ratio of ceramides and essential fatty acids to function properly.
The main function of the epidermis is to retain moisture and keep the skin hydrated while keeping bacteria, pollutants and allergens out. When the epidermis is damaged, it lets external factors such as UV rays and pollutants penetrate the layer underneath, called the dermis. The dermis is the layer of your skin hosting connective tissue, blood vessels, oil and sweat glands, nerves, hair follicles and, most importantly, the two proteins that give your skin its elasticity and bounciness: collagen and elastin. Skin sensitivity happens when the epidermis is too thin, too weak or damaged, making the skin more vulnerable and permeable and allowing damaging factors to come in contact with the dermis layer. What are the common factors damaging the skin barrier? Both internal and external factors play an important role when it comes to skin sensitivity. Internal factors: • Genetics Genetics do play an important role in skin sensitivity; indeed, you might be born with an underlying condition such as psoriasis. Psoriasis is an autoimmune disease, meaning that your immune system is dysfunctional and your skin cells grow too fast. The cells pile up on top of the skin, resulting in itchiness. Rubbing and scratching itchy patches will damage the epidermis over time. All underlying conditions should be dealt with under the advice of a professional. • Age As we age, our skin produces less oil because the sebaceous glands slow down. Sebum, or oil, prevents the dryness that leads to cracks forming in the epidermis. Sebum also has antibacterial properties, making it the body's first defense against infections. While ageing is irreversible, dryness can be neutralised by maintaining a healthy and moisturised epidermis. External factors: • Environment Environmental factors can be detrimental to the skin. Living in a hot country in most cases means that you are often confronted with air conditioning, whether it's in your car, your office or your home.
A/C units dry the air to reduce humidity levels, and that dry air pulls moisture out of your skin. Similarly, when living in a cold country, the outdoor air is often dry and lacking humidity, and heating systems dry up the air indoors. These conditions prevent the epidermis from functioning properly, as its oil and water levels fall out of balance. UV rays are widely known for sabotaging the skin, but the reason isn't much talked about. UV rays are by far the most damaging factor for the skin, as they generate free radicals. For those who aren't familiar with these yet, free radicals are atoms with an unbalanced number of electrons. When these atoms come in close contact with our body, they grab electrons from our healthy molecules. These aggressors make the skin barrier weaker over time. Wearing SPF creates a shield that will protect your skin against UV rays. • Inadequate skincare routine and habits The most common skincare mistake is over-cleansing with face cleansers containing aggressive ingredients. Over-cleansing will strip your skin of its oil and can create a water-oil imbalance. Similarly, while gentle exfoliation is a good way to get rid of dead skin cells, it often leads to excess rubbing that damages healthy skin cells. Chemical exfoliation, or peeling, is a softer way to tackle dead skin cells as long as the exfoliating agent concentration isn't too high. It is recommended not to exfoliate more than twice a week. Sensitive skin routine 101: Banila Co. Clean It Zero Cleansing Balm Original Using an oil balm is a gentle way to remove makeup, sweat and SPF in the evening. The balm is applied directly with the hands, avoiding the excess eye and skin rubbing that comes with a cotton pad.
COSRX Low pH Good Morning Gel Cleanser This COSRX water-based cleanser is perfectly suited for sensitive skin; its low pH allows soft cleansing without stripping the skin of its natural oils. I'm From Rice Toner This toner is made of 77.78% rice extract. Rice extract is widely known for its soothing properties, which makes it your number one ally. I'm From Rice Toner very gently exfoliates while keeping your skin deeply moisturised and your pH level under control. COSRX Advanced Snail 96 Mucin Power Essence This holy grail essence is undeniably the best for sensitive skin. The main ingredient, snail mucin, is renowned for being gentle on the skin and deeply moisturising. Klairs Rich Moist Soothing Serum This serum is formulated with several fruit extracts and vitamins that reinforce the skin barrier. In addition, this serum will soothe irritated or inflamed skin by actively lowering its temperature. Benton Aloe Soothing Mask Pack This sheet mask is formulated with aloe, one of the best ingredients to soothe and moisturise the skin. Use the Benton Aloe Soothing Mask Pack whenever your skin feels tight or when you are experiencing a burning sensation. Benton Fermentation Eye Cream This fermentation eye cream doesn't contain harsh ingredients. It will immediately moisturise the fragile skin under your eyes and help reduce puffiness and dark circles. Pyunkang Yul Moisture Cream This moisturising cream uses natural ingredients. Its vegan formula absorbs quickly without leaving a greasy feeling on your skin. Lagom Cellus Sun Gel SPF50+ PA+++ This sun protection is the perfect base for makeup.
It protects your skin against UV rays and moisturises at the same time. As opposed to other sunscreens, it doesn't leave a grey or greasy layer on your skin.
Lords Of Tech Dependency injection – superb flexibility within monoliths Dependency injection is a software design technique where a component does not initialise its dependencies; instead, these dependencies are passed into it. If done right, it greatly improves flexibility and testability. This article shows how to do it right and reap the benefits. First, what is it and how does it differ from a more traditional structure? Because speaking of objects A, B and C feels too abstract, let's think of an example app that allows the user to scan the barcode of a snack, instantly showing its calorie content (which may be hard to find on the packaging). It accesses the camera, looks for a barcode, sends it to a server, receives a response about the snack's calories, caches it to preserve server bandwidth and shows the information to the user. And let us pretend it's actually a complex task that cannot be done by a single person in a few days. The natural design would be to have the UI initialise the camera and subscribe to its pictures, calling a function that refreshes the image shown by the UI and feeds it into a barcode scanning library. It would use the barcode scanner's results to query a barcode manager component for the calorie value and show it. The barcode manager would initialise a network connection object and use it to send a query to the server, then save and return the result. With dependency injection, the components are not different, but they are grouped differently. There are interfaces for the camera, barcode scanner, calorie information provider, persistent data storage and networking. The camera interface can be subscribed to for new picture events. The barcode scanner can be subscribed to for barcodes. The calorie information provider has a method taking barcodes and returning nutritional information. The persistent data storage allows storing some data and returning it in a following run.
The networking interface returns some kind of connection object. So the main function (or some other important function) first creates a camera object, then uses it to construct the barcode scanner. Then it creates a networking interface and uses it to construct the calorie information provider. Then it creates the UI, giving its constructor the instances of the camera, barcode scanner and calorie information provider.

    Camera camera = getFirstCamera();
    BarcodeScanner barcodeScanner(camera);
    MainServerConnection network("gimmethecalorieinfo.notarealdomain.com");
    PersistentDataStorage storage("barcodesAndCalories.csv");
    CalorieInformationProvider infoProvider(network, storage);
    UI ui(camera, barcodeScanner, infoProvider);

The code is written in C++, but nothing in it is C++ specific. It could be any object-oriented programming language after some changes to syntax. If you don't understand this code, look here. What is the benefit of this? The most obvious one is development. It's annoying to have to initialise the camera and hold a barcode in front of it every time you try the app out. Having to clear the data saved on disk can also be annoying. We might need to run the app in some special code analyser that butchers performance and is unusable with image processing. The UI itself may be inconvenient for testing other parts of the system. So we might have simple alternative implementations of the interfaces: a fake camera that outputs the same picture periodically or when requested, a fake barcode scanner that simply outputs a given value and a fake storage that just wraps around a hashtable with some dummy barcodes and calorie values. So changing the function that sets everything up can make the system run with far more convenient requirements without touching the code that is being developed.
If some non-essential library is difficult to install in a development environment, is annoyingly big or is incompatible with something else, a fake could allow developing the parts not related to this library without having to install it. This also improves configurability. An alternate barcode scanner library can be hidden behind the same interface, and the setup function can read a configuration file and choose which one to create and provide to the UI. Networking can be disabled by replacing the networking component with a dummy one that always returns connection errors. Automated testing also benefits from this. Most of the app's internals can be tested as unit tests (or almost unit tests), using the fake camera, fake persistent storage and fake network to run without depending on other processes. Another benefit of dependency injection is that initialisation is necessarily very explicit and thus transparent, so surprises like realising too late that some object wasn't initialised are rare. Interfaces, implementations, fakes, mocks With the term interface, I mean any entity that allows multiple implementations to be used in the same way. It can be an interface as in Java, an abstract class, a protocol as in Swift, a C structure with a bunch of function pointers (useful for interaction between components written in different programming languages), a concept as in C++20, etc. It can even be a utility class wrapping around a small object with abstract methods only, adding convenience on top of the overly general abstract methods. If an implementation of the interface needs to be created multiple times, then the injected dependency has to be a factory that returns new instances (in that case, the factory can be the only function provided by a dynamically linked library). The interface's purpose is to allow injecting different implementations into the constructor, so it shouldn't be overly specialised and shouldn't contain dozens of functions that have to be implemented (you can use a facade for that).
Overridable functions come with a certain performance cost, so they shouldn't be used in the innermost loops. This problem can be avoided by giving ranges to these functions and having them do the iteration as well. An implementation is a class doing the work represented by the interface. It may potentially be a large system, composed of many components of its own, requiring its own set of dependencies, or it may even have its own dependency injection system. All of its heaviness (lots of symbols coming from dependencies, slow compilation) is hidden from the rest of the codebase by the interface. A fake also implements the interface, but in a greatly simplified way, without changing the logic of its usage. It has minimal dependencies, communicates with nothing and doesn't contain much code. Its main purpose is to allow running the rest of the program without the component whose interface it implements. A mock is similar to a fake, but its behaviour is meant to follow some testing plan. It typically returns queued replies and checks that the input arguments are as expected. This is very useful in unit tests. There are frameworks for creating mocks for interfaces, but they usually don't implement any internal logic beyond queuing expected calls. Various hybrids between mocks and fakes can exist, for example classes implementing the interface but with everything public, to allow testing the logic performed by the tested component on them rather than verifying individual calls.
Examples of components and their fakes and mocks:
• Connection to a device
  • A mock returns one of a bunch of prepared responses to expected requests
  • A fake (less convenient) has an instance of some part of the device's software and gets a response from it
• Device communication component (has functions that request values from the server)
  • A mock returns one of prepared responses to expected calls
  • A fake simulates some basic behaviour and returns possible results
• Cache for some data
  • A mock expects some data to be inserted and some data to be retrieved
  • A fake wraps around some kind of table with all values inserted so far and can retrieve them (doesn't scale, but gives the same appearance)
• Database (SQL)
  • A mock expects some queries and holds prepared responses for them
  • A fake implementing an ability to parse SQL would be too much for a fake, but it might replace some more complex database by SQLite (possibly erased at startup)
  • This layer should better not use a fake; a fake should be on a layer with functions that do the queries and return results
• Graphical User Interface input – a GUI typically controls the entire program, so it's unlikely to ever be a dependency
  • A mock will replay some prepared sequence of widget inputs from a supposed user and check if the wanted values and widgets are ordered to be displayed
  • A fake makes sense only if there were multiple users and it was useful to simulate the activity of other users
• Command Line Interface input – a CLI typically controls the entire program, so it won't ever be a dependency
  • A mock will replay some prepared sequence of commands from a supposed user and record or check the responses
  • A fake makes sense only if there were multiple users
Don't overdo it Only larger components need to be replaceable. Most classes can be just normal classes. A well designed codebase has a large number of classes and making fakes or mocks for them is too much work for little benefit.
It makes sense to replace only entire layers of code, or components where alternate implementations are reasonably expectable. What about singletons? Singleton is a controversial design pattern that is capable of turning a codebase into some kind of pasta. I am calling it a design pattern because it's better than a bunch of static functions haphazardly manipulating some global variables. Acceptable usages include various loggers – logging is used everywhere and it's annoying to put references to it into everything. Recording usage or performance statistics can also be considered a kind of logging. The log may be readable internally in the program to show some information to the user (however, the logger should not directly call a GUI function). A singleton does not need to be injected into objects, but it can be written in a way that allows replacing it with fakes or alternative implementations (which may be useful if a logger needs to communicate with an external process or execute files from the Internet). While a typical singleton has a private static instance that is created when first accessed, it may also be an abstract class whose instance is created somewhere else (or a default one that can be replaced by an alternative). This allows some privileged function to set up the singleton as needed, but it has to be done for the entire process. Circular dependencies Circular dependencies will sometimes happen. Suppose class A uses class B and caches some information; then, if class B changes the information class A may have cached, it needs to call a method of class A. This problem can be solved by adding a method to class B for registering callbacks that are called when needed (when data used by the cache changes), and having A use it to register its own method (the one that clears the cache). The callback may also be a small interface defined near B and implemented by A. What about microservices? Microservices are often used in place of dependency injection.
They are not bad by themselves and can be used together with dependency injection. However, overusing them breaks a system that could be monolithic into small pieces, making the startup extremely complicated and dependent on scripting that grows, becomes a complex system of its own with dependencies and Docker containers, and ends up heavily limiting development flexibility and ease of use. With dependency injection, the whole setup variability can exist within a single executable in a single process. Of course, if there is a need to split the system into multiple processes (e.g. it needs to run on multiple computers), then microservices are a totally good approach. It's much better than launching ssh sessions to execute commands or similar approaches. More detailed example: A video game I am not going to go into details about what game I have in mind; let's just say that it has singleplayer and networked multiplayer modes, some graphics (not text based), some combat, and doesn't use a game engine that would prevent dependency injection (which many do). It would probably break down into components like these:
• User input
• GUI
• User commands
• AI
• Communication
• Game state
• Networking
• Game mechanics
• Visualisation
• Resources
• Sound
• Renderer
• Settings
• Log
The game state is an internal representation of the situation in the game, a rough equivalent of the stuff on the table in a tabletop game. It is influenced mainly by game mechanics, but also by user commands and AI to start or queue their actions, by communication so that the actions of other players can influence the game, by cutscenes or by a console. It should also contain information about movement and planned actions, so that the situation sent by other users a short time ago can be extrapolated to the present state.
Its boundary with game mechanics may be somewhat difficult to determine, because some interaction may be derived from the situation. For example, if a character positioned in water is automatically slower, the decrease in speed is a game mechanic, yet it can be determined from the game state (in this situation, the game state should contain the information that the character is in water and that he is slower, but game mechanics decides that he is slower and how much slower he is). Serialising it (or some of it) saves the game and deserialising it loads the game, which is a useful feature for debugging purposes even if the game has a checkpoint system. It is not one big class; it contains many classes internally to represent characters, items, weapons, attacks and so on. Because it mostly stores relatively small volumes of data in RAM and doesn't depend on much, it doesn't really need to be faked or mocked.

    struct Attack : StateObject { // The StateObject parent class helps synchronisation
        float speed = 1.0;
        float damage = 25.0;
        // blablabla
    };

    struct Obstacle : StateObject {
        FloatVector coordinates = {0, 0, 0};
        FloatVector speed = {0, 0, 0};
        // ...
    };

    struct Combatant : Obstacle {
        float health = 1.0;
        float maxHealth = 100;
        std::vector<Attack*> attacks = {};
        std::vector<Attack*> attackingQueue = {};
        Attack* currentAttack = nullptr;
        // blablabla
    };

    struct Projectile : StateObject {
        Attack* attack = nullptr;
        // blablabla
    };

    struct GameState : StateObject {
        std::vector<Combatant*> combatants = {};
        std::vector<Projectile*> projectiles = {};
        // blablabla
    };

To focus on the topic, these code stubs mostly neglect threading, serialisation and memory management. All of this would need some additional wrapper classes. They are in C++, but the idea would work in other languages with some syntactic changes. If you are too unfamiliar with C++ and don't understand these declarations, look here. The game mechanics is the component taking care of the rules of the game.
It controls the interaction of actors appearing in the game. It contains classes wrapping around the classes from the game state, extending them with functionality related to game mechanics; these don't need to be faked or mocked. It takes input from time ticks and applies its changes to the game state. Faking this is useful for viewing the game world without actually playing the game, performing movement actions from the player's input but nothing else. A different fake may be used to record the decisions of the AI when unit testing it.

    class CombatantMechanics : public Mechanic {
        Combatant* state = nullptr;
        // blablabla
        void tick(milliseconds time) override {
            state->position += state->movement * time;
            if (state->currentAttack == nullptr && state->attackingQueue.size() > 0) {
                state->currentAttack = state->attackingQueue.front();
                // blablabla
            }
        }
        // ...
    };

    class GameMechanics : public IGameMechanics {
        GameState& state;
        std::vector<Mechanic*> mechanics = {};
        // ...
        void updateMechanics() {
            // Somehow finds what objects were added and adds mechanics for them
            // Somehow finds what objects were removed and removes their mechanics
        }
        GameMechanics(GameState& state) : state(state) {} // In a different language, it would be this.state = state
        void tick(milliseconds time) override {
            for (auto mechanic : mechanics) {
                // blablabla
            }
        }
    };

Communication works with the game state, updating it according to the actions of other users, and sends them the changes made by the player. It uses the networking component to communicate. This component is absent in single player, where no remote actor changes the game state. If the program is configured as a host, its game state is propagated by this component to other players, after accepting the changes done to the characters under their control. If it's connected to a host or a server, then it applies all changes to the local game state.
Turn-based games might do this differently: use no server or host, but propagate the players' inputs and have all clients compute all the mechanics independently, perfectly synchronised. If the rest of the game can run without this component, then it doesn't even need to be faked; otherwise it might be faked to provide a singleplayer mode.

```cpp
class ClientCommunication {
    GameState& gameState;
    Networking& networking;
    Side playerSide;
    void updateState(Message& message) {
        // Identifies what part of the state it updates and replaces the value
    }
public:
    ClientCommunication(GameState& gameState, Networking& networking, Side playerSide)
            : gameState(gameState), networking(networking), playerSide(playerSide) {
        networking.subscribeToClientUpdates([&] (const Message& message) {
            // blablabla
        });
    }
    void tick(milliseconds time) {
        // A smarter code would of course check if there is something new
        for (auto combatant : gameState.combatants) {
            if (combatant->side == playerSide && combatant->player()) {
                // ...
            }
        }
    }
};
```

The networking component sends the messages from communication to other users or to the server. It should not care about the meaning of the messages, but there is no clear distinction whether it or communication serialises them. Faking this allows simulating the actions of another player without actually connecting two instances, or connecting two instances within the same process without a real network.

```cpp
struct INetworking {
    virtual Subscription subscribeToClientUpdates(std::function<void(const Message&)> reaction) = 0;
    virtual void sendUpdate(const Message& update) = 0;
    // For some transactions that need confirmations
    virtual Subscription subscribeToRequests(std::function<Message(const Message&)> callback) = 0;
    virtual Message requestResponse(const Message& update) = 0;
    // ...
};
```

Visualisation takes care of the graphical representation of the game state. There are many libraries that do most of the work, but they are too universal to be directly used by the game state.
There has to be a layer specialised for the game in question. If the game state says that a character is wearing a blue shirt, then visualisation decides that the shirt mesh is attached to the character's skeleton and that it uses the shirt material in its blue colour variant (the material should be abstract enough, because this layer doesn't know what kind of shaders the renderer uses). If the game state's data say that character A is attacking and that the attack started 0.1 seconds ago and is going to take 0.3 seconds, then visualisation has to decide which animation represents the attack and what frame that animation is at at that time. This component would contain lots of classes representing the visuals of the various objects in the game, possibly with dependency injection of its own. If building the scene is not fast enough, it may need to be able to hold multiple scenes and switch between them quickly. It is needed by the game mechanics for collision detection and possibly for determining what surfaces players are standing on. It's also needed to determine what a player clicked on. Faking this is not particularly useful, because there is no better way to tell what's going on in the game (debugging would benefit more from making walls transparent). A fake could provide simplified collision detection by treating only height 0 as ground and maybe simplifying the actors' shapes to spheres, which is quite simple to compute.

```cpp
class Visualisation : public IVisualisation {
    Renderer& renderer;
    GameState* gameState = nullptr;
    // ... the information about the scene
public:
    Visualisation(Renderer& renderer, GameState* gameState = nullptr)
            : renderer(renderer), gameState(gameState) {}
    void updateScene(milliseconds time) {
        // Run while the GPU is rendering the scene
        // Alters the scene in Renderer according to the situation in gameState
    }
    std::optional<FloatVector> checkCollision(Obstacle& moving, milliseconds time) override {
        // Check if the object would bump into something
    }
    Obstacle* objectAt(ScreenCoordinates coordinates) override {
        // Picks an object on screen, returns null if there's nothing or it's not interactible
    }
    // ...
};
```

Resources are needed by the visualisation part to decide about collisions and to provide the data needed by the visuals and sound. It might also handle data about levels, cutscenes, characters and so on. It's not a trivial load from disk, because the game needs to preload data to avoid short waits in rendering (stuttering) when they are suddenly needed, and to unload data to avoid filling up the RAM. A fake resources component would provide just a few basic resources (for example, one for each type) and keep everything in memory, or avoid loading textures to quicken loading, but it might not be needed at all.

```cpp
struct IResources {
    virtual Resource<Model> getModel(std::string name) = 0;
    virtual Resource<Texture> getTexture(std::string name) = 0;
    virtual Resource<Mesh> getMesh(std::string name) = 0;
    virtual Resource<Sound> getSound(std::string name) = 0;
    // ...
};
```

The renderer is typically part of a graphics library, providing access to the GPU driver. These tend to be replaceable regardless of the actual design of the library, because the game may need to use Vulkan, DirectX or Metal for graphical acceleration. A fake renderer doing nothing is also useful for servers, so that they don't need a GPU (the visualisation is still needed for collision detection). Only visualisation is going to use it.

The sound component makes the user hear sounds according to the visualisation, and plays the music.
Its fake would simply produce no sound, and possibly give a fixed duration for sounds if it's meant to provide that.

```cpp
struct ISoundSystem {
    virtual void playSound(std::string name, FloatVector origin) = 0;
    virtual void playUiSound(std::string name) = 0;
    virtual std::function<void()> playCancellableSound(std::string name, FloatVector origin) = 0;
    // ...
};
```

The AI component controls bots or non-playable characters. It inputs its actions to the game state so that the game mechanics processes them, identically to user commands. It may need the game mechanics to estimate the value of different targets. It takes input from the game state and possibly from visualisation (to decide if the character can see some other characters and to determine paths to targets). It would probably be a complex class, possibly with many subclasses and possibly with dependency injection of its own. The game should be able to run without this (with all characters idle), so the only reason to fake this would be to reduce CPU cost by having the AI simply attack if a valid target is nearby, maybe going towards it if there is a direct path.

```cpp
class Ai : public IAi {
    GameState& gameState;
    // Some internals
public:
    Ai(GameState& gameState) : gameState(gameState) {}
    void ponder() override {
        // Do the AI stuff, queuing some movement and actions at the end
    }
};
```

User commands does the same as the AI, but makes decisions according to the player rather than the game state. It takes the user's commands from user input. Keybinding may be done on this level. It probably needs alternative implementations for joysticks, VR controllers etc. Its fake is the AI.
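The cheap fake AI mentioned above (simply attack if a valid target is nearby, otherwise walk straight towards it) could be sketched like this; the simplified Vec2 and Bot types are stand-ins, not the article's state classes:

```cpp
#include <cmath>

// Simplified stand-in types for this sketch.
struct Vec2 { float x, y; };
struct Bot {
    Vec2 position;
    Vec2 movement;
    bool attacking;
};

// The cheap fake: attack if the target is within range, otherwise move
// straight towards it (assuming a direct path exists).
void fakeAiPonder(Bot& bot, const Vec2& target, float attackRange) {
    float dx = target.x - bot.position.x;
    float dy = target.y - bot.position.y;
    float distance = std::sqrt(dx * dx + dy * dy);
    if (distance <= attackRange) {
        bot.attacking = true;
        bot.movement = {0, 0};
    } else {
        bot.attacking = false;
        bot.movement = {dx / distance, dy / distance}; // unit step towards the target
    }
}
```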
```cpp
class UserCommand : public IAi {
    GameState& gameState;
    UserInput& userInput;
    Combatant* controlled = nullptr;
    Subscription mainAttack = {};
public:
    UserCommand(GameState& gameState, UserInput& userInput)
            : gameState(gameState), userInput(userInput) {
        mainAttack = userInput.subscribeToMainAttack([&] () {
            // Queue the main attack here
        });
    }
    void setCombatant(Combatant* assigned) { controlled = assigned; }
    void ponder() override {
        controlled->speed = userInput.getMovement();
        // ...
    }
};
```

User input allows the GUI and user commands to know what buttons the user is pressing, how he's moving the mouse and where he's clicking. A joystick may or may not have a different user input class. This may need to be mocked in unit tests to go through prepared sequences of user inputs, to test whether the user commands component produces the correct actions.

```cpp
struct IUserInput {
    virtual FloatVector getMovement() = 0;
    virtual Subscription subscribeToMainAttack(std::function<void()> callback) = 0;
    virtual Subscription subscribeToSecondaryAttack(std::function<void()> callback) = 0;
    // ...
};
```

The GUI is a high-level component that injects widgets into the visualisation and may interpret clicks on them or button presses when they are active. It can display information from the game state that might not be very visible in the graphical output but that the player should be aware of (such as the health of characters), using visualisation to determine where to position it and what to show, so that it matches the scene. It can make various changes to the game state and settings, and can display some contents of the log, which places it high in the hierarchy. It decides if the user's actions propagate to user commands. It may have interfaces and fakes of its own to simulate user clicks on widgets. A fake GUI would display nothing and propagate all actions to user commands, if there is a need for that.

Settings is a set of mostly data classes representing the configuration.
It takes care of persisting them between runs and possibly of notifying other classes about changes. It should not define the specific variables used by individual components. If it doesn't do anything else, it doesn't need to be faked.

Log is (probably) a singleton class used by everything to collect information about what is going on. It may have severity levels; some messages should be directly viewable through the GUI, others just in a file or in a console. Some statistical information (such as the framerate) should have a specific non-text buffer. Being a class that mostly holds data, and crucial to have functioning while debugging, it doesn't really need to be faked.

Appendix: Look here if you can't read C++

For those unfamiliar with C++, here are some features of C++ that differ from many other object-oriented languages:

• Declaring a variable of a class type constructs it, using the arguments in parentheses behind the declaration
• Variables are deep-copied by default; an ampersand after the type (BarcodeScanner&) is needed to make the declared variable a reference
• A pointer is sometimes needed as a kind of nullable and changeable reference (declared with an asterisk, as BarcodeScanner*); access is then through -> instead of .
• Constructors don't have an identifying keyword; they look like functions with the same names as their classes, with no return value
• Methods are final by default; to make a method overridable, it has to be declared with the virtual keyword
• Generic (template) functions cannot be overridable
• Abstract classes can be generic, though
• The usual extendable array type is std::vector
• It's the fastest container type, so it's often used for any type of collection
• Yes, despite having O(n) complexity in many operations, it beats other structures if there aren't hundreds of elements
• There is no distinction between abstract classes and interfaces
• A class can have any number of parent classes (it's therefore possible to inherit from two classes with the same parent)
• Public attributes are not considered bad in data-only classes
• Using the struct keyword instead of the class keyword makes the contents public by default

Appendix 2: A C++ specific problem with interfaces

In C++, it's not possible to return an abstract class by value, which makes abstract factories less convenient. A function returning an object constructs it at the location of the return value, so the exact type to be returned must be in the function's signature. This can be avoided by dynamically allocating the object (e.g. with new or something smarter) and returning a pointer, but it comes at the performance cost of worsened memory locality and of the allocation itself, which may not be negligible (other languages typically dynamically allocate all objects).
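A small sketch of the problem and of the dynamic-allocation workaround; the renderer names here are made up for illustration:

```cpp
#include <memory>

struct IRenderer {
    virtual int drawCalls() = 0;
    virtual ~IRenderer() = default;
};

struct VulkanRenderer : IRenderer {
    int drawCalls() override { return 1; }
};

// IRenderer makeRenderer();  // does not compile: IRenderer is abstract,
//                            // so it cannot be constructed at the return location

// The workaround: allocate dynamically and return a (smart) pointer instead.
std::unique_ptr<IRenderer> makeRenderer() {
    return std::make_unique<VulkanRenderer>();
}
```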
There are multiple solutions, none of which works in all cases, so sometimes dynamic allocation has to be used:

• Use the factory as a template argument (you may use an interface at some level to allow one piece of code to use all instantiations)
• Use a fixed-size buffer to allocate it, and use dynamic allocation only if it doesn't fit (std::function does this)
• Don't return it; have the factory create it and call a callback on it: factory.sender([&] (Sender& sender) { /* use sender here */ });
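That last callback approach, fleshed out into a compilable sketch; Sender and TcpSender are assumed names for illustration:

```cpp
#include <functional>

struct Sender {
    virtual int send() = 0;
    virtual ~Sender() = default;
};

struct TcpSender : Sender {
    int send() override { return 3; } // pretend 3 bytes were sent
};

struct Factory {
    // The factory never returns the object: it constructs the concrete type
    // on its own stack and passes a reference to the callback, so no dynamic
    // allocation is needed and the caller still sees only the interface.
    void sender(std::function<void(Sender&)> use) {
        TcpSender concrete;
        use(concrete);
    }
};
```

The concrete type never crosses the factory's boundary, and the object's lifetime is naturally scoped to the callback.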
Contexts: insurance, risk theory, premium, corporate finance

1. Insurance Underwriting
2. Financial Underwriting

The term underwriting originated in the UK, in the coffeehouses of the era when ships of the empire sallied forth and returned with riches from exotic places. Just as now, insurance was provided for those voyages. The insurance contract required the insured (or whoever was paying the premium) to literally write their name and sign underneath the text of the insurance contract. I imagine this was done to enable the insurer to weasel out of claims by insisting on the strictest interpretation of the contract.

In relation to the issuance of securities, underwriting serves as insurance because it ensures that the issuer of the security will receive some money for its securities. In addition to the underwriting fee paid to the investment bank, there is also a fee if the obligation crystallizes (meaning the contract is called in because the market did not subscribe to the issue), called the crystallization fee. In my experience, it is the largest fee charged by investment banks, sometimes as high as 8% of the amount underwritten. I understood the reasoning behind the steep fees as being a deterrent to issuers from pushing for unrealistic valuations. However, where the investment bank allows such a valuation and the underwriting obligation crystallizes, the fee it charges might not be enough to cover the costs it would have to bear for holding those securities.

Iron Noder 2020, 17/30

Un"der*writ`ing, n. The business of an underwriter. © Webster 1913.
Part I: Fish Farming is Feeding the World, But at What Cost?

Copyright Fabio Nascimento / The Outlaw Ocean Project

Published Aug 29, 2021 10:00 PM by Ian Urbina

[This article appeared first in The New Yorker and is reproduced here courtesy of The Outlaw Ocean Project.]

Gunjur, a town of some fifteen thousand people, sits on the Atlantic coastline of southern Gambia, the smallest country on the African continent. During the day, its white-sand beaches are full of activity. Fishermen steer long, vibrantly painted wooden canoes, known as pirogues, toward the shore, where they transfer their still-fluttering catch to women waiting at the water's edge. The fish are hauled off to nearby open-air markets in rusty metal wheelbarrows or in baskets balanced on heads. Small boys play soccer as tourists watch from lounge chairs. At nightfall, work ends and the beach is dotted with bonfires. There are drumming and kora lessons; men with oiled chests grapple in traditional wrestling matches.

Hike five minutes inland, and you'll find a more tranquil setting: a wildlife reserve known as Bolong Fenyo. Established by the Gunjur community in 2008, the reserve is meant to protect seven hundred and ninety acres of beach, mangrove swamp, wetland, savannah, and an oblong lagoon. The lagoon, a half mile long and a few hundred yards wide, has been a lush habitat for a remarkable variety of migratory birds as well as humpback dolphins, epaulet fruit bats, Nile crocodiles, and callithrix monkeys. A marvel of biodiversity, the reserve has been integral to the region's ecological health—and, with hundreds of birders and other tourists visiting each year, to its economic health, too.

But on the morning of May 22, 2017, the Gunjur community discovered that the Bolong Fenyo lagoon had turned a cloudy crimson overnight, dotted with floating dead fish.
“Everything is red,” one local reporter wrote, “and every living thing is dead.” Some residents wondered if the apocalyptic scene was an omen delivered in blood. More likely, ceriodaphnia, or water fleas, had turned the water red in response to sudden changes in pH or oxygen levels. Locals soon reported that many of the birds were no longer nesting near the lagoon. A few residents filled bottles with water from the lagoon and brought them to the one person in town they thought might be able to help—Ahmed Manjang. Born and raised in Gunjur, Manjang now lives in Saudi Arabia, where he works as a senior microbiologist. He happened to be home visiting his extended family, and he collected his own samples for analysis, sending them to a laboratory in Germany. The results were alarming. The water contained double the amount of arsenic and forty times the amount of phosphates and nitrates deemed safe. The following spring, he wrote a letter to Gambia’s environmental minister, calling the death of the lagoon “an absolute disaster.” Pollution at these levels, Manjang concluded, could only have one source: illegally dumped waste from a Chinese fish-processing plant called Golden Lead, which operates on the edge of the reserve. Gambian environmental authorities fined the company twenty-five thousand dollars, an amount that Manjang described as “paltry and offensive.” Golden Lead is one outpost of an ambitious Chinese economic and geopolitical agenda known as the Belt and Road Initiative, which the Chinese government has said is meant to build goodwill abroad, boost economic cooperation, and provide otherwise inaccessible development opportunities to poorer nations. As part of the initiative, China has become the largest foreign financier of infrastructure development in Africa, cornering the market on most of the continent’s road, pipeline, power plant and port projects.  
In 2017, China cancelled fourteen million dollars in Gambian debt and invested thirty-three million to develop agriculture and fisheries, including Golden Lead and two other fish-processing plants along the fifty-mile Gambian coast. The residents of Gunjur were told that Golden Lead would bring jobs, a fish market, and a newly paved, three-mile road through the heart of town.

Golden Lead and the other factories were rapidly built to meet an exploding global demand for fishmeal—a lucrative golden powder made by pulverizing and cooking fish. Exported to the United States, Europe, and Asia, fishmeal is used as a protein-rich supplement in the booming industry of fish farming, or aquaculture. West Africa is among the world's fastest-growing producers of fishmeal: more than fifty processing plants operate along the shores of Mauritania, Senegal, Guinea Bissau, and Gambia. The volume of fish they consume is enormous: one plant in Gambia alone takes in more than seven thousand five hundred tons of fish a year, mostly of a local type of shad known as bonga—a silvery fish about ten inches long.

Gambian fishermen holding fistfuls of fishmeal (Copyright Fabio Nascimento / The Outlaw Ocean Project)

For the area's local fishermen, most of whom toss their nets by hand from pirogues powered by small outboard motors, the rise of aquaculture has transformed their daily working conditions: hundreds of legal and illegal foreign fishing boats, including industrial trawlers and purse seiners, crisscross the waters off the Gambian coast, decimating the region's fish stocks and jeopardizing local livelihoods.

At the Tanji fish market in the summer of 2019, Abdul Sisai stood at a table offering four sickly-looking catfish for sale. The table swarmed with flies, the air was thick with smoke from nearby curing sheds, and menacing seagulls dive-bombed for scraps. Sisai said that bonga had been so plentiful two decades ago that in some markets it had been given away for free.
Now it costs more than most local residents can afford. He supplements his income by selling trinkets near the tourist resorts in the evenings. “Sibijan deben,” Sisai said in Mandinka, one of the major languages in Gambia. Locals use the phrase, which refers to the shade of the tall palm tree, to describe the effects of extractive export industries: the profits are enjoyed by people far from the source—the trunk. In the past several years, the price of bonga has increased exponentially, according to the Association for the Promotion and Empowerment of Marine Fishers, a Senegalese-based research-and-education group. Half the Gambian population lives below the international poverty line—and fish, primarily bonga, accounts for half of the country’s animal-protein needs. After Golden Lead was fined, in 2019, it stopped releasing its toxic effluent directly into the lagoon. Instead, it ran a long wastewater pipe under a nearby public beach, dumping waste directly into the sea. Swimmers soon started complaining of rashes, the ocean grew thick with seaweed, and thousands of dead fish washed ashore, along with eels, rays, turtles, dolphins, and even whales. Residents burned scented candles and incense to combat the rancid odor coming from the fish meal plants, and tourists wore white masks. The stench of rotten fish clung to clothes, even after repeated washing. Jojo Huang, the director of the plant, has said publicly that the facility follows all regulations and “does not pump chemicals into the sea.” The plant has benefited the town, she told The Guardian. In March 2018, about a hundred and fifty local shopkeepers, youth and fishermen, wielding shovels and pickaxes, gathered on the beach to dig up the pipe and destroy it. Two months later, with the government’s approval, workers from Golden Lead installed a new pipe, this time planting a Chinese flag alongside it. The gesture carried colonialist overtones. One local called it “the new imperialism.” Manjang was outraged. 
“It makes no sense!” he told me, when I visited him in Gunjur at his family compound, an enclosed three-acre plot with several simple brick houses and a garden of cassava, orange, and avocado trees. Behind Manjang’s thick-rimmed glasses, his gaze is gentle and direct as he speaks urgently about the perils facing Gambia’s environment. “The Chinese are exporting our bonga fish to feed it to their tilapia fish, which they’re shipping back here to Gambia to sell to us, more expensively—but only after it’s been pumped full of hormones and antibiotics.” Adding to the absurdity, he noted, is that tilapia are herbivores that normally eat algae and other sea plants, so they have to be trained to consume fish meal. Manjang contacted environmentalists and journalists, along with Gambian lawmakers, but was soon warned by the Gambian trade minister that pushing the issue would only jeopardize foreign investment. Dr. Bamba Banja, the head of the Ministry of Fisheries and Water Resources, was dismissive, telling a local reporter that the awful stench was just “the smell of money.” Global demand for seafood has doubled since the nineteen-sixties. Our appetite for fish has outpaced what we can sustainably catch: more than eighty per cent of the world’s wild fish stocks have collapsed or are unable to withstand more fishing. Aquaculture has emerged as an alternative—a shift, as the industry likes to say, from capture to culture. The fastest-growing segment of global food production, the aquaculture industry is worth a hundred and sixty billion dollars and accounts for roughly half of the world’s fish consumption. Even as retail seafood sales at restaurants and hotels have plummeted during the pandemic, the dip has been offset in many places by the increase in people cooking fish at home. The United States imports eighty percent of its seafood, most of which is farmed. 
The bulk of that comes from China, by far the world’s largest producer, where fish are grown in sprawling landlocked pools or in pens offshore spanning several square miles. Aquaculture has existed in rudimentary forms for centuries, and it does have some clear benefits over catching fish in the wild. It reduces the problem of bycatch—the thousands of tons of unwanted fish that are swept up each year by the gaping nets of industrial fishing boats, only to suffocate and be tossed back into the sea. And farming bivalves—oysters, clams, and mussels—promises a cheaper form of protein than traditional fishing for wild-caught species. In India and other parts of Asia, these farms have become a crucial source of jobs, especially for women. Aquaculture makes it easier for wholesalers to ensure that their supply chains are not indirectly supporting illegal fishing, environmental crimes, or forced labor. There’s potential for environmental benefits, too: with the right protocols, aquaculture uses less fresh water and arable land than most animal agriculture. Farmed seafood produces a quarter of the carbon emissions per pound that beef does, and two-thirds of what pork does. Still, there are also hidden costs. When millions of fish are crowded together, they generate a lot of waste. If they’re penned in shallow coastal pools, the solid waste turns into a thick slime on the seafloor, smothering all plants and animals. Nitrogen and phosphorus levels spike in surrounding waters, causing algal blooms, killing wild fish, and driving away tourists. Bred to grow faster and bigger, the farmed fish sometimes escape their enclosures and threaten indigenous species. Even so, it’s clear that if we are to feed the planet’s growing human population, which depends on animal protein, we will need to rely heavily on industrial aquaculture. Leading environmental groups have embraced this idea. 
In a 2019 report, the Nature Conservancy called for more investment in fish farms, arguing that by 2050 the industry should become our primary source of seafood. Many conservationists say that fish farming can be made even more sustainable with tighter oversight, improved methods for composting waste, and new technologies for recirculating the water in on-land pools. Some have pushed for aquaculture farms to be located farther from shore in deeper waters with faster and more diluting currents.  The biggest challenge to farming fish is feeding them. Food constitutes roughly seventy per cent of the industry’s overhead, and so far the only commercially viable source of feed is fish meal. Perversely, the aquaculture farms that produce some of the most popular seafood, such as carp, salmon, or European sea bass, actually consume more fish than they ship to supermarkets and restaurants. Before it gets to market, a “ranched” tuna can eat more than fifteen times its weight in free-roaming fish that has been converted to fishmeal. About a quarter of all fish caught globally at sea end up as fish meal, produced by factories like those on the Gambian coast. Researchers have identified various potential alternatives—including human sewage, seaweed, cassava waste, soldier-fly larvae, and single-cell proteins produced by viruses and bacteria—but none is being produced affordably at scale. So, for now, fish meal it is. The result is a troubling paradox: the seafood industry is ostensibly trying to slow the rate of ocean depletion, but by farming the fish we eat most, it is draining the stock of many other fish—the ones that never make it to the aisles of Western supermarkets. Gambia exports much of its fish meal to China and Norway, where it fuels an abundant and inexpensive supply of farmed salmon for European and American consumption. Meanwhile, the fish Gambians themselves rely on for their survival are rapidly disappearing. 
Ian Urbina is the director of The Outlaw Ocean Project, a non-profit journalism organization based in Washington DC that focuses on environmental and human rights concerns at sea globally. This article appears here courtesy of The Outlaw Ocean Project. In Part II, the story will continue with Gambia's enforcement efforts and Sea Shepherd's supporting role. 
Reducing Nitrogen Fertilizer Inputs to Irrigated Pastures and Hayfields by Interseeding Legumes

Project Overview
Project Type: Professional + Producer
Funds awarded in 2010: $49,849.00
Projected End Date: 12/31/2012
Region: Western
State: Colorado
Principal Investigator: Joe Brummer, Colorado State University

• Animals: bovine
• Animal Products: dairy
• Animal Production: pasture fertility, pasture renovation, feed/forage
• Crop Production: conservation tillage
• Farm Business Management: budgets/cost and returns
• Production Systems: general crop production
• Sustainable Communities: sustainability measures

Suppressing grasses with glyphosate prior to interseeding resulted in the most consistent legume establishment. Close mowing to simulate heavy grazing generally did not result in improved establishment. Of the five legumes evaluated, alfalfa established the best in the glyphosate treatment in Colorado, increasing yield by over a ton per acre. In Idaho, establishment was more variable, with red clover establishing well regardless of suppression treatment. No legumes established in Oregon due to heavy rodent activity. This study highlighted the importance of suppressing the existing grasses and choosing a vigorous legume species for interseeding to reduce the risk of seeding failure.

Forage producers who primarily manage irrigated grass pastures or hayfields are struggling with how to maintain yields given the current price of nitrogen fertilizers. The price of nitrogen fertilizers has increased from $0.20 per pound of nitrogen a few years ago to a high of $0.80 to $1.10 per pound in 2008, with current prices in the range of $0.60 to $0.65 per pound. Nitrogen is the number one limiting nutrient for grass production, so it is essential to apply it if producers want to maintain yields. Some producers have started to use manures and composts as alternative sources of fertilizer in areas where they are available.
The high rates of application required, coupled with high transportation costs, limit the use of these sources. Another alternative is to introduce various legumes into grass-dominated stands by interseeding. Legumes are known for their unique ability to fix nitrogen from the atmosphere through a symbiotic relationship between the plant and specific bacteria that infect the roots, forming nodules. Legume plants themselves benefit from the fixation of nitrogen, but the associated grass plants can also benefit as the nodules periodically slough, decompose, and release nitrogen into the soil for uptake by other plants. Productivity of grass-legume stands will generally never equal that of a grass-only stand adequately fertilized with nitrogen. However, the loss in productivity is offset by the higher-quality forage that is produced by having a legume in the mix.

Interseeding generally involves the use of specialized drills that cut through the existing sod layer and place the seed in contact with the soil at the proper depth. Although seed can be broadcast into the existing stand, as a general rule, establishment success of forages is greater with drilling. With interseeding of legumes, the key is suppressing the existing grasses long enough for the new plants to establish. Only minimal establishment of the seeded species can be expected without suppression of the existing vegetation. Various methods have been tried over the years with varying degrees of success. Suppression methods that have been tried include close grazing with livestock, light disking or rototilling, flail mowing, and various herbicides such as Roundup or Paraquat. Even with suppression of the existing grasses, it sometimes takes two to three years before the interseeded legumes reach full productivity. On the positive side, the cost of interseeding is considerably less compared to complete renovation of a stand using tillage.
Additionally, a full season's production is generally not lost, since the existing plants tend to recover quickly following suppression and can be harvested for hay or lightly grazed during the year of seeding. Although basic knowledge of how to interseed pastures and hayfields currently exists, producers still have a number of questions on how to successfully implement these techniques. Without answers to their questions, they are reluctant to implement these practices. Questions commonly asked include:

1. Which brand or type of drill is best to use?
2. Which legume species are easiest to establish?
3. Once established, which legume species are most persistent?
4. What time of year should I interseed to achieve the best results?
5. How do I effectively suppress the existing grasses to ensure establishment of the legume I am interseeding?
6. How is the yield and quality of my forage stand affected by interseeding a legume?

This project attempted to answer a number of these questions posed by producers.

Project objectives:

The overall objective of this project is to demonstrate to producers how legumes can be interseeded into existing grass-dominated pastures and hayfields, thereby increasing the quantity and quality of forage produced while reducing nitrogen fertilizer inputs. Both on-farm demonstration and small plot trials will be used to achieve the following specific objectives:

Objective 1 - Evaluate the establishment success of various legume species, including alfalfa, birdsfoot trefoil, red clover, white clover and sainfoin, interseeded into grass-dominated pastures and hayfields with and without mechanical, herbicidal and/or animal suppression of the existing vegetation.

Objective 2 - Evaluate the effects of introducing legumes into grass-dominated pastures and hayfields on forage yield and quality compared to fertilizing the existing grass-dominated stand with nitrogen.
Objective 3 - Using inputs of seed, herbicide, fertilizer, labor, machinery, etc. associated with Objective 1 and yield and quality outputs from Objective 2, conduct a basic economic analysis comparing pastures or hayfields that have adequate (>20% by weight) legume composition to straight grass stands fertilized with nitrogen.

Objective 4 - Disseminate results of this project to other producers through such means as field days, workshops, conferences, online reports and other media, and the production of a how-to manual.
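The kind of comparison called for in Objective 3 can be sketched numerically. All prices, rates and the four-year stand life below are placeholder assumptions for illustration only, not data or results from this project:

```python
# Hypothetical per-acre cost comparison for Objective 3.
# Every dollar figure and rate here is an assumed placeholder.

def per_acre_cost(seed=0.0, herbicide=0.0, fertilizer=0.0, machinery=0.0, labor=0.0):
    """Sum annual per-acre input costs (US$/acre)."""
    return seed + herbicide + fertilizer + machinery + labor

# Grass-only stand: assume 100 lb N/acre at $0.60/lb, plus spreading costs.
grass_n = per_acre_cost(fertilizer=100 * 0.60, machinery=8.0, labor=5.0)

# Grass-legume stand: one-time interseeding cost amortized
# over an assumed 4-year productive stand life.
interseed_total = per_acre_cost(seed=25.0, herbicide=12.0, machinery=15.0, labor=10.0)
grass_legume = interseed_total / 4

print(f"N-fertilized grass:            ${grass_n:.2f}/acre/yr")
print(f"Interseeded legume (amortized): ${grass_legume:.2f}/acre/yr")
```

A full analysis would also fold in the yield and forage-quality differences from Objective 2; this sketch only shows how the input side of the ledger lines up.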
News - June 8, 2020 World Oceans Day: solutions to protect our oceans! Written by Tristan Lebleu 5 min read

Oceans cover more than 70 percent of the surface of our Planet. They are pillars of life on Earth: they provide food and oxygen, absorb carbon dioxide (about a quarter of the amount we release), regulate the weather and are home to wildlife. Every year since 2002, June 8th has been celebrated around the globe as World Oceans Day, a date to recognize and honor the importance of our oceans. Protecting oceans is more important than ever, as they are increasingly threatened by human activities. The 2020 edition of World Oceans Day is marked by the 30x30 campaign, a “call on world leaders to protect 30% of the world's ocean by 2030”. From stopping plastic pollution in rivers and increasing the use of renewable energy to using fewer chemicals in our industrial and agricultural processes and saving water in our households, there are thousands of ways to protect oceans, and clean technologies can help us get there. On this special occasion, here are a few clean and profitable solutions labelled by the Solar Impulse Foundation to celebrate World Oceans Day:

The Interceptor™, a solution to stop plastic pollution
Boyan Slat, the Dutch inventor behind The Ocean Cleanup project, impressed the world when he announced his mission to rid the Planet of the “Great Pacific Garbage Patch”, a massive accumulation of plastic debris in the Pacific Ocean. But his newest mission is just as ingenious: a solar-powered boat which can intercept river plastic pollution before it even enters the ocean. Indeed, an estimated 1,000 rivers are responsible for roughly 80% of ocean plastic pollution. By tackling the problem upstream, this solution could have a much higher impact. This autonomous and connected barge can extract 50,000 kilograms of plastic per day and is easily scalable. The system has already been deployed in two rivers in Southeast Asia, with a plan to scale to 1,000 rivers by 2025.
WaveGem®, to harness the power from the ocean
One of the best options to protect our oceans could actually be… to harness them! Fossil fuels, which are high emitters of CO2, are putting our oceans in danger. Indeed, the more carbon dioxide we emit, the more the ocean absorbs, leading to a major issue: ocean acidification. The change in the ocean's pH - an approximately 30 percent increase in acidity since the beginning of the Industrial Revolution - could have dramatic effects on coral reefs and most ocean species. Thus, investing in renewable energy is a priority to protect our oceans. WaveGem is a renewable energy solution for off-grid, isolated sites. This mid-size marine platform produces energy from waves and from the sun. Being autonomous, reliable, safe, robust and highly competitive, it is a perfect solution to replace diesel generators on remote islands. The solution avoids 2.2 tons of CO2 emissions per year for each kW produced compared to a diesel generator.

Tēnaka, a nature-based solution to restore coral reefs
Coral reefs are crucial for our Planet. Protecting and restoring them is essential, as they host 30% of the world's marine biodiversity, they provide oxygen, and they protect coastal communities from waves. And their importance is economic as much as environmental. “From tourism to marine recreation and sport fishing, [...] coral reefs provide economic goods and services worth about $375 billion each year” according to the National Oceanic and Atmospheric Administration (NOAA). Tēnaka's mission is to restore and preserve coastal ecosystems. Thanks to the development of coral nurseries, Tēnaka is able to restore the resilience of marine habitats and secure the benefits they provide for coastal communities. The startup's model is to provide tailor-made CSR programs to help businesses reach their environmental targets.
For each coral reef sponsored, companies are provided with precise impact measurements and scientific monitoring of the planted corals, as well as communication elements (photographs, etc.).

Other solutions to protect oceans: Technology alone cannot protect our oceans. Citizens from all around the world must also come together to demand more action from national, regional and local governments. To this end, World Oceans Day and many organizations have joined an online petition asking “governments worldwide to protect at least 30% of the planet's land and ocean by 2030, and preserve intact ecosystems and wilderness at the Convention on Biodiversity COP15 Summit in October 2020”.
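WaveGem's quoted figure of 2.2 tons of CO2 avoided per kW per year can be roughly sanity-checked. The diesel emission factor and capacity factor below are ballpark assumptions chosen for the exercise, not numbers from the article:

```python
# Rough sanity check of the "2.2 t CO2 / kW / year" claim above.
# Both constants below are assumed ballpark values.

HOURS_PER_YEAR = 24 * 365          # 8,760 hours
DIESEL_KG_CO2_PER_KWH = 0.8        # assumed genset emission factor
CAPACITY_FACTOR = 0.3              # assumed fraction of full output over a year

def tons_co2_avoided_per_kw(ef=DIESEL_KG_CO2_PER_KWH, cf=CAPACITY_FACTOR):
    """Tonnes of CO2 a diesel genset would emit per kW of displaced capacity."""
    return HOURS_PER_YEAR * cf * ef / 1000.0

print(f"{tons_co2_avoided_per_kw():.1f} t CO2 / kW / year")
```

With these assumptions the estimate lands around 2.1 tonnes, in the same range as the article's figure.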
Free Radical Reactions By James Ashenhurst Bonus Topic: Allylic Rearrangements Last updated: March 22nd, 2019

In this series on free radical reactions we’ve mostly covered the basics. In this post (and the next one) we’re going to go into a little more detail on certain topics that until now I haven’t had time to dive into. Today’s topic flows right from the subject of the last post, on allylic bromination. The examples I used in the allylic bromination post were actually quite simple. For example, if you take cyclopentene and treat it with NBS and light (hv) in carbon tetrachloride solvent (CCl4), you get this product: [figure: allylic bromination of cyclopentene]

Let’s extend the complexity of the substrate just a bit. Just a bit – one methyl group! – and do the same reaction. Here, we get not one product, but two. And note how our major product is different! [figure: allylic bromination of 1-methylcyclopentene]

What’s going on here? Let’s think about the mechanism of this reaction. What’s the first thing to happen (after initiation, of course)? Removal of the weakest C-H bond by the bromine radical! This leaves us with an allylic radical, which can then react with Br2 to give us product A. [figure: formation of product A]

Simple enough. But how do we explain the formation of product B? Look again at the free radical that is produced. Notice anything special about it? We can draw a resonance form! This means that there are two carbons on this molecule which can potentially participate in free radical reactions. Therefore we can also draw a reaction mechanism which shows Br2 reacting at this bottom (tertiary) carbon: [figure: formation of product B]

See how the tertiary radical forms a new π bond while the other π bond breaks? This leaves one of the electrons of the “top” alkene to form a new bond with bromine, giving us a new C-Br bond.
If you analyze the bonds that form and break in this reaction (always a useful exercise), note that this reaction has an extra pair of events – one C-C π bond breaks, and one C-C π bond forms. The net effect is that it looks like the π bond has moved. This phenomenon is called “allylic rearrangement”. Note: as commenter Keith helpfully points out, remember that resonance forms are “hybrids”. When drawing the mechanism, it’s best to show it all happening in one step (as in the middle drawing, above) rather than to draw the resonance form and then draw bromination. Last question. Can you think of a reason why product B might be more favoured, especially under conditions of high temperature? [figure: product A vs. product B]

Think back to Zaitsev’s rule (if you’ve covered this): the more substituted an alkene is, the more stable it is (why? the reason is complex and usually not covered in introductory textbooks – it has to do with a phenomenon called “hyperconjugation”). The alkene in product A is what we’d call “disubstituted” – it is directly attached to two carbon atoms and two hydrogen atoms. The alkene in product B is “trisubstituted” – it is directly attached to three carbon atoms and one hydrogen atom. Therefore there is good reason to expect that product B will be a significant product in this case. [I’m hedging on the exact ratio because I don’t have a literature reference. You shouldn’t completely believe me without firm data from a literature reference; I’ll try to dig one up.]

Next Post: Free Radical Addition Of HBr To Alkenes

[Note, Dec 5, 2013 – significantly revised from previous version. Thanks to commenter Keith for constructive criticism]

Comment section: 14 thoughts on “Bonus Topic: Allylic Rearrangements”

1. Excellent post, but I really dislike the term “more stable resonance form”. Resonance structures are not discrete entities, and all resonance forms must have exactly the same energy since there is only one true structure.
The better way to express this concept is, as you said, that one form makes a greater contribution to the resonance hybrid. I am also surprised that compound B is the major product since (1) it contains a less highly substituted double bond (disfavored thermodynamically via the Hammond postulate) and (2) halogenation is occurring at the more hindered site (presumably disfavored kinetically). Moreover, I would expect compound B to rearrange under the reaction conditions via the SN1 mechanism to form compound A. Is there a specific literature reference for this reaction, or can you provide the actual product ratio? Thanks, and keep up the good work with the blog!

2. You have written an excellent post about radical substitution. It would be perfect if you gave some explanation about why the compound does not follow E1 instead of SN1. Thank you for your hard work :)

3. Nice explanation. But I just have one doubt regarding the above mechanism. Why didn’t we take into consideration the stability of the free radical? In my opinion it should be a significant factor, as the whole thing follows the free radical mechanism. Looking forward to your explanation.

1. It depends on the temperature! I think the main point of covering the thermodynamic product is because it’s conceptually the point of the rearrangement. The most stable alkyl radical would give the major product at low temperatures, because it is kinetically favored due to the faster rate-determining step. However, at high temperature, thermodynamics trumps kinetics to give you the more substituted alkene. Good point!

1. More substituted alkenes are more stable because the inductive and hyperconjugative effects satisfy the hunger of the (relatively) electron-hungry sp2 carbon atom. But here, isn’t bromine more electron ‘hungry’ than the sp2 carbon? Wouldn’t product A, where bromine is on a tertiary carbon atom, be more stabilized? Thanks James and Andrea Jurado.

4.
Should we check the stability of the free radical intermediate or the final alkene? In B the free radical is unstable as compared to A… so shouldn’t A be the major product?

5. Love the site! Just want to put in encouragement to finish this conversation. Both A and B seem like they could be the major product for mutually exclusive reasons. Looking for a way to reconcile all the points of view and get the definitive answer.

1. Product B is major because the double bond is more substituted, which is ultimately more stable (similar to why eliminations tend to occur in a way that gives the most substituted alkene – Zaitsev’s rule).

6. Hey James. Is it possible for the bromine to also bond with the CH3 hanging off the double bond in product B?

1. Hi – if one had an excess of NBS (>1 equivalent), then a second allylic bromination could occur, and that CH3 hanging off the double bond in B is one position where it could happen. The rate of that reaction will be proportional to the concentration of NBS and the concentration of B, so as the reaction proceeds (and the concentration of B goes up) we should expect to see a little bit of it happening.

1. Hi! I was coming here to ask a similar question — specifically, if one started with 1-methylcyclopentene (the unbrominated version of Product B), would NBS favor bromination of the methyl group? Thanks so much!

7. I think I got your explanation of the more substituted alkene B being more stable than A, but then if we look at the transition states leading to B and A, we find that A has a tertiary free radical, which is more stable than the secondary free radical (for B). So why shouldn’t A be the major product by this line of thought? Even though I agree that the final product B is more stable than A, shouldn’t we also compare the transition states leading to the two products? I am confused as to why we are not considering this free-radical stability… Can you please elaborate on this part?

8. Hello James!
I thoroughly read your post, but I have a question. Comparing the radical structures: the radical structure leading to A is a TERTIARY allyl radical, and the one leading to B is a SECONDARY allyl radical. The former is the more stable radical structure, yet the major product is B. Does Zaitsev’s rule (hyperconjugation) simply have more effect than radical stability?
A flood is an overflow of water onto land that is normally dry. Floods can cause extensive environmental and financial damage, depending on their severity, and can even take lives. Damaged houses and other buildings require resources to repair, and the experience can leave people with long-term mental health problems. Floods also destroy crops, plants and other parts of the environment. For these reasons, a flood is considered a natural disaster.

What is a flood alarm system? A flood alarm is a security device that alerts you to the presence of water before it damages your home and property. It consists of an alarm transmitter and a probe that detects the presence of water. Flood detectors are installed in basements, bathrooms, laundry rooms and anywhere else there is a risk of water damage.

Flood monitoring and flood alarms: Everyone should have flood awareness, along with a flood alarm or flood monitoring system that can warn residents of incoming floods. Having a flood monitoring system in place can prevent major damage and limit the loss of assets. Flood monitoring is a smart and productive way to track floods and prevent damage when a disaster happens. With today's technologies, many flood monitoring systems provide real-time alerts by detecting floods, enabling you to take measures to control the damage.

Why is flood monitoring important? Some high-tech flood warning systems detect water and alert you through a call or text message. A standard flood alarm monitoring system has a built-in detector and three sensors that are exposed to the water when a flood arises. This sends a signal to the monitor and sets the alarm to sound, providing advance notice of a flood.
When people think of the consequences of a flood, most think about property damage. But the physical damage water causes to your property does not stop there. Even if you drain the water and dry the floors, furniture and walls, the risk remains because of mold growth. Moisture allows mold to multiply within a day, and mold pollutes the air. This is a threat to human health, leading to sneezing, allergies and even asthma. To avoid these vulnerabilities, you can install a flood warning system in your living area. The alarm detects moisture or the presence of water as soon as it occurs, so you can prevent great loss to yourself and your property from water damage.

Benefits of a Flood Monitoring System:
• The flood alarm system reduces direct losses through timely operation of flood control structures, safeguarding property and land
• Installation of flood resilience measures
• The ability to move property above the flood level

So, it is essential to understand the importance of flood alarms and flood monitoring to avoid great loss to people as well as property. For more details about our flood warning system please follow this website : http://flooddetectionsystems.com
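The three-sensor detector described above comes down to a simple threshold rule: sound the alarm when enough probes report moisture. A minimal sketch, with an assumed normalized moisture scale and illustrative readings (none of this reflects any particular commercial product):

```python
# Minimal sketch of a multi-probe flood detector.
# Threshold and sensor readings are illustrative assumptions.

def flood_alarm(sensor_readings, wet_threshold=0.5, sensors_required=1):
    """Return True when enough probes report moisture above the threshold.

    sensor_readings: normalized moisture levels (0.0 = dry, 1.0 = submerged).
    """
    wet = sum(1 for r in sensor_readings if r >= wet_threshold)
    return wet >= sensors_required

# Dry basement: all probes read near zero, so no alarm is raised.
print(flood_alarm([0.02, 0.01, 0.03]))
# Rising water reaches two of the three probes: the alarm triggers.
print(flood_alarm([0.80, 0.65, 0.10]))
```

A real unit would pair this check with the transmitter logic (siren, call, or text message); requiring more than one wet probe (`sensors_required=2`) is one way to avoid false alarms from a single splashed sensor.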
Edgar Allan Poe's The Cask Of Amontillado

Edgar Allan Poe, a famous Romantic writer, created a gothic tone in his stories by describing their settings with vocabulary that helped build the dark plots of works such as “The Cask of Amontillado”, “The Raven” and “The Pit and the Pendulum”. Poe's own foster father, John Allan, stated that “His [Poe's] talents are of an order that can never prove comfort to their possessor”. How did Poe create such gothic tones in his stories simply by describing foul settings and wicked plots?

Edgar Allan Poe was born Edgar Poe on January 19th, 1809. He lived a very rough life: his father left Edgar and his mother when Edgar was barely a year old, and his mother died of tuberculosis when he was two years old. His foster mother and, later, his wife also died of tuberculosis while Poe was in the room. Poe lived with his foster parents John and Frances Allan, briefly attended the University of Virginia, and joined the army in 1827, when he was only 18, before leaving military life to pursue his writing career. At first he could not make a good living as a writer, not until 1843, when he won the 100-dollar prize for his short story “The Gold Bug”. Poe died October 7th, 1849. Edgar Allan Poe was capable of creating immoral and twisted tones in his writing by the way he described dreadful and appalling settings as well as grim and serious plots.
Home Remedies For Head Spinning

Head spinning may also be described as a sensation of giddiness or light-headedness, or a sensation as if the surroundings are spinning. It may also be associated with loss of balance. The complaint of dizziness is often associated with trauma to the head, some illness, a change in position or turning the head. It is one of the most common outpatient department complaints, and it can significantly hamper a person's daily routine.

What Are The Causes? Dizziness or vertigo has several causes. It may be idiopathic, that is, no specific cause can be found and the complaints resolve by themselves. Causes of head spinning include the following:

1. Inner or Middle Ear Problems: It is well known that the inner ear and eyes are responsible for maintaining the balance of our body. Quite often, dizziness is due to complaints associated with the ear. Before I speak of ear problems, it is essential to understand what the middle and inner ear look like. This will help you understand the next few points better.

• Benign paroxysmal positional vertigo: This type of vertigo occurs suddenly, in brief episodes. It does not affect hearing and is not associated with any neurological symptoms. It is often accompanied by nausea and vomiting, and complaints arise from a change in position.

• Meniere's disease: This type of vertigo occurs suddenly and lasts longer. Patients may complain of tinnitus (loud noises inside the ear), a sensation of the ear being blocked, and some degree of hearing loss. The complaints may become persistent later on. The cause is a buildup of fluid inside the labyrinth (inner ear).

• Labyrinthitis: An infection of the inner ear, which may cause inflammation of the nerves that connect the inner ear to the brain (vestibular nerves). Complaints include nausea, vomiting, dizziness and tinnitus. Some degree of hearing loss is present.
This condition responds to medications and resolves within a few weeks.

2. Trauma: An injury to the head or directly over the ear can cause sudden loss of balance and unconsciousness.

3. Infection: Infections of the middle ear result in pain, tinnitus and a sense of dizziness due to inflammation and fluid present inside the ear canal. Pus discharge from the ear is present in many cases. Infection of the throat and tonsils may also travel upward to the ear through the Eustachian tube (a small tube-like structure which connects the middle ear to the nasopharynx).

4. Allergic reaction to certain food substances or drugs may trigger an episode of dizziness or light-headedness.

5. Anemia: In India, anemia is a very common problem in both males and females. A significantly low level of iron in the blood results in anemia, in which the oxygen-carrying capacity of the red blood cells is reduced, so less oxygen is carried to the brain and other organs. Less oxygen supply to the brain results in a feeling of light-headedness.

6. Low Blood Sugar Level: This results in sudden weakness, sweating and often giddiness with loss of consciousness. It is seen commonly in diabetic patients. In severe cases, hypoglycemic coma may also occur.

7. Hypovolemia: This means a low volume of circulating blood, which in turn lowers blood pressure. It can result from reduced fluid intake or excess loss of blood or fluids from the body. In these cases, the brain does not receive the blood supply it requires, which may also result in syncope.

8. Other causes include migraine (acute and chronic), menopause, anxiety disorders, overheating, hypothermia and Parkinson's disease.

Dizziness or Head Spinning in Pregnancy: The sensation of dizziness or head spinning during pregnancy varies from person to person, and its duration and severity differ between women.

• Excess secretion of hormones during pregnancy causes the blood vessels to dilate and relax.
When this happens, blood supply to the brain is reduced, resulting in a transient episode of light-headedness.

• Anemia during pregnancy is a common problem in India.
• Low blood sugar levels, as the nutritional demands of the body increase significantly.
• The growing uterus exerts pressure on the blood vessels, especially when a pregnant woman lies on her back. Major blood vessels to the heart experience external pressure, resulting in less blood supply to the heart and eventually to the brain as well.

How to Reduce Head Spinning in Pregnancy
• Avoid prolonged standing, as blood tends to pool in the legs and reduces the backflow of blood to the heart.
• Avoid sudden changes in position.
• Avoid lying on your back, especially during the second and third trimesters of pregnancy.
• Maintain a proper intake of food rich in iron, or consume nutritional supplements as advised by your doctor, to avoid anemia.
• Avoid prolonged hot water showers, as hot water dilates and relaxes the blood vessels.
• Avoid wearing clothes and footwear that restrict circulation; loose clothing should be preferred.

Home Remedies To Reduce or Prevent Head Spinning
1. Maintain proper hydration by drinking plenty of water.
2. Ginger tea has proven effects in reducing nausea and dizziness. Ginger is equally beneficial in reducing dizziness arising from motion sickness.
3. Avoid hot water showers.
4. Vitamin C-rich food has documented results in reducing vertigo and the head spinning feeling. Foods such as amla, citric fruits, lemon, tomatoes, sweet potatoes, broccoli and green leafy vegetables are rich sources of vitamin C.
5. Apple cider has proven beneficial effects in maintaining the health of the human body. It reduces the feeling of nausea and dizziness that arises from infections.
6. If you are suffering from dizziness, avoid changing position suddenly, like getting up from the bed abruptly or making rapid movements of the head.
7.
Iron-rich foods such as whole grains, spinach, dates, dry fruits, fish and eggs should be consumed by people who experience head spinning from nutritional anemia. Although there are plenty of home remedies that will help you alleviate dizziness, if the head spinning feeling persists or increases, you should visit a qualified physician for proper treatment. These home remedies will aid in reducing head spinning; they cannot be substituted for proper treatment. Dr. Himanshi Purohit
Choosing the Right Abrasive

Choosing the Right Abrasive Media for your Sandblasting Project

Sandblasting Sand, DO NOT USE IT
• Silica sand is dangerous
• Safer alternatives exist

Safer Options
• Glass Beads
• Aluminum Oxide (White/Brown)
• Silicon Carbide
• Blasting Garnet

Silica is a mineral found in the sand many people use to sandblast. Exposure to this mineral causes severe or fatal damage to lung tissue. DO NOT use sand in abrasive blast equipment. Some may argue it is safe. Sand is cheap. Sand is easy to get. Regardless, Cyclone does not endorse, does not recommend, and has never endorsed or recommended sand of any kind. The safety risk is too high. You may see the terms sandblaster, sandblast cabinet, or sand blast cabinet on our site, but Cyclone absolutely DOES NOT RECOMMEND the use of any sand in any kind of abrasive blast equipment. There are too many risks and too many safer alternatives.

Choosing the right abrasive for your project requires understanding a few concepts. As shown in the image, there are a few terms to consider. The red section is called the sweet spot: the set of media characteristics ideal for your project. There's no perfect mixture. It all comes down to the characteristics and velocity of the media as it impacts the surface.

Abrasive Media Size
In general, the bigger the abrasive particle, the bigger the impact it makes on the blasted surface. For example, dropping a bowling ball onto wet cement will make a bigger splash than a baseball. However, in a blasting cabinet, larger particles mean fewer impacts on the blasted surface, compared to smaller particles that make more frequent impacts. Think about tossing 10 marbles into a pond versus 100 tiny ball bearings: both volumes will fit into a cup, but the ball bearings make 10 times the number of impacts.
Abrasive Media Density
Density describes how much mass exists per unit of volume (closely related to specific gravity, the ratio of a material's density to that of water). It is not the same as an object's weight. Remember the old riddle of a pound of feathers vs. a pound of lead? Equally one pound. But lead is significantly denser than feathers, so it takes far less lead by volume to reach a pound.

Beyond the science involved, abrasive media density is really about the abrasive's ability to impact the blasted surface. As an abrasive particle is accelerated by a blast gun, it impacts the surface. The denser the particle, the harder and deeper it will impact the blasted surface. Additionally, the denser the abrasive particle, the less it will deform upon impact. That means it has the potential to last longer than less dense abrasive media.

Abrasive Media Hardness
Abrasive media hardness is very important. In general, the harder an abrasive is, the faster it will remove surface material. Of course, being fast is not always the most important factor in selecting a blast abrasive. Penetration and finish are equally important considerations. Scroll down for some detail on how abrasive hardness is measured.

Learn More About Abrasive Media
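The size and density trade-offs above can be made concrete with a quick calculation of how many particles a gram of media contains. The densities below are typical handbook values for glass bead (~2.5 g/cm³) and aluminum oxide (~3.95 g/cm³); the particle diameters are assumptions chosen for the example:

```python
import math

# Illustrative sketch of the size/density trade-off for blast media.
# Particle sizes and blast setup are assumed; densities are typical values.

def particle_mass_g(diameter_mm, density_g_cm3):
    """Mass of one spherical particle in grams."""
    r_cm = (diameter_mm / 10) / 2
    return density_g_cm3 * (4 / 3) * math.pi * r_cm ** 3

def impacts_per_gram(diameter_mm, density_g_cm3):
    """Number of particles (hence impacts) delivered per gram of media."""
    return 1.0 / particle_mass_g(diameter_mm, density_g_cm3)

# Same 0.2 mm size: denser aluminum oxide gives fewer, harder-hitting particles.
glass = impacts_per_gram(0.2, 2.5)
alox = impacts_per_gram(0.2, 3.95)

# Halving the diameter multiplies the impact count by 2**3 = 8 for the same mass.
small_glass = impacts_per_gram(0.1, 2.5)

print(f"glass impacts per gram (0.2 mm): {glass:,.0f}")
print(f"ratio, 0.1 mm vs 0.2 mm glass:   {small_glass / glass:.1f}")
```

This is the marbles-vs-ball-bearings point in numbers: for a fixed mass of media, impact count scales with the inverse cube of particle diameter, while each individual impact carries more energy as size and density go up.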
Acupuncture and pain relief How does acupuncture work to relieve pain? Firstly we need to understand the basic mechanism and physiology of pain in relation to injuries. Most people assume that if you have pain, there must be a structural reason for it. In other words, there must be some form of structure (bone, soft tissue, or disc) actually pushing on the sensory nerve, causing it to continuously fire and send signals perceived as pain. This assumption is not always true. It has been suggested that in over 90% of chronic pain cases, particularly chronic lower back pain, there is no major structural reason for the pain – no bulging disc, nothing pushing on a nerve – but the person still feels pain! Why is the pain still there?  The body is actually caught in a sensory motor loop, meaning that it has forgotten how to shut the pain signal off. This is due to a neuropathic problem where the nerve itself is swollen, firing continuously and sending incorrect information to the brain; hence creating a major problem as the body is stuck in a pain cycle. The second problem involves an evolutionary survival mechanism of our brains. Let’s take knee injury as an example. If you bang your knee into something, your brain immediately takes measures to protect it. The brain doesn’t know what happened to the knee, but it assumes a worst-case scenario in which you are losing lots of blood and the injury is perceived as life threatening. The brain sends a message to restrict the blood supply going into the knee and the blood return leaving the knee. This phenomenon is known as “guarding”, and is actually a  smart choice – if a poisonous snake bit you, slowing blood flow to the area lessens the chance of the poison spreading; or if you were badly wounded, reducing blood flow makes you less likely to bleed to death. 
The downside to guarding is that cutting off blood flow to the knee – while lifesaving in times past – makes it stiff and weak, and dramatically limits the knee's ability to heal. So how does acupuncture help? Acupuncture points are areas within the tissue with a greater concentration of sensory fibres, capillaries, lymphatic vessels and mast cells. Gentle needle penetration into these neurovascular nodes causes complex electrical and chemical messages to be conducted through the body via the nervous, endocrine, lymphatic and immune systems. In clinic we like to refer to acupuncture points as reactionary points, as needling these points creates a reaction in the body. It is believed that inserting acupuncture needles into the skin at these peripheral sites “jumps” the neural threshold on the position nerve (nerves that register the position of pain) pathway, so that the signal can reach the brain. Once the brain registers the location of the pain, it releases enkephalins – natural pain-relieving substances that plug up pain receptor sites in the brain, spine, and local tissue, stopping the pain. Most pain relief from acupuncture is very fast. But some time after needling the pain will return, as the “bad habit” of the nerve chronically firing below threshold re-establishes itself. But if the patient returns within a few days to get another treatment, the neural threshold will be jumped again. Keep jumping the neural threshold, and eventually the central and peripheral nervous systems realise that it's better to operate in the non-pain state than in the pain state, and the “bad habit” is broken. The technical term for the body returning to its pain-free balance is re-establishment of neurological homeostasis. Once this happens, the brain is no longer receiving pain signals from the knee, and no longer thinks the knee is injured or threatening the survival of the body.
Instead of guarding and restricting blood flow to the area the brain now does the opposite, increasing blood flow to begin the healing process. Acupuncture also relieves pain via immune system activation Inserting a needle into the skin creates a micro-trauma, stimulating the activity of immune cells that control inflammation and initiate healing. And this healing action is not limited to the damage caused by the needle point, but will benefit any other damage in the area from past trauma or injury. The micro-trauma caused by the acupuncture needle also starts a longer term systemic immune response, promoting  healing of soft tissue throughout the whole body. In this way, the anti-inflammatory effect of acupuncture can last for up to a week after the treatment itself. Your body is not meant to be in chronic pain. Acupuncture reminds your body how it should be functioning, helping your powerful inbuilt pain relieving mechanisms kick into gear. It also promotes homeostasis and tissue healing and regulates the immune, endocrine, cardiovascular and digestive systems,  so the acupuncture treatment you get for your knee pain can also target other issues you may have and rebalance the health of your whole body.
It feels like winter has lasted forever this year, so you'll be glad to know that spring is officially just around the corner. We can't promise temperatures will be warmer, but we're keeping our fingers crossed. In a matter of days we will officially be welcoming spring. Here is everything you need to know about when spring begins. When does spring begin? Spring officially begins this week on Thursday, March 20 and will come to an end on June 21. How are the seasons decided? The astronomical seasons are determined by the Earth's orbit around the sun. The tilt of the Earth's axis means that different parts of the globe are tilted towards the sun while others are tilted away from it. Because of this there is a difference in the amount of sunlight that reaches different parts of the Earth - this is what causes the different seasons around the world. What does spring mean for nature? Birdsong reaches its peak and many flowers appear, in turn attracting insect life including bees and butterflies. Animals that hibernated over winter - like hedgehogs, bats and butterflies - appear on the first warm days, so keep an eye out in early spring. Millions of migrant birds return, with chiffchaffs, sand martins and wheatears among the first to appear in March, followed by swallows, swifts, cuckoos, nightingales and many warblers in April and May.
Hard drives. How much do you know about them? Here's a little trivia for you: decades ago, "hard drive" meant something else entirely. For regular folks like you and me, it wasn't the name of an object at all. In this day and age, it's a lot different. Hard drive now refers to a hardware device found inside a computer. The hard drive, or hard disk drive, is where all the data is stored. In many respects, the hard drive is your computer. It's where all the data in your computer is stored for the long term — not just the things you save, but all the code required for your operating system, the framework browsers use to connect to the internet, drivers for your accessories, and everything else. When people talk about computer storage, they are talking about the hard drive. (Via: https://www.digitaltrends.com/computing/what-is-a-hard-drive-your-guide-to-computer-storage) A computer would be useless without a hard drive. But that wasn't the case in the early days, when data storage was a lot different. As a matter of fact, hard drives didn't even exist then. In the very early days, computers didn't have hard drives at all, so they needed different ways to store data so that it could be accessed when necessary. Those old timey ways included rolls of magnetic tape inscribed with data, and yes, punch cards that could be slotted in and read by the computer. Thanks to Reynold Johnson, the hard drive was invented. In 1956, Johnson developed the pioneering process of storing data on metal disks. This was a breakthrough because at that time, data was stored on magnetic tape or drums. The first real hard drive was developed by Reynold B. Johnson at IBM, in 1956. Johnson's team was working on better ways to store data on things like magnetic tape.
They created ways to store information (in the form of bytes) on magnetic disks instead, which could be overwritten with new information as desired. This led to the development of an automated disk that could read itself in a manner similar to a record player — except much larger. Those are just a few cool things to know about hard drives, whose evolution is really quite interesting. Beyond the history, there are crucial things you should know as well. The first is this: hard drives don't last forever. As much as you or anybody would want them to, they just don't. Hard drives have a lifespan. Traditional hard drives (also known as HDDs), which you'll usually find in desktop computers and some cheaper laptops, will often fail sooner because they use moving parts. The average life of a hard drive depends on a lot of things, like the brand, type, size, and interface method, but you're looking at about four years on average. (Via: https://lifehacker.com/how-long-will-my-hard-drives-really-last-1700405627) The most you can do is prolong the lifespan of your hard drive through regular maintenance checks. That's all. Still, the day will come when your hard drive will fail you. It sounds like the end of the world, but it's not. You see, the second critical thing to know about your hard drive is that the data stored on it can be recovered. There's a good chance of getting your data back as long as you don't open up your hard drive. Tips on hard drive data recovery will come in very handy the day your hard drive fails.
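One small part of those regular maintenance checks can be automated. As a minimal sketch (assuming Python 3 is installed; the function name is my own), here is a free-space check using only the standard library. Note that true drive-health monitoring, such as reading S.M.A.R.T. attributes, requires separate tools like smartmontools and is not covered here:

```python
import shutil

def disk_usage_report(path="/"):
    """Return total, used, and free space (in GiB) for the filesystem containing path."""
    usage = shutil.disk_usage(path)
    gib = 1024 ** 3  # bytes per gibibyte
    return {
        "total_gib": round(usage.total / gib, 1),
        "used_gib": round(usage.used / gib, 1),
        "free_gib": round(usage.free / gib, 1),
        "percent_used": round(100 * usage.used / usage.total, 1),
    }

# Print a quick snapshot of the root filesystem:
print(disk_usage_report("/"))
```

Running something like this on a schedule and alerting when `percent_used` climbs too high is a simple way to catch a filling disk before it becomes a problem.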
After composing the three Piano Sonatas op. 2 in 1794–95, Beethoven presumably planned a further trilogy without delay. Because of its enormous scope, the Grande Sonata op. 7 was removed from this new project and published separately. But by the beginning of 1798, the next group of three, the Sonatas in C minor, F major, and D major, had already been completed, and it was published in Vienna that year as opus 10. Like the Sonata in C minor, op. 10 no. 1, the Sonata in F major has only three movements, diverging from the four-movement model established in opuses 2 and 7 - the result of intensive experimentation with different variants of the minuet, all of which would ultimately be discarded. Beethoven's autograph manuscripts for opus 10 no longer survive, leaving the editorial team of Perahia and Gertsch to turn primarily to the first edition. As always, our edition has an extensive commentary and proven fingerings by Murray Perahia.
President McKinley: Architect of the American Century Robert W. Merry. Simon & Schuster, $32.50 (624p) ISBN 978-1-4516-2544-8 William McKinley, 25th president of the U.S., is largely remembered for bringing his country into a war with Spain over Cuba, then three years later dying from Leon Czolgosz’s bullets. In this political biography, journalist Merry (Where They Stand) argues for an overdue reevaluation of McKinley, who has been eclipsed by his flamboyant successor, Theodore Roosevelt. According to Merry, McKinley was a man of focus and perception rather than grand vision—“cautious, methodical, a master of incrementalism”—who led the U.S. through its transformation into a global power. The native Ohioan fought for the Union during the Civil War, then studied law, set up his own practice, and became involved in local politics. The first third of the book might be slow going for readers not fascinated with the details of a fractured 19th-century Republican Party, tariff debates, and Ohio political minutiae. Yet Merry’s clear and nimble writing keeps the story moving along to McKinley’s White House years and the Spanish-American War. In shaping the war-ending Treaty of Paris of 1898, McKinley affirmed American imperialist ambitions, propelling the U.S. onto the world stage. By focusing on McKinley’s deliberate choices in dealing with Spain, Hawaii, and the Philippines, Merry convincingly portrays McKinley as a crucial actor in American imperialism. Illus. Agent: Philippa Brophy, Sterling Lord Literistic. (Sept.) Reviewed on: 05/29/2017 Release date: 09/05/2017 Genre: Nonfiction Paperback - 624 pages - 978-1-4516-2545-5 Open Ebook - 624 pages - 978-1-4516-2546-2
Study Guide Brooklyn: A Novel Letters By Colm Tóibín In Brooklyn, letters don't just transmit words—they transmit memories. After all, when Eilis receives a letter, very little focus is placed on its actual contents. Instead, Eilis is given the opportunity to think about the people she loves, like when she reads her brother's letter and "could hear Jack's voice in the words he wrote" and "feel him in the room with her" (3.787). In other words, Eilis is less interested in hearing about the day-to-day minutiae of her family's life than simply feeling their presence. Of course, this little memory trip can be overwhelming at times, leading Eilis to avoid reading letters when she knows that the emotional punch will be too powerful. Ultimately, however, this just further emphasizes how much Eilis is affected by the letters she receives.
Wednesday, December 4, 2019 Great Basin California Gull I have posts for 13 different species of gull from many parts of the world: American herring gull in New Brunswick, Canada; black-headed gull in Iceland; glaucous gull, glaucous-winged gull and black-legged kittiwake in Alaska; Cape gull in South Africa; great black-backed gull in Nova Scotia, Canada; European herring gull in the Netherlands; Thayer's gull and Heermann's gull in Sonora, Mexico; laughing gull at Canaveral Seashore in Florida; ring-billed gull in St. Augustine, Florida; and the western gull on Catalina Island, California; but until now I had not posted on the gull I grew up with, the California gull, which is ironically the state bird of Utah.  In 1848, less than a year after the Brigham Young-led Mormons arrived in the Great Salt Lake Valley, a huge infestation of what are now known as Mormon crickets started to devour their crops, and many California gulls descended upon the crickets and began to eat them, partially saving the crops. In 1913 a bronze statue of two gulls sculpted by Mahonri M. Young, a grandson of Brigham Young, was installed on Temple Square in front of the Assembly Hall and dedicated by LDS president Joseph F. Smith. The Seagull Monument, as it is known, is believed to be the first monument to birds in the world. Because of this 1848 event, the California gull was named the Utah state bird in 1955.  The California gull is a medium-sized gull that is primarily white, with a gray back and upper wings, black primary feathers with white tips, a yellow bill with a black ring and a small red spot, yellow legs and brown eyes. Breeding California gulls have a completely white head, a more pronounced red spot on the lower bill (mandible) and a less visible black ring. The head in non-breeding plumage is heavily streaked with brown.  There are two subspecies of California gull. The Great Basin California gull is found in the Great Basin of the western U.S. 
and up into Wyoming and central Montana. The Great Plains California gull ranges from Great Slave Lake onto the Great Plains of western Manitoba and South Dakota. The subspecies are not distinguishable by eye. I saw these Great Basin California gulls along the causeway to Antelope Island in the Great Salt Lake. However, as a youth I used to see them in the Salt Lake City area and remember them being particularly prevalent at the dump. 
HTTP Error Codes When we access a web server or application, every HTTP request received by the server is answered with an HTTP status code. HTTP status codes are 3-digit codes grouped into five different classes, and a code's class can be quickly identified by its first digit: • 1xx: Informational • 2xx: Success • 3xx: Redirection • 4xx: Client Error • 5xx: Server Error The most commonly encountered HTTP error codes are the 4xx and 5xx status codes, so we will discuss those classes and how to fix the errors behind them. Client Error: 1. 400 Bad Request: It means the HTTP request that was sent to the server has invalid syntax. Below are a few examples of when a 400 Bad Request error might occur: • The user's cookie associated with the site is corrupted. Clearing the browser's cache and cookies could solve this issue. • A malformed request due to a faulty browser. • A malformed request due to human error when manually forming HTTP requests (e.g. using curl incorrectly). 2. 401 Unauthorized: It means that the user trying to access the resource has not been authenticated or was incorrectly authenticated, and must provide valid credentials to view the protected resource. For example, if a user tries to access a resource that is protected by HTTP authentication, the user will receive a 401 response code until they provide a valid username and password (one that exists in the .htpasswd file) to the web server. 3. 403 Forbidden: If you are getting a 403 error unexpectedly, there are a few typical causes. File Permissions: Ensure that the account has sufficient permissions to read the file. Typically, this means that the "other" permissions of the file should be set to read. If the user is unexpectedly getting a 403 Forbidden error, also ensure that it is not being caused by the .htaccess settings. 
Index File Does Not Exist: If the user is trying to access a directory that does not have a default index file, and directory listings are not enabled, the web server will return a 403 Forbidden error. For example, if the user is trying to access, and there is no index file in the admin directory on the server, a 403 status will be returned. To enable it from WHM, please follow: 4. 404 Not Found: Below are some possible reasons for this error: • Does the link that directed the user to the server resource have a typographical error in it? • Did the user type in the wrong URL? • Does the server configuration have the correct document root location? Server Error: 1. 500 Internal Server Error: The most common cause for this error is a misconfiguration in the server (e.g. a malformed .htaccess file) or missing packages (e.g. trying to execute a PHP file without PHP installed properly). 2. 502 Bad Gateway: It means that the server is a gateway or proxy server and it is not receiving a valid response from the backend servers that should actually fulfill the request. If the server in question is a reverse proxy server, like a load balancer, here are a few things to check: • The backend servers (where the HTTP requests are being forwarded to) are healthy. • The reverse proxy is configured properly, with the proper backends specified. • If the web application is configured to listen on a socket, ensure that the socket exists in the correct location and that it has the proper permissions. 3. 503 Service Unavailable: It means that the server is temporarily unable to handle the request, typically because it is overloaded or down for maintenance. 4. 504 Gateway Timeout: It means that a gateway or proxy server did not receive a response from the backend in time. This typically occurs in the following situations: • The network connection between the servers is poor. • The gateway or proxy server's timeout duration is too short. I hope you are now very much familiar with the most common HTTP error codes and common solutions to those codes. 
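The five classes listed at the top of this post can be captured in a few lines of code. Here is a minimal Python sketch (the function name is my own, not from any library) that maps a status code to its class using the first digit:

```python
def status_class(code: int) -> str:
    """Return the standard class name for a 3-digit HTTP status code."""
    classes = {
        1: "Informational",
        2: "Success",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
    }
    if not 100 <= code <= 599:
        raise ValueError(f"{code} is not a valid HTTP status code")
    # Integer division by 100 yields the class digit (e.g. 404 // 100 == 4).
    return classes[code // 100]

# The error codes discussed in this post all fall in the 4xx and 5xx classes:
for code in (400, 401, 403, 404, 500, 502, 503, 504):
    print(code, status_class(code))
```

A helper like this is handy in monitoring scripts, where you often only care whether a response was a client error or a server error rather than the exact code.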
If you encounter any error codes that were not mentioned in this post, or if you know of other likely solutions to the ones that were described, feel free to discuss them in the comments box below. Thank you.
Sustainability DIY: Recycled Paper Tutorial by Silvy Zhou ’21 Linoleum print “Starlight”, on recycled paper Despite NA’s initiatives to go “paperless” over the past few years, it is still difficult to avoid the printed tests and essays in our lives. Over the summer, as I was sorting through stacks upon stacks of last year’s paper handouts, I became inspired to find ways to repurpose all this paper. Eventually, I found a video by Shmoxd on YouTube, which showed me how to recycle old papers using materials I already had at home. I could also recycle old art supplies like empty paint tubes and dried clay, which I can’t recycle through my town. Confetti paper, bound notepad The process for recycling paper essentially involves making a paper pulp from water and pieces of paper, filtering out the pulp using a screen, and then drying the pulp out. I used a blender to shred and mix my paper pulp. If I want to add color (extra colored paper, glitter, clay, or plastic), I blend it into the paper pulp so that it doesn’t affect the texture of the finished paper. For the screen, I stretched a soft piece of window screen over canvas stretcher bars, attaching one side with nails and the other with a binder clip so that I could lift the frame off of the screen later. After setting up the screen and the pulp, I slid the screen into the water, letting the paper pulp flow over the top of the screen before lifting it back up. The screen separates the pulp from the water by letting the water drip through. Being able to lift the frame off the screen also allows me to easily add another layer of paper pulp. This is useful for sandwiching other elements, like pressed flowers or photos. White recycled paper, dried and trimmed.
Vitamin D study to assess role in protecting Black population from COVID A University of Chicago researcher wants to study thousands of people to test the relationship between vitamin D and boosting the immune system to fight viruses. The University of Chicago Medicine, located at 5841 S. Maryland in the Hyde Park neighborhood. Researchers there are studying how vitamin D levels affect how well African Americans can fight off COVID-19. Tyler LaRiviere/Sun-Times University of Chicago researchers want to determine whether vitamin D supplements can help African Americans better fight COVID-19. Dr. David Meltzer, chief of hospital medicine at UChicago Medicine and lead researcher of two upcoming studies, said Black people typically have lower levels of vitamin D than white people, though the health consequences are not well known. Newly published research led by Meltzer found a lower risk of infection, particularly for Black people, when vitamin D levels are increased beyond what experts now deem sufficient for overall health. In the wake of that data study, Meltzer is recruiting volunteers for two human trials to better understand the relationship between the immune system and boosting vitamin D with supplements. Meltzer wants to home in on the racial distinctions and see whether boosting vitamin levels reduces either the risk of becoming infected or the severity of illness. The benefit of taking vitamin D to ward off COVID-19 has sparked debate in the medical community. Some doctors caution that too much of the vitamin can be detrimental to health. Nonetheless, attention around coronavirus-related research last year has driven sales of vitamin D supplements during the pandemic. Meltzer argues there are unanswered questions about vitamin D as it relates to the overall health of Black people, particularly for fighting infections. 
One benefit of vitamin D is bone strength, a factor that can help prevent osteoporosis, but previous research suggests that even though vitamin D levels are lower in Blacks than whites, bone density isn’t dramatically different between the racial groups, Meltzer said. What isn’t well understood, he adds, is the role vitamin D levels in Black people play in boosting the immune system, another benefit of vitamin D. “The effects on the immune system … have been much more difficult to define,” Meltzer said in an interview. “Even if one has enough vitamin D to be good for bone health, that doesn’t mean one has the right amount of vitamin D to be good for immune function.” Sources of vitamin D include supplements and fatty fish. Exposure to the sun also helps the body create its own vitamin D, but that can mean low levels during winter in cold-weather locations like Chicago. One U. of C. study, to be overseen by the U.S. government, will conduct lab tests of people taking low doses of supplements as well as the highest safe levels. The other study will include self-reporting online. In each trial, half of those taking part will receive vitamins; the others will receive placebos. Meltzer hopes to attract 2,000 people for each trial and will be recruiting over the next two months. The studies are open to all, though researchers want a racially diverse group. Those interested in taking part in the study, which is backed by the National Institutes of Health, can email or call 773 834-8620. For the online study, go to Meltzer acknowledges the research will likely be finished sometime after the COVID-19 pandemic is waning, at least according to current predictions. “We’re doing this not just for the current pandemic but also for the next one,” he said.
Becoming a Registered Professional Engineer Engineering regulations and licenses are set up by different jurisdictions around the world to promote the general public’s health, safety, and well-being and to define the licensing process by which an engineer is allowed to practice engineering and/or provide professional engineering services to the public. As in many other occupations, in some jurisdictions the professional status and actual practice of professional engineering is legally defined and covered by statute. Some jurisdictions allow only licensed engineers (sometimes referred to as registered engineers) to “practice engineering,” which requires careful definition to resolve possible overlaps or ambiguities with other occupations that may or may not be regulated themselves (e.g. “scientists” or “architects”). Jurisdictions licensing particular engineering disciplines likewise need to carefully identify these limits so that practitioners understand what they are required to do. In certain cases, only a licensed/registered engineer of the state or province has the right to assume legal responsibility for engineering work or projects (typically via a seal or stamp on the relevant design documentation). Regulations may require that technical documents, such as reports, plans, engineering drawings, and estimates for study, valuation, design review, repair, servicing, maintenance, or supervision of an engineering work, process, or project, be signed, sealed, or stamped only by a licensed or registered engineer. 
In situations involving public safety, property, or health, an engineer may be required to be licensed or registered, although some jurisdictions have an “industrial exemption” that allows engineers to work internally for a company without a license as long as they do not make final decisions to release the product to the public or provide engineering services directly to the public (e.g. as a consultant). In court or before government committees or commissions, expert witness testimony or advice can be provided by experts in the technical field, and in some jurisdictions this is often provided by a registered or licensed engineer. Becoming Registered The process of becoming a registered engineer differs widely around the world. The use of the word “engineer” is restricted in some areas, but in others it is not. Where engineering is a regulated occupation, there are specific processes and requirements for obtaining registration, charter, or license to practice engineering. These are granted by the government or a charter-granting body acting on its behalf, and these bodies are subject to oversight by engineers.[1] In addition to licensing, there are voluntary qualification programs for different fields, which require exams approved by the Council of Engineering and Technical Specialization Boards. Registered engineers enjoy considerable control over their legislation due to occupational closure. They are also the authors of the applicable codes of ethics used by some of these organizations.[1] In their work, engineers in private practice are most often found in conventional professional-client relationships. On the other side of the relationship are engineers working in government facilities and government-run industries. 
Despite the different emphases, engineers in industry and private practice face similar ethical problems and draw similar conclusions.[3] The National Society of Professional Engineers, an American engineering society, has tried to apply a single professional license and code of ethics to all engineers, regardless of field of practice or job sector. Registering in the United States The registration or licensing of professional engineers and engineering practice in the United States is regulated by the individual states. Each license or registration is valid only in the state in which it is issued. Some licensed engineers hold licenses in more than one state. Comity, also known as reciprocity, allows engineers licensed or registered in one state to receive a license in another state without retaking the usual examinations; the second state acknowledges the legitimacy of the first state’s licensing or registration process. History of PEs Licensing in the United States started in the state of Wyoming, when attorneys, notaries, and others without engineering experience submitted poor-quality applications to the state for permission to use state water for irrigation. In 1907, Clarence Johnson, the Wyoming state engineer, introduced a bill to the state legislature that mandated registration for anyone identifying themselves as an engineer or land surveyor and established a board of examiners. Charles Bellamy, a 52-year-old engineer and mineral surveyor, became the first practicing engineer licensed in the United States. After passage, Johnson would write wryly about the impact of the legislation: “A most astonishing change took place within a few months in the character of maps and plans filed with the applications for permits.” Louisiana, followed by Florida and Illinois, would become the next states to require licensing. 
In 1947, Montana became the last state to enact licensing legislation. Registration Requirements for Engineers Licensing requirements differ by state, but typically include a four-year degree from an accredited engineering program, passing the Fundamentals of Engineering (FE) exam, several years of qualifying work experience, and passing the Principles and Practice of Engineering (PE) exam. For standardization, a central body, the National Council of Examiners for Engineering and Surveying (NCEES), writes and grades the FE and PE examinations. However, the criteria for taking the tests, as well as the passing score, are independently set by each state’s board of professional engineers. For instance, before they can take the PE exam, applicants in some states must have professional references from several PEs. There is a reasonably wide range of pass rates for the FE and PE exams, but the pass rate for repeat test takers is substantially lower. All 50 states and the District of Columbia have engineering boards, represented by NCEES, that administer both the FE and PE exams. Degree standards are changing in the United States. The NCEES model will require additional credits beyond a Bachelor of Science in Engineering degree effective January 1, 2020, and NCEES is defining the types of creditworthy coursework that will meet the additional education criteria. This has gained some support among civil engineers. As of 2013, it is still possible for a person to bypass several of these steps. In Texas, for example, people with many years of reliable experience still have access to waivers for both the FE and PE exams. In a few states it is still possible for a person to skip step No. 1 and apply to take the registration tests, as long as the applicant is supported by a PE, since work experience can substitute for academic experience. The required years of experience can vary as well. For example, in California the PE exam can be taken with only two years of experience after a Bachelor of Science in Engineering degree, or one year of experience after a Master of Engineering degree. 
In other states, candidates may take one of the PE exams directly via NCEES, in some cases immediately after graduation, but they still have to wait until the requisite experience has been obtained before receiving a license. There are also state-specific exams in some jurisdictions. California requires two additional tests for civil engineering applicants, in land surveying and earthquake engineering, and several states have examinations based on their particular laws and ethics standards. Generic professional engineering licenses are issued by some states. Others, referred to as discipline states, grant licenses for particular engineering fields, such as civil engineering, mechanical engineering, nuclear engineering, electrical engineering, and chemical engineering. In all cases, however, engineers are ethically obliged to restrict their work to their field of competence, which is typically a small part of a discipline. Although this restriction is not always imposed by licensing boards, it can be a factor in litigation for negligence. In a few states, registered civil engineers can also do land survey work. In addition to the license of the individual, most states require that companies providing engineering services be authorized to do so. For example, the state of Florida requires companies providing engineering services to be registered with the state and to have a qualifying engineer licensed in Florida. A substantial number of licensed professional engineers are civil engineers. For example, in Texas, about 37 percent of licenses are for civil engineers, and more than half of the exams taken are civil engineering exams.[16][17] Many of the rest are mechanical, electrical, and structural engineers. Some engineers in other sectors, however, obtain licenses for the right to serve as expert witnesses in court or before government commissions, or simply for prestige, even though they might never actually sign and seal design documents. 
Although the regulation of engineering activities in the United States is carried out by the individual states, the areas of engineering involved in interstate commerce are largely unregulated. These fields cover a significant part of mechanical, aerospace, and chemical engineering and may be expressly exempted from regulation under an “industrial exemption.” The industrial exemption covers engineers who design goods that are sold (or have the potential to be sold) outside the state in which they are made, such as vehicles, as well as the machinery used to manufacture those goods. Structures subject to building codes are not covered by the industrial exemption, although small residential buildings often do not need an engineer’s seal. The roles of architects and structural engineers overlap in some jurisdictions. Generally speaking, an architect is the main professional responsible for designing habitable buildings. The architect signs and seals construction plans for houses and other buildings that could be used by humans. A structural engineer is hired to provide a technical structural design that ensures the overall structure’s stability and safety, but no states currently allow engineers to practice professional architecture without being licensed as an architect. Most private companies hire non-graduate staff with engineering titles such as “test engineer” or “field engineer” in technical roles. At the discretion of the employer, as long as the company does not specifically provide engineering services to the public or other companies, such positions do not require an engineering license. However, a distinction needs to be made between a “graduate engineer” and a “professional engineer.” Anyone who holds a degree in engineering from an approved four-year university program is a “graduate engineer” but is not thereby licensed to practice or provide services to the public. 
Unlicensed engineers typically work for corporations as employees, or as professors in colleges of engineering, where they are covered by the industrial exemption clause.
Suckling pig is a pig that can be roasted and eaten. Now, when I say a pig I don't mean ham or pork or bacon. What I mean is a whole pig. It's the sort of pig often seen in cartoons or movies about medieval banquets: the pig at the centre of the table with the shiny red apple in its mouth. The suckling pig is prepared by cutting a dead pig down the front and removing all the organs (e.g. heart, liver, intestines and spleen). Sausage meat can then be used to fill the cavity from whence the organs were removed. The name "suckling pig" comes from the fact that the pig is young and was still suckling on its mother's teat when it was slaughtered. In case the name still doesn't make much sense to you, it is also sometimes called "sucking pig". Now doesn't that sound much more appetizing? In case you have the urge and the courage to try it yourself, here's a recipe from the Complete Book Of Meat Cookery In Color, published in 1971. Page 28 of the book shows a proud chef standing behind his monstrous four-foot-long roasted suckling pig. It is interesting to note that the chef bears an uncanny resemblance to his creation, strengthening the idea that "you are what you eat". This recipe is on page 33.

Serves 10-12

Ingredients:
• 1 x 18 lb. sucking pig
• 1 small red apple
• 6 lb. pork sausage meat
• 2 apples
• 1 onion, finely chopped
• 1 teaspoon dried thyme
• 1 teaspoon dried rosemary
• 2 eggs
• 3 cups soft breadcrumbs

Instructions:
Wipe moisture from inside of sucking pig. Place stuffing in cavity and sew up securely with white string. Place a piece of wood or a meat skewer into the pig's mouth to keep it open. Rub surface with oil, then salt, and rub again with oil. Place sucking pig in a large roasting pan and bake in a moderate oven for 4 hours or until pig is cooked. The skin may be scored in a decorative pattern, if desired, before roasting. To serve, remove wood from pig's mouth and replace with a polished red apple. Serve hot or cold.
To make the stuffing, mix all ingredients together. Time: 4 hours.

It actually sounds rather easy, if not somewhat disturbing. There are hazards, of course, as my aunt B discovered. (Name truncated to protect the innocent.) My aunt B is quite inventive when it comes to food. Her culinary experiments are usually very tasty and turn out quite well. So if anyone could tackle roast suckling pig, it was her. She invited her family over for a feast and procured a suckling pig to prepare. The pig was a rather young one, but there still turned out to be a lot of hog to cook. So much, in fact, that when my aunt tried to put the poor fella in the oven, Murphy's Law prevailed: the pig would not fit. After careful consideration, a plan was made to bisect the pig down the middle. The plan was enacted and the two demi-pigs were put in the oven on separate trays. Murphy's Law was not beaten yet, as now the pig was taking too long to cook. The guests were getting hungry and the meal was nowhere near ready, so the oven temperature was increased. This was all fine and dandy until the living room adjacent to the kitchen began to fill with a peculiar smoke tinged with the scent of burning pork fat. Thus, the temperature was decreased. Eventually, the pig was cooked. Putting the pig back together proved difficult, as the situation was not at all like a magician sawing a woman in half. My aunt simply decided to cut the pig into slices, and the suckling pig was reduced to tasty but unimaginative pork. Of course, this all happened before I was born, but the story has become a legend in my family, and is repeated every time we go to my aunt B's house.
II. Precautions

IV. Preparations: Indicated Non-Live Vaccines
  2. Pneumococcal Vaccine
     1. Prevnar 13 followed by Pneumovax at least 8 weeks later
     2. Give after diagnosis and then every 6 years
     3. Immunogenicity is better with higher CD4 Count (>200)
  3. Conjugated H. influenzae type b capsular Vaccine
     1. Highly immunogenic in HIV without advanced disease
  4. Influenza Vaccine yearly (inactivated form)
  5. Hepatitis A Vaccine (all of those susceptible)
  6. Hepatitis B Vaccine (if HBsAg negative)
  7. Routine Tetanus Vaccine (Tdap or Td)
  8. Human Papilloma Virus Vaccine (Gardasil; consider for those up to age 45 years old)
  9. Meningococcal Vaccine (all patients with HIV)
  10. Consider Hib Vaccine
  11. Recombinant Herpes Zoster Vaccine (Shingrix, for those over age 50 years)
(July 4, 1845-Mar. 9, 1916). Nebraska Cropsey served as assistant superintendent of the Indianapolis Public Schools for the primary grades and was one of the best-known educators in the Midwest. A native of Pennsylvania, Cropsey came to Indianapolis with her parents while still a child. She became a teacher after Superintendent Abraham C. Shortridge persuaded the school board to send her to the Oswego (New York) Normal School for advanced instruction. Upon her return, she served briefly as a critic in the training school for teachers. In 1871, at age 25, she became assistant principal of elementary education. She held this position, with a later title change to assistant superintendent, for 43 years. During these years she supervised the primary schools of Indianapolis, fostered the cause of education, and worked for passage of the state's compulsory education law in 1897. She wrote several arithmetic textbooks, the first appearing in 1893, which were used in Indianapolis and other cities. Indiana University conferred an honorary degree on Nebraska Cropsey in 1913; she was the first woman and the fourth person so honored. The former Cropsey Auditorium in the Indianapolis Public Library was named for her, as was Public School 22. Revised February 2021
Chicory is a woody perennial plant that usually grows with bright blue flowers, though some can be pink or white. You can harvest chicory for its seeds, leaves, or roots. Although chicory leaves can be harvested at any time during the growing season, they taste best in early spring, while summertime is the perfect time for collecting seeds or for cooking chicory. You can find it growing along rural roadsides, in fields, and even in the wild. In this article, you will be shown the right and proper way to harvest chicory.

Step 1 of 4: Finding Chicory

1. Put on protective clothing. Chicory normally grows in areas where ticks are common, so for safety, put on long pants, socks, and a hat. You can also put on garden gloves for protection against small bees.
• Putting on bug repellent might also be useful if you discover the place is infested with bugs.
2. Search for chicory in sunny locations. Chicory normally grows in moist and cool conditions; you can find it in places like vacant city lots, fields, disturbed ground, gardens, and alongside rural roads. When collecting chicory, make sure not to do so where there is a private property sign; if you are unsure whether you have permission to harvest chicory somewhere, enquire with your local authorities.
3. Check that the plant is chicory. Chicory grows in different varieties, but it is mainly known for its ragged, bright blue petals, though sometimes they can be pink or white. The leaves are narrow and thin and look very much like dandelion leaves; the branches join the stem at intervals of about eight centimeters, and the stem, when mature, can be up to two centimeters in diameter. If you are not sure the plant you are about to harvest is chicory, you can consult an online plant database or field guide.

Step 2 of 4: Harvesting the Roots

1. Harvest chicory roots from autumn through spring.
For the best roots, plant chicory from late March to mid-May and harvest from the first day of September to mid-November.
2. Hold the head of the plant and pull upwards slowly. You can use a hand trowel to gently work the roots out of the ground without breaking or scattering them; the root might reach as deep as 65 cm, so dig gently until you can lift it free.
3. Preserve the roots. Cut off about two inches from the top of each root, which might not be usable, and store the roots in damp conditions for 12 to 13 weeks.
4. Prepare the roots for use. Clean the roots with a soft brush, then slice them into small pieces with a sharp knife on a cutting board. You can also use them for brewing or roasting, but before that you have to grind them into a fine powder with a really strong grinder.
5. If the need is urgent, force the roots in spring. Forcing vegetables is only effective when they are moved to an artificial growing environment. Here is how you can move them:
• Dig out roots that are at least 2 inches in diameter, and leave the leaves intact.
• Avoid brushing them, because it might lead to rot.
• Put them in a cool, dark place, such as in the garden, a greenhouse, or a box of sand.
• Leave them there, and do not let them freeze, until they are needed.

Step 3 of 4: Harvesting the Leaves

1. Trim young leaves in the autumn. Chicory leaves are edible and less bitter throughout the growing season, but after spring they become bitter.
• Look for leaves that are about 8 inches in height and cut them; harvest fully after about 70 days.
• If you want to use the leaves after spring, you can reduce the bitterness by boiling them.
2. Take the entire plant or just the top. A plant with the top broken off will either regrow or add nutrients to the soil, so use hand clippers to cut off the plant's top.
• If you decide to keep the whole plant, pull it up gently until the roots are completely out of the ground.
3. Wash the leaves very well. After harvesting the leaves, rinse them thoroughly under running water; you can wash them up to three times. After washing, wipe them with a paper towel and remove any dead leaves.
4. Dry the leaves. After thorough washing, shake the water off the leaves or, better still, put them in a basket to dry.
5. Preserve the leaves. Make sure the leaves do not freeze; you can keep them in a sealed plastic bag in the refrigerator for up to 10 days.

Step 4 of 4: Collecting the Seeds

1. Choose healthy plants to collect the seeds from. Select plants that have grown freely without being trimmed, and collect from the month of July, because that is the best period for seed collecting. Choose a dry morning, once the dew has gone, and harvest the seeds.
• Because of the small bees that enjoy chicory, you are advised to put on gardening gloves.
2. Extract the seeds from plants that have dried. Look for seeds hidden between bunches of leaves; you can use a needle or a pair of fine tweezers to extract the seeds carefully. Another way is to tap the seedpods repeatedly against an object; when you are done, remove the chaff and collect the seeds.
3. Dry the plants. Chicory plants can be dried the same way herbs are dried: tie them in small bunches near the end of the stems, and wrap them in a paper bag that will collect fallen seeds. Place them where there is enough airflow and no direct sunlight.

Wrapping Up

Chicory can be used for culinary and medicinal purposes. Whichever purpose you decide to use the plant for, follow the steps in this article to harvest it properly.
Thank you so much for taking the time to read this article. If you ever have a question about it, please leave a comment below and I will be happy to write back to you.
Natural Law and History Joe Biden believes there is something in the philosophical tradition called an “evolving view of natural law:” Natural law reasoning must be dynamic, capable of change. Only with expanding conceptions of “due process,” “equal protection,” and rights “reserved to the people” can the development of individual rights and liberties keep pace with the other changes in our country. Biden deployed this mythical philosophical creature in 1991 when trying to block Justice Clarence Thomas from sitting on the Supreme Court. People interested in ethical theory know that natural law is opposed to relativism and historicism. Contrary to the idea that justice is reducible to national will and its history, natural law is the theory that right and wrong do not change because what it means to be human does not change. It is the claim that reason discerns the moral implications of our most basic desires, determining that certain things must be avoided and others pursued. These moral-bearing inclinations are few, and the obligations linked to them are also few. For this reason, and because our social lives are complex, a natural law theorist acknowledges that these rational obligations must be supplemented by adjudications from positive law. Positive law—human and divine—does change with history. Tastes, mores, and circumstances evolve; deliberative bodies and judiciaries of varied polities craft, revise, and dispense with myriad laws as sensibilities and national conditions change. Natural law does not change, but human positive law does. (Even divine positive law can change. The Christian God, at least, posits laws which he then later changes—marriage laws, for instance). In natural law reasoning, therefore, there is considerable need for human law to supplement natural law. Thomas Aquinas argues, for instance, that natural law demands murder be punished, and punishment may include the death penalty. 
However, what punishment is exacted for murder is left to the polities of different lands to determine for themselves. Punishment is set by human law. Is human law infinitely elastic? No: some positive law determinations might conflict with the unchanging core of rational human nature. If a positive law clashes with those few, socially foundational obligations stemming from this fixed core, that law is unjust. Natural law trumps human law when push comes to shove. Natural law is the measure of the rationality of human law. This is natural law theory in outline. Joe Biden’s version is mythical and confused. James Carey’s Natural Reason and Natural Law offers an excellent account of natural law. I drew the above outline from it. Carey’s book is not for everyone. But for anyone who wants a genuinely meaty consideration of natural law and who has the patience to dwell on careful distinctions and close reading of texts, this is a very good resource. Athens and Jerusalem: Strauss on Aquinas Natural Reason and Natural Law has two distinctive features. First, its focus on Aquinas is complemented by a consideration of one of his most astute critics, Leo Strauss. Born in Prussia in 1899, Strauss taught political thought at the University of Chicago for many years and influenced many American political theorists (now known as Straussians). Carey covers some of Strauss’s students’ criticisms of natural law, too. The other distinctive feature is the inclusion of Martin Heidegger, one of Strauss’s interlocutors (who also happened to be an anti-Semite and National Socialist). Other than Wittgenstein, Heidegger is the most influential 20th-century philosopher. His account of nature, which many scholars find attractive, is a problem for both Aquinas and Strauss. Aquinas’s natural law attracted Strauss’s interest because of its attempt to give an account of justice that transcends particular political orders. Like Aquinas, Strauss opposed relativism and historicism. 
However, according to Strauss, Aquinas's natural law is not the purely philosophical discernment of natural justice that it purports to be. Philosophy is the exercise of autonomous understanding, but natural law is inescapably theological and does not speak "to man as man." The Catholic Straussian, Father Ernest Fortin, argues that this is so because law requires promulgation; it needs a lawgiver to make the law publicly known. Aquinas has smuggled God into his supposedly rational account of law, just as Strauss contends. Natural law is really piety. To this charge, Carey replies that Aquinas makes promulgation original to reason: As Aquinas argued, "the first common precepts of the law of nature are self-evident to one who possesses natural reason, and do not need to be promulgated." In his metaphysical work, Aquinas argued that "no reality lacks its specific operation." Quarks, gluons, and proteins even display a logic. Our species, homo sapiens, has its "specific operation," the rational articulation of appetites. Strauss in his 1968 "On Natural Law," accurately summarizes the Thomistic position: "Man is by nature inclined toward a variety of ends which possess a natural order; they ascend from self-preservation and procreation via life in society toward knowledge of God." As Carey explains: "man…could not be what he is without knowing natural law, just as he could not be rational without knowing the principle of non-contradiction." This is because "reason by its very nature is oriented toward determining both what is and what ought to be." Oriented to the true and good, reason stipulates "that good is to be done and pursued, and evil avoided." The goods to be pursued are self-preservation, procreation, society, and knowledge of God.
The first two inclinations, being the fundamental principles of natural selection, have had confirmation from Darwin, and primatology confirms the third. Put otherwise, reason identifies in humans’ animal appetites a legal framework: prohibitions against suicide, child abuse, and hatred, along with obligations to self-care, family, and rule of law are all derivable from such appetites. Legislators in all countries mull over laws respecting these themes all the time. As Aquinas says, “law is nothing else than an ordinance of reason for the common good.” In a letter to Eric Voegelin, Strauss writes that the Greeks show “that truly human life is a life dedicated to science, knowledge, and the search for it,” not deference to divine revelation. Fatefully, contends Strauss, Aquinas inherited both a Greek philosophical tradition and a legacy of Biblical ethics. This blending of Athens and Jerusalem is disastrous, for it not only weds law to “theology and its controversies,” but also obscures the difficulty of moral knowledge and the fraught decisions inevitable in politics. The problem with Thomistic natural law is that it does not take politics seriously enough. For Strauss, the relationship between reason and law is a delicate one: reason must not collapse into national passions but is nevertheless tasked with pragmatic judgements to defend the polity. In Carey’s presentation, Strauss prefers the Greeks because they understood that there are no “universally valid rules of action” dictated to political actors. This is just as well, for events sometimes require political leaders to make harsh decisions that are incompatible with the rules believed by Aquinas to be implicit in natural law (what he calls the secondary precepts of natural law). For example, Aquinas and Thomistic jurists like Francisco de Vitoria elaborated a theory of just war with protections for innocents. 
Natural law, counters Strauss, thereby makes nations vulnerable, tying the hands of leaders confronting acute, even existential, events. Decisions, not laws, are sometimes basic. Heidegger or Aquinas? But Strauss has two problems. First, Carey argues that Strauss overstates the place of God in Thomas’s natural law reasoning. The obligation respecting a knowledge of God merely posits that the question of God must be addressed by law. Archeology favours Aquinas. An ivory figurine known as the Lion-Man found in Germany and dating to 35,000 years ago is the earliest indisputable work of art so far found. It is also thought to be the first known religious artefact. At a minimum then, the theologico-political problem is inescapable. This means, against Strauss, that a religious pressure always exists on the statesman, curbing his scope of action. The gods will have a say, too. Strauss’s second problem comes from Heidegger. Heidegger is one of the founding figures of phenomenology. Through his novel approach to experience, Heidegger claims careful attention shows that natural law is not remotely natural. That is, it is no part of original human experience. Like Strauss, Heidegger also returned to the Greeks but proved the better historian. For the Greeks, the world was a harsh battle for recognition: life was a competition for glory. Heidegger writes: This was clarified through the highest possibility of human Being, as the Greeks formed it, through glory and glorifying… Glory is the repute in which one stands. Heraclitus says, “for the noblest choose one thing above all others: glory, which constantly persists, in contrast to what dies; but the many are sated like cattle.” Strauss returned to Greece to torpedo Aquinas only to find himself outflanked by Heidegger. 
As Carey puts it, Heidegger showed the “world as intrinsically non-rational.” The idea of rational nature which Aquinas and Strauss held dear, was, Heidegger argued, just an abstraction crafted by Western philosophy. In fact, what made both philosophy and natural law possible was a decline, an idealization of nature born of theology. Heidegger worries Strauss because his thinking exposes Strauss to be as pietistic as Aquinas. Strauss also appreciated the urgency of the challenge: he was under no illusions about his teacher, writing that Heidegger “explicitly denies the possibility of ethics.” Ethics is not original to a chaotic, violent world or the Greek effort to restrain it through the celebration of struggle and vaunting of the self. Heidegger or a Thomistic nature kindly displaying a logic discernible by reason and science? There is only one option able to sustain the rule of law. The harmonies described by the thinkers of the Enlightenment—often interpreted as undermining natural law—are in continuity with Aquinas. But Carey ends his book fearful that Strauss, for all his moral earnestness in paring back natural law, exposed justice to the cruelties of ancient and modern strife.
In C.S. Lewis's Narnia books, it's very clear that the Narnians are meant to represent Christianity, with Aslan symbolising Jesus (in fact, Aslan is literally Jesus in-universe), while the Calormenes are meant to represent Islam. Which invites the question: what about Judaism? (Yes, I know that other religions, such as Hinduism and Buddhism, are practised more in the world in general than Judaism, but Christianity, Islam, and Judaism are the religions most likely to be familiar to Lewis and his western audience.) I seem to remember reading somewhere that the dwarfs could be interpreted to represent Judaism, but I'm not sure what this was based on. At least in The Last Battle, the dwarves seem more like atheists than anything else, although the role of atheists could also be given to the Telmarines in Prince Caspian. Were any characters in the Narnia books intended to represent Jews? Note: This does not in ANY WAY represent my own religious views. It's possible that C.S. Lewis meant for the Dwarfs to represent the Jews. At the end of The Last Battle, the Dwarfs refused to be 'taken in' by Aslan. It's possible that C.S. Lewis meant for this to represent the Jews refusing to believe in Jesus. The Jews didn't believe in Jesus. They don't think that he fulfilled the requirements to be the Mashiach (Messiah). I think I read that Christians believe that they'll get hell for that. And, in Narnia, the Dwarfs are refusing to believe in Aslan, leaving themselves to be stuck sitting there, believing that they are in a barn, for eternity. There's also the fact that the Dwarfs are often presented with beards (quotes eventually), and Jews often wear their beards long. After posting this, I had been doing some research on what Lewis thought of Judaism. I found Lewis's Trilemma, in which he says: ...As this is almost exactly what Judaism believes, that he was deified by his followers, he seems pretty critical of them. 
On this webpage, they make some points: First, in Luke 23:1-2, the Jews opposed Jesus being their Savior because they were fearful of Him. They feared Him because He did not follow their laws and how could their Messiah not respect their ways. Similarly, the dwarves were blinded by their fear like they were being held in a dark stable and could not escape. They could not see the paradise that Aslan created and that there were no doors at all on this stable. Secondly, the Jews did not believe in Jesus even though he saved lives and performed miracles in front of them. (John 12:37) In the book, Aslan performed a miracle in front of the dwarves. He made banquet food appear out of nowhere but the dwarves believed that the food was donkey food. Thirdly, to believe in God, one must give up all control. In the Bible, Jesus tells Nicodemus that he must give up all his worldly possessions to have salvation in Him. Nicodemus cannot give up this control. (John 3:1-21) In the book, the main problem for the dwarves is that they do not want Aslan to control them. So, ways in which the Dwarfs could represent Jews: 1. Their belief in Aslan's return. The Dwarfs have always been a little reluctant to believe in Aslan, or at least his return. Remember in Prince Caspian? "Oh, Aslan!" said Trumpkin cheerily but contemptuously. "What matters much more is that you wouldn't have me." -Prince Caspian, chapter 6 "But they also say that he came back again," said the Badger sharply. "Yes, they say," answered Nikabrik, "but you'll notice that we hear precious little about anything he did afterward. He just fades out of the story. How do you explain that, if he really came to life? Isn't it much more likely that he didn't, and that the stories say nothing more about him because there was nothing more to say?" -Prince Caspian, chapter 12 And the Jews don't believe that Jesus came back to life. 2. Not believing in miracles performed right in front of them. 
-The Last Battle, chapter 13

Apparently Jesus did miracles and they still didn't believe that he was the son of God.1 | 2

3. They are afraid of being 'taken in' and being controlled.

-The Last Battle, chapter 13

And apparently from the source I mentioned before, some guy did something like that in the Christian Bible?[1] He had to give up his money to do something, and he didn't want to? Also, the Dwarfs have always been old Narnians, even if they've had a rocky history with Aslan. They believe that Narnia is 'not a human country'. This is reminiscent of the Jews believing in one God, but not in Jesus. (Remember the note at the top? Not my views at all.)

Tl;dr: It's fairly likely that C. S. Lewis intended for the Dwarfs to represent Judaism from his point of view.

[1] My knowledge of Christian theology is very sketchy.

• Your final point about the faithful losing faith is used here to say that Lewis's use of that motif implies future redemption, which the dwarfs don't seem to have any possibility of. – BESW Jan 24 '17 at 5:23
• 6 I think all the quotes you've posted equally support the notion that the dwarves represent atheists. They almost all boil down to skepticism/disbelief of Aslan and his deeds. Feb 1 '17 at 15:30
• 8 It is not so much that they represent atheists per se as that they represent "men without chests", people who are so concerned not to be taken in that they cannot look at any claim on face value, but must always think themselves too clever to be taken in. It is this empty skepticism, rather than atheism per se, which Lewis represented as an intellectual defect in The Abolition of Man. Atheism was merely a consequence of this defect. Occam's razor suggests that identifying the Dwarves with the Jews is an unnecessarily complicated explanation. – user406 Feb 7 '17 at 5:28
• 4 I've just read through this answer again. As other commenters have said, is there anything to say that the Dwarfs were written to represent Jews as opposed to atheists?
They seem sceptical about Tash as well as Aslan. Jews believe in God; do Dwarfs believe in anything supernatural? – Rand al'Thor Feb 28 '17 at 15:13
• 1 @Randal'Thor - that's what the point about them being Old Narnians was. – Mithical Feb 28 '17 at 15:29

The Jews are represented in Narnia: it's the mice. The band of mice consists of twelve mice; the Jewish people consist of twelve tribes. The mice are the smallest animals in Narnia; the Jews are the smallest of all the peoples on the earth. In this world, Jews are often portrayed as mice, even though they're the opposite. David is represented also: he's represented by Reepicheep, the most valiant and bravest of all mice, and probably of all animals in Narnia. Still, he's soft and helpful to others, as you can read in how he, in his good heart, helped the obnoxious boy in The Voyage of the Dawn Treader while others minded their own business and laughed at him. He's a king also, king of the mice. He's the only Narnian that ever got to the land of Aslan's father without dying. David is represented further: in Narnia, Peter is High King over Narnia; King David is Supreme King over Israel. Peter was taken from "outside"; David was also taken from "outside", as a shepherd boy who minded his own business with his sheep and didn't bother anyone. King David was golden-haired; as you can see in the movie The Lion, the Witch and the Wardrobe, Peter's hair in England was dark blond but after coming to Narnia started to look golden. For Peter, Narnia is everything; he is the hero in the story that everybody counts on. With Israel that's David, a patriot of Israel at heart, the country belonging to his Father.
• I don't see how this could be right. Theologically, there is one large difference between Jews and Christians that C.S. Lewis certainly would not have neglected. Jews don't worship Jesus Christ. But the mice worship Aslan. – Peter Shor Nov 25 '18 at 12:34
• The early Christians were not Gentiles but Jews.
– bicycle Nov 25 '18 at 17:47
• @PeterShor Actually C.S. Lewis was very correct. The whole of Narnia is represented as Israel, and the Narnians as Jews. Between the time of Aslan being sacrificed and the time of Prince Caspian, Narnia got infiltrated by the Telmarines. With Israel that's the Gentiles and Arabs; they are not Jews. When Aslan returned he came primarily for the Narnians, not the Telmarines. The same will happen when Jesus returns: he'll come first for the Jews and Israel, as written in the Bible. – bicycle Nov 25 '18 at 23:36
• That comment confuses me. Are you claiming that all the talking animals represent early Christians who had formerly been Jews? Then why single out mice in your answer? – Peter Shor Nov 26 '18 at 20:40
• @PeterShor The whole Bible except for Revelation is almost entirely about Jews, not Gentiles or descendants of Ishmael. Narnia is represented as Israel, the country where Aslan sacrificed himself and, later in Prince Caspian, returned. I don't think it was C.S. Lewis's intention to create a literal translation of the Bible into Narnia, but to inspire people to Christianity and faith. David is a very significant figure in the Bible, so it's not hard to see why C.S. Lewis would include him even though he lived long before the story of Christ. – bicycle Nov 27 '18 at 15:59
Essay Example on The Hunger Games The ability to tell a story is a critical component in most Films The Hunger Games The ability to tell a story is a critical component in most films where viewers identify the narrative aspect as the most intriguing Through crucial examining of its content reader can identify elements that are significant in conveying certain messages These messages are communicated via the use of specific film techniques and devices All films deserve a closer look to identify both clear narrative and the hidden narrative The hunger games are divided into three parts namely the tributes the games and the victor It tells a story of a dystopian society that is ruled by an oppressive president Snow The president keeps the district separated and imposed severe class separation to discourage rebellion and discourage disunion among the citizens The Hunger Games tells of the journey of Katniss Everdeen who has to engage in a fight to the death game alongside other teenagers The movie centers around the main characters desire to challenge the president s oppressive regime and alter his rule Her character development her influence and action on the other aspects reveal her willingness to change the totalitarian government Husson 2016 Several types of meaning are derived from the movies These definitions are such as Referential meaning is the general meaning of the film The bravery of the protagonist of volunteering to take her sister s place to protect her and she emerges victorious Secondly the explicit meaning is where Katniss is faced with the reality that she has to win the hunger games not just for her sister but her entire family since she is the sole provider These are much other meaning can be derived from the story  The Mocking Jay Pin The mocking jay pin appears several times in the movie although its meaning is not explicitly explained to the audience The mocking jay pin is a pin that is worn by Katniss for the period of the Hunger Game The 
Mockingjay is a bird descended from the Capitol's genetic engineering: the Capitol created jabberjays, which later bred with mockingbirds to produce the mockingjay. The bird lacks a backstory in the film; instead, the audience is left with an indirect indication of its symbolic significance (Collins 2008). The vendor gives Katniss the pin for free but looks at her with a concerned expression. Katniss then gives her sister the pin as a form of protection, and after she is chosen to take part in the Hunger Games, her sister Prim returns the pin to her. The pin comes back to the scene when Katniss's stylist, Cinna, secretly pins it on her jacket. The pin is used as a signal to let others know that they are safe; across these appearances, the mockingjay represents safety and protection. The film does not give the full account of the pin that the novel does, although it shows the pin's importance throughout. Katniss's mockingjay pin, like the bird itself, represents a creature with its own spirit. Just as the birds broke away from the control of the Capitol, the pin suggests that the Capitol can no longer force its power onto the districts; the main character herself is a mockingjay.

The Tribute Parade

Similarities and repetitions in the film serve as a critical principle in the development of the plot. "May the odds be ever in your favor" is a phrase that is repeated several times in the film, especially at the tribute parade. The phrase first appears with Katniss and Gale in the woods and reappears during the reaping ceremony, where all the districts assemble to hear who has been selected to take part in the Hunger Games. During the tribute parade, Cinna, her stylist, makes sure that she looks spectacular, since a person's appearance at the Hunger Games has a very significant effect: it can attract fans or sponsors who could provide gifts during the games. Katniss ended up being one of the most notable among the group and became known as
"the girl who was on fire." The Hunger Games triggers more than a single emotion and meaning. Despite the audience's love for the primary character's willingness to put herself in danger to save others, the audience also gets annoyed at how many times she puts herself at risk to protect others, such as Peeta. Due to her appearance at the tribute parade, Katniss draws a massive amount of energy from the audience, and thanks to Cinna she attracts a certain number of fans. The film insists on the importance of appearance during the opening ceremony, and the events that surround the games underscore that importance: she has to fight through appearances in order to survive the game. Cinna's appearance, on the other hand, differs from that of his contemporaries; most people in the Capitol follow outlandish fashion trends, while Cinna is simple and very minimal, revealing the difference.

Works Cited

Collins, S. (2008). The Hunger Games. Scholastic Press, 25-54.

Husson, W. (2016). Techniques for the Construction of Meaning and the Elicitation of Emotion in The Hunger Games. An Analysis of Techniques and Emotions, 5-20.

Simmons, Amber M. "Class on Fire: Using the Hunger Games Trilogy to Encourage Social Action." Journal of Adolescent & Adult Literacy 56.1 (2012): 22-34.
Social Studies – Part 4 (WWI) and Part 6 (WWII) Question Preview (ID: 31477) 02/23/16 Ish.

What event led to a decisive shift from isolationism in the United States? a) the attack on Pearl Harbor b) the discovery of Auschwitz c) the sinking of American ships by German submarines d) the Battle of Britain

The quote "A date which will live in infamy" is attributed to _____________________. a) Winston Churchill b) Harry Truman c) Franklin D. Roosevelt d) Hideki Tojo

The headline "FIRST ATOMIC BOMB DROPPED ON JAPAN" refers to the bombing of ____________________. a) Hiroshima b) Tokyo c) Nagasaki d) Berlin

What was the outcome of WWII? a) The communists gained control over most of Western Europe. b) Japan and Germany became the dominant military powers in their regions. c) The Soviet Union emerged as an international superpower. d) England and France increased their overseas colonial possessions.

The tools hammer and sickle are associated with: a) communism b) democracy c) fascism d) Stalinism

The Yalta Conference occurred near the end of World War II, but marked the beginning of serious tensions between _______________________. a) Europe and Asia b) U.S. and Germany c) USSR and the U.S. d) Russia and China

Identify one issue soldiers faced in the trenches and describe the extent of the problem. a) f

Identify one punishment placed upon Germany by the Versailles Treaty. a) reparations in the amount of $300,000,000 b) reparations in the amount of $150,000,000

Identify one technological advance introduced during WWI and the impact it had on war. a) f

Identify one measure taken at home to help win the war effort. a) f
What Does OVA Mean in Anime? OVA stands for Original Video Animation (sometimes written as "original video anime"; the closely related term OAD refers to an original animation DVD). An OVA is an anime episode or film released direct to home video rather than broadcast on television. OVA episodes accompany a great many anime series, but they are typically side stories or extras rather than part of the main broadcast run, and whether a given OVA is canon varies from series to series.
How Maintaining Optimal Hydration Improves Player Performance

Sport and Athlete Hydration

In sport and strength training, maintaining an optimal hydration status has been shown to improve both performance and recovery outcomes. Depending on your sport, uniforms, equipment and length of activity all play an important role when it comes to dehydration among players. Dehydration can have serious implications and a significant impact on a player's performance at practice or on game day. To make sure your players stay properly hydrated, there are a few steps that should be taken before, during, and after exercise.

"Staying hydrated increases energy; improves movement, recovery, agility and thermoregulation; and aids in mental clarity and activity – all of which can improve physical performance and reduce the risk of injuries," says Noel Williams, a registered dietitian and board-certified specialist in sports dietetics. By applying the following best practices for hydration, you'll ensure your players are performing at their best.

It's important to start the workout well hydrated, as this assists the body's ability to pump blood through blood vessels to muscles, improving muscle efficiency. Caffeine ingested pre-exercise can also have a beneficial effect on performance, including both power output (explosive sports) and sustained maximal endurance activity (aerobic sports). Specifically, caffeine can be taken in the form of 1.5 to 2.0 cups (~250 ml each) of coffee, ingested around 60 minutes pre-exercise.

"Almost every measurement of performance – aerobic endurance, strength, power, speed, agility and reaction time – decreases with as little as 2% dehydration," explains Williams. The benefits of staying hydrated during activity therefore include improved muscle function, regulated blood pressure and body temperature, and improved circulation.
By monitoring performance over time with hydration records, as well as using SPT GPS to track the physical load of players, we are able to minimise the risk of injury. Combining both sets of data, we can ensure players are, firstly, conditioned for the intensity of a training session or match and, secondly, properly hydrated to help decrease muscle fatigue. With improved blood flow through optimal hydration, the delivery of oxygen and nutrients to working muscles increases, and the removal of metabolic by-products and waste from muscles through sweat is aided. "Staying hydrated replaces the water lost through sweating and is essential for thermoregulation, helping to prevent cramps, heat exhaustion and heat stroke," says Williams. Players could also use a carbohydrate mouth rinse during exercise, which has been shown to improve subsequent performance outcomes, especially in ultra-endurance sports, while minimising the gastrointestinal symptoms that can come with more concentrated carbohydrate solutions.

After any exercise, players should hydrate with volumes of fluid greater than what they lost during the exercise: for example, for every 1 kg of body mass lost during exercise, 1.5 L of fluid could be ingested post-exercise. This intake should be spread over several hours; ingesting large fluid volumes in a short period can harm the body by diluting sodium levels in the blood. While plain water is not considered the optimal rehydration drink when consumed on its own, it is likely to be effective if consumed with a meal containing adequate electrolytes, such as sodium, potassium, magnesium and calcium. Replacing these electrolytes in the form of a meal – or, perhaps more conveniently, sports drinks – can achieve not only effective re-establishment of body water but also retention of the ingested water.
Along with sports drinks, skimmed and/or full-fat milk appears to be particularly effective for rehydration, an effectiveness attributed to its sodium, carbohydrate and protein content. Interestingly, when post-exercise milk consumption is combined with strength training, greater increases in muscle mass have been observed. Finally, consuming tart cherry juice concentrate post-exercise can reduce inflammation, which may accelerate recovery. For strength training, however, this may be less appropriate, because the normal exercise-induced inflammation that occurs after exercise is an important part of strengthening and building muscle.

Practical application of hydration for players – an example match-day weigh-in/weigh-out rehydration protocol:

1. Empty your bladder.
2. Weigh in pre-match, before your warm-up.
3. Post-match, remove excess sweat with a towel and weigh out.
4. Consume 1.5 L of fluid for every kilogram of body mass lost during the match.
5. Do not attempt to drink it all at once – consume your recommended amount over the first 2-6 hours post-match.
6. For additional protein, carbohydrates and electrolytes, you may consume milk and/or sports drinks alongside plain water to make up your recommended amount.
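The weigh-in/weigh-out arithmetic in the protocol above can be sketched as a small helper. The function name and structure here are illustrative only, not from any particular team's software:

```python
def rehydration_litres(pre_match_kg, post_match_kg, ratio=1.5):
    """Post-match fluid target: ~1.5 L per 1 kg of body mass lost,
    per the rule of thumb above. Returns litres, to be sipped over
    the first 2-6 hours rather than drunk all at once."""
    mass_lost_kg = pre_match_kg - post_match_kg
    if mass_lost_kg <= 0:
        return 0.0  # no net sweat loss recorded; no deficit to replace
    return round(mass_lost_kg * ratio, 2)

# Example: a player weighs in at 78.4 kg and out at 77.2 kg
print(rehydration_litres(78.4, 77.2))  # → 1.8
```

So a 1.2 kg loss maps to roughly 1.8 litres of fluid, spread over the hours after the match.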
Introduction to Apache Spark: Big Data Analytics Simplified

Apache Spark is a "lightning-fast unified analytics engine" for large-scale data processing. It can be used with cluster managers such as Mesos, Hadoop YARN and Kubernetes, or as a standalone cluster deployment, and it can access data from a broad variety of sources including the Hadoop Distributed File System (HDFS), Hive and Cassandra. In this blog, we'll discuss Spark, its libraries, and why it has become one of the most popular distributed processing frameworks in the industry.

Spark Core

Spark is up to 100 times faster in memory and 10 times faster on disk than the traditional Hadoop MapReduce paradigm. How is Spark so fast? Spark Core is a distributed execution engine built from the ground up in the Scala programming language, which is well suited to concurrent execution, a significant trait for developing distributed systems like a compute cluster. Spark also gets a speed boost from the RDD (Resilient Distributed Dataset): a fault-tolerant data structure that manages data as an immutable, distributed collection of objects. RDDs make logical partitioning of datasets, parallel processing and in-memory caching simple, giving a more effective way to handle data than MapReduce's sequential, disk-write-heavy map and reduce operations.

Spark SQL

Spark SQL allows you to process structured data through SQL and the DataFrame API. A DataFrame arranges data into recognizable named columns, similar to a relational database table. It supports:

• Data formats like Avro, Cassandra and CSV.
• Storage systems like Hive, HDFS and MySQL.
• APIs for Scala, Java, Python and R.
• Hive integration, with HiveQL syntax, Hive SerDes and UDFs.

An integrated cost-based optimizer, code generation and columnar storage make queries fast, and Spark SQL takes full advantage of the Spark Core engine, allowing you to handle multi-hour queries across thousands of nodes.
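The RDD ideas above — partitioned data, lazily recorded transformations, and actions that trigger evaluation — can be illustrated with a toy sketch in plain Python. This is a conceptual illustration only, not the Spark API; `ToyRDD` and its methods are invented for this example:

```python
from functools import reduce

class ToyRDD:
    """Conceptual sketch of an RDD: a partitioned collection with
    lazy transformations. Not the Spark API."""

    def __init__(self, data, partitions=2):
        n = max(1, len(data) // partitions)
        self.parts = [data[i:i + n] for i in range(0, len(data), n)]
        self.ops = []  # transformations are recorded, not executed

    def map(self, fn):
        self.ops.append(fn)  # lazy, like Spark's map transformation
        return self

    def reduce(self, fn):
        # An action triggers evaluation: apply the deferred maps to
        # each partition, reduce within partitions, then combine the
        # per-partition partial results.
        def eval_part(part):
            for op in self.ops:
                part = [op(x) for x in part]
            return reduce(fn, part)
        return reduce(fn, (eval_part(p) for p in self.parts))

rdd = ToyRDD([1, 2, 3, 4], partitions=2)
print(rdd.map(lambda x: x * x).reduce(lambda a, b: a + b))  # → 30
```

In real Spark the partitions live on different machines and the deferred operations form the lineage graph used for fault recovery; the shape of the computation, however, is the same.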
Spark Streaming

From social network analytics to video streaming, IoT devices, sensors and online transactions, the demand for tools that help you process high-throughput, fault-tolerant, live data streams is continually rising. The Spark Streaming module gives an API for receiving raw unstructured input data streams and processing them with the Spark engine. Data can be ingested from various sources:

• HDFS/S3
• Flume
• ZeroMQ
• Kafka
• TCP sockets
• Kinesis

Industry examples of Spark Streaming are many. Spark Streaming has helped Uber manage the terabytes of event data streaming off its mobile users to offer real-time telemetry for passengers and drivers.

Spark MLlib

It turns out that cluster computing and machine learning are a natural union, and Spark's MLlib is a great way to make that happen. MLlib offers a way to use machine learning algorithms such as clustering, classification and regression with Spark's fast and well-organized data processing engine.

GraphX

Spark uses graph theory to represent RDDs as vertices and operations as edges in a directed acyclic graph (DAG). GraphX extends this core feature with a complete API for manipulating graphs and collections, with support for common graph algorithms like SVD++, PageRank and label propagation.

If you are keen to learn the fundamentals of Apache Spark, consider joining an Apache Spark course through a reputable institution, as they will have the faculty and resources to explain the concepts and facilitate learning.
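To make one of the graph algorithms mentioned above concrete, here is a toy single-machine PageRank via power iteration. It sketches the algorithm itself, not the GraphX API (GraphX distributes this same computation across a cluster); the function name and edge-list representation are choices made for this example:

```python
def pagerank(edges, n, damping=0.85, iters=50):
    """Toy power-iteration PageRank over a directed edge list of
    (src, dst) pairs for n nodes, illustrating the algorithm that
    GraphX implements at cluster scale."""
    out = [[] for _ in range(n)]
    for src, dst in edges:
        out[src].append(dst)
    rank = [1.0 / n] * n
    for _ in range(iters):
        nxt = [(1.0 - damping) / n] * n
        for src in range(n):
            if out[src]:
                share = damping * rank[src] / len(out[src])
                for dst in out[src]:
                    nxt[dst] += share
            else:  # dangling node: spread its rank uniformly
                for dst in range(n):
                    nxt[dst] += damping * rank[src] / n
        rank = nxt
    return rank

# A 3-node cycle (0 -> 1 -> 2 -> 0) is symmetric, so ranks
# converge to roughly 1/3 each.
ranks = pagerank([(0, 1), (1, 2), (2, 0)], 3)
print([round(r, 3) for r in ranks])  # → [0.333, 0.333, 0.333]
```

The per-node "share" updates are exactly the kind of message-passing along edges that GraphX parallelizes across partitions of the graph.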
struggle with the grim phantasm, FEAR" (Poe, 1839: 4). Poe explores Usher's fear of the terror of death that will come to him.

c. Madness

The other idea of the story is madness. The dictionary definition of madness is mental illness: a condition of severe mental disorder. In the story, Usher and Madeline suffer from mental illness characterized by anxiety and depression, a condition marked by muscle rigidity and temporary loss of consciousness and feeling for several minutes. It is conveyed in the quotation below:

"There were times, indeed, when I thought his unceasingly agitated mind was labouring with some oppressive secret, to divulge which he struggled for the necessary courage. At times, again, I was obliged to resolve all into the mere inexplicable vagaries of madness, for I beheld him gazing upon vacancy for long hours, in an attitude of the profoundest attention, as if listening to some imaginary sound. It was no wonder that his condition terrified--that it infected me. I felt creeping upon me, by slow yet certain degrees, the wild influences of his own fantastic yet impressive superstitions." (Poe, 1839: 10)

The story tells that Usher "entered, at some length, into what he conceived to be the nature of his malady" (Poe, 1839: 4). What exactly his malady is, we never learn. It can also be seen in this quotation:

"Its proprietor, Roderick Usher, had been one of my boon companions in boyhood; but many years had elapsed since our last meeting. A letter, however, had lately reached me in a distant part of the country--a letter from him--which, in its wildly importunate nature, had admitted of no other than a personal reply. The MS. gave evidence of nervous agitation."
"The writer spoke of acute bodily illness--of a mental disorder which oppressed him--and of an earnest desire to see me, as his best, and indeed his only personal friend, with a view of attempting, by the cheerfulness of my society, some alleviation of his malady." (Poe, 1839: 1)

Even Usher seems uncertain, contradictory in his description: "It was, he said, a constitutional and a family evil, and one for which he despaired to find a remedy--a mere nervous affection, he immediately added, which would undoubtedly soon pass off." (Poe, 1839: 4). The Narrator notes an inconsistency in his old friend, but he offers little by way of logical explanation of the condition. As a result, the line between sanity and insanity becomes blurred, which paves the way for the Narrator's own descent into madness.

d. Premature Burial
The question: How did Aztec society differ from Inca society? The best answer to this inquiry would be: the Inca domesticated animals (llamas) for transport purposes; the Aztec didn't. These animals did not live in the home the way pets do in modern society; they were used to transport goods from one place to another, which gave the Inca an advantage in trade.
Obama in the Middle East. The Great Game.

Under the tsars of the 19th century, Russia greatly extended its territories.[1] Some incidents in this expansion caught the attention of Westerners: the "Great Game" played between Britain and Russia in Afghanistan and Persia (now Iran); Japan's humiliating defeat of Russia in 1905; and the rivalry in the Balkans between Russia and Austria-Hungary that helped bring on the First World War. Less noticed, at the time and since, Tsarist Russia conquered many small Muslim states in Central Asia. This gave Russia, and later the Soviet Union, a huge Muslim population. What was to become of these people if Russia, and later the Soviet Union, broke up? As with Russia's original expansion into the region, recent events here have not been much noticed by Western media or much discussed by Western officials. For both the Russkies and the local peoples, however, the issues are important.

One example comes from the Turkic region. Back in the First World War, the Ottoman government had vast visions of a Central Asian empire encompassing the Turkic peoples inside the Russian Empire. Defeat in war and the victory of the Communists in the Russian Civil War put paid to that fantasy. After the collapse of the Soviet Union, many of the Turkic peoples created the various "stans" as independent states. Turkey revived its dreams of extending its influence throughout the region, spreading that influence by fostering cultural, educational (lots of exchange students), and business (investment) connections.[2]

However, the particular emphasis—"pro-Muslim Brotherhood, rather than pan-Turkic"—given to this long-term effort by Turkish President Recep Tayyip Erdogan began to rankle. Russia remains far more important in the region than is Turkey, and attitudes toward Islam are more varied among the Turkic peoples than Mr. Erdogan's own preference. So problems had been developing.
Then the Turks—foolishly—shot down a Russkie fighter jet that had briefly overflown Turkish territory while attacking Syrian rebels. The Russkies weren't too pleased: they slammed on all sorts of sanctions, and Russian police and immigration officials continually harass Turks working in or visiting businesses in Russia itself. Turkic Russians resist burning bridges.

Another example comes from Chechnya.[3] Russia fought several gory wars to retain possession of the little territory in the North Caucasus, then put in a former rebel, Ramzan Kadyrov, as the ruler. Since then, the government has "Islamized" Chechnya: it is almost impossible to buy alcohol, women wear the hijab, and the mosques are packed. However, Chechnya's Islamists are Sufis rather than Wahhabists. Saudi Arabian-sponsored Wahhabism is what inspires ISIS and similar movements, among them the jihadis who initially fought for Chechen independence from Russia.[4]

There are two points worth pondering. First, Turkey is a member of NATO. Do the Russians have a right to think of Erdogan's forward policy among the Turkic peoples—like tighter links between the European Union (EU) and Georgia or Ukraine—as a hostile act? Second, have the Russians found a means of defusing radical Islam by embracing an equally intense, but less radical, version?

[1] There is a greater similarity here to the simultaneous expansion of the British Empire and to American "Manifest Destiny" than English-speaking peoples like to admit.

[2] Yaroslav Trofimov, "Turkey's Rift With Russia Frays Ties With Turkic Kin," WSJ, 24 June 2016.

[3] Yaroslav Trofimov, "Under Putin Ally, Chechnya Islamizes," WSJ, 3 June 2016.

[4] See, for example, https://en.wikipedia.org/wiki/Shamil_Basayev and https://en.wikipedia.org/wiki/Ibn_al-Khattab

Campaign Issues 2016 3.

Campaign Issues 2016 2.

Republicans say that the "War on Poverty" has been lost.[1] Democrats say that it hasn't been won, yet.
According to the New York Times, the conservative stereotype of poor people is that they're criminals or they're lazy.[2] According to conservatives, the conservative stereotype of poor people is that they're intelligent and entrepreneurial, but that liberals have created a set of incentives to dependency. Is there any indication of who is more nearly correct?

According to the Census Bureau,[3] in 2011 there were 76 million families. Of these, 55.5 million consisted of married couples and 20.5 million consisted of Other families. Among those Other families, 5.4 million were male-headed and 15.1 million were female-headed. So, 73 percent were married couples and 27 percent were Other families. Among Other families, 73.6 percent were female-headed households and 26.4 percent were male-headed households.

White, non-Hispanics accounted for 52 million of the households. Of these, 41.5 million consisted of married couples and 10.5 million consisted of Other families. Among those Other families, 3 million were male-headed and 7.5 million were female-headed. So, 80 percent were married couples and 20 percent were Other families. Among Other families, 71 percent were female-headed households and 29 percent were male-headed households.

African-Americans accounted for 8.7 million of the households. Of these, 3.8 million consisted of married couples and 4.9 million consisted of Other families. Among those Other families, 800,000 were male-headed and 4.1 million were female-headed. So, 43 percent were married couples and 56 percent were Other families. Among Other families, 83 percent were female-headed and 17 percent were male-headed.

Married couples are much less common among African-Americans (43 percent) than among White non-Hispanics (80 percent) or the national average (73 percent). Other families are much more common among African-Americans (56 percent) than among White non-Hispanics (20 percent) or the national average (27 percent).
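The percentages in the census breakdown above follow directly from the household counts. A quick way to check the arithmetic (the function name and structure are mine; the input figures, in millions of households, come from the 2011 Census Bureau data the post cites):

```python
# Recompute the household shares from the raw counts (in millions).
def shares(married, male_other, female_other):
    total = married + male_other + female_other
    other = male_other + female_other
    return {
        "married_pct": round(100 * married / total, 1),
        "other_pct": round(100 * other / total, 1),
        "female_share_of_other_pct": round(100 * female_other / other, 1),
    }

print(shares(55.5, 5.4, 15.1))
# → {'married_pct': 73.0, 'other_pct': 27.0, 'female_share_of_other_pct': 73.7}
print(shares(41.5, 3.0, 7.5))   # White, non-Hispanic households
print(shares(3.8, 0.8, 4.1))    # African-American households
```

The results land within a point of the rounded figures quoted in the text.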
Female-headed households are somewhat more common among African-Americans (83 percent of Other families) than among White non-Hispanics (71 percent) or the national average (73.6 percent). African-Americans account for 27.1 percent of all female-headed households, while accounting for about 14 percent of the population.

Current anti-poverty programs include food stamps, housing subsidies, and various tax credits like the earned-income tax credit and the child tax credit. People can obtain these benefits provided that they remain poor: raise your income and lose the benefits.

Back in 1965, Daniel Moynihan published The Negro Family: The Case for National Action.[4] He concluded that "The steady expansion of welfare programs can be taken as a measure of the steady disintegration of the Negro family structure over the past generation in the United States." In short, Uncle Sam displaced black fathers. While there is a lot to criticize here, it is also possible to argue that part of poverty is volitional: don't have kids outside of marriage; stay in school and don't disrupt class, then go to a community college; get a job, even if it is a crummy one; then trade up to better jobs. This issue will not be discussed in the 2016 election.

[1] Oddly, they never say that about the "War on Drugs." https://www.youtube.com/watch?v=j3SysxG6yoE It can be argued that the War on Drugs and the War on Cancer were Republican distractions or alternatives to the War on Poverty.

[2] David M. Herszenhorn, "Antipoverty Plan Skimps on Details and History," NYT, 15 June 2016.

[3] See: https://www.census.gov/prod/2013pubs/p20-570.pdf

[4] See: https://en.wikipedia.org/wiki/The_Negro_Family:_The_Case_For_National_Action.

Campaign Issues 2016 1.

Currently, Social Security faces two fundamental problems.[1] One fundamental problem is that Social Security is based on a "pay-as-you-go" model: withholding taxes from people who are working pay for the retirement of people who are no longer working.
Fine. If there are a lot of people working and a smaller number not working, then the system functions smoothly. What if the number of people working declines relative to the number of those who are not working? That's more of a problem: taxes on those still working will have to rise to pay for those no longer working. That is the situation in which Americans find themselves as the "Baby Boom" generation passes out of the work force and into the work-for-me force.

This problem has been around for a long time, and people in authority have been trying to devise a solution for a long time. In 1983 a bi-partisan commission investigated solutions, and Congress followed the commission's recommendations by raising taxes and extending the age of full eligibility. That fixed the problem for a while, but, of course, "I'm back!" In a report of 2015, the trustees reported that the Social Security trust fund will go broke in 2034, with the Social Security Administration then able to pay less than 79 cents on the dollar of benefits. In 2011-2012, President Barack Obama sketched a budget compromise agreement in which Social Security would be continually eroded by inflation; the Republicans weren't buying this idea. Another solution, which could be combined with de-coupling Social Security benefits from the inflation index, would be to raise the cap on withholding taxes. Currently, only income below about $134,000 a year is subject to withholding, and raising that ceiling would generate a lot of revenue. Taken together, these proposals probably offer a manageable means to solve the Social Security problem for the immediate future.

A second fundamental problem is that Social Security was never designed to be a full retirement pension. It was meant to provide a basic income for retirees, who were expected to save from current income to pay for the bulk of their future retirement needs.
However, many members of the “Baby Boom” did not do any significant saving for their retirement. Now, under the influence of the Bernie Sanders campaign, the Democrats have come out for expanding Social Security to make its benefits more generous.  Hillary Clinton has pledged to increase benefits for widows and for those who stop working to be care providers for children or sick family members; to resist reduction of cost-of-living increases; and to resist increasing the age for full eligibility.  She would pay for these increased benefits through higher taxes on the wealthy.  Still, even these proposals don’t go as far as the left wing of the party wants.  President Obama has remarked that “a lot of Americans don’t have retirement savings [and] fewer people have pensions they can really count on.”  How to make up for this lifetime lack of thrift? Current proposals include increasing the benefits for all recipients while providing additional benefits for the uncertain number of the “most vulnerable”; and/or increasing cost-of-living adjustments to include medical costs. Several questions arise out of these problems.  First, which “Baby Boomers” did not save and why did they not save?  Moral recriminations are going to be a part of this debate.  Second, what are these proposals likely to cost?  Third, how large a share of the well-off will have to be taxed more heavily?  Just the “1 percent” or the “5 percent” or anyone who did manage to save?  Fourth, do Americans want to transition Social Security from the current partial pension system to a full-blown national retirement system?   What would a long-term system require? [1] Robert Pear, “Driven by Campaign Populism, Democrats Unite on Social Security Plan,” NYT, 19 June 2016. Saudi Arabia and 9/11. The Rise and Decline of Nations. 
The Islamic Brigades III.

Omar Mateen, the Orlando Islamist homophobe mass murderer, appears to have been deranged from youth.  Different groups have sought to interpret the massacre to serve their own ends.[1]  Republicans harp on the danger from "radical Islam."  President Obama excoriates American gun laws.  Gay rights groups trace the line from Stonewall to Orlando.  All this is great for an "Inside Baseball" approach to politics.  Does it solve any of our problems?  No.

Currently, it is all the rage to remark that ISIS exerts a global influence through both its propaganda and the reality of its military threat to Syria and Iraq.  This, the argument runs, inspires "lone wolf" attacks.  However, the "shoe bomber," the "underwear bomber," the London transit bombers, and the Madrid train bombers all struck before ISIS was so much as a twinkle in the eye of Abu Bakr al-Baghdadi.  Stamp out ISIS and some new source of inspiration will arise.

Both traditional diplomats and modern military intelligence analysts have always sought to understand the "capabilities" of other states, rather than their "intent."  "Intent" can change pretty rapidly, so understanding "capability" is much more useful in interpreting the strategic environment.  Peter Bergen, the author of United States of Jihad: Investigating America's Homegrown Terrorists (2016), describes FBI behavioral analysts as doing something similar.  They analyze where a subject appears to be on a "pathway to violence."  Neither of the two earlier FBI investigations of Omar Mateen had given any reason to believe that he had advanced far down the "pathway."[2]  Then, a few weeks ago, Mateen suddenly began to shift from all talk toward action.
He purchased guns; he tried to purchase body armor and ammunition in bulk; he began visiting a number of public sites suitable for targeting large numbers of people.  What caused the apparently sudden acceleration down the "pathway"?  We don't know yet.

Terrorism scholars have concluded that the reasons terrorists attack are complex, but highly personal rather than standardized.  Indeed, the "soldiers" of ISIS may be "little more than disturbed individuals grasping for justification."[3]  Thus, Peter Bergen rejects simple answers.  In only 10 percent of the 300 cases he examined did the "terrorist" have any kind of identifiable mental problem.[4]  The share of them who had ever done time in prison was only slightly higher than the American national average.[5]  Radical Islam just pulls some people.  Why?

Instead of simple explanations, Bergen finds a pattern of complex factors.  There is likely to be hostility to America's Middle Eastern policy (our mindless support for Israel, our wrecking of Iraq and Libya).  At the core, however, he finds people who have suffered some kind of acute "personal disappointment" or rupture, like the death of a parent.  To take two examples, Nidal Hassan had few friends, no wife, and both his parents had died, while Tamerlan Tsarnaev had missed his punch in an effort to become an Olympic boxer.  Omar Mateen kept getting tossed out of school, losing jobs, and failing at marriage.  This, in turn, sends such people in search of something that will give their life meaning.  That can mean radical Islam.  So, are terrorists "failed sons"?

[1] Max Fisher, "Trying to Know The Unknowable: Why Attackers Strike," NYT, 15 June 2016.

[2] Obviously, this has nothing to do with the important questions, first, of whether someone with such a troubled life history should have been able to buy a firearm; and, second, whether anyone should be able to buy something like an AR-15 semi-automatic rifle.
[3] Fisher, "Trying to Know The Unknowable."

[4] Peter Bergen, "Why Do Terrorists Commit Terrorism?" NYT, 15 June 2016.

[5] Terrorists: 12 percent versus the American average: 11 percent.  However, extraordinarily large numbers of Americans have done time as a result of the War on Drugs, so this figure might look different if set in the context of incarceration rates in other advanced nations.

Millennial Falcons.

"Gen X" are the people born between 1965 and 1980.  "Millennials" (often thought of as "Gen Y"[1]) are the 75 million Americans born between 1980 and 2000.[2]  They out-number the famous "Baby Boomers."[3]  Stereotypes regarding "Millennials" abound: they have a sense of entitlement; they are self-indulgent; they are work-shy[4]; and they are rule-breakers.  Their presence and interests demand a response.[5]  Colleges and businesses are obsessed with the market power of this "demographic."

Farhad Manjoo[6] begs to differ.  First, "Macroscale demographic trends rarely govern most individuals' life and work decisions."  That means that any "generation" is actually just a big collection of individuals.  You can't really tell anything about the particular individual in front of you from their birth year or "cohort."

Second, generational succession is always accompanied by a sense of unease among the older generation and a sense of suppressed ridicule of their elders by the younger generation.  The "Greatest Generation" undoubtedly had grave reservations about the "Baby Boomers."  That unpleasant truth gets lost in the narrow focus on the right-now.

Still, there are common (if not universal) characteristics of "Millennials": they are socially liberal (they get married later, after cohabitating; they are more than OK with marriage equality; white people claim to know black people, and may even do so in a work-related context); they are 420-neutral-to-friendly; they are post-Snowden and post-"Searchlight" suspicious of institutions.
Even so, Republican "Millennials" are more socially conservative than Democratic "Millennials."

All this makes sense on a certain level.  However, as the critics of "macrodemographic" thinking say, the categories are just containers for many individuals or sub-categories.  For example, none of this explores the beliefs of the Republican "Millennials."  Similarly, polling data seems to suggest that Donald Trump pulls a certain segment of young people, even while the national media portrays his voters as—well, those tattooed guys with grey pony-tails on Harley-Davidsons that you see on Sunday drives in the far suburbs.

One particularly fascinating figure here is Victor Lazlo Bock[7], the head of human resources at Google.  The company runs all sorts of empirical data on its employees, who range in age from sweaty recent college graduates to geezers bored with retirement.  Bock claims that there isn't any significant difference between the generations, just differences between personality types within every generation.  "Every single human being wants the same thing…" says Bock.  "We want to be treated with respect, we want to have a sense of meaning and agency and impact, and we want our boss to leave us alone so we can get our work done."  How do we accomplish this in a small college?

[1] See: https://www.youtube.com/watch?v=mqQ8Y9Sjp7o

[2] Farhad Manjoo, "Companies In Pursuit Of a Mythical Millennial," NYT, 26 May 2016.

[3] On the other hand, the "Boomers" have a lot more money.

[4] Or what the Nazis would have called "asocials."  See: https://www.youtube.com/watch?v=EBn3FVWkuWM

[5] "And so say all of us."

[6] I know, sounds like an ISIS recruiter or that kid played by Dev Patel in "Marigold Hotel."  In reality, he's a media correspondent for the New York Times.

[7] HA!  Is joke.  His name is Lazlo Bock.  Paul Henreid played the Resistance leader "Victor Lazlo" in "Casablanca" (dir. Michael Curtiz, 1942).
Increased Radon Levels Are Typical During Colder Months, According to Albuquerque Inspectors

Radon gas tends to be more prevalent during the winter, says a local radon inspector.  Radon gas is naturally emitted from the earth into the atmosphere.  During the winter months, low barometric pressure along with high wind speeds is known to cause significantly increased radon levels.  Be sure to do your research and ask questions of the experts if you are selling your home and anticipate having a radon test completed as part of the sale terms.
Thermometrics Turbidity Sensors

The Thermometrics Turbidity Sensor TSD-10 measures the turbidity (amount of suspended particles) of the wash water in washing machines and dishwashers.  An optical turbidity sensor for washing machines measures the density of turbid water, or the concentration of extraneous matter in it, using light passed between a photo diode and a phototransistor.  By pairing an optical diode (the light source) with an optical transistor (the light receiver), the sensor measures how much of the emitted light reaches the receiver, and from that calculates water turbidity: the dirtier the water, the less light gets through.

*This product is not available for sale in Europe.
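To make the measuring principle concrete, here is a minimal sketch of how appliance firmware might convert the light-receiver reading into a turbidity estimate. The calibration points, voltage range, and function name are hypothetical illustrations, not values from the TSD-10 datasheet; real sensors are calibrated against reference fluids.

```python
# Illustrative only: map a turbidity sensor's output voltage to an
# estimated turbidity value.  Cleaner water lets more light reach the
# phototransistor, so the output voltage is higher.  The calibration
# table below is assumed, not taken from any datasheet.
CALIBRATION = [  # (output voltage in V, turbidity in NTU)
    (4.1, 0),     # clear water
    (3.0, 1000),
    (2.0, 2500),
    (1.2, 4000),  # very dirty water
]

def turbidity_from_voltage(v: float) -> float:
    """Linearly interpolate turbidity (NTU) from sensor output voltage."""
    points = sorted(CALIBRATION)  # ascending by voltage
    if v <= points[0][0]:
        return points[0][1]       # dirtier than the dirtiest reference
    if v >= points[-1][0]:
        return points[-1][1]      # clearer than clear-water reference
    for (v0, t0), (v1, t1) in zip(points, points[1:]):
        if v0 <= v <= v1:
            frac = (v - v0) / (v1 - v0)
            return t0 + frac * (t1 - t0)

print(turbidity_from_voltage(4.1))  # clear water -> 0
```

A washing machine's controller would poll such a function during the wash cycle and, for example, shorten the rinse phase once the estimate falls below a threshold.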
Regional differences complicate efforts to bring broadband to the Arctic

As part of its assessment of the telecommunications needs of the North, the Arctic Council's Task Force on Telecommunications Infrastructure in the Arctic is carrying out what is known as a "gap analysis."  Those familiar with management principles will know that this is a comparison of the current situation with the one that is desired.  The quick answer from those who wrestle with pokey connections is that much is desired.  But what the report will underscore when it is presented to the Arctic Council in May is that not all Northern communities are equally sluggish.  Residents of the Scandinavian and Russian Arctic, according to Bo Andersen, one of the task force's co-chairs, do fairly well.  "Many of these areas," he says, "are covered as well as other parts of the world."

There are several reasons for this.  Businesspeople point out that geography and demographics play a role: the Nordic countries are much smaller than Alaska or Canada's northern territories, and their people live closer to big population centres.  This reduces the difficulty and cost of getting them on-line.  Another reason is that, unlike in the North American Arctic, governments in these regions tend to view internet connections as a public service, rather like a bus route in a rural area.  "The service might not be commercially viable, but, for those who do use it, it is vital," Andersen says.  For countries like Norway, making sure there is a good internet connection in Northern areas, including Svalbard, is also a strategic decision that helps keep those communities viable.

That Europe's Arctic is well connected and North America's is not is well documented.  Most recently, the Arctic Economic Council, in its own assessment of the region's internet needs, found that only 27 percent of households in Nunavut have access to broadband, compared with 99 percent for Canada as a whole.
People in Norway's northernmost regions, by comparison, have access to fiber-optic networks and 4G mobile service at rates similar to the southern parts of the country.  The next step will be to see whether broadband can be extended to users offshore.

The AEC's recommendation – that "a regional broadband strategy encompassing eight countries with varying needs and degrees of development requires ambitious but flexible objectives" – underscores the challenge of coming up with a universal solution.  Andersen agrees.  The variability of the gap, as well as varying national priorities, means there likely is no single best way to expand coverage, he believes.

Even so, there are things the task force will suggest that decision-makers keep in mind when considering how to expand connectivity.  The first is that it is unrealistic to expect that there will be enough ordinary consumers to justify the expense of building telecommunications infrastructure in the North.  "A consumer model," he says, "is not commercially viable."  Governments can chip in the money to expand access, but where they are unwilling or unable to do so, the only other option is the private sector, either through purely commercial projects or through public-private partnerships, a hybrid model.

Given the sparse population of the North, however, any privately funded project will not be set up primarily to give homes and schools a faster connection.  It is true, for example, that Quintillion, a high-profile project that is currently laying fibre-optic cable along Alaska's northern coast, is expected to improve internet access there and in northern Canada.  The ultimate goal of the project, however, is to link Tokyo with London, to provide traders with faster transaction times.  In Alaska, where the first phase of Quintillion's project is being built, the cable's connections to land come ashore mostly in larger communities.  Plans for the Canada-to-London leg, the last of three sections, remain vague on where it will come ashore.
For now, the company is working to secure funding to complete the project.

Expanding mining activity is another way better internet access might arrive in the region.  If a mining firm establishes service with a telecoms operator, the connection can be used by nearby communities as well.  Those familiar with infrastructure say this situation is unique neither to the Arctic nor to the internet.  "If there is infrastructure in a remote area, it either exists because of a commercial development, or because it was a political priority to put it there," notes one executive involved with an Arctic internet project.

This is why one of the other things governments should do, Andersen says, is to think outside the Arctic.  "Other sparsely populated areas have the internet.  We should look at what they did to get it and ask if it is something we can copy."  On land, and even for some off-shore activities, fiber-optic cables are one possible solution.  In some areas, improved satellite service provides an alternative.  For ships, and for communities where laying a cable is impossible – such as Greenland's north-western coast, which is cut off from the country's underwater fiber-optic cable by a glacier-producing fjord – satellites may be the only option.

Arctic residents holding out for a faster connection may cringe at the thought of expensive, slow and intermittent satellite service, but there are a number of options that may have them changing their attitude.  One idea already in use is placing satellites in highly elliptical polar orbits.  Doing so allows them to keep a single area in sight for long periods of time.  Russia has been using these types of orbits – known as Molniya orbits, after the Soviet satellites that were the first to try them in the 1960s – to keep its Arctic region in sight for 11 hours of each 12-hour orbit.  The catch is that at least three satellites working in tandem are needed to guarantee uninterrupted coverage.
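A back-of-the-envelope check shows why a small number of phased satellites can close the gap.  The 12-hour orbit and 11-hour visibility window come from the figures above; the even phasing (each satellite a third of an orbit behind the previous one) is an assumption for illustration, not a description of the actual Russian constellation.

```python
# Back-of-the-envelope check of Molniya-style coverage.
# Assumption (illustrative): each satellite in a 12-hour orbit "sees"
# the Arctic region for 11 of those 12 hours, and three satellites are
# phased 4 hours (one third of an orbit) apart.
ORBIT_H = 12
VISIBLE_H = 11
N_SATS = 3

def visible(sat_index: int, t_hours: float) -> bool:
    """True if satellite `sat_index` can see the region at time t."""
    phase = (t_hours - sat_index * ORBIT_H / N_SATS) % ORBIT_H
    return phase < VISIBLE_H

# Sample one full day in 1-minute steps and look for uncovered moments.
gaps = [t / 60 for t in range(24 * 60)
        if not any(visible(i, t / 60) for i in range(N_SATS))]
print(f"coverage gaps over 24 h: {len(gaps)}")  # -> 0
```

With this phasing, each satellite's one-hour blind spot falls well inside the other two satellites' visibility windows, so the region is never left unwatched; lose one satellite, though, and hour-long gaps open up every orbit.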
Russia's most recent generation of Molniya satellites went up between 2006 and 2014.  Four more are expected to be put into orbit in 2018.

A second satellite option that is generating enthusiasm is what is known as a constellation: a network of scores of small satellites flying in a low orbit.  These networks have the benefit of not being specific to a single region or an individual project, which eliminates the need to launch expensive satellites for a few users, or to wait for an anchor client to pay for the initial investment.  Ronald van der Breggen, an executive with LeoSat, which hopes to have a constellation on-line soon, admits that his company's product is but one of several competing options, but, making the case for constellations, he highlights two advantages.  Firstly, he reckons constellations will come on-line before any of the big fiber-optic cables are ready.  Secondly, with so many satellites in orbit, constellations can offer the same type of redundancy that terrestrial networks give users at lower latitudes; one constellation satellite failing does not bring the system to a halt in the way that a broken cable or a damaged Molniya satellite would.  "Essentially, a constellation combines the merits of a ground-based network with the advantages of space," van der Breggen says.

The drawback of constellations is their price tag.  LeoSat's is expected to cost $3.5 billion, or about five times the amount it will take to lay Quintillion's cable.  Van der Breggen believes, though, that a cost-benefit analysis falls out in constellations' favour.  "We're going to have a global network in place," he says.  "We think it will serve the Arctic well, and possibly even better than other places, given the way the satellites will be positioned."  That would be an upgrade even well-connected Europeans would have a hard time passing up.
The Food-Concept of Ayurveda

Food is the main source of energy for life; it is regarded as the life maker.  We need energy for the proper growth and development of the body, and the food we take supplies it.  Food helps us in different ways: it drives the repair of damaged tissues and provides the material for the body's anabolic processes.

Generally, food can be classified into two groups:

1. Plant origin
2. Animal origin

If you want to stay healthy and fit, you have to take the proper amount of food; it should be neither more nor less in quantity.  The biological fire residing in the body helps the food undergo metabolic changes and brings the body and mind into equilibrium.  According to Ayurveda, all food contains the panchamahabhuta (the five elements) and the doshic elements.  If one takes food suited to one's body, it helps one achieve good health.  A diet carries both good and bad effects, and what you take will decide your health.  The consumption of different foods will affect the elemental balance in a positive or negative manner.  So we can say that a personalized diet is always good, as it helps with better digestion, metabolism, and the timely elimination of waste materials from the body.  It improves sleep, concentration, and memory.  It strengthens the immune system and maintains health.

What you eat and how you eat are both important, and Ayurveda has an answer for this: eating is one of the most sacred experiences.
When we eat our food, we are taking atoms and molecules from our surroundings into our body and asking them to merge with us, to become a part of us.  Thus, if you eat your food properly and mindfully, with respect, the food joins well with the body and does what is needed.  Otherwise it will not merge well, causing imbalances in the body and leading to diseases ranging from mild to severe.
Much research has been conducted regarding sustainable development in the Sub-Saharan Africa (SSA) region.  The evidence shows that this part of the world is still behind others in many aspects of development.  Indeed, a 2018 year-in-review by the World Bank revealed that by 2030 nearly nine of every ten people living in extreme poverty will be living in sub-Saharan Africa (Year in Review: 2018 in 14 Charts, 2018).  One of the aspects of sustainable development with which Sub-Saharan African countries struggle most is ensuring food security among their inhabitants.  If there is a crucial impediment to the development of the region, it is plausibly food insecurity.  Strikingly, the share of people who are food insecure makes food insecurity in these countries both a corollary of extreme poverty and one of its causes.  Thus, it has been critical for the international community and local governments to figure out how to improve food security.  The data show that this is a convoluted endeavor, given the controllable and uncontrollable conditions that have been the primary roadblocks.

This research evaluates how the region's dependence on food importation to satisfy the food needs of the population impacts that endeavor.  It is a critical assessment, since commercial food imports have been one of the conventional ways a nation can ensure its citizens are food secure.  But can this pathway, or has it, also paved the way for Sub-Saharan African countries to enhance food security, given that almost 60 percent of the world's population living under $1.90 a day is from this region (Global poverty: Facts, FAQs, and how to help | World Vision, n.d.)?  An elucidation of these and other related questions is what this study brings to the table.
Indeed, the results and conclusions of this study spring from a careful analysis of data published by regional and international organizations involved in eradicating food insecurity and undernourishment.  Needless to say, part of the value of the findings is to enlighten government policymakers about which conditions to consider if their nation relies on commercial food imports to satisfy its citizens' food needs.  It is likely that most countries in SSA still have a long way to go to achieve food sufficiency.  Additionally, in light of a health crisis like the current pandemic, among other conditions, this research analyses the shortcomings of food import dependency in the case of SSA countries, thus shedding light on why other solutions, like scaling up the local food supply, can be more fruitful.

The current state of affairs

As defined by the Food and Agriculture Organization of the United Nations, food insecurity is "a situation that exists when people lack secure access to sufficient amounts of safe and nutritious food for normal growth and development and active and healthy life."  It has been a difficult term to pin down, but all definitions agree that food security encompasses more dimensions than just secure access: food availability, stability, and utilization.  Food utilization mainly implies being knowledgeable enough to choose, prepare, and consume foods that result in healthy nutrition.  Indeed, food insecurity has never been only an issue of availability or access, because studies have revealed that even in the developed world millions of people live food insecure.  They might have the means to secure access, but they are uninformed about healthy food choices or are not encouraged to consider them.  That said, different countries and regions around the world have struggled, and still do, in different ways.
However, a wealth of evidence shows that sub-Saharan African countries struggle the most in all the facets of food insecurity.  They struggle to make quality food available for everyone to purchase.  The socio-economic status of most of the population makes it hard for people to access that food even when it is available at the market.  Moreover, due to the vulnerability of environmental conditions in recent years and to political instability, food production has not been stable.  The region also has the highest rate of education exclusion (many people are illiterate), as UNESCO reported, which is a significant setback for the region in the food utilization dimension.

The most recent data show that in SSA more than 25% of the population is undernourished, the most severe stage of food insecurity, compared to 16% in Asia and the Pacific.  Additionally, according to the World Food Program's world hunger map in 2019, of the 821 million people who do not get enough quality food (who are food insecure), more than 50% are from sub-Saharan Africa (World Food Program, 2019).  Though the number of people who do not have enough to eat has decreased globally, the trend is the opposite in this region, and many conditions have led to this status quo.

As of 2019, Sub-Saharan Africa was home to more than 50% of the world population that does not get enough quality food to eat.  Source: World Food Program

As the numbers show, the human security of more than 30% of the population in the region is threatened.  Indeed, as stated by the United Nations Development Programme, food security is one of the critical aspects of human security and can significantly affect the others.  Even where the region has become politically or environmentally stable, the international community still recognizes it as one of the regions with the lowest scores on human security.
Ensuring food security is also classified as one of the crucial Sustainable Development Goals to transform our planet.  Thus, there is a need for SSA countries to impede the worsening of the crisis, which is why several international humanitarian organizations invest heavily in increasing secure access to nutritious food for those who suffer the most.

Causes and conditions that lead to severe food insecurity in Sub-Saharan Africa

Food insecurity is such a complex issue that its roots vary from region to region and from country to country.  A country can face food insecurity due to natural conditions, such as poor ecological conditions for food production, including a lack of agricultural land or droughts.  Food availability, as one of the dimensions of food security, is also disrupted by a lack of technical skills or of sufficient agricultural inputs to adapt to environmental changes.  Studies have shown that these conditions have contributed in different ways to the worsening of food insecurity in SSA.  Given that the agriculture system in the region relies mostly on rainfall for water input, the smallholder farmers who constitute the majority of the agricultural workforce have struggled to boost or maintain their production levels in the face of climatic changes that affected rain availability in some countries.  For instance, in the last ten years, three-quarters of the most severe droughts have been in Africa, and the SSA region was affected.  As reported by the Africa Rice Center, drought has been the most significant environmental constraint on rice production in SSA (Ndjiondjop et al. 1260).  Moreover, droughts have been an ideal environment for outbreaks of pests and diseases, such as maize lethal necrosis, which in recent years was a critical limiting factor on corn yields in many parts of the region.

Nonetheless, food availability, or food self-sufficiency, is not the only substantive precondition for ensuring food security in a country or region.
Indeed, many countries rely on some imports to satisfy the need for certain food products, given that their climatic zone is not appropriate for the cultivation of such products.  For secure access to enough quality food, individuals need the financial means to afford the cost of food at the market.  Though more than 60% of the population in SSA countries are smallholder, subsistence farmers for whom farming is the primary source of income, they produce less than they need for the entire civil year.  Their agricultural yields often cover their food needs for just a couple of months after the harvest season, so they must resort to the food available on the market.  Meanwhile, the World Bank reported in 2018 that, as of 2015, 41% of the population in SSA countries lived in extreme poverty, and most of them were rural farmers.  It is the highest poverty rate of any region, and the World Bank projected, in the same report, that by 2030 SSA will be home to 9 out of 10 people living in extreme poverty (Year in Review: 2018 in 14 Charts, 2018).  Therefore, poverty and the lack of a steady income are the single most significant root of food insecurity in this part of the world.  More than half of the population relies upon a subsistence farming system that does not yield enough to satisfy their food needs, and at the same time they do not earn enough to access the nutritious and safe food available at the market.

Food imports as a viable solution: how did that come out?

There are varied approaches to addressing food insecurity in a specific country or region, and they are relative to the roots of the issue in that area.  Conventionally, the three ways a country can ensure food security are domestic production, food imports, and food aid.  In SSA, all three have been at the forefront of the local and international community's agenda for achieving secure access to food for the poor rural and urban populations.
Provided that the problem has many facets, no one-size-fits-all solution can be thought of.  Nevertheless, this research explored the impact of food import dependency on ensuring food security in the SSA region.

Half a century ago, the food trade deficit in SSA was at the lowest level it has been since.  Indeed, some countries of the region were net food exporters.  The agri-food trade data of Burundi, one of the SSA countries and one of the most food-insecure countries in the world, show that until the 1990s the country exported more food products than it imported (FAOSTAT, 2014).  Today, the country's total annual food production would satisfy a person's food needs for only 55-60 days per year, which implies that food imports cover more than 50% of the food consumed (Burundi | World Food Program, n.d.).  The situation turned around with the prevalence of political instability, decades of civil war, the flight of many peasants out of the country, and economic instability.  As for other countries in the same situation, the most practical way to pursue food security was to resort to food imports and food aid.  This approach was reasonable, given that the local food production system had been disrupted.  Additionally, food imports seemed to be a viable solution for countries that faced drastic environmental changes, such as long periods of drought, which impeded local agricultural yields.  There is no denying that food imports were critical to ensuring food security in the aftermath of political instability or dire climatic changes.  Most people are smallholder farmers and are expected to satisfy most of their own food needs, but they practice a traditional, low-input, rain-fed agriculture system that is poorly adapted to environmental changes.
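The Burundi figure above lends itself to a quick sanity check.  Assuming a 365-day year (the day counts are from the source; the calculation itself is ours), domestic production covering only 55-60 days of needs means local output supplies roughly 15-16% of annual consumption, which is consistent with imports and aid covering the large remainder:

```python
# Sanity check of the Burundi figure cited above: if total annual food
# production covers a person's food needs for only 55-60 days, what
# share of yearly consumption is produced domestically?
DAYS_IN_YEAR = 365

for days_covered in (55, 60):
    share = days_covered / DAYS_IN_YEAR
    print(f"{days_covered} days -> {share:.1%} of annual needs")
# 55 days -> 15.1% of annual needs
# 60 days -> 16.4% of annual needs
```

In other words, roughly 84-85% of consumption must come from somewhere other than domestic production, so the claim that commercial imports alone cover more than 50% is arithmetically plausible, with aid presumably filling part of the rest.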
Thus, in cases of rain shortage or the terrible drought periods that have been common in the region, resorting to food imported from other parts of the world has always been a practical way out of food shortages.  To this day, though political stability has improved in many SSA countries, ecological conditions are still the main roadblock to domestic food production; so importing food is still needed to ensure food availability, and hence food security.

Meanwhile, a wealth of data shows that food imports did not curb food insecurity effectively, or as expected.  From the time SSA countries became net food importers, the number of people who are food insecure started increasing.  To this day, the region is considered the part of the world most critically affected by hunger.  Indeed, it was not a one-size-fits-all solution, and the correlation is strong enough that one can conclude that resorting to food imports has been a band-aid solution to the problem.  Though the low yields of the domestic agriculture system could be offset by food imported from outside each country, food insecurity kept getting worse among smallholder farmers and the urban and rural landless poor.  Indeed, as of 2018, almost 25% of the region's population was affected by severe food insecurity, the highest share in the world.

As shown in this chart, SSA has become the region most affected by severe food insecurity.  Source: Our World in Data

Why the approach did not pan out; what were/are the roadblocks

As previously mentioned, food imports have been vital to alleviating food insecurity in the aftermath of the political instability and civil wars that ravaged most countries of the region in the late 20th and early 21st centuries.  In effect, it was during that period that the region saw a significant increase in its food trade deficit.  Each country had to rely on some type of imported food, or on food aid from neighboring countries or overseas.
Moreover, importing food to stave off starvation remained necessary for many countries in the region as environmental conditions became unsuitable for their farming systems. But the increase in domestic food demand driven by rapid population growth over the last thirty years is one factor that limits the contribution of food imports to alleviating food insecurity: if the local agriculture system falls far short of satisfying domestic food needs, it is hardly possible for SSA countries to import enough food for a population that has almost tripled since 1990. A graph from the CGIAR research center on wheat showcases this growing discrepancy between food consumption and local food production over the last couple of decades.

Population growth, and thus food demand, is not the only roadblock to food imports ensuring food security. The fact that most of the population lacks a steady source of income makes it hard for them to access even the food available at market. Poverty, or lack of income, is the principal factor causing food insecurity: as of 2008, more than 47% of the SSA population lived on $1.25 a day or less, making the region home to almost 70% of the people living in extreme poverty in the world (World Bank Sees Progress Against Extreme Poverty, But Flags Vulnerabilities, 2012). Moreover, the 2018 World Bank annual report projected that this percentage will increase to 90% by the end of this decade. Since imported foods are often too expensive even for financially stable local individuals, most people in SSA countries cannot access them regularly for lack of financial means or a steady source of income. For many years, and still today, more than 60% of the SSA population have been smallholder, subsistence farmers.
They produce enough to feed their families, often only for a few months after the harvest season. Because most of these farmers practice traditional farming that is sensitive to climatic change, it is challenging for them to produce enough surplus to generate financial income. Hence, regardless of the positive impact that food imports have had, or can have, on food availability in SSA, they can do little to mitigate food insecurity among the most vulnerable individuals.

In SSA countries where imported food products can be less expensive than domestic food, and thus accessible even to the individuals most vulnerable to food insecurity, it is critical to factor in how disruptive cheap imports can be to the local food production chain. As local farmers' market shrinks, they are left without the means to invest in the agricultural inputs or technical support essential to boosting their yields. Food aid, an extreme case of non-domestic food that is supposedly accessible to everyone, has in some cases turned out to worsen the problem rather than solve it. During the civil war of 1995-2005 in Burundi, most people were internally displaced to refugee camps, far from their agricultural lands, so the government resorted to food aid, composed mainly of maize, rice, and beans, provided by various international organizations. This significantly hurt the local producers of those crops who had been able to continue farming: they cut their investment back to meet only the reduced demand, and as the political conflict wound down, this lay at the root of deserted local food markets, since food aid was only a short-term supply. The key point is that food imports far cheaper than domestic food have the potential to worsen food insecurity, even though they might seem a viable way out.
Many strings are attached to ensuring food availability, food access, food utilization, and food stability, the four dimensions of food security. It is a convoluted endeavor, and countries and regions have taken different pathways according to what is practical for them. Generally, countries with productive agriculture systems are the least affected by food insecurity. Food self-sufficiency is a critical precondition for food security, but it is not the only one: no country in the world is 100% food self-sufficient, and one out of six people around the world relies significantly on food imports. Ecological and climatic conditions differ from country to country, so each country sensibly cultivates certain crops more than others and turns to imports to satisfy the rest of its food demand.

Meanwhile, every country, and especially the SSA countries, must be vigilant not to depend significantly on food imports out of sheer necessity for sustenance, simply to prevent starvation. Countries at the top of the food importer list often import to create more variety for the consumer, and most of their population can afford the imported food at whatever cost. In countries and regions like SSA, where most people rely on smallholder farming as their source of income, imports are often too expensive, and the cost cuts heavily into farmers' investment in agricultural inputs and technical support when the farming season arrives. There is therefore a need to scale up local food production to ensure food availability and access for most of the population. One way to improve the agriculture system in SSA countries, and thereby move toward food security, is the development of irrigated agriculture in regions that can sustainably support it.
The rain-fed farming system that most people practice has yielded poorly in recent years because of climate change. Countries should therefore invest much more in irrigated agriculture and encourage farmers not to treat the rainy season as the only period in which they can farm. Some SSA countries have already started such campaigns with the help of international organizations, but much remains to be done to achieve food availability throughout the whole year, not just during the few months that follow the harvest season. Moreover, the goal of farming should not be merely to sustain a family's food needs. Since farming is the sole source of income for many people, they should transform it into an activity that generates income with which to buy the food crops they do not grow. This can be achieved through high-yielding seeds, increased use of modern inputs, and easier access to credit and markets for farmers.

Beyond improving food self-sufficiency, there are other schemes SSA countries can adopt to improve food security. One of them is the development of secondary, non-agricultural economic sectors. Ensuring food availability is not enough when millions of people migrating to the emerging cities cannot find non-farming jobs to sustain themselves. As this research found, lack of financial income, or market power, is the critical reason why SSA is so badly affected by undernourishment and food insecurity. Given the high rates of people leaving their farming villages, the region needs non-agricultural sectors that can employ them: more than 20% of the population is projected to live in cities by 2050, where they will not be able to practice any farming (Urbanization in Sub-Saharan Africa, 2018).
To sustain themselves, they will have to find jobs in the secondary or tertiary sectors, which currently lack the capacity to employ them.

In all, SSA still has a long way to go to achieve sustainable development, and ensuring food security in all its dimensions (food availability, access, utilization, and stability) is one of the paramount steps in that endeavor. Countries around the world apply different schemes to curb food insecurity; the conventional ones are local production, food imports, and food aid. Nonetheless, this research found that, for SSA countries, relying on commercial food imports as a route to food security is a band-aid, ineffective solution. The region is still home to almost 50% of the people worldwide who suffer from severe hunger and undernourishment. Most of the population lives in extreme poverty and can hardly afford imported food, which often comes at a high price. Food imports can also cost countries enough to disrupt their investment in other sectors, such as agricultural production, which is the sole source of income for more than 60% of the population. Developing the domestic food production sector is therefore the most effective solution for SSA countries, and food self-sufficiency is a vital condition if governments or the international community are to improve food security in this part of the world. Relying substantially on food imports to improve food security works only where a country's economic development allows individuals to afford such imports, a stage that SSA countries, still classified as developing countries, have not yet reached.

Works Cited

Urbanization in Sub-Saharan Africa. 2018. [online] Available at: <,to%2020.2%20percent%20by%202050.> [Accessed 11 August 2020].
Education in Africa. UNESCO UIS, 2019.
The Economic Analysis of Access, Exchange, and Sustainable Utilization of Plant Genetic Resources: Glossary. n.d. [online] Available at: <> [Accessed 11 August 2020].
FAOSTAT. 2014. [online] Available at: <> [Accessed 11 August 2020].
Ndjiondjop, M., Wambugu, P., Sangare, J. and Gnikoua, K., 2018. The effects of drought on rice cultivation in sub-Saharan Africa and its mitigation: A review. African Journal of Agricultural Research, 13(25), pp. 1257-1271.
2019 – Hunger Map | World Food Program. 2019. [online] Available at: <> [Accessed 11 August 2020].
Burundi | World Food Program. n.d. [online] Available at: <> [Accessed 11 August 2020].
World Bank. 2012. World Bank Sees Progress Against Extreme Poverty, But Flags Vulnerabilities. [online] Available at: <> [Accessed 11 August 2020].
World Bank. 2018. Year in Review: 2018 in 14 Charts. [online] Available at: <> [Accessed 11 August 2020].
World Vision. n.d. Global Poverty: Facts, FAQs, and How to Help. [online] Available at: <> [Accessed 11 August 2020].
The medical field has paid increased attention to the negative health impacts of systemic racism, discrimination, poverty, and other societal forces known to cause poorer health. This includes awareness of the need to assess and address racial and other health disparities and a focus on principles of diversity, equity, and inclusion. We have moved beyond framing these issues as moral imperatives to understanding how they relate to individual and public health. Discussion of the health harms of aggressive policing has also increased over the last 2 years, spurred by the frequent and well-publicized deaths of Black, Indigenous, and people of color (BIPOC). What was once viewed as a political issue is now understood as a public health and health inequity concern. Two articles published in 2021 have helped change the way the medical community views these issues; both illustrate the ongoing shift in understanding how police violence and aggressive policing have specific and disproportionate impacts on communities of color.

The first article makes several key points in asserting that policing concerns should be viewed as health issues. It begins with this statement: “Marginalized communities have a long history of naming the systemic racism and harms of police violence to health and well-being and recognizing their roots in the oppression of Black and Indigenous communities.” The article cites data indicating that BIPOC are far more likely to be killed by police than White Americans, noting that death at the hands of a police officer is the sixth leading cause of death for young Black men. Similarly, Black and Indigenous women are more likely to be killed by a police officer than their White counterparts.
The second article looks at specific data on the disparate impact of what it describes as the “overexposure” of Black communities to aggressive policing strategies, which serves to widen already existing health disparities in those communities. The authors noted: “Indeed, aggressive policing strategies employed by law enforcement agencies across the country have been hypothesized to degrade health and well-being, even among people who themselves have not experienced contact with police. Perhaps even more important, aggressive policing is thought to contribute to significant population health inequities, as these practices are concentrated on — and thus exacerbate — the health challenges faced by racialized populations.” The authors cited data from an organization called Campaign Zero.3 This group’s website includes data that also helps demonstrate the negative impact of aggressive policing, or what it describes as “broken windows” policing, a form of policing that has led to the overpolicing of communities of color with resulting excessive force and unnecessary deaths. “Meanwhile, the vast majority of arrests are for low-level, nonviolent activities in encounters that often escalate to deadly force. For example, in 2014, police killed at least 287 people who were involved in minor offenses and harmless activities like sleeping in parks, possessing drugs, looking “suspicious” or having a mental health crisis. These activities are often symptoms of underlying issues of drug addiction, homelessness, and mental illness, which should be treated by health care professionals and social workers rather than the police.” These 3 examples of the evolving understanding of police aggressiveness highlight the need for future research and data collection.
They also highlight the need for PAs, nurse practitioners, and other medical providers to assess the health impact of aggressive policing on their patients and to advocate for changes to a policing system that deepens existing health disparities.

1. Fleming PJ, Lopez WD, Spolum M, Anderson RE, Reyes AG, Schulz AJ. Policing is a public health issue: the important role of health educators. Health Educ Behav. 2021:10901981211001010. doi:10.1177/10901981211001010
2. Esposito M, Larimore S, Lee H. Aggressive policing, health, and health equity. Health Policy Brief. Health Affairs. April 30, 2021. doi:10.1377/hpb20210412.997570
3. Campaign Zero.
• 1. What does beveled edge mean? A beveled edge (US) or bevelled edge (UK) is an edge of a structure that is not perpendicular to the faces of the piece.
• 2. What angle is a chamfer? A chamfer is a transitional edge between two faces of an object, sometimes defined as a form of bevel. It is often cut at a 45° angle between two adjoining right-angled faces.
• 3. What is the difference between a chamfer and a bevel? A chamfer is technically a type of bevel, but the usual distinction is that a chamfer connects two surfaces at a 45-degree angle, while a bevel's slope can be any angle other than 90 degrees.
• 4. How do you finish the edges of glass? Grind the edges with 100 grit, then 140-180 grit, to make the glass smoother. Next, use a 240 grit resin wheel to semi-polish the edge. Finally, polish to perfection with a CE-3 wheel and wool felt with cerium oxide. Once finished, wipe the edge of the glass with a clean damp cloth to remove any leftover grit or dust.
• 5. What is a glass drilling machine? A glass drilling machine is designed to drill holes in annealed glass. These machines can drill a variety of shapes and can be used on the edges of the glass as well. Some drills have a countersink to prevent blowout when drilling.
• 6. Why do we use glass for windows? It's one of those thoughts that hits you as you're showering in the morning: why exactly do we use glass for windows? Why not plastic or some other material? It's a pretty interesting question. Glass is something we associate with being very fragile. Windows are supposed to offer some degree of protection, but fragile and safe don't exactly go hand in hand. Plus, in an earthquake or a storm, shattered glass could injure or even kill someone, and it certainly wouldn't be hard to cut yourself with it. Glass is also difficult to transport, as it can easily crack on its way from factory to destination.
So why do we, a society always looking for the best and the most convenient, still use something as seemingly impractical as glass? There are a couple of interesting reasons. First, even though glass can shatter, it's stronger than you think. There is also no in-between with glass: it's either in good condition or it's broken. Sure, it can get slightly damaged or cracked, but that is more an inconvenience than a serious issue, and it's very easy to spot. A material like plastic could easily warp or wear down without your noticing; it could be effectively useless, and since you'd have no way of knowing, you wouldn't know to replace it. Another reason is that glass is a very hard material; plastic and other materials simply can't compare. It's not hard to scratch plastic. Glass, on the other hand, holds up against dirt, sand, and other material thrown at it. Sure, if you threw a baseball directly at it, the glass would shatter whereas the plastic would be okay. But if a storm kicked up small rocks and debris, the glass would probably come out untouched while the plastic would show noticeable wear. Glass is also sturdy while still being thin. If you tried to stand a thin piece of plastic upright, it would probably fold over. Glass doesn't bend at all; it's a solid material that remains stiff and sturdy. A plastic window would need to be significantly thicker to be that stiff, and that thickness would affect the overall quality of the window and probably not look as good. An important part of having windows is that they help with energy conservation. If you don't know how windows affect your energy bills, check out some of our other posts about the importance of energy-efficient windows. Basically, windows are essential for insulation, especially double-paned windows.
Plastic doesn't have the same insulating properties and would therefore not work as well. Finally, the ability of glass to break can actually be a good thing. If you were stuck in a house during a fire and needed to get out, it would be really tough to break a plastic window; with glass, all it would take is a chair or another heavy object to shatter the window and let you out. So, as you can see, it's perfectly reasonable and even preferable to use glass for our windows. However, not all glass is the same. Arch City Window chooses ultra-high-quality glass so that you get optimum comfort and function: our low-e glass uses a transparent coating that drastically increases your efficiency. If you're interested in getting new windows or want more information about just how good our glass is, get in touch. Glass also remains the top choice for today's skyscrapers: glass facades aren't just designed to look good on a building; they have a wealth of benefits for companies and employees too.
• 7. How do I select the machine? Tell us the materials to be processed, the processing method, the processing area, the thickness, and any other requirements, and we will recommend the most suitable machine for you.
• 8. What if a client doesn't know how to operate the machine? We provide video instruction and online technical support, and can also arrange for technicians to provide on-site training according to your requirements.
Desana - Settlements

The traditional form of settlement is the maloca, or longhouse, a self-contained unit of several nuclear families. Malocas are spaced along rivers and creeks at distances of one or two days' travel by canoe but occasionally are found in remote interfluvial regions. Nucleated settlements of square one-family houses are not traditional but were imposed by missionaries, government agencies, or rubber gatherers and have led to social and economic disruption, the spread of disease, alcoholism, and the breakdown of symbolic systems related to maloca life and ecology.
The Devil in a Little Green Bottle: A History of Absinthe

Absinthe, an alcoholic drink introduced to France in the 1840s, developed a decadent though violent reputation. To some the drink symbolized creativity and liberation, and to others, madness and despair. One thing was certain: more than science was behind European responses to its influence.

By Jesse Hicks | October 4, 2010

Le Péril Vert depicts absinthe ravaging the French population. (David Nathan-Maister and the Virtual Absinthe Museum)

It was late August 1905 in the small village of Commugny, Switzerland, and three coffins stood open to the air. The mother’s was the largest, adult-sized; a smaller casket held her four-year-old daughter, Rose. In the smallest coffin lay her two-year-old daughter, Blanche. Before the coffins stood Jean Lanfray, a burly, French-speaking laborer. Facing the bodies of his family, he wept, insisting he didn’t remember shooting the three. “Please tell me I haven’t done this,” he wailed. “I loved my family and children so much!” From this domestic tragedy the people of Commugny drew one inescapable conclusion: the absinthe made him do it. Anti-absinthe sentiment had been bubbling throughout Europe, and in Switzerland it boiled over. “Absinthe,” Commugny’s mayor publicly declared, “is the principal cause of a series of bloody crimes in our country.” A petition to outlaw the drink gathered 82,000 signatures in just a few days. The press seized on Lanfray’s story, dubbing it “the absinthe murder.” For members of the anti-absinthe movement (including many newspaper editors), two glasses of pale-green liquid explained why a family lay dead. Prohibitionists could not have imagined a more potent metaphor for social decay.
La Gazette de Lausanne, a French-language Swiss newspaper, called it “the premiere cause of bloodthirsty crime in this century.” At his trial the following February, Lanfray’s lawyers declared him a classic case of absinthe madness—a medically ill-defined affliction, but one that captured the public imagination. The lawyers called to the stand Albert Mahaim, a leading Swiss psychiatrist. He had examined the defendant and declared confidently that only sustained, daily corruption by that foul drink could have given him “the ferociousness of temper and blind rages that made him shoot his wife for nothing and his two poor children, whom he loved.” The prosecution countered that his absinthe consumption was dwarfed by his prodigious intake of other alcohol. The trial lasted a single day. Found guilty on four counts of murder—his wife, an examination revealed, had been pregnant with a son—Lanfray hanged himself in prison three days later. The murders energized prohibitionists—the drink became a Swiss national concern. The canton of Vaud (containing Commugny) banned it less than a month after Lanfray’s death. The canton of Geneva, reacting to its own “absinthe murder,” followed suit. In 1910 Switzerland declared absinthe illegal. Belgium had banned it in 1905 and the Netherlands in 1910. In 1912 the U.S. Pure Food Board imposed a ban, calling absinthe “one of the worst enemies of man, and if we can keep the people of the United States from becoming slaves to this demon, we will do it.” By 1915 the Green Fairy (la fée verte, as the absintheurs called it) had been exiled even from France, long the center of absinthe subculture. While temperance movements had blossomed worldwide in the late 1800s and early 1900s, never before had an individual alcoholic drink been targeted.
Yet by World War I, throughout the world a combination of economic interests, dubious science, and a fear of social change—and the tabloid stories that used murder to inflame readers’ imaginations—had turned the Green Fairy into the Green Demon.

The Hour of Absinthe. (David Nathan-Maister and the Virtual Absinthe Museum)

Absinthe Blanqui poster: Often reproduced, the Absinthe Blanqui poster is an art-nouveau image inspired by the cultural trend of orientalism at the time. (David Nathan-Maister and the Virtual Absinthe Museum)

The most systematic studies of absinthe toxicity took place at another Paris asylum, under the supervision of a psychiatrist seeking to prove that absinthe did indeed “rot your brain out.” Valentin Magnan, an influential and well-respected psychiatrist, was appointed physician-in-chief of France’s main asylum, Sainte-Anne, in 1867 and thus became the national authority on mental illness. He diagnosed a steady decline in French culture—a not uncommon belief. While Magnan ignored the wilder, medically unsupportable claims of absinthe’s sinister effects, he shared the general concern about the fitness of the French population. Like many nationalists of the time he believed in a “French race”: the concept of “degeneration” had much currency among public officials of the time, as ideas about heredity filtered into public discussion. Claims of degeneration—of a once-great nation now in decline—spurred action and anger, though such claims were often scientifically and statistically confused. Those who saw the French race collapsing, Magnan among them, could point to increasing instances of diagnosed insanity—most likely the effect of better diagnostic techniques—and to the strain of modern industrial life on already at-risk psyches. They could also point to lower birth rates—now seen as a nearly inevitable consequence of higher living standards and greater female education.
Given the massive social and industrial changes of the 19th century, many unsurprisingly looked for culprits. And for Magnan, who found signs of national collapse in his asylum, absinthe became the villain responsible for an entire host of social ills. From this and similar experiments Magnan insisted on a separate category for the small number of “absinthistes” in his asylum. Chronic absinthe users, he claimed, suffered from seizures, violent fits, and bouts of amnesia. He recommended a ban on the Green Devil. Others found his claims unpersuasive. Responses in The Lancet, for one, noted flaws in his methodology, including the crucial differences between a guinea pig inhaling high doses of distilled wormwood and a human consuming trace amounts of diluted wormwood. More likely, many argued, excessive consumption produced the same alcoholism as with any other drink. The British were especially skeptical of his claims; not coincidentally, the United Kingdom was one of the few countries never to ban the drink, which had never gained popularity there. But in France, Magnan’s theories fit into the larger cultural conversation. Defeat in the Franco-Prussian War of 1870–1871 escalated already existing anxieties about France’s collective health and especially its ability to protect itself against a bellicose and populous neighbor. (After the war Germany had 41 million citizens compared with France’s 36 million.) Public-health concerns gained an existential force; those worried about the rise of absinthe dubbed it “the poisoning of the population.” Not only did it contribute to the ill health of the populace, these opponents argued, but it was also an abortifacient and sterilized men, robbing the country of a generation of potential soldiers. Others had long reveled in the dark side of absinthe. 
Baudelaire, an unrepentant absintheur, had declared in 1861, “France is passing through a period of vulgarity.” He thoroughly enjoyed the onrushing modernity, with Paris madhouses filling up and an apparently inevitable decline of civilization. But by the 1890 publication of Magnan’s The Principal Clinical Signs of Absinthism, common opinion in France largely agreed with his conclusion: the absinthe did it. Still, it took the Lanfray murders of 1905 to convert many citizens into activists. Previously the absinthe drinker symbolized moral decay, but he had never truly crystallized into a violent threat to society. Doctors disagreed about the danger, with Magnan and his disciples declaring absinthe the root of all social evil. On slim evidence some even linked it to tuberculosis. Meanwhile, other physicians continued to tout its health benefits, prescribing it for gout and dropsy, as a general stimulant of mind and body, as a fever reducer, and as the perfect drink for hot climates. Amid the medical uncertainty support for an outright ban remained a minority stance. After the Lanfray murders absinthe consumption became a serious political issue, as people throughout Europe—reading lurid headlines about the “absinthe murder”—demanded action. Absinthe went on trial in the court of public opinion, facing a newly hostile citizenry, its longtime enemies in the temperance movements, and a bevy of respected medical authorities. Behind the scenes wealthy wine producers supported a ban in an attempt to eliminate an increasingly popular competitor, even though absinthe never accounted for more than 3% of the alcoholic beverages consumed in France. But when disease infected French vineyards in the 1880s, the resulting wine shortage helped popularize absinthe among the money-conscious working class. When the wine crisis ended, many working-class drinkers stuck with the green beverage, increasingly made with cheaper industrial alcohol produced from beets or grain. 
Yet wine still accounted for 72% of all alcohol consumed. More than actual competition, it was the appearance of a trend that provoked wine makers to move against absinthe. Meanwhile, Magnan’s distinction between alcoholism and absinthism allowed wine to escape any blame for the state of the national health. In defense of the Green Fairy stood a collection of self-proclaimed decadents of the absinthe subculture (not always a politically active lot), and a few sympathetic politicians scattered throughout Europe. The outcome was never in doubt. The Chemistry of Absinthe
Can Cats Get Parvo or Distemper?

One of the most infectious viral diseases is feline panleukopenia (which also goes by feline parvovirus, feline distemper, and feline infectious enteritis). Diagnosing and treating parvo in cats is relatively straightforward with lab tests and supportive care, but you must act quickly. Feline parvovirus is different from canine parvovirus and only causes disease in cats. Feline distemper, also known as feline panleukopenia, is caused by an extremely contagious and potentially fatal virus called feline parvovirus (FPV). The class of viruses called parvoviruses causes both feline distemper and canine parvo. Parvo in cats is transmissible to other unvaccinated cats. The onset of distemper in cats is usually sudden, and symptoms range from fever to severe dehydration, diarrhea, and vomiting. Pets can be vaccinated to protect them from parvovirus infection, and many older cats who are exposed to feline panleukopenia virus do not show symptoms. Cats acquire the parvovirus after coming into contact with contaminated feces, saliva, urine, or possibly fleas that bit an infected cat; so yes, cats can get parvo if they are exposed to the feces of an infected animal. FPV can cause disease in house cats, wild cats, raccoons, mink, and coatimundis.
Parvo in cats is also referred to as feline distemper and feline panleukopenia. Can cats get parvo from dogs? Feline parvovirus is slightly different from canine parvovirus, but given that the canine strain is thought to be a mutation of feline parvo, questions continue to surface as to whether cats are susceptible to contracting canine parvo, or at least whether certain cats or certain strains are; by some accounts the two are so closely related that the cat type can affect the dog and vice versa. (Parvovirus B19, by contrast, infects only humans: a person cannot get that virus from a dog or cat, and dogs and cats cannot get it from an infected person.)

Common symptoms of feline panleukopenia (parvo) include fever, severe dehydration, diarrhea, and vomiting. Kittens should receive at least 3 vaccine doses, given between 6 and 16 weeks of age. According to Dr. Mary Fuller, a veterinarian from Minneapolis, Minnesota, the virus can be shed through a cat's bodily secretions, including saliva, nasal discharges, and urine, but it is most commonly shed through feces. It can also be spread through contact with contaminated food dishes, water bowls, litter trays and boxes, bedding, and equipment, and humans can pass it from one cat to another if hands aren't washed thoroughly after petting an infected cat. It's easy to see how unvaccinated cats in a household, kennel, or shelter environment can be easily infected when one cat has the disease. Feline distemper, or panleukopenia, is caused by a virus that almost every cat comes into contact with early in its life.
Dogs get vaccinated against parvo (the "P" in DHPP) and cats get vaccinated against distemper (the "P" for panleukopenia in FVRCP). What causes distemper in cats? Feline distemper, medically termed feline panleukopenia virus (FPV), is a severely contagious viral disease that most commonly strikes kittens and can cause death. It's caused by a virus that kills cells that grow and divide quickly in your cat's body, which is why it produces painful symptoms and has a high death rate, and why cats are often vaccinated against it at an early age. Feline distemper is spread through any type of body fluid, but most commonly by accidental ingestion of feces. If these symptoms are occurring, it is always recommended that a vet be seen to determine with certainty whether parvo is the underlying cause. The parvovirus that wreaks havoc in cats, feline parvovirus or feline panleukopenia, has been around since the 1960s.
Saturday, June 15, 2013

Horses: Grass Founder

To protect your horse's health, you may need to limit his access to sugar-rich grass. Why? Because lush spring pastures can be dangerous temptations for horses. In spring especially, when lush green grass begins to grow, it can mark the beginning of a serious founder problem: laminitis. Laminitis is inflammation of the laminae of the horse's foot.

the normal hoof

Laminae make up the delicate, accordion-like tissue that attaches the inner surface of the hoof wall to the coffin bone, the bone in the foot. The sensitive laminae cover the bone and interlock with the insensitive laminae lining the inside of the hoof wall to keep the coffin bone in place within the hoof. A horse suffering from laminitis experiences a decrease in blood flow to the laminae, which in turn begin to die and separate. The final result is hoof wall separation, rotation of the coffin bone, and extreme pain.

foundered hoof with rotated coffin bone

Laminitis is a word no horse owner wants to hear associated with her horse. It is a crippling disorder that takes weeks or even months for the horse to recover from, and that is if all causative factors are removed and the best equine husbandry is provided. It can be permanently debilitating if not dealt with properly and promptly, leading to much pain and suffering for the horse. In severe cases, the coffin bone can actually rotate through the sole of the horse's hoof, where it becomes infected, usually resulting in the death of the horse. Laminitis is triggered by a variety of causes, including repeated concussion on hard ground (road founder); grain overload; retained placenta; hormonal imbalance (Cushing's disease or metabolic syndrome); certain drugs (corticosteroids); obesity; and lush grass.
Veterinarians and nutritionists have known for some time that plants store energy in their seeds in the form of starch, which can cause laminitis if the horse is introduced to grain too quickly or eats too much grain. Only recently have researchers discovered that grasses not only store energy in their seed heads as starch, they also store energy as sugar. In the spring, as grass is growing rapidly, it stores more sugar than it needs for growth, and horses consume the sugar as they graze. Later in the year, when the daylight and nighttime temperatures are more consistent and grass growth rates decrease, the plant uses up each night most of the sugar produced during the day.

Here are some tips for avoiding grass founder:
• Keep horses off lush, fast-growing pastures until the grass has slowed in growth and produced seed heads.
• Graze horses on pastures containing a high percentage of legumes. Legumes, such as alfalfa or clover, store energy as starch, not sugar.
• Avoid grazing horses on pastures that have been exposed to bright sunny days followed by low temperatures, such as a few days of warm sunny weather followed by a late spring frost.
• Avoid grazing horses on pastures that have been grazed very short during the winter and are growing rapidly.
• Keep overweight horses in stalls or paddocks until the pasture's rate of growth has slowed, then introduce them to pasture slowly.
• Turn horses out on pasture for a few hours in the early morning when sugar levels are low, not at night when levels are at their highest.
• Allow horses to fill up on hay before turning them out on grass for a few hours.

At Risk

Horses that are over the age of 10, "easy keepers," overweight horses, or those with crested necks seem especially vulnerable to grass founder and should be the focus of your preventive program. After the horses are turned out on pasture, check them often for early signs of laminitis, such as heat in the feet and a pounding pulse at the back of the pastern.
Foundered horses also assume a characteristic "sawhorse" stance, with their hind feet up under their body and their front feet placed farther forward than normal. This is because the horse is trying to shift its weight off its painful front feet to its hind legs. Grass-foundered horses also move gingerly, as if walking on eggshells, and are often unwilling to turn or move at all. In severe cases, the horse may refuse to stand. If your horse demonstrates these signs after being turned out on grass, immediately pull him off the pasture and call a veterinarian. If you have horses that are prone to grass founder, visit with your veterinarian or equine nutritionist to develop a strategy for introducing them to spring grass. This is truly a situation where an ounce of prevention is worth a pound of cure.

Laminitis vs. Founder

What is the difference between acute laminitis and chronic laminitis, or founder? If my horse has laminitis, does that mean he has foundered? The term laminitis is often used interchangeably with founder, but technically the two are different, though related, phenomena. Laminitis is inflammation of the laminae in the hoof. The laminae are the Velcro-like connections that attach the coffin bone to the inner hoof wall, holding the foot together; because the laminae are trapped between a rock (the coffin bone) and a hard place (the inner hoof wall and sole), any inflammation is painful for the horse. Chronic inflammation over time, or a catastrophic laminitis episode, will lead to degeneration of the blood vessels that feed the laminae and necrosis of the laminae themselves. This breakdown of the laminae results in the coffin bone separating from the hoof wall and "rotating"; this stage of laminitis is properly called founder. In very advanced cases of founder, it is possible for the entire hoof to slough off, or for the coffin bone to penetrate the sole. Acute laminitis usually lasts for only a few days.
External causes, like concussion on hard footing (commonly called "road founder"), chemicals like nitrate fertilizer, infections, colitis, pneumonia, or a retained placenta in a mare, can all cause laminitis. But those cases often heal and don't result in chronic laminitis. A horse can have laminitis, heal, and not founder. When the laminae in the foot become so inflamed and damaged that they no longer support the coffin bone, which then rotates and sinks, the condition is called chronic laminitis, or founder. That is when a long-term maintenance program provides the best possible outcome for the horse living with laminitis.

The signs

The signs can be subtle and confused with something else, like laziness, muscle soreness, or arthritis. Remember, laminitis is usually associated with the horse not wanting to bear weight on the front hooves and rocking his weight back on his haunches. Not only do the hooves hurt terribly, but this posture quickly becomes painful as well; the horse was designed to bear more standing weight on the forelimbs, and extended periods of weight bearing on the hindquarters stress the joints and create chronic muscle tension. What isn't as well recognized is that there are usually early warning signs that a horse is developing laminitis; unless the horse broke into a fifty-pound bag of grain, most cases develop over a few days, weeks, or even months. For example, in early-stage laminitis, a good-footed horse will start to mince on gravel and walk slowly on concrete for no apparent reason. A horse with a Grand Prix trot may begin to shuffle like a peanut-rolling pleasure horse. Another horse may not want to pivot on his front feet. A horse that would normally race out to pasture now walks or jogs. While many laminitic horses exhibit the classic signs of heat in the feet and a bounding digital pulse, there are some horses, especially early-stage laminitics, that don't present these symptoms.
Things to remember

Most laminitis cases are preventable, as they are related to the horse's diet. Grain overload and too much pasture are very common culprits. Almost all grain products are very high in sugar content, and pasture can fluctuate from moderate to high sugar levels. This leads to the reason for writing this article at this time of year: many horse owners realize the potential for grass founder in the spring, but don't know that fall grasses can be just as problematic, as the climatic conditions that produce such rich forage are basically identical in spring and fall. What is even less known is that some hays may be causing laminitis problems as well, as many of the hays commonly available have been hybridized for maximum sugar content to meet the demands of the dairy industry. Horse owners wanting to understand the effects of sugar on the horse's metabolism should know how difficult it is to predict the sugar content of a particular grass or hay; whether it's from grain, grass, or hay, a diet rich in sugar triggers the inflammation, and therefore the pain, in the hoof.

Other laminitis triggers are not quite as obvious. Some horses react to certain medications, vaccines, and wormers. Infectious diseases or a retained placenta are also possible causes. Metabolic disorders such as Cushing's and insulin resistance can cause chronic laminitis and can be particularly difficult to treat. And laminitis is not just for obese horses. While obesity may make a particular horse an easier target for a laminitis attack, a thin horse can still be susceptible. If your horse is suddenly moving differently, and there's no evidence of injury, take note of what may have changed in the last few weeks. Is she being fed a different hay? Has she been put out on pasture? Has there been any other change in the feeding routine? Have any medications been administered? Provide this information to your veterinarian, as these may be clues that the horse is dealing with laminitis.
If laminitis is suspected, contact your veterinarian immediately, remove any identifiable triggers, and make sure the horse is transitioned to a low-sugar diet. An ounce of prevention goes a long way: attacking laminitis before it gets a foothold will save a lot of agony for horse and owner. Every day, veterinarians across the country see hundreds of cases of laminitis, a painful disease that affects the horse's feet. What's especially alarming is that some cases are preventable. In fact, it may be that we are killing our horses with kindness. Consider that a common cause of laminitis is overfeeding, a management factor that is normally within our control. By learning more about laminitis, its causes, signs, and treatments, we may be able to minimize the risk of laminitis in your horse, or control the long-term damage if it does occur.
Putting Death Down the Drain

Dying to Be Green? Try "Bio-Cremation"
Nicole Mordant, Reuters (December 1, 2009)

There's a shiny new final disposition in town, attempting to gain ground on the green burial bandwagon: Resomation, developed in Scotland in 2007 and also known as bio-cremation. Cremation is consistently flogged for its high energy consumption and resulting pollutants. Bio-cremation, on the other hand, uses "less than a tenth of the amount of natural gas and a third of the electricity," by means of a chemical process involving alkaline hydrolysis. According to the Reuters article, all that remains is "some bone residue and a syrupy brown liquid that is flushed down the drain. The bones can be crushed and returned to the family as with cremation." This last bit seems to be the only real relation to cremation: loved ones receive a packet of bone fragments, which people may bury, memorialize on the mantle, put into tattoos, shoot into space, press into diamonds, et cetera.

Human remains inside the resomation chamber at the Mayo Clinic, by Finn O'Hara Photography.

Wait, did that say the syrupy brown liquid of death is flushed down the drain? Indeed it did. Resomated bodies are as natural as any other human waste we routinely put through the pipes. Predictably, this makes some people uncomfortable, such as Catholics, who thwarted a move to introduce bio-cremation in New York a couple of years ago, calling it "not a respectful way to dispose of human remains." Fair enough, though just as arguably, cremation and burial are not respectful ways of treating the earth. Given the significant energy savings and pollution avoidance, environmentalism may very well prevail; plus, you can retrieve and recycle metal parts, like hip and knee replacements. I just hope they can settle on a name that isn't obtuse, misleading, or trademarked.
2 replies on "Putting Death Down the Drain"

As a follow-up to the Catholic sentiment on bio-cremation, I just found this: In it, bio-cremation is described as "morally neutral" and probably not a bad idea, given the environmental aspects: "Sometimes the yuck factor puts us in touch with a very important, reasonable objection," she told The Catholic Register. "But that particular reaction, I think, is just natural, normal and doesn't necessarily carry any moral weight."

Also of interest, regarding Catholic views on cremation: cremation has been approved in most Catholic dioceses since the early 1960s. The old prohibition was based on the 18th-century custom of Masons choosing cremation as a way of denying the Resurrection and rejecting church teaching. But the church has always taught that God is perfectly capable of raising our bodies from dust, just as He created human beings from dust in the beginning. So long as there is no implication that the person choosing cremation is denying the Resurrection, and assuming the remains from cremation are buried in consecrated ground, the church does not normally object to cremation. Mirkes believes the same logic should apply to the chemical equivalent of cremation.

Aquamation seems like a great name; a group of students at our school noticed that "aqua" means water used as a solvent in pharmaceutical terms.
Friday, May 11, 2018

Bringing Math Into the Real World with Technology

Okay, you know the feeling: it's math, and kids come into your classroom with all sorts of prior experiences and beliefs about themselves as learners. Tailoring your lesson to meet the needs of every learner in your class is surely an uphill battle, and one that you are likely to lose considering there are 30+ students and only 90 minutes in your math block (90/30 = 3 minutes each). If you're like me, reaching and teaching every learner in your class will take more than just three minutes.

But what if there were a way you could reach and teach every student with just one lesson? That's right: one lesson that connects to the lives of your students and is open-ended, so that it meets the needs of all learners in your class. Sounds good, right? And kids love it too, because when you create lessons that are rooted in the lives of the students you teach, it shows you care. When lessons are open-ended, students have multiple entry points to solve the problem at differing levels, because there is not just one solution but many.

Speaking of sweet, let's consider an open-ended math task: Julie has 36 cupcakes that she made for the school bake sale. What are the possible ways she can arrange the cupcakes?

To be considered open-ended, your task must include the following:
1. Open-ended with multiple solutions
2. Multiple entry points at differing levels
3. Various student learning levels possible
4. Engages and interests students
5. Conceptually based
6. Increased level of cognitive demand

Using Google Slides as a tool, I created a background image to represent my bake-sale problem. Then I found images of cupcakes that I copied and shrunk for students to move and manipulate into an array. As you can see from the solutions, there are varying ways a student can solve this problem, which could include using addition, multiplication, and division.
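For teachers who want a quick check on the full solution space of the bake-sale task, the rectangular arrangements of 36 cupcakes are just the factor pairs of 36. A minimal Python sketch (the function name is my own, for illustration) lists them:

```python
def array_arrangements(total):
    """List every (rows, columns) pair that uses all the cupcakes."""
    return [(rows, total // rows)
            for rows in range(1, total + 1)
            if total % rows == 0]

# 36 cupcakes: nine possible rectangular arrays
for rows, cols in array_arrangements(36):
    print(f"{rows} rows of {cols}")
```

Of course, students may also propose non-rectangular groupings (say, a tray of 20 and a tray of 16), which is exactly what keeps the task open-ended.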
This simple open-ended task incorporates several of the Standards for Mathematical Practice, such as #4 Model with Mathematics and #6 Attend to Precision. If you want to stretch this task even further, consider having your students create a screencast as I did below, and they can also hit SMP #3 Construct Viable Arguments and Critique the Reasoning of Others. Students can give feedback on their peers' videos by posting in the comments, or send the videos to parents via email to show them what their kids know. Instead of explaining the value of Common Core math to parents, kids can do this with their example.

In this video I use the Multiplication City Google Slides I created to give a highly contextualized approach to explaining arrays to students who need practice with concrete examples. Remember, we always want to go from the concrete, to the representational, to the symbolic when teaching math concepts.

Want to learn more about teaching math with real-world lessons such as project-based learning and problem-based learning, and get free sample lessons to use in your class? Consider taking a class with me through the Heritage Institute and earn continuing education credits. Click here for more information.
Hey Elizabeth Warren, lightbulbs do matter

The switch to energy-efficient lighting like LEDs has helped the U.S. reduce emissions

More efficient light bulbs have dramatically lowered household energy demand. Photo by David Becker/Getty Images

"This is exactly what the fossil fuel industry hopes we're all talking about," Sen. Elizabeth Warren said at this week's seven-hour climate town hall when asked if she'd require Americans to use energy-efficient lightbulbs. "They want to be able to stir up a lot of controversy around your lightbulbs, around your straws, and around your cheeseburgers."

Warren's sentiments were widely shared by publications that commended her for smacking down the "dumbest climate argument." I tweeted Warren's quote, too. But then I looked up how much energy-efficient lightbulbs could actually reduce carbon pollution. And it turns out it's a lot. Lightbulbs are actually one of the U.S.'s great emissions-lowering success stories, and they're a source of inspiration for other industries, which can use lightbulbs as a model for changing consumer behavior and reducing overall energy demand.

That's not exactly what CNN moderator Chris Cuomo asked Warren. His lightbulb question was this:

So a quick question about going from the worker to the consumer. Today the president announced plans to roll back energy-saving lightbulbs, and he wants to reintroduce four different kinds, which I'm not going to burden you with, but one of them is the candle-shaped ones, and those are a favorite for a lot of people, by the way. But do you think that the government should be in the business of telling you what kind of lightbulb you can have?

But it was a timely question, if poorly phrased.
The Trump administration had just announced the rollback of regulations that would have required all lightbulbs to meet certain efficiency standards by the beginning of 2020. The standards, which were put into place under President George W. Bush in 2007, can't be met by incandescent lightbulbs, but they can be by compact fluorescents and LEDs, which started to enter the consumer market a decade ago. Some Americans might have waffled on making the switch at first, perhaps due to light quality. (I do recall the early compact fluorescents taking some time to power up and then having a bit of a blue tinge.) But the bulbs got better and better. People ended up embracing LEDs, which are not bulbs in the traditional sense but an array of light-emitting diodes, mostly because they last a lot longer. You get 1,000 hours of illumination from an incandescent compared to up to 25,000 hours per LED, meaning you don't have to buy as many of them.

What's really remarkable is that it only took about six years for the entire industry to be reshaped. The decrease in the use of incandescents very closely mirrors an overall decrease in American household energy consumption. Lightbulbs didn't do it alone, of course, but they definitely contributed to the drop in demand for electricity, which is why utilities from 47 states, many of which give their customers free energy-efficient lightbulbs to help decrease demand, have come out against the Department of Energy on the ruling. There's a reason cities everywhere are installing LED streetlights: they're saving a tremendous amount of money. The rule that would have increased lightbulb efficiency even more beginning in 2020, which the Trump administration has confirmed it is eliminating, would have expanded the regulations to also include recessed can and track lighting, decorative bulbs in chandeliers and sconces, three-way bulbs, and globe bulbs.
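The lifespan and wattage gap is easy to put numbers on. Here's a back-of-the-envelope sketch; the 60 W incandescent, its 9 W LED equivalent, and the $0.13/kWh electricity price are placeholder assumptions of mine, while the lifespan figures come from the comparison above:

```python
INCANDESCENT_WATTS = 60          # typical bulb (assumption)
LED_WATTS = 9                    # common "60 W equivalent" LED (assumption)
LED_LIFE_HOURS = 25_000          # upper figure quoted for LEDs
INCANDESCENT_LIFE_HOURS = 1_000
PRICE_PER_KWH = 0.13             # placeholder U.S. average, in dollars

# One LED outlasts this many incandescent bulbs
bulbs_replaced = LED_LIFE_HOURS // INCANDESCENT_LIFE_HOURS

# Energy and money saved over a single LED's lifetime
kwh_saved = (INCANDESCENT_WATTS - LED_WATTS) * LED_LIFE_HOURS / 1000
dollars_saved = kwh_saved * PRICE_PER_KWH

print(bulbs_replaced)             # 25 bulbs
print(kwh_saved)                  # 1275.0 kWh
print(round(dollars_saved, 2))    # 165.75 dollars
```

Even with conservative assumptions, one LED avoids buying roughly two dozen replacement bulbs and saves on the order of a megawatt-hour of electricity, which is why utilities hand them out for free.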
Just keeping those incandescent bulbs lit will require energy from an additional 25 power plants every year, according to a detailed analysis by the Natural Resources Defense Council. The story of lightbulbs in the U.S. is important, says Charles Komanoff, an expert in energy and transportation policy who wrote more on the topic for the Carbon Tax Center, because it shows that reducing emissions doesn't have to require sacrifice. "LEDs are superior to incandescents along every criterion, most critically in producing the same or better lighting with more than an 80 percent saving in electricity," he says, praising lightbulbs as part of a "40-year collaboration of engineers, environmentalists, and regulators that painlessly delivered efficiency to the entire U.S. appliance sector." What the country did with lightbulbs is what we must do for electric vehicle adoption. Electric cars are like LEDs in this scenario: many times more efficient, and technologically superior in every way. If you provide reasonable government regulation, offer subsidies to make those initial options more affordable, and guarantee consumers that they won't have to spend as much once they make the switch, people will gladly change their behavior. The science, efficiency, and quality will all get better, while costs go down. And when paired with other innovations, like new smart lights that can be programmed to power down when not in use, buildings designed to let more natural light into rooms, or lights charged by solar panels, you eventually might not need as many lightbulbs at all. (In the same way more walking, biking, and transit can eliminate the need for more electric cars.) I still think that Cuomo's question was bad. All the CNN moderators seemed fixated on the idea of sacrifice: what the government will "force" people to do, or that regulation is somehow bad. Warren was right to shut down that line of questioning.
(And she was 100 percent right about the straws, which have absolutely nothing to do with emissions.) But if you want to look at an example of one small thing anyone can do that actually has reduced emissions in the U.S., sometimes it is as simple as changing a lightbulb.
A Brief History of Treatment for Eating Disorders

If you have an eating disorder, or have witnessed the effect an eating disorder has on a friend or loved one, you know how difficult it can be to manage. Anorexia nervosa, bulimia nervosa, binge eating disorder, orthorexia, and others can cause lasting damage to the body. These disorders can also have a devastating effect on mental health. Eating disorders are often both treatment-resistant and deadly. Estimated mortality rates hover in the four to five percent range for people diagnosed with an eating disorder. That rate increases when including associated physical problems, co-occurring mental health disorders, and death by suicide. With all factors combined, some experts estimate that the true mortality rate for anorexia climbs closer to ten percent. That makes it arguably one of the most dangerous mental health disorders we know about. Other eating disorders, such as bulimia nervosa, orthorexia, and binge eating disorder, have similarly adverse health effects. All these conditions present significant treatment challenges. It's clear, then, that for people diagnosed with an eating disorder, as well as for their friends and loved ones, intervention and treatment are critical. Yet historically, experts have struggled to find the best way to treat these conditions.

Anorexia Throughout History

In the Middle Ages, inanition, or avoiding food for weeks and even months, became associated with saintliness, cleanliness, and virtue. In some religious traditions, such fasts were perceived as a sign of holiness. Self-starvation had a high-profile female role model in Saint Catherine of Siena, who starved herself to death. Her contemporaries saw this as saintly discipline and obedience to her vows. In many Judeo-Christian narratives, voluntary starvation was also viewed as a punishment or compensation for humanity's many sins.
In the latter half of the nineteenth century, the association of thinness with an ideal standard of beauty emerged. Languid and wan beauties proliferate throughout the history of art. From ancient Greek mythical iconography to various traditions of portraiture and sculpture, down to the fashion magazines of today, thinness is idealized. Romanticizing and idealizing a thin body type, like the virtuous ideal of Saint Catherine, prompts young women to embrace thinness as aspirational. Social media contributes, as well. Images of models and influencers enhanced by filters and lenses perpetuate thinness as desirable to attain. Taken altogether, the history of the association of thinness with beauty, along with current trends, can have a negative impact on the mental and physical health of young women and men alike.

Diagnosis and Early Treatment

Clinicians first defined anorexia nervosa (the clinical term for self-starvation) as a formal diagnosis in the 1970s. Bulimia, orthorexia, and binge eating disorder emerged as formally recognized conditions soon after, in the 1980s. While identifying these disordered eating habits as clinically diagnosable medical conditions was an important step, early on, clinicians struggled to find adequate treatments. Parentectomy, an often-cited early prescription, was shorthand for the idea that the parents were the root of the problem. Many thought that removing the parental influence or cutting off contact altogether could have beneficial results. Now, though, mental health experts understand that families need to work together to help a family member with an eating disorder heal. Triggers in the home environment may contribute to the complex disordered-eating behaviors that endanger patient health and well-being. However, in recent years, many promising new treatments have emerged, as researchers make progress on the study of each disorder and its associated implications for mental and physical health.
And, as research paints a more detailed picture, clinicians can now tailor a targeted approach for each person's individual needs.

New Research, New Treatments

Pharmaceuticals, when combined with cognitive behavioral therapy (CBT) and other therapeutic interventions, offer a promising avenue. In 2015, the FDA approved a new drug to treat moderate to severe binge eating disorder in adults. For anorexia and bulimia, anti-depressants and anti-emetics have shown results in some studies. The best results, though, usually occur when medication is combined with therapy. New avenues of research are ongoing. At the University of Toronto, a promising study found that patients with anorexia, as well as those diagnosed with bulimia, benefited from deep-brain stimulation. Benefits included changes in neural circuitry, improved mood and anxiety regulation, and improved body mass index. Researchers at the University of Minnesota recently obtained funding to study a new therapeutic intervention that helps binge eaters identify what triggers their disordered eating behavior. If they know their triggers, researchers propose, they can understand when they're about to binge. Then, they can use skills learned in therapy to prevent a binge-eating episode. The bottom line is that untreated eating disorders can have severe and deadly consequences. But that's not the whole story. People can and do fully recover from and manage eating disorders. With a comprehensive, integrated treatment model that stresses love, support, counseling, therapy, and lifestyle changes, people diagnosed with eating disorders can redefine their relationship with food. They can embrace a new way of living and thriving, free from the cycles of disordered eating.
All about plant fertilization Tag: outdoor plant How and when to fertilize Daylilies Daylilies are beautiful flowering plants that, despite their name, are not really lilies. They are actually perennial plants that belong to the genus Hemerocallis, and their name comes from the fact that their flowers usually last a single day. Depending on the species, you can find daylilies of different colors (red, . . . Read more How and when to fertilize areca palm The areca palm, scientifically known as Dypsis lutescens, is a palm that can be considered medium in size, reaching a height of between 1.50 and 3.00 m. It is highly sought after for decorating interiors, lending a tropical style to any room in which it is placed. Originally from . . . Read more How and when to fertilize elephant ears plant Alocasia, or elephant ear, is a highly sought after plant for both interior and exterior decoration. Without a doubt, its most remarkable feature is the beauty of its large, intense green leaves. With its origins in Tropical Asia, this exotic plant is not usually complicated to cultivate. . . . Read more How and when to fertilize ferns Ferns are very particular plants, especially since they do not produce seeds for their reproduction. Decoratively, since ferns do not produce flowers, their most striking feature is their leaves. Depending on the species, the leaves can vary in both size and shape. In general, they . . . Read more
Our ability to annotate and identify proteins and pathways that may be important targets to treat diseases is limited by the lack of a systematic understanding of how proteins and functional modules evolved to carry out desired functions. While genomic projects have accumulated vast amounts of data on the sequence, structure and function of proteins, systematic analysis of these data is not possible in the absence of a theory that relates their molecular properties to the functional constraints and evolutionary requirements on the organisms that carry the genomes. This proposal aims to develop such a theory, in which the molecular evolution of proteins is studied in the context of the Darwinian evolution of the organisms that carry them. The theoretical and experimental research proposed here aims to address the following questions: 1) How did the modern universe of protein structures evolve in early biological evolution under the environmental and competitive constraints on organisms? Why are some protein folds overrepresented in many proteins while others are unique? 2) How do organisms adapt to extreme environmental conditions, and how is that adaptation manifested in the compositional and structural repertoire of their genomes and proteomes? 3) How did new protein functions, such as error correction, evolve in the transition from the RNA world to the DNA world? 4) How did participants in biological networks - transcription factors and upstream regions - co-evolve, and how is this reflected in their phylogenetic profiles? 5) How did the protein-protein interactions responsible for immune response evolve? 6) How does the fitness landscape of an organism depend on the molecular properties (such as stability) of the proteins constituting it?
These questions will be addressed using a multi-tool approach that includes analytical theory, simulations of detailed microscopic evolutionary models using coarse-grained and realistic representations of protein structures, and experimental research that involves newly developed competitive fluorescent assays for wild-type and mutant variants of E. coli as a model system. Public Health Relevance This theoretical and experimental study aims to discover how protein structures and functions evolve in response to the functional demands of organisms. It will help to identify biological modules that are responsible for diseases such as autoimmunity and immune deficiency and will help to formulate better anti-viral strategies. National Institutes of Health (NIH) National Institute of General Medical Sciences (NIGMS) Research Project (R01) Project # Application # Study Section Macromolecular Structure and Function D Study Section (MSFD) Program Officer Edmonds, Charles G Project Start Project End Budget Start Budget End Support Year Fiscal Year Total Cost Indirect Cost Harvard University Schools of Arts and Sciences United States Zip Code Manhart, Michael; Adkar, Bharat V; Shakhnovich, Eugene I (2018) Trade-offs between microbial growth phases lead to frequency-dependent and non-transitive selection. Proc Biol Sci 285: Rotem, Assaf; Serohijos, Adrian W R; Chang, Connie B et al. (2018) Evolution on the Biophysical Fitness Landscape of an RNA Virus. Mol Biol Evol 35:2390-2400 Manhart, Michael; Shakhnovich, Eugene I (2018) Growth tradeoffs produce complex microbial communities on a single limiting resource. Nat Commun 9:3214 Jacobs, William M; Shakhnovich, Eugene I (2018) Accurate Protein-Folding Transition-Path Statistics from a Simple Free-Energy Landscape. J Phys Chem B : Razban, Rostam M; Gilson, Amy I; Durfee, Niamh et al. (2018) ProteomeVis: a web app for exploration of protein properties from structure to sequence evolution across organisms' proteomes.
Bioinformatics 34:3557-3565 Bershtein, Shimon; Serohijos, Adrian W R; Shakhnovich, Eugene I (2017) Bridging the physical scales in evolutionary biology: from protein sequence space to fitness of organisms and populations. Curr Opin Struct Biol 42:31-40 Gilson, Amy I; Marshall-Christensen, Ahmee; Choi, Jeong-Mo et al. (2017) The Role of Evolutionary Selection in the Dynamics of Protein Structure Evolution. Biophys J 112:1350-1365 Choi, Jeong-Mo; Gilson, Amy I; Shakhnovich, Eugene I (2017) Graph's Topology and Free Energy of a Spin Model on the Graph. Phys Rev Lett 118:088302 Adkar, Bharat V; Manhart, Michael; Bhattacharyya, Sanchari et al. (2017) Optimization of lag phase shapes the evolution of a bacterial enzyme. Nat Ecol Evol 1:149 Jacquin, Hugo; Gilson, Amy; Shakhnovich, Eugene et al. (2016) Benchmarking Inverse Statistical Approaches for Protein Structure and Design with Exactly Solvable Models. PLoS Comput Biol 12:e1004889 Showing the most recent 10 out of 33 publications
Immigrants Shape the Michigan Election February 6, 2004 Source: The Detroit News On February 6, 2004 The Detroit News reported on how immigrant voters may shape the outcome of the 2004 presidential election in Michigan, noting that "their numbers may be small, but they could carry a lot of weight in a tight race, which the presidential contest in Michigan is likely to be. Among these swing voters in a swing state are Arab- and Muslim-Americans and two of the fastest-growing minorities: Mexicans and Asian Indians... They want what refugees have always wanted. Security. Freedom. It’s why they came to a place where they didn’t know the language, customs or laws and, against long odds, staked a new life. They were drawn by America as much as they were pushed from their old homes. To them, civil rights is more than an election issue. It’s a symbol of their adopted nation."
It was nine years ago that the people of Kerala, the southernmost state of India, lost their beloved leader and dedicated revolutionary, Comrade E.K. Nayanar. He was ill and hospitalized for some time in Delhi before he died on May 19, 2004. He was 85 years old. Nayanar belonged to that generation of Communist leaders who were drawn into the freedom struggle at a young age. He belonged to a family of freedom fighters and revolutionaries. One of his cousins was KPR Gopalan, one of the prominent Communist revolutionaries. He joined the Balasangham as a young boy. His participation in the anti-imperialist movement began in North Malabar. He was jailed for the first time in 1940. He had to go underground after participating in the Morazha and Kayyur struggles, which were part of the peasant upsurge in Malabar. Altogether in his revolutionary career, he spent 11 years underground and 4 years in jail. Nayanar joined the Communist Party in 1939. He became an activist of the kisan movement and an organiser for the Party. He was the secretary of the Kozhikode district committee of the united CPI from 1956 to 1964. After the formation of the CPI(M), he continued to be the secretary of the Kozhikode district committee from 1964 to 1967. Nayanar was imbued with the Marxist outlook and would not compromise with any deviations from Marxist-Leninist principles. He played an important role in the fight against revisionism. He was one of the 32 National Council members who left the CPI to form the CPI(M). He was a member of the Central Committee of the CPI(M) from the 7th Congress in 1964. He was elected to the Polit Bureau at the 14th Congress in 1992. He served as the Secretary of the Kerala State Committee of the CPI(M) from 1972 to 1980 and again from 1992 to 1996. In all these positions, he played a notable role in building the Party and expanding its mass influence. E.K. Nayanar began his legislative career in 1974. He was elected to the Kerala assembly six times.
He served as the Chief Minister three times, the last term being from 1996 to 2001. He was the longest-serving Chief Minister of Kerala, holding office for a total of 11 years. Nayanar was a popular speaker who could convey the Party's message in a manner that could be understood by people at all levels. He was a man with an earthy sense of humour which endeared him to the people. He was a capable journalist, having edited the Deshabhimani daily and written innumerable articles in regular columns. As a Communist leader, Nayanar was above all a man of the people. His forte was to be with the people and to communicate with them. He was one of the leaders in Kerala most liked by the ordinary people. In the Party, Nayanar was known for his warmth and friendliness to all the cadres and members. Along with leaders like A.K. Gopalan, EMS Namboodiripad and C.H. Kanaran, Nayanar will be remembered for his immense contribution to the Communist movement in Kerala. In his death, the CPI(M) and the Left movement in India have lost an experienced mass leader and dedicated Marxist. On the occasion of his 9th death anniversary, the Association of Indian Communists (GB) and the Indian Workers Association (GB), together with all their allies and members, pay homage to this indomitable leader and beloved colleague. His memory strengthens our struggles in this era of globalization and neoliberal policies.
.. _faq-label: Frequently Asked Questions ========================== How does the module command work? We know that a child program inherits the parent's environment but not the other way around. So it is very surprising that a command can change the current shell's environment. The trick here is that the module command is a two-part process. The module shell function in bash is:: $ type module module() { eval $($LMOD_CMD bash "$@") } Where $LMOD_CMD points to your lmod command (say /apps/lmod/lmod/libexec/lmod). So if you have a module file (foo/1.0) that contains:: setenv("FOO", "BAR") then "$LMOD_CMD bash load foo/1.0" writes the following string to stdout:: export FOO=BAR ... The eval command reads that output from stdout and changes the current shell's environment. Any text written to stderr bypasses the eval and is written to the terminal. What are the environment variables _ModuleTable001_, _ModuleTable002_, etc. doing in the environment? The module command remembers its state in the environment through these variables. The way Lmod does it is through a Lua table called ModuleTable:: ModuleTable = { mT = { git = { ... } } } This table contains quotes and commas and must be stored in the environment. To prevent problems with the various shells, the table is encoded into base64 and split into blocks of 256 characters. These variables are decoded at the start of Lmod. You can see what the module table contains with:: $ module --mt How does one debug a modulefile? There are two methods. Method 1: If you are writing a Lua modulefile then you can write messages to stderr and run the module command normally:: local a = "text" io.stderr:write("Message ",a,"\n") Method 2: Take the output directly from Lmod. You can put print() statements in your modulefile and do:: $ $LMOD_CMD bash load *modulefile* Why doesn't % ``module avail |& grep ucc`` work under tcsh when it works under bash? It is a bug in the way tcsh handles evals.
This works:: % (module avail) |& grep ucc However, in all shells it is better to use:: % module avail ucc instead, as this will only output modules that have "ucc" in their name. Why does Lmod require a static location of lua? Why shouldn't a site allow Lmod to use the lua found in the path? The short answer is that it is possible, but for general use it is not a good idea. If you changed the first line of the lmod script to be:: "#!/usr/bin/env lua" and changed all the other Lmod executable scripts to do the same, then all the scripts would use the lua found in the user's $PATH. There are other things that would have to change as well. Lmod carefully sets two env. vars: LUA_PATH and LUA_CPATH to be compatible with the Lua that was used to install lmod. So why does Lmod go to this much trouble? Why doesn't Lmod just use the lua found in the path and not set LUA_PATH and LUA_CPATH? Earlier versions of Lmod did not. The answer is users. Lmod has to protect itself from every user out there. What if a user installs their own Lua? What if your system install of Lua is version 5.1 but your user wants to install the latest version of Lua? The libraries that Lmod depends on change with the version. What if a user installs their own version of Lua but doesn't install the required libraries for Lmod to work (lua-posix, lfs), or installs their own library called lfs that does something completely different? Lmod would fail with very strange errors. To sum up: Lmod is very careful to use the Lua that was used to install it and the necessary libraries. Lmod is also very careful to set LUA_PATH and LUA_CPATH internally so that user changes to those env. variables don't affect how Lmod runs. Can I disable the pager output? Yes, you can. Just set the environment variable ``LMOD_PAGER`` to **none**. Why are messages printed to standard error and not standard out? The module command is an alias under tcsh and a shell routine under all other shells.
There is an lmod command which writes out commands such as export FOO="bar and baz", while messages are written to standard error. The text written to standard out is evaluated so that the text strings make changes to the current environment. See the next question for a different way to handle Lmod messages. Can I force the output of list, avail and spider to go to stdout instead of stderr? Bash and Zsh users can set the environment variable ``LMOD_REDIRECT`` to **yes**. Sites can configure Lmod to work this way by default. However, no matter how Lmod is set up, this will not work with tcsh/csh due to limitations of that shell. How can I use grep easily with the module command? If your site doesn't send the output to stdout, you can still use this trick when you need to grep the output of the module command. Here are some examples:: $ module -t --redirect avail | grep foo $ module --raw --redirect show foo | grep bar $ module -t --redirect spider | grep baz Can I ignore the spider cache files when doing ``module avail``? Yes you can:: $ module --ignore_cache avail or you can set:: $ export LMOD_IGNORE_CACHE=1 to make Lmod ignore caches as long as the variable is set. I have created a module and "module avail" can't find it. What do I do? Assuming that the modulefile is in MODULEPATH, then you have an out-of-date cache. Try running:: $ module --ignore_cache avail If this does find it then you might have an old personal spider cache. To clear it do:: $ rm -rf ~/.lmod.d/.cache If "module avail" doesn't find it now, then the system spider cache is out-of-date. Please ask your system administrator to update the cache. If you are the system administrator then please read :ref:`system-spider-cache-label` and :ref:`user-spider-cache-label` Why doesn't the module command work in shell scripts? It will if the following steps are taken. First, the script must be a bash script and not a plain sh script, so start the script with ``#!/bin/bash``.
The second is that the environment variable BASH_ENV must point to a file which defines the module command. The simplest way is having ``BASH_ENV`` point to ``/opt/apps/lmod/lmod/init/bash`` or wherever this file is located on your system. This is done by the standard install. Finally, Lmod exports the module command for Bash shell users. How do I use the initializing shell script that comes with this application with Lmod? New in Lmod 8.6+, a modulefile can contain **source_sh** ("shell", "shell_script arg1 arg2 ...") to source a shell script by automatically converting it into module commands. Sites can use $LMOD_DIR/sh_to_modulefile to convert the script once. See :ref:`sh_to_modulefile-label` for details. Why is the output of ``module avail`` not filling the width of the terminal? If the output of ``module avail`` is 80 characters wide, then Lmod can't find the width of the terminal and instead uses the default size (80). If you do ``module --config``, you'll see a line: Active lua-term true If it says **false** instead then lua-term is not installed. One way this happens is by building Lmod on one computer system that has a system lua-term installed and then using the package on another where lua-term isn't installed. Why isn't the module command defined when using the **screen** program? The screen program starts a non-login interactive shell. The Bash shell startup doesn't source /etc/profile, and therefore the ``/etc/profile.d/*.sh`` scripts, for non-login interactive shells. You can patch bash and fix ``/etc/bashrc`` (see :ref:`issues-with-bash` for a solution) or you can fix your ``~/.bashrc`` to source ``/etc/profile.d/*.sh`` You may be better off using **tmux** instead. It starts a login shell. Why does ``LD_LIBRARY_PATH`` get cleared when using the **screen** program? The screen program is a setgid program. That means it runs as the group of the program and not the group associated with the user.
For security reasons, all programs of this kind clear ``LD_LIBRARY_PATH``. This unsetting of ``LD_LIBRARY_PATH`` is done by the Unix operating system and not by Lmod. You may be better off using **tmux** instead. It is a regular program. How can you write TCL files that can be safely used with both Lmod and Tmod? For example, the hide-version command only works with Lmod and could be found in ~/.modulerc. This could be read by both Tmod and Lmod. You can prevent Tmod from executing Lmod-only code in the following way:: #%Module global env if { [info exists env(LMOD_VERSION_MAJOR)] } { hide-version CUDA/8.8.8 } Lmod defines the environment variable LMOD_VERSION_MAJOR during its execution. This trick can also be used in a TCL modulefile to set the family function:: #%Module ... global env if { [info exists env(LMOD_VERSION_MAJOR)] } { family compiler } As of Lmod 8.4.8+ you can also use the TCL global variable ModuleTool:: #%Module ... if { [info exists ::ModuleTool] && $::ModuleTool == "Lmod" } { family compiler } How can I get the shell functions created by modules in bash shell scripts such as job submission scripts? First, please make sure that shell functions and aliases work correctly in bash interactive sub-shells. If they don't, then your site is not set up correctly. Once that works, change the first line of the shell script to be: #!/bin/bash -l Note that this is a minus ell, not a minus one. This will cause the startup scripts to be sourced before the first executable statement in the script. Why do modules sometimes get loaded when I execute ``module use``? A main principle is that when $MODULEPATH changes, Lmod checks all the currently loaded modules. If any of those modules would not have been chosen, then each is swapped for the new choice. How to use module commands inside a Makefile? A user might wish to use module commands inside a Makefile. Here is a generic way that would work with both Tmod and Lmod.
Both Lmod and Tmod define MODULESHOME to point to the top of the module install directory, and both tools use the same initialization method to define the module command. Here is an example Makefile that shows a user listing their currently loaded modules:: module_list: source $$MODULESHOME/init/bash; module list What to do if new modules are missing when doing ``module avail``? If your site adds a new modulefile to the site's $MODULEPATH but you are unable to see it with ``module avail``, it is likely that your site is having a spider cache issue. If you see different results from the following commands then that is the problem:: $ module --ignore_cache avail $ module avail If you see a difference between the above two commands, delete (if it exists) the user's spider cache:: $ rm -rf ~/.lmod.d/.cache ~/.lmod.d/__cache__ and try again. If that still leads to a difference then there is an out-of-date system spider cache. Please see :ref:`system-spider-cache-label` on how to set up and update a system spider cache. This issue can happen with a user's personal spider cache. Please see :ref:`user-spider-cache-label` for more details. How to edit a modulefile? Lmod does not provide a way to directly edit modulefiles. Typically modulefiles are owned by the system and so cannot be edited by users. However, Lmod does provide a convenient way to locate modules, which could be used for a bash/zsh shell function:: function edit_modulefile () { $EDITOR "$(module --redirect --location "$1")" } If my startup shell is bash or tcsh and I start zsh, why do I get a message like: "/etc/zsh/zshrc:48: compinit: function definition file not found" Lmod supports both zsh and ksh. Both of these shells use the shell variable FPATH, but in very different ways. The issue is that some bash or tcsh users run ksh scripts and need access to the module command. In the K-shell, the env. var. FPATH is exported and is the path to where the module shell function is found.
Z-shell also uses FPATH, to point to tools like compinit and others. Because FPATH has already been exported, Z-shell does not set its own value of FPATH, which means that the zsh user cannot find all the functions that make zsh work. The solution is to either add "export SUPPORT_KSH=no" or "unset FPATH" in your bash startup files before "exec zsh".
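The two-part eval mechanism described in the first answer of this FAQ can be sketched without Lmod installed. This is a hedged illustration only: the helper ``emit_env`` and the variable ``FOO`` are stand-ins for ``$LMOD_CMD bash load foo/1.0`` and a real modulefile's ``setenv``, not Lmod's actual code::

```shell
# emit_env stands in for "$LMOD_CMD bash load foo/1.0": it prints shell
# code on stdout and a user-facing message on stderr.
emit_env() {
  echo 'export FOO=BAR'          # stdout: text to be eval'd
  echo 'Loading foo/1.0' >&2     # stderr: bypasses the eval, reaches the terminal
}

# mymodule plays the role of the module() shell function: eval'ing the
# captured stdout changes the *current* shell's environment.
mymodule() {
  eval "$(emit_env)"
}

mymodule
echo "FOO is now: $FOO"
```

Running this prints the "Loading foo/1.0" message to the terminal via stderr, while the eval'd stdout sets ``FOO=BAR`` in the calling shell, which is exactly why the real module command can modify your environment.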
Article 7 of the UN General Assembly’s 1989 Convention on the Rights of the Child states that a “child shall be registered immediately after birth…” Yet many nations still lag behind in terms of birth certification. For example, in India, only 41% of newborns receive a birth certificate. In Tanzania, a birth certificate is necessary for university enrollment, yet more than 30% of the country’s people lack proper birth registration. In Sudan, you need a birth certificate to attend school. However, 20% of the nation’s children remain unregistered. Identification is necessary for an adequate livelihood. It is required for a multitude of government services throughout the world. Implementing birth registration through blockchain brings many advantages. Besides cutting costs and optimizing the registration process, it also increases education rates around the world. Why Birth Certificates Are So Important In essence, birth certificates are the documentation that proves our existence. Just imagine: 51 million children around the globe go unregistered at birth annually. Without birth certificates, we are practically invisible. As a result, the term “invisible children” has been coined to represent children who remain birth certificate-less. These children cannot travel; in most cases, they cannot attend school, let alone a university; and they are cut off from the services necessary to lead a fulfilling life. How Blockchain Is Helping India Despite being a party to the convention on children’s rights, India has had a problem with birth registration. This has led to many problems in the country, including many children not receiving proper care. It is estimated that, with proper registration, college attendance rates in India could double. Due to these alarming facts, India has to implement better practices when dealing with birth certificates and child registration, using blockchain to push registration rates further.
Lynked.World is a digital identity management platform powered by blockchain. The project’s ultimate goal is to verify identity through a simple QR code scan. Local governments throughout India have begun adopting the use of Lynked.World for new births in the region. An increase in birth certificates could bring a variety of valuable changes; primarily, people would finally get an identity. Birth certificates are necessary to attend most college institutions. Therefore, their increase would spread education across the globe, especially in third-world countries, where birth registration is considerably low. Bringing better identification methods with blockchain would end outdated practices and provide billions of people throughout the world with better opportunities and a higher quality of life. You may also like: ➔ How Good Can Blockchain be for Social Good? Exploring Use Cases  See how blockchain establishes transparency in fundraising processes, makes education more accessible, and does even more for social good. How Blockchain Optimizes Marriage Registration Almost as important as birth certificates, marriage certificates are being impacted by blockchain as well. Current certificates require a vast amount of documentation. Paper-based documentation is prone to human error and fraud. These issues often slow down the certification process. By putting these forms on the blockchain and automating them, the whole marriage certification can be conducted in a matter of minutes rather than months. Bitnation, a blockchain-based project geared towards the financial industry, is now used for marriage certification. Their “Smart Love” is the world’s first blockchain-powered marriage service, which provides a simpler registration for marriages. This same technology can be implemented toward divorce certificates, too. 
Divorce rates are alarming throughout the world, and divorce registration is also in desperate need of optimization: in the U.S., for instance, the divorce rate hovers between 40 and 50%. Death Certificates and Will Registration: How Blockchain Eases the Pain The loss of a loved one is one of the hardest times in any person’s life. Unfortunately, this process also involves large amounts of paperwork, including wills, death certificates, and other documents. Blockchain technology could make the processes surrounding a death much easier to deal with, considering that electronic estate planning, electronic health records, real estate, and funeral donations can all be simplified with blockchain. One project aims to place health records on the blockchain. When a death occurs, this makes it much easier for doctors, lawyers, or any other necessary agents attending the death and burial process to access the deceased’s health records. In addition to health-related record-keeping, blockchain is revolutionizing estate data records as well. Contract Vault is putting last wills on the blockchain. A will based on the blockchain goes far beyond a simple optimization and saves possible legal costs along the way, avoiding debates over the validity of a particular document. Making the Invisible Visible Again With Blockchain Blockchain can bring changes that are long overdue. Whether it be establishing birth certificates in places like India so children can go to school, or easing the burden of death and will registration by safely storing valuable health and estate information, blockchain will play a vital role in shaping the world of tomorrow. In addition to all the improvements blockchain is already bringing to the medical, financial, and banking industries, this technology will bring even more to the record-keeping sphere, making the lives of people throughout the world a lot easier and more fulfilled.
The Dominant Narratives of Future Societies Ernest Dempsey on what sci fi stories say about the stories controlling our lives. Curiosity is instinctive to life-forms with complex nervous systems. Humans, for example, tend to quest for knowledge. One of the fruits of knowledge is technology, the practical application of knowledge to satisfy various needs. Yet technology influences human society not only materially, but also culturally and existentially, and as it transforms society it raises a number of challenges. For instance, just as different groups – geographical, racial, or cultural – sometimes clash over the control of natural resources, people also tend to experience similar conflict over control of advanced technology. Science fiction novels such as Aldous Huxley’s Brave New World (1932) and Philip K. Dick’s Do Androids Dream of Electric Sheep? (1968), and many others, illustrate a recognizable pattern in advanced scientific societies: that those who control the technological resources define the dominant narrative. A ‘dominant narrative’ is a set of stories that dominate a culture and to a large extent define reality for its members. In the societies both of Brave New World and of Do Androids Dream of Electric Sheep? the dominant narrative is determined by the class who possess the technological power, particularly the power to intimidate and to force dissidents into submission. Worlds Gone Bad Brave New World Cover of Brave New World © Bantam Press Brave New World depicts a dystopic world in the Twenty-Sixth Century when a single World State is the main political entity. It controls artificial reproduction to the degree that natural birth through mothers and fathers is but a story from the past. Instead of elementary schooling, children are indoctrinated in their sleep via advanced ‘hypnopaedic’ educational technology.
To fit them ‘perfectly’ (or so they are assured) for the work roles to which they will be assigned, they are divided genetically and educationally into a rigid hierarchy of classes. Recreational sex is encouraged, but it is utterly removed from its reproductive role and its emotional role in forming enduring relationships, which are frowned upon. Thus the owners of the technology completely control the social hierarchy by manipulating peoples’ genes as well as their opinions and feelings. This scenario is most likely to strike us as abhorrent. However, our judgment of the society of Brave New World comes from within our dominant narrative. Do we have a neutral place to stand, from which we can fairly judge and assess different societies? Do Androids Dream of Electric Sheep? Cover of Do Androids Dream of Electric Sheep? © Panther Science Fiction Philip K. Dick’s Do Androids Dream of Electric Sheep? – made into the film Blade Runner (1982) – also presents a future dystopic world. Here a world war has destroyed much of the environment, and humans have started to migrate from Earth to off-world colonies. The story is set in a run-down Los Angeles where there are flying cars and people can control their own moods using a ‘mood organ’. Technology corporations have developed highly intelligent androids that have assisted humans in colonizing other planets, the pinnacle being the Nexus-6 robots. But some of the androids have self-consciousness and have refused to live in slavery. A group of Nexus-6 kill their human masters on Mars and flee to Earth. Here, bounty hunter Rick Deckard is assigned the task of tracking and killing the six androids remaining at large. Do Androids Dream of Electric Sheep? presents a deep ethical conflict: Deckard is essentially a killer for money. He doesn’t feel bad about it until he starts questioning the distinction between humans and androids that is the basis for the justification of hunting down the androids. 
The distinction is part of the dominant narrative of his society, which says that androids lack empathy and can be dangerous, and so can be killed without scruple. On the face of it, this justification sounds valid, but questioning the dominant narrative quickly undermines the justification for the killings and other social facts. For instance, if humans have empathy, why do they kill? Is dividing society into classes (or castes) and subjecting the lower classes to near-slavery justifiable? The character of John Isidore is a great example here – he’s been nicknamed ‘chickenhead’ because he cannot pass a certain intelligence test, and is therefore put into a lower status class and made to work bad jobs and endure poor living conditions, even though he is more capable of empathy than members of the ruling class. Another interesting aspect of both dystopic worlds is the presence of a religion followed by the masses but actually defined and promoted by the owners of the technology. In Brave New World, Henry Ford (of Model-T fame) is the major religious figure idolized by the masses, whom the state encourages to believe in and look up to as a prophet of the perfectibility of life through technology. In Do Androids Dream of Electric Sheep?, Mercer is a prophetic figure who can be accessed using the technology owned by the state. Both figures play a key role in maintaining the dominant narrative of their respective societies. Blade Runner cityscape The world of Blade Runner The Power of Counter-Narratives As is well-known in sociology, in any society there is often at least one counter-narrative to the dominant narrative – a story that claims that reality is different, or even quite the opposite, to what the dominant narrative claims it to be. Such counter-narratives are followed by small groups of individuals who have different cultural or historical backgrounds, or who are cognitively different, from the mainstream.
Brave New World and Do Androids Dream of Electric Sheep? both present interesting counter-narratives. In Brave New World, the character John (the ‘Savage’) is a personification of a counter-narrative. Born on a reservation away from the influence of the World State, through a natural birthing process – from a mother and a father – he is brought like a freakshow curiosity back to ‘civilization’. He is disliked and rejected by the state controlling the means of (mis)information, but he at least gets a handful of followers, such as Bernard, who use him to challenge the political authority. Rather than Fordism, John has learned his values from a tattered old copy of Shakespeare’s collected plays, which is the source defining reality for him. And in Do Androids Dream of Electric Sheep?, Buster Friendly presents the counter-narrative. A radio/TV anchor, Buster mocks the technologically-defined religion of Mercer and its followers; and he himself is followed by a minority who have deviated from the dominant narrative, including John Isidore. Android Roy Batty (Rutger Hauer) has a final moment of empathy in Blade Runner Stills from Blade Runner © Warner Bros 1982 Sceptics may wonder what we can learn from these books given that they are works of fiction; after all, the societies they portray don’t even exist. However, they have value in two ways. Firstly, they identify and critique certain nascent tendencies in our own society. The technique is a little like the philosophical manoeuvre known as reductio ad absurdum: the tendencies are extrapolated to their logical conclusion, which is then shown to be unappetising. Secondly, the stories allow us to examine in general terms how dominant narratives work and are sustained, and to do so much more easily than if we were to look at those of our own society. It is never easy to step outside your own world’s narrative and see it from the outside. Brave New World and Do Androids Dream of Electric Sheep?
are works of deep social and philosophical implication. They both consider the issue of the defining criteria of reality. The relation between technology and dominant cultural narratives is a point of concern because after all, technology is a part of our lives too. However, shall we allow a part of life to define the whole of reality for us? Or should we listen to, perhaps even actively look for, other narratives, whereby we may get a different picture of reality? Works like Brave New World and Do Androids Dream of Electric Sheep? provide us with starting points to encourage us to ask brave new questions. © Ernest Dempsey 2015 Writer and editor Ernest Dempsey has authored five books. He is the chief editor of the quarterly Recovering the Self (recoveringself.com). He runs a popular blog Word Matters! (ernestdempsey.com).
Essential Truths That You Should Learn about Recipes. Writing recipes is a useful skill for anybody involved in the food industry, especially food service. There are several benefits to writing recipes, including increased sales and better customer satisfaction. For example, you can become a cookbook author, a newsletter editor, or even a community nutritionist, roles that let you serve people from all backgrounds. To be an effective recipe writer, you must know how to compose instructions that are clear and easy to follow. Your recipes should be tested and tailored to your audience, and they should be visually appealing, since presentation is an important part of the preparation. The word “recipe” is derived from the Latin recipere, meaning “to take”; in prescriptions it is most commonly abbreviated R or Rx. One famous example is the Toll House cookie, whose recipe originated at the Toll House Inn in Whitman, Massachusetts, and takes its name from that establishment. Put simply, a recipe is a set of directions for preparing a food item. Depending on context, the word can also refer to a cookery book, a formulary, or a medical prescription; before the nineteenth century, “recipe” was used in English chiefly for prescriptions. A related term is the cookbook: a collection of recipes that you can follow to prepare meals. Recipes themselves are generally part of the public domain and belong to nobody, which means they are available for commercial use, and this is one reason they are so widespread today.
The recipes you find in cookbooks have been handed down through generations. You can also develop recipes with the help of your own family, but simply copying somebody else’s recipe will never turn out quite the same, so always seek out the best ingredients and methods for your own dish. A recipe is, at heart, a set of directions for preparing a dish: a guide to the preparation of any kind of food. Writing recipes is a good way to showcase your skills in the kitchen and to increase your sales, and the key is to take pride and a sense of ownership in your cooking, which will also help you avoid mistakes. When you buy a cookbook, it is worth making sure the copy is a legitimate, non-pirated edition. A recipe, then, is a document with instructions for how to prepare a specific food, and a cookbook collects recipes for many kinds of foods, giving you information on the proper preparation of various ingredients. As an American food writer, I have come to understand the importance of recipes. I love cooking and eating, I have spent years learning the art of recipe writing, and I have worked in a kitchen, so I know what it takes to produce an outstanding recipe; many readers find the process extremely rewarding, and I am proud to share my secrets with you. A recipe is a set of instructions that a cook follows to prepare a particular meal or beverage. It can be as simple or as complicated as the procedure demands, and it can serve as a memory aid, a teaching tool, or an ethnographic record.
A cookbook can also function as a marketing tool for a chef, restaurant, or cooking school, and it can help promote a business. Recipes are a common form of literary expression as well. A good example is the food blog: such websites often pair each recipe with a personal story, and a personal story or description can carry real meaning and help readers connect with the author. So what makes a good recipe, and what are the different kinds of recipe writing? Legally, a recipe can sometimes be protected under copyright law, and a recipe that is derivative of another person’s creative text may raise copyright questions; but rather than worrying, treat the recipe as a creative work of your own and follow your heart, and the results are likely to be delicious and original. A recipe is a written procedure for preparing a dish. It typically includes a list of ingredients and detailed instructions, which may cover how to assemble, mix, cook, or chill the ingredients, and it may mention garnishes that add flavour to the dish. The word “recipe” was originally used as an instruction in medical prescriptions, and only later became part of common usage. Under English-language copyright law, bare recipes are generally not protected; they are simply written instructions for a dish. A cookbook, however, can contain stories as well as instructions: a recipe can appear in a cookbook, a news article, or a blog, and the author can share the story behind it.
It may also contain a photo or a video clip of the dish.
Society, August 9, 2018 What’s still worth recycling these days? The state of recycling in New Zealand is back in the news after China announced it will no longer take much of our used plastic. But that’s no reason to give up on recycling entirely. We sent Gareth Shute to find out which materials you can most fruitfully keep out of landfill. There’s nothing like seeing a huge pile of plastic mounting up at a recycling processing centre or transfer station to make you think “why should I bother even putting stuff in my recycling when it’s just going to end up in landfill anyway?” But whatever the difficulties of maintaining a recycling system in a small country like New Zealand, that shouldn’t negate the fact that some material is still easy to give a second life. To get a bit of perspective, I spoke to Glen Jones, commercial manager from EnviroWaste, one of New Zealand’s largest waste management companies, to get a sense of where our efforts are best directed. Avoid unnecessary packaging in the first place Before we get started, let me make the most obvious point – if you can purchase an item in a reusable container then that’s always going to be better than recycling. While supermarkets might be pushing reusable bags, there are many other options in this area. How about buying your beer by bringing your own glass bottle and having it filled up from the taps? Or taking your own mesh bags for fruit, vegetables, and items from the bulk food bins? Nonetheless, some waste is always going to be generated. But, as Glen Jones points out, some materials have more possibility of being re-used than others. At the top of the list are ‘most organics (green waste/food waste), glass, fibre (cardboard and paper), some plastics (PET and HDPE) and aluminium.’ Let’s go through each of these to understand why this is the case.
Organic waste When you think about it, the idea of getting fruit and vegetable scraps and putting them in plastic bags to be buried underground is quite insane (at least, it is to any gardeners out there). If these same scraps were properly composted, within a few months they’d become valuable fertiliser for growing plants. Of course, having a compost heap isn’t the most practical solution for apartment dwellers (though there are at least a few who use a bokashi bin to break down their food scraps at home, then empty it every couple of weeks into the compost of a nearby community garden). Fortunately, an easier option is on its way to those in our biggest city, with Auckland Council rolling out a kerbside food scraps collection service; lucky Papakura already has one. Jones says that removing organic material from refuse is always going to be the best way to minimise your landfill footprint. But if you aren’t composting right now, not all is lost. There’s always your Insinkerator: “Food waste can be disposed of via your waste disposal system – waste disposal manufacturers even claim that there are environmental benefits to this at some secondary processing centres – and via green waste and food waste collections which are offered by some councils and recycling companies.” “However, if a householder disposes of organic material in refuse and it goes to landfill, not all is lost from a re-use perspective. As rubbish at a landfill degrades, it releases methane gas. At modern landfills, this gas is extracted via a network of wells and converted into electricity.
For example, EnviroWaste’s Hampton Downs Landfill currently produces sufficient electricity to power the operations at the landfill, as well as supplying the national grid with enough electricity to power approximately 5,600 households.” Aluminium The remarkable thing about recycling the cans that hold soft drink and tinned fruit and vegetables is that the process for turning the metal back into a usable product actually requires less energy than primary production. Recycling aluminium from a can takes only 5% of the energy required to extract it from ore. Clean aluminium foil can be recycled too. This makes it very useful when compared to other items that might be in your kerbside recycling bin. As Jones points out, “aluminium is the most valuable commonly recycled commodity on a per tonne basis.” PET and HDPE plastics The fact that China no longer takes some varieties of plastic (grades 3-7) shouldn’t obscure the fact that two other types are still in demand – specifically PET (recycling symbol #1) and HDPE (recycling symbol #2). In fact, HDPE garners a per tonne return that is second only to aluminium. You’re most likely to come across these plastics as soft drink and milk bottles. Basically, if you have a plastic bottle then it’s probably worth recycling. (For a table showing the different plastic grades and what kinds of items each plastic type is used for, click here) Glass A month ago, The Spinoff ran a piece by the owner of Happy Cow Milk Company explaining why he was so focused on selling his product in glass rather than plastic bottles. You might wonder why he’s so worried about it, when as we’ve just discussed, the two types of plastic used for bottles are actually quite recyclable. Basically the answer goes back to the production stage: the creation of plastic is far more harmful to the environment than glassmaking. The analysis in this Guardian piece explains the issue and argues for the use of glass over plastic.
Another point in favour of glass is that it is usually recycled within New Zealand. Not only is it made into new jars and bottles, but it can also be used in the construction of roading (as ‘glasscrete’ and ‘glassphalt’). Cardboard and paper Last but not least are fibre products like cardboard and paper, which can also have a second life as newspaper, tissues, or cardboard trays. However, before you chuck your pizza box in the recycling bin, do scrape the last bits of food off it. If the leftover toppings begin to rot in the bin then the cardboard is less likely to make good material for recycling. Recycling bin etiquette The simple truth is that for recycling in New Zealand to work effectively, it takes a little bit of effort from all of us. This means knowing what can be taken in your kerbside bin in your area. For example, in Auckland it causes a lot of problems when plastic bags are put into your recycling bin since they jam up the processing machines (instead these should be collected and put in soft plastics recycling bins at your local supermarket or other retailer). Nor can organic material be put in with standard recycling. Auckland Council video showing the types of products that can be recycled in its jurisdiction. NB this applies to Auckland only, and predates the introduction of kerbside compost collection in Papakura Despite this, Jones often finds problematic materials coming through their system, including “organic material, nappies and plastic bags. From a safety perspective, gas bottles and lithium batteries are also problematic because they have the ability to cause significant damage when placed in a collection vehicle. “So treat your recycling bin as a recycling bin. Any organic or banned material has the ability of contaminating good recyclable product and can therefore risk the recyclable material having to be thrown away.
Washing plastic material prior to placing it in a recycling bin is also a good way of helping to ensure that it’s recycled.” While there’s no question that the way we approach recycling in New Zealand will require continued work over the decades to come, it’s clear that there are plenty of materials that can be effectively diverted from landfill right now. So any time you see an empty bottle, pizza box, or aluminium can lying fallow, it’s certainly worth your while to scoop it into a recycling bin, safe in the knowledge that it won’t go to waste.
Treaty with the Chickasaw, 1852 June 22, 1852. | 10 Stat., 974. | Ratified Aug. 13, 1852. | Proclaimed, Feb. 24, 1853. Articles of a treaty concluded at Washington, on the 22nd day of June, 1852, between Kenton Harper, commissioner on the part of the United States, and Colonel Edmund Pickens, Benjamin S. Love, and Sampson Folsom, commissioners duly appointed for that purpose, by the Chickasaw tribe of Indians. The Chickasaw tribe of Indians acknowledge themselves to be under the guardianship of the United States, and as a means of securing the protection guaranteed to them by former treaties, it is agreed that an Agent of the United States shall continue to reside among them. That the expenses attending the sale of the land ceded by the Chickasaws to the United States, under the treaty of 1832, having, for some time past, exceeded the receipts, it is agreed that the remnant of the lands so ceded and yet unsold, shall be disposed of as soon as practicable, under the direction of the President of the United States in such manner and in such quantities, as, in his judgment, shall be least expensive to the Chickasaws, and most conducive to their benefit: Provided, That a tract of land, including the grave-yard near the town of Pontotoc, where many of the Chickasaws and their white friends are buried, and not exceeding four acres in quantity, shall be, and is hereby set apart and conveyed to the said town of Pontotoc to be held sacred for the purposes of a public burial-ground forever.
It is hereby agreed that the question of the right of the Chickasaws so long contended for by them, to a reservation of four miles square on the River Sandy, in the State of Tennessee, and particularly described in the 4th article of the treaty concluded at Oldtown, on the 19th day of October, 1818, shall be submitted to the Secretary of the Interior who shall decide, what amount, if any thing, shall be paid to the Chickasaws for said reservation: Provided, however, That the amount so to be paid shall not exceed one dollar and twenty-five cents per acre. The Chickasaws allege that in the management and disbursement of their funds by the government, they have been subjected to losses and expenses which properly should be borne by the United States. With the view, therefore, of doing full justice in the premises, it is hereby agreed that there shall be, at as early a day as practicable, an account stated, under the direction of the Secretary of the Interior, exhibiting in detail all the moneys which, from time to time, have been placed in the Treasury to the credit of the Chickasaw nation, resulting from the treaties of 1832, and 1834, and all the disbursements made therefrom. And said account as stated, shall be submitted to the Chickasaws, who shall have the privilege, within a reasonable time, of filing exceptions thereto, and any exceptions so filed shall be referred to the Secretary of the Interior, who shall adjudicate the same according to the principles of law and equity, and his decisions shall be final and conclusive on all concerned. It is also alleged by the Chickasaws that there are numerous cases in which moneys held in trust by the United States for the benefit of orphans and incompetent Chickasaws, have been wrongfully paid out to persons having no right to receive the same. It is therefore further agreed, that all such cases shall be investigated by the Agent of the United States under the direction of the Secretary of the Interior. 
And if it shall appear to the satisfaction of said Secretary, that any of the orphans and incompetents have been defrauded by such wrongful payments, the amount thus misapplied shall be accounted for by the United States, as if no such payment had been made: Provided, That the provisions of this article shall not be so construed as to impose any obligations on the United States to reimburse any expenditures heretofore made in conformity with the stipulations contained in the treaties of 1832 and 1834: And provided further, That the United States shall not be liable to repay moneys held in trust for the benefit of orphans and incompetent Chickasaws, in any case in which payment of such moneys has been made upon the recommendation or certificate of the persons appointed for that purpose in the Fourth Article of the Treaty of 1834, or of their successors, and in other respects in conformity with the provisions of that article: And provided further,That the United States shall not be held responsible for any reservation of land or of any sale, lease, or other disposition of the same, made, sold, leased, or otherwise disposed of, in conformity with the several provisions of said treaties of 1832 and 1834. The Chickasaws are desirous that the whole amount of their national fund shall remain with the United States, in trust for the benefit of their people, and that the same shall on no account be diminished. It is, therefore, agreed that the United States shall continue to hold said fund, in trust as aforesaid, and shall constantly keep the same invested in safe and profitable stocks, the interest upon which shall be annually paid to the Chickasaw nation: Provided, That so much of said funds as the Chickasaws may require for the purpose of enabling them to effect the permanent settlement of their tribe as contemplated by the treaty of 1834, shall be subject to the control of their General Council. 
The powers and duties conferred on certain persons particularly mentioned in the 4th article of the treaty of 1834, and their successors in office, shall hereafter be vested in and performed by the General Council of the Chickasaws, or such officers as may be by said council appointed for that purpose; and no certificate or deed given or executed by the persons aforesaid, from which the approval of the President of the United States has once been withheld, shall be hereafter approved unless the same shall first receive the sanction of the Chickasaw Council, or the officers appointed as aforesaid, and of the agent of the United States for said Chickasaw nation. No claim or account shall hereafter be paid by the Government of the United States out of the Chickasaw fund; unless the same shall have first been considered and allowed by the Chickasaw General Council: Provided, however, That this clause shall not effect payment upon claims existing contracts made by authority of the Chickasaw General Council, or interfere with the due administration of the acts of Congress regulating trade and intercourse with the Indian tribes. It is further agreed, that regular semiannual accounts of the receipts and disbursements of the Chickasaw fund shall be furnished the Chickasaw Council by the Government of the United States. The sum of fifteen hundred dollars shall be paid the Chickasaw nation, in full of expenses incurred by their commissioners in negotiating this treaty. And it is further stipulated, That in no case hereafter, shall any money due or to be paid under this treaty or any former treaty between the same contracting parties be paid to any agent or attorney; but shall in all cases be paid directly to the party or parties primarily entitled thereto. In witness whereof the contracting parties have hereto set their hands and seals, the day and year above written. Kenton Harper, Commissioner for the United States. [SEAL.] Edmund Pickens, his x mark, [SEAL.] Benjamin S. 
Love, [SEAL.] Sampson Folsom, [SEAL.] Commissioners for the Chickasaws. In presence of-- Charles E. Mix, chief clerk, Office Indian Affairs L. R. Smoot, T. R. Cruttenden, H. Miller, Aaron V. Brown, interpreter.
Working with APIs Imagine a world where information on the internet stayed on its own website. YouTube videos lived only on YouTube, news articles only lived on news websites, and cat memes only lived on photo sites, among many other differences. Social media, for example, would be a shell of its current self. Interestingly enough, this is the world that most browsers are built to expect. Because of the Cross-Origin Resource Sharing (CORS) rules, browsers, by default, block information sharing across different domains (e.g. sharing something from one site to another). This is included as a security measure, so the data owner can control who can access their information and how. The ways different websites allow resource sharing make the internet much more interesting and collaborative. An API (or Application Programming Interface) is the way that information can be shared over the internet. For one project at Hack Reactor, I made a website (client) that loads YouTube videos based on a user-defined search (so, like, YouTube). To do this, I used the YouTube API. It was quick and easy to set up — I went to the Google APIs page, applied for an “API key” and voila! I was able to execute search queries on YouTube’s database to get information about their videos! You might be wondering, “why do we need API keys? Why can’t YouTube just send any list of videos that I want?” The short answer is: “because it’s their data, and they can do what they want with it.” Imagine, for a second, that someone had a genius idea to make a new website, let’s call it, MeTube, where people could search up any videos they want, play them from the browser, and while they’re at it, maybe consume some ads that help pay the executives over at MeTube. Imagine also that all of these videos actually came from YouTube, but MeTube was reaping all of the benefits. Of course YouTube doesn’t want that — they want you to watch ads on their platform so they get the money!
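To make that setup concrete, here’s a minimal sketch of how a client might assemble a search request. The endpoint and parameter names follow Google’s public documentation for the YouTube Data API v3, but the helper function is my own and the key is a placeholder:

```python
from urllib.parse import urlencode

def build_search_url(query, api_key, max_results=10):
    """Assemble a YouTube Data API v3 search URL for a user-defined query."""
    base = "https://www.googleapis.com/youtube/v3/search"
    params = {
        "part": "snippet",        # which fields of each result to return
        "q": query,               # the search string typed by the user
        "maxResults": max_results,
        "key": api_key,           # the API key identifies (and rate-limits) you
    }
    return base + "?" + urlencode(params)

# A client would then GET this URL and render the returned video list.
url = build_search_url("cat memes", "YOUR_API_KEY")
```

Every request carries the key, which is exactly how YouTube can count your queries and cut you off if you exceed your quota.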
To help protect YouTube from such a nefarious plot, they giveth API keys. Unsurprisingly, they can also taketh away API keys. During my aforementioned project building a YouTube client, I wound up on YouTube’s “naughty list.” My page was reloading far more often than I expected on both mine and my pair’s computers, so we racked up hundreds of requests to YouTube’s database — to which they promptly said, “no thank you, you can…er…need to…stop now.” We were locked out and finished the project with sample data :) APIs are very powerful — they let users view information on their platform of choice and contribute to the open, collaborative nature of the internet. Of course, they belong to the data owners, so they need to be used carefully and respectfully if you wish to continue using them.
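One habit that would have kept us off the naughty list: cache responses during development, so a page reload doesn’t trigger a fresh API call for a query you’ve already made. A minimal sketch, where the function body is a stand-in for the real HTTP request rather than an actual YouTube client:

```python
import functools

CALLS = {"count": 0}  # tracks how many "real" API calls we make

@functools.lru_cache(maxsize=128)
def search_videos(query):
    """Return results for a query, hitting the API at most once per
    distinct query while the cache is warm."""
    CALLS["count"] += 1          # in a real client: issue the HTTP request here
    return f"results for {query!r}"

search_videos("cats")   # first call: goes out to the "API"
search_videos("cats")   # repeat: served from the cache, no new request
```

With a cache like this in front of the API, a hundred accidental page reloads cost one request instead of a hundred.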
This wiki describes hardware systems and software algorithms for rapidly detecting misprints in high-speed digital presses. (G. Leseur, N. Meunier, P. B. Catrysse, and B. A. Wandell with support from an HP Labs Innovation Research Award) Modern high-speed digital presses print a relatively small number of copies of a document, such as books or brochures, and they typically do this much faster than traditional rotary presses. Digital presses also permit the printing of short-runs, rather than the very large volumes that are needed to justify a traditional (rotary) press. When only a small number of copies are printed, the proportional cost of each error is much higher. If an error is caught after 50 copies in a short-run of 100 copies, the cost is increased by 50%. If the error is caught after 50 copies of a typical 1,000,000 copy run then the increase in cost is much lower. Hence, detecting misprints is particularly important on short-runs. In this project, we study a high-speed digital sensing solution for real-time misprint detection in printing presses. Click here for an overview of the project and methods, and a simulation of the misprint detector. Background and Prior Art There seems to be very little academic literature on misprint detection. This is not very surprising, as it is an engineering problem more likely to be tackled inside firms. Moreover, each method has to be specific to a printer. Nevertheless, we will see that if we know the properties of the printer, it is possible to develop a fairly general method to detect misprints, and this method can then be applied to the printer using its particular properties. One paper [1] deals with a quite similar problem in pad printing: its authors developed a real-time method, but it relies on a template, which is what we want to avoid here because of the small-volume constraint.
On the other hand, there exist a lot of patents dealing with misprint detection. Most of this work pertains to misprint detection with rotary presses and relies on the fact that you can first take a picture of a correct print. Assuming a correct print is available, these methods can compare the images that you get with the image that you know to be correct. This can be a perfectly acceptable solution when you have a lot of copies, but it becomes less and less useful if you want to print only a few dozen. Examples include patents such as [2] and [3]. Some other patents deal with self-testing printers: the printer prints a specific known pattern and then the system tries to decide if the printer is working correctly [4]. But this is not a general approach: the printer may be working and there could still be a problem with the paper after some pages. We found only one fully automatic method, described in an HP patent, but it does not seem to take into account the real-time constraint on the sensing, or the computational cost of the detection algorithm [5]. It also assumes that the sensed image is accurate enough to be compared with the original digital file after applying some transformation, and therefore does not directly simulate what the image should be. Here, in contrast, we are developing a fully-automated, real-time method for misprint detection. System Implementation We used ISET to model a high-speed document sensing system. We modeled the optical imaging system and one CCD and one CMOS sensor that measure the printed page. We also developed algorithms to convert the original image into a reflectance map of the printed page. Our main objective while developing those models was to be able to process them in real-time. With both this transformation and the previous model, we have an entire pipeline to simulate the information we should receive on our sensor given our original image.
From this pipeline, we created a large conversion model (a look-up table) to avoid re-running ISET for every image. This simulation process provides two types of information: the first is the expected image on the sensor; the second is a typical standard deviation describing the variation in sensor values we can expect for a well-printed image, given the different noise sources we modeled. Using this simulation model, we developed tools to detect misprints. First, we created a toolbox to simulate different types of errors (color removal, color shifting, local misprints, ...). Then, we use the information from the original image to control the error-search algorithm, in the sense that we compare the difference between the simulated image (I_sim) and the sensed image (I_sensed) with respect to what the noise level should be in that particular area. Finally, we developed statistical tools to decide whether or not an image contains an error. For this, we use the information computed from the original image, compare the final sensor image with the expected one, and take into account the typical standard deviation that can be expected from a well-printed image. Using a learning method, we compute conditional probability distributions that we can then use to detect errors in printed images. Our system, as implemented, works according to the diagram below: Diagram of a misprint detection system The system takes one image as input, I_in, and generates two images: 1. The right column represents the actual image (I_sensed) of the printed page: the digital file is printed by the printer; inside the printer we add a light source (whose properties we know precisely) and a sensor. In our case, this is a line sensor. The sensor is also monochromatic: we do not use any filtering to help discriminate the colors.
Though this limitation may seem to make the problem much more difficult, it has one main advantage: it limits the computation needed to simulate the images, as well as to process them. 2. The left column represents the same process, but fully simulated: since we know the file that should be printed and the printer properties (particularly the reflectance of the inks), we can calculate a reflectance for each pixel of the printed paper. • This gives us a reflectance map. • We know the light source we added in the printer; combining its properties with the reflectance map gives us a radiance map. • We also know the properties of the optics and the sensor, so from the radiance map we can compute an output (I_sim). This simulated output represents one possible output of the sensor since, by definition, we cannot simulate exactly what the noise will be; we can only simulate one instance of it. However, for the misprint detection algorithm that takes I_sim and I_sensed as input, it is better to use the simulated average image, which is our best guess given that we obviously cannot predict the noise. You will find more detailed information on the sensing system on this page. From these two images, a misprint detection algorithm computes the probability of a misprint according to what we have learned and measured. How to detect the misprints and reduce the computation Intuitively, the representation of the data, or of the impact of misprints on a page, should be very sparse, so a lot of useless computation could be avoided by exploiting this sparsity. For example, if we want to detect a missing color plane, say a missing cyan plane, it is useless to check areas where the original image contains no cyan. From this intuition, we first developed a simple approach to find areas in which one color is present much more strongly than the others.
This simple operation allows us to check for a misprint using only a small area of the original image, leading to potentially huge reductions in computation. Another solution that we did not study in detail is to obtain a sparse representation of the data directly from the hardware. The goal is to measure only a limited number of characteristics, instead of the entire (line) array of information. It therefore seems a good idea for the hardware to measure those “projections” itself, so that we only need to process those feature vectors. There have been some interesting papers around this idea, but none of them can perform the general, reprogrammable transforms we would need. Readers interested in these applications can look at the references on compressive sensing given at the end of the wiki. The drawback of this approach is that we would need to develop sparsity models for each type of misprint we want to detect. So, while it may seem an interesting way to considerably lower the total complexity of the detection algorithm, there are two problems with this method: 1. It supposes that we know every possible type of misprint. This would lead to a huge number of cases, each requiring a specific method and specific calculations; it would require a lot of development time and strongly reduce the gain in total computation (it might even make things worse). 2. The spatial distribution of certain types of misprints cannot be predicted: for instance, you cannot predict where the paper could have a scratch given the image that should be printed. Therefore we have to check the whole page. Given these two remarks, it appears better to develop a general algorithm that can catch different types of misprints and checks the whole page only once. This is the kind of method we developed.
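The color-plane restriction described above (for a missing cyan plane, only inspect regions where the original image actually contains a dominant amount of cyan) can be sketched as a simple mask computation. The flat list-of-pixels representation and the 0.3 dominance threshold below are illustrative assumptions made for the example, not values from the project:

```python
# Sketch: restrict the missing-cyan check to pixels where cyan dominates.
# The 0.3 dominance threshold is an arbitrary value chosen for illustration.

def cyan_dominant_mask(cmyk_image, threshold=0.3):
    """Return the indices of pixels where cyan exceeds every other channel by `threshold`."""
    mask = []
    for idx, (c, m, y, k) in enumerate(cmyk_image):
        if c - max(m, y, k) >= threshold:
            mask.append(idx)
    return mask

# Toy 1-D "image" of CMYK pixels; only pixel 1 is strongly cyan.
image = [(0.1, 0.8, 0.1, 0.0),   # magenta-heavy
         (0.9, 0.1, 0.2, 0.0),   # cyan-heavy: worth checking
         (0.0, 0.0, 0.0, 1.0)]   # black only
mask = cyan_dominant_mask(image)
```

The missing-cyan test would then run only over the returned indices instead of the whole page, which is the source of the computational saving, at the cost (noted above) of needing one such model per misprint type.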
The idea is to compare the simulated sensor output with the actual output, and to evaluate how likely such an outcome is given what it should be and the noise level for that value. From this we compute the probability of a correct print. You can find the details of this method on this page; it uses a learning process to find the parameters of the problem, and then infers the final probability from these parameters and the measurements. We first developed a method for a monochromatic sensor, and then generalized these results to a multi-sensor algorithm. We have carried out a series of computational misprint experiments using the simulator. These tell us how well the method performs on different misprints, and how the algorithm behaves with respect to noise or changes in parameters. We also ran comparisons between the different flavours of the algorithm and evaluated their efficiency with respect to their computational cost. To run these experiments, we implemented a dozen possible misprints, reflecting a wide range of possible effects. The results show that the method is very effective at detecting these misprints. In this project, we developed an entire model for simulating and evaluating misprints in real time for fast printing presses. We created a simulation pipeline comprising a model of a hardware CCD sensor, an optical system, conversion algorithms from the CMYK representation to reflectance maps, different strategies to efficiently detect misprints, and a toolbox to simulate a wide class of misprint errors. After a first complete version of the model, we developed a new approach to speed up the entire simulation by a large factor: using ISET to build a look-up table for fast conversion from the original page to estimated sensor measurements. Using this strategy, we reduced the computational cost of the simulation by a factor of approximately 100.
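The comparison described above (how likely is the sensed value, given the simulated expectation and the noise level at that location) can be sketched as a per-pixel deviation test. This is only a simplified stand-in for the learned probabilistic decision rule; the 4-sigma threshold and the toy images are assumptions made for the example:

```python
# Sketch: compare the sensed image against the simulated expectation,
# normalizing each pixel's deviation by its expected noise standard deviation.

def misprint_score(i_sim, i_sensed, sigma):
    """Largest per-pixel deviation, in units of the expected noise std."""
    return max(abs(s - m) / sd for s, m, sd in zip(i_sensed, i_sim, sigma))

def is_misprint(i_sim, i_sensed, sigma, threshold=4.0):
    """Flag the page if any pixel deviates by more than `threshold` sigmas."""
    return misprint_score(i_sim, i_sensed, sigma) > threshold

expected = [50.0, 20.0, 35.0]     # I_sim: simulated average sensor values
noise_sd = [1.0, 1.0, 1.0]        # per-pixel std for a well-printed page
good_page = [50.8, 19.5, 35.2]    # deviations within the noise
bad_page = [50.8, 19.5, 12.0]     # one pixel far outside the noise
```

Here `is_misprint(expected, good_page, noise_sd)` stays below the threshold while the bad page trips it; the project replaces this fixed threshold with conditional probability distributions learned from simulated misprints.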
To detect misprints from the simulated measurements, we used a learning approach: we simulated hundreds of misprinted images and learned from them the conditional probability distributions observed after comparison with the original image. With those learned distributions, we could then test new simulated images for misprints, obtaining a very high probability of misprint detection. This shows that it is possible to achieve very good results on this particular problem with only a monochromatic sensor. It also shows that a general misprint-detection algorithm can run fast (all these computations seem feasible in real time) and still be effective, even with a monochromatic sensor. This is very encouraging for a more general, and more accurate, system. We tested the system under several different conditions and it performs very well. We also implemented a generalised method to apply a similar algorithm to a multi-sensor system. Future directions The simulation can make use of multiple chromatic sensors; this may provide a more efficient detection algorithm and detect a wider class of misprints. Multiple sensors yield multiple measurements of the signal and should give results that are much more accurate and precise, as they would probably allow the algorithm to work at a finer scale. We have not fully tested this mode so far, and it will be interesting to see how the misprint detector behaves with several sensors. One possible extension of this project is to use different tables for different types of misprint (see this section for more details). Some types of misprint have the same effect on the features we extract. The idea would be to group them according to this criterion for more efficiency; probably the widespread misprints on one side and the localized misprints on the other.
Studying the trade-off between efficiency and computation would also be interesting: adding more sensors and computing with more and more tables may give better results, but it also comes at a cost. Finally, it is also possible to improve the results with a more complex model, or by adding more features. Other issues addressed During this project we addressed other peripheral issues, like managing look-up tables and 3D visualization; you can find some comments about these on this page. Software Overview The Misprint software is stored in an SVN repository. The software begins with various types of RGB input images, converts the images to CMYK, and simulates the printing process and the sensor. The Misprint software page describes the Matlab, VTK, and ImageMagick functions. There you can find the different steps used in the whole pipeline, and some code to run the files. 1. Printing Quality Control Using Template Independent NeuroFuzzy Defect Classification, 7th European Conference on Intelligent Techniques and Soft Computing (EUFIT), Aachen (Germany), 1999 2. European patent EP0554811 (Misprint detection device in a rotary printing machine) 3. USPTO Application 20080260395 (Image Forming Apparatus and Misprint Detection Method) 4. US Patent 6,003,980 (Continuous ink jet printing apparatus and method including self-testing for printing errors) 5. US Patent 7,519,222 (Print defect detection) Development notes You can find some notes written during the development of this project on this page
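As a toy illustration of the first step mentioned in the Software Overview above (converting RGB input images to CMYK before simulating the print), the standard naive RGB-to-CMYK formula can be sketched as follows. This is the textbook maximal-black-generation formula, shown only for concreteness; it is not necessarily the conversion the Misprint software actually uses.

```python
# Toy sketch of the RGB -> CMYK conversion step at the start of the pipeline.
# Standard naive formula with maximal black generation; real presses apply
# ink-specific profiles instead.

def rgb_to_cmyk(r, g, b):
    """Convert RGB in [0, 1] to CMYK in [0, 1]."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                      # pure black: avoid division by zero
        return (0.0, 0.0, 0.0, 1.0)
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return (c, m, y, k)

white = rgb_to_cmyk(1.0, 1.0, 1.0)   # no ink at all
red = rgb_to_cmyk(1.0, 0.0, 0.0)     # magenta + yellow, no cyan or black
```

A production conversion would also handle under-color removal and ink limits, but this minimal version is enough to feed a toy reflectance model.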
(Latin: through, across, over; beyond, by means of) Ad astra per aspera. (Latin) This motto suggests that we achieve great things only by encountering and overcoming adversities. It will be rough going, but we will make it. Ad augusta per angusta. (Latin) Translation: "To honors through difficulties." Augusta refers to holy places and angusta to narrow spaces; therefore sometimes we cannot achieve great results without suffering by squeezing through narrow spaces. Ad perpetuam rei memoriam. (Latin) Translation: "For the perpetual remembrance of the thing." These words are traditionally used to open papal bulls. Ad virtutem per sapientiam. (Latin) Translation: "To virtue through wisdom." antiperspirant (s) (noun), antiperspirants (pl) impervious (adjective), more impervious, most impervious 1. Pertaining to something which does not allow passage or entrance; impenetrable: Jerry wore his new jacket, which was supposed to be impervious to wind, rain, or snow and to keep him dry and warm. 2. Referring to something or someone not capable of being disturbed, damaged, or harmed: Doug was so sure of himself that no one was able to mention the flaws in his undertaking, and he seemed to be impervious to any criticism from anyone. 3. Etymology: from Latin impervius, "that which cannot be passed through"; from in-, "not, opposite of" + pervius, "letting things through"; from per, "through" + via, "road, way". Not allowing criticism to bother a person. Not letting ideas enter a person's thoughts. Not letting threats enter one's mind. Go to this Word A Day Revisited Index so you can see more of Mickey Bach's cartoons. 1. By means of; through. A person authorized to sign someone else's name to any document should add his or her own signature preceded by per. 2. According to; by: "He acted per his supervisor's instructions." "The work was done per his directions." Per actum intentio.
The intention [must be judged] by the act. Per actus conamine. You attempt by doing it. Per acuta belli. Through the asperities [hardships] of war. Per angusta ad augusta. (Latin motto) Through difficulties to honor, because sometimes we cannot achieve great results without suffering by squeezing through narrow spaces. per annum; p.a., per an. By the year.
Alternative Uses of Body Cameras Last year, the deaths of Breonna Taylor and George Floyd sparked protests rallying against police brutality and the lack of accountability. In response to allegations of officers abusing their power, some police forces have implemented the use of body-worn cameras. These are surveillance cameras worn over the chest or torso to record a better account of events. However, body cameras are not just for police officers; they are increasingly being used in many other industries for other purposes. Many factories and manufacturing plants have been turning to body cameras to ensure that all safety procedures and processes are being followed correctly. In pharmaceutical production plants, millions of dollars' worth of medication can be wasted if there's any suspicion that proper safety measures were not met. These surveillance devices are proving useful for plants to review and verify whether proper steps were followed, allowing companies to save money on medication that would otherwise have been thrown away. Body cameras are also being used to provide evidence that deliveries were made properly. For example, if a grocery store complains that the right amount wasn't delivered or that the produce was bad, the company can review the footage from the camera to confirm. Walmart has also been using these cameras on employees who make home deliveries: the employee can gain access to the customer's home to put away groceries while the client monitors through their smartphone. Retail Training Many retail stores have started to use body cameras to record customer interactions to improve employee training and behavior. By reviewing the footage, businesses can evaluate how employees can improve their service. They can also teach new employees how to handle different real-life situations.
Emergency First Responders Like police officers, emergency first responders are held liable for their actions and could face legal issues if they are found to have handled a situation improperly. Body cameras provide a first-hand account of events for investigations of negligence or misconduct. In addition, they can be used to better train employees on how to handle the challenging situations they will face. COVID-19 Safety Body cameras have seen a surge in popularity since the beginning of the COVID-19 pandemic. The reason is that businesses have to enforce safety measures such as mask usage and social distancing, but some clients refuse to abide by the new regulations. Employees who try to enforce the rules may become targets of customer abuse, and situations can escalate. Body cameras record these situations to keep customers accountable, which better protects employees. Wrap Up In the past, body cameras were mostly used by police officers to provide more transparency, but they're increasingly being used in other industries. Like regular surveillance cameras, people will continue to find more uses for these cameras, and they will become more prevalent in our world. Many people are concerned that our lives are becoming less and less private, but I believe that these cameras are making us safer. As always, don't hesitate to contact us if you have any more questions. You can call us at 877-926-2288 or connect with us on social media.
Rowman & Littlefield, 499 Pages, Paper, $59.95 Here’s an up-to-date assessment of religious freedom, country by country. Each country is given a numerical ranking to indicate its level of freedom. The 101 countries included constitute more than 95 percent of the earth’s population. At the beginning of each entry by country, certain facts are given: population, breakdown of religions by percentage, etc. Then a background of the country as it relates to religious freedom follows, and the present-day situation completes the section. Several appendixes offer further discussion. This book would be of particular help to anyone involved with missions work and research, or anyone interested in religious freedom, government regulation of religious activity, or religious persecution.
(Kyodo) Atomic bombing survivor and peace advocate Setsuko Thurlow called for action for the good of society in a graduation speech at her alma mater in Canada on Tuesday. Thurlow, 87, who survived the 1945 U.S. atomic bombing of Hiroshima, told graduates at the University of Toronto that she has acted to warn people of the danger of nuclear weapons out of her moral obligations as a hibakusha. Thurlow, who previously delivered a speech at the 2017 Nobel Peace Prize award ceremony, called on the audience to “get involved, take action, make things happen” and “persist and persevere” as part of her advice to them. “Rather than pity myself as a victim of the atomic bomb, I have tried to understand the meaning of my experience and what can be done to prevent it from ever happening to anyone else,” she said. “To this end I have educated myself, worked with like-minded people, spoken out, and advocated for change. And in this ultimate David and Goliath scenario I have persevered through the difficult times,” she added. For her advocacy work with the International Campaign to Abolish Nuclear Weapons, or ICAN, Thurlow was one of the representatives who accepted the Nobel Peace Prize on behalf of the campaign in 2017. Thurlow has often spoken out at the United Nations about her experiences as an atomic bomb survivor, urging governments to ratify the U.N. treaty outlawing nuclear weapons. It marked the first time since 2009 that an entity or person had received the prize for work related to nuclear abolition, following then-U.S. President Barack Obama's receipt of the award for outlining his vision of a nuclear-free world. “The year 2017 brought a historic victory in a long struggle for nuclear weapon abolition,” Thurlow said, referring to the adoption that year of the Treaty on the Prohibition of Nuclear Weapons at the United Nations.
“This ground-breaking treaty would outlaw nuclear weapons as a first step toward their total elimination.”
The Key Difference Between Obstructive And Central Sleep Apnea If you don't suffer from sleep apnea, you may know very little about it. But the chance that you or someone you love will have issues with sleep apnea grows greater every day. Affecting roughly 22 million Americans today, sleep apnea is one of the most common sleep disorders. In this condition, your breathing is repeatedly interrupted while you sleep; these pauses are referred to as apnea events. Sleep apnea does, however, come in several forms. Obstructive and central sleep apnea are the two most prominent types, and of those two, obstructive is the more common. Depending on which disorder you suffer from, different treatments are used to assist with the condition. That's why it's important to know which condition you have. Let's examine the two types of sleep apnea and how they differ. Central Sleep Apnea With CSA (central sleep apnea), a lack of respiratory movements is driven by the cessation of respiratory drive. Your breathing is disrupted regularly during sleep due to the manner in which your brain functions. Unlike those suffering from OSA (obstructive sleep apnea), you actually are able to breathe; the problem is that your brain isn't telling your muscles to breathe, so you simply don't try. CSA is frequently associated with severe illness, especially illness involving the lower brainstem. When that part of your brain is affected, so may the control over your breathing be. Note: In newborns, CSA can cause pauses in breathing lasting as long as 20 seconds each. Obstructive Sleep Apnea OSA (obstructive sleep apnea) occurs when your upper airway becomes completely or partially blocked while you sleep. In order to draw air into your lungs, your diaphragm and chest muscles must work harder during this obstruction to open up the blocked airway.
There are several general and/or behavioral measures that you can take to help avoid this occurrence. These are as follows: • Sleep on your side instead of your stomach or back. • Lose weight. • Avoid alcohol for 4 to 6 hours before you go to bed. The above are relatively conservative, nonsurgical treatments. Weight loss and positional therapy are, more or less, guidelines, but there is evidence that they have been a successful strategy in the care of patients suffering from OSA in a number of cases. Do I Have Sleep Apnea? Thanks to the technologies we have access to today, there are helpful devices on the market that can assist individuals curious as to whether or not they are suffering from sleep apnea. As an example, the LOOKEE Health-Technology company has come out with monitors that can keep track of your sleep and pulse-ox patterns. The information can be downloaded so that you will have a more accurate idea of what's going on while you sleep. If the results point to sleep apnea, you can bring this information to a doctor who will help you determine a course of action. Sleep Apnea Devices and More at LOOKEE LOOKEE Health-Technology offers sleep apnea sufferers assistance through our selection of products. We also carry hand-exerciser grip strengtheners, infrared thermometers, blood pressure monitors, and personal ECG monitors. Could you use a little assistance in keeping track of your temperature, blood pressure, heart rate, or pulse ox? With our high-tech but easy-to-use devices, you can do that in the privacy of your own home, at the gym, or in your hotel room on vacation. We make keeping track of your health a simpler process. Check us out today! Please feel free to contact us with any questions.
Criminal Defense & Social Security Disability Law Can police officers legally lie to you? The presumption that criminal suspects are innocent until a judge or jury finds them guilty is a bedrock principle of the American legal system. Unfortunately, when investigating possible criminal conduct and interrogating suspects, police officers often already have their minds made up about the guilt of suspects. When interrogating suspects, officers often use the Reid interrogation technique. Regrettably, this technique has resulted in countless confessions from individuals who simply did not commit crimes. Giving a suspect misleading or downright false information is a tried-and-true tactic of the Reid method. Officers can lie to you without violating the law During custodial interrogations, officers may have significantly fewer details about the crime or its circumstances than they want you to believe. Generally, officers may lie to you about physical evidence, witness statements, approved warrants, or even the potential severity of your sentence upon conviction. Because officers intentionally set up interrogations to be inherently stressful events for criminal suspects, you may not be able to distinguish between truth and fiction. After prolonged questioning, you may believe you are guilty of a crime even if you are innocent. Not all lies are legally acceptable While it may be legally acceptable for officers to lie to you about many matters, they cannot mislead you about your constitutional rights. The Fifth Amendment to the U.S. Constitution gives you the right to remain silent. It also affords you the right to have an attorney present for police questioning. If officers lie to you about these matters, any statements or confessions you make may be inadmissible in court. Still, because officers are likely to have an advantage when questioning you, invoking your Fifth Amendment rights may be an effective way to level the playing field.
Famous German General Practitioners 1 Heinrich Cornelius Agrippa Famous As: Physician Birthdate: September 14, 1486 Sun Sign: Virgo Birthplace: Cologne, Germany Died: February 18, 1535 Sixteenth-century German scholar Heinrich Cornelius Agrippa was known for his expertise in philosophy and the occult. He also taught at the universities of Pavia and Dôle. His De occulta philosophia suggested magic as a way to reach God. He was eventually branded a heretic and imprisoned. 2 Johann Friedrich Blumenbach Famous As: Physician Birthdate: May 11, 1752 Sun Sign: Taurus Birthplace: Gotha, Germany Died: January 22, 1840 A pioneer of physical anthropology, Johann Friedrich Blumenbach laid down one of the first racial classification systems for humans after studying human skulls, dividing mankind into five racial groups. Born into a family of academics, he was a prodigy. He was against scientific racism, though his theory promoted the degenerative hypothesis. 3 Hans Münch Famous As: Physician Birthdate: May 14, 1911 Sun Sign: Taurus Birthplace: Freiburg im Breisgau, Germany Died: 2001 4 Paul Schäfer Famous As: Medic Birthdate: December 4, 1921 Sun Sign: Sagittarius Birthplace: Troisdorf, Germany Died: April 24, 2010 Paul Schäfer was a Nazi-era German medic who at the end of WWII founded an orphanage in West Germany. Charged with child molestation, he fled to Chile, where he established an isolated colony. Charged again with child abuse, he had to flee once more before being arrested and convicted on twenty-five counts. He died while serving his term.
5 Johann Friedrich Struensee Famous As: Physician Birthdate: August 5, 1737 Sun Sign: Leo Birthplace: Halle, Germany Died: April 28, 1772 Eighteenth-century German physician Johann Friedrich Struensee was the official physician of King Christian VII of Denmark, who was mentally unstable. He later came to dominate the court and also began an affair with Queen Caroline Matilda. In spite of introducing several reforms, he was eventually beheaded following a coup. 6 Otmar Freiherr von Verschuer Famous As: Human biologist Birthdate: July 16, 1896 Sun Sign: Cancer Birthplace: Wildeck, Germany Died: August 8, 1969 German biologist and eugenicist Otmar Freiherr von Verschuer was an advocate of racial hygiene and the mandatory sterilization of the physically and mentally disabled. He also led the Nazi experiments on twins based on body parts made available to him from the inmates of various concentration camps. 7 Emin Pasha Famous As: Physician Birthdate: March 28, 1840 Sun Sign: Aries Birthplace: Opole, Poland Died: October 23, 1892 Eduard Schnitzer, or Emin Pasha, was born into a German Jewish family in modern-day Poland. A qualified physician, he moved to Constantinople after being disqualified in Germany. He not only served the Ottoman rulers but also surveyed and explored Africa extensively. He was eventually killed by Arab slave raiders. 8 Maja Einstein Famous As: Romanist Birthdate: November 18, 1881 Sun Sign: Scorpio Birthplace: Munich, Germany Died: June 25, 1951 Maja Einstein is remembered as Albert Einstein's younger sister and only sibling. After acquiring a Ph.D. in romance languages and literature from Bern, Switzerland, she got married. At the beginning of World War II, she fled to the U.S. and remained estranged from her husband till her death.
9 Georg Wilhelm Steller Famous As: Botanist Birthdate: March 10, 1709 Sun Sign: Pisces Birthplace: Bad Windsheim, Germany Died: November 14, 1746 German-born zoologist and botanist Georg Wilhelm Steller traveled to Russia on a troop ship. He was later part of the Great Northern Expedition, aboard the St. Peter, aimed at locating a sea route from Russia to North America. The Steller's sea cow, discovered by him, later went extinct. 10 Paul Rée Famous As: Writer Birthdate: November 21, 1849 Sun Sign: Scorpio Birthplace: Bartelshagen, Germany Died: October 28, 1901 German author and philosopher Paul Rée, whose writings influenced much of his friend Friedrich Nietzsche's work, was born to affluent Jewish parents. While he initially studied philosophy and law, Rée later became a physician. He died while hiking in the Swiss Alps, though some believe he committed suicide. 11 Engelbert Kaempfer Famous As: Naturalist Birthdate: September 16, 1651 Sun Sign: Virgo Birthplace: Lemgo, Germany Died: November 2, 1716 Seventeenth-century German physician and traveler Engelbert Kaempfer went on trade missions across the world, including places such as Russia, Iran, Java, and Japan. His written account of his stay in Japan became a valuable source of information on the flora and fauna of the country. 12 Andreas Libavius Famous As: Physician Birthdate: 1555 Birthplace: Halle, Germany Died: July 25, 1616 Andreas Libavius was a German professor and physician. He was a Renaissance man known for practicing alchemy. He wrote a book called Alchemia, one of the first chemistry textbooks ever written. He taught history and poetry at the University of Jena and later became a physician at the Gymnasium in Rothenburg. He also founded the Gymnasium at Coburg.
13 Paul Fleming Famous As: Poet Birthdate: October 5, 1609 Sun Sign: Libra Birthplace: Hartenstein, Germany Died: April 2, 1640 Seventeenth-century lyrical poet Paul Fleming was also a skilled physician. A disciple of Martin Opitz, he composed love poems and religious hymns. He was also the first German to make effective use of the sonnet form. He had also been a merchant in Russia and Iran for several years. 14 Ernst Wynder Famous As: Medical doctor Birthdate: April 30, 1922 Sun Sign: Taurus Birthplace: Herford, Germany Died: July 14, 1999 The founder of the American Health Foundation, physician Ernst Wynder was born to Jewish parents in Westphalia and fled to the US with his family during the Nazi regime. Ironically, though he had devoted his life to cancer research, he eventually succumbed to thyroid cancer. 15 Friedrich Theodor von Frerichs Famous As: Physician Birthdate: March 24, 1819 Sun Sign: Aries Birthplace: Aurich, Germany Died: March 14, 1885 Pathologist Friedrich Theodor von Frerichs, considered the founder of experimental pathology, had initially been an optician and also taught at several universities. His contributions include studies of kidney and liver diseases and research on multiple sclerosis. He also published the first German book on nephrology.
Zeolite granules can be used to adjust the water pH of aquaculture farms: ZeoliteMin natural zeolite mineral UZ-Min® Clinoptilolite Zeolite Granules ACG003. Benefits of Clinoptilolite Zeolite in shrimp ponds. Shrimp ponds may contain freshwater, saltwater, or brackish water. Shrimp are filter feeders, so water quality is crucial to the results farmers obtain. Shrimp with potential for pond cultivation include tiger shrimp and vaname shrimp; both can tolerate salinity levels between 0 and 45 ppt. UZ-Min® zeolites are natural minerals of the hydrated aluminosilicate group, containing alkali and alkaline-earth metals. The zeolite mineral is gray to bluish. Clinoptilolite is a type of natural zeolite mineral with many uses. In aquaculture, clinoptilolite can be used to help control the quality of the soil at the bottom of a pond. Clinoptilolite forms crystals in a variety of colors, namely white, yellow, green, and pale brown, depending on quality. How does zeolite absorb ammonia nitrogen? In this article, the researchers discuss the benefits of zeolite for shrimp ponds; they have also published a related article, "Zeolite as ammonia adsorber in the pond", to which the reader is referred. Pond water quality is important and must always be monitored. According to research, if environmental conditions such as water quality do not meet cultivation standards, the result is mortality and, ultimately, losses in aquaculture. Water quality management is the practice of keeping water quality parameters within the standards for cultivation. These parameters are indicators of water quality: dissolved oxygen (DO), free carbon dioxide (CO2), pH, temperature, brightness, salinity, ammonia, and nitrite. Zeolite benefits for shrimp ponds: dissolved oxygen should be sufficient.
Scientists generally agree that aquatic animals need dissolved oxygen at a concentration of 5.0 mg/L or more to live and develop. However, the amount of oxygen needed can vary depending on how large or complex the animal is and where it lives. The greater the dissolved oxygen value, the better the water quality. The largest differences in dissolved oxygen concentration are found in waters with high plankton density, and vice versa. Most waters with low oxygen levels are affected by a variety of complex factors, both natural and man-made. The solubility of oxygen in water is influenced by several factors, including temperature, salinity, water movement at the surface, the surface area of open water, atmospheric pressure, and the ambient oxygen percentage. When the concentration of dissolved oxygen is low, carbon dioxide levels can inhibit the entry of oxygen into pond water. The normal range of carbon dioxide is 1 to 10 mg/L. If carbon dioxide exceeds 10 mg/L, the water quality is not good. A pH that is too high is also harmful: a pH above 8.5 causes ammonia in the pond to become toxic and promotes the formation of hydrogen sulfide, another toxic substance, so it should be kept in check. Temperature is another indicator of the success of shrimp farming. Farmers must always watch for temperature fluctuations, because sudden spikes or drops can inhibit shrimp growth and can even kill shrimp. Farmers should note that the optimal temperature range for shrimp to grow and develop is 26 to 30 degrees Celsius, and the largest sudden temperature change shrimp can cope with is no more than 2 degrees Celsius. If the pond temperature falls to 25 degrees Celsius, the shrimp's digestion of food is hampered, which in turn affects their growth.
Conversely, if the temperature surges to 30 degrees Celsius or more, it triggers stress in the shrimp, because high temperatures increase their oxygen demand. To avoid stressing the shrimp, pond operators must check water salinity routinely. In general, ideal shrimp ponds have a salinity of around 10-30 ppt. Zeolite is useful for conditioning pond water quality so that it conforms to shrimp pond standards. Zeolite minerals offer several benefits for ponds:
•  Zeolite minerals can bind heavy metals in the water or pond-bottom soil that can threaten the survival of fish and shrimp, such as Pb, Fe, Hg, Sn, Bi, and As.
•  They increase the level of dissolved oxygen in the water.
•  Because of their high absorption capacity, zeolite minerals can reduce gases from uneaten shrimp feed, as well as gases from the metabolism of other organisms living at the pond bottom.
•  They maintain the stability of the water temperature and the degree of acidity (pH) of the pond water.
•  Because zeolites have a high calcium content, shrimp in ponds can be protected from soft-shell disease.
•  They support the growth of phytoplankton in ponds, so that natural food for shrimp is always available.
As a water treatment agent in aquaculture, "UZ-MIN" Clinoptilolite Zeolite for Aquaculture has many internal pores, uniform tubular channels, and a large internal surface area, giving it unique adsorption, screening, anion and cation exchange, and catalytic properties; it can adsorb large amounts of toxic substances (such as NH3, NH4+, CO2, H2S, etc.).
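The parameter ranges discussed above (dissolved oxygen of at least 5.0 mg/L, free CO2 of 1-10 mg/L, pH no higher than 8.5, temperature of 26-30 degrees Celsius, salinity of 10-30 ppt) can be bundled into a simple check. Below is a minimal sketch in Python; the thresholds come straight from the text, but the function name and warning strings are hypothetical:

```python
# Hypothetical pond-water check based on the ranges quoted above.
def check_shrimp_pond(do_mg_l, co2_mg_l, ph, temp_c, salinity_ppt):
    """Return a list of warnings for parameters outside the quoted ranges."""
    warnings = []
    if do_mg_l < 5.0:
        warnings.append("dissolved oxygen below 5.0 mg/L")
    if not 1 <= co2_mg_l <= 10:
        warnings.append("free CO2 outside the normal 1-10 mg/L range")
    if ph > 8.5:
        warnings.append("pH above 8.5: ammonia becomes toxic")
    if not 26 <= temp_c <= 30:
        warnings.append("temperature outside the optimal 26-30 C range")
    if not 10 <= salinity_ppt <= 30:
        warnings.append("salinity outside the ideal 10-30 ppt range")
    return warnings

# A pond within every range produces no warnings.
print(check_shrimp_pond(6.0, 5, 8.0, 28, 20))  # []
```

An empty list means every parameter is inside the quoted range; in practice these thresholds would be tuned to the species and local conditions.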
Clinoptilolite Zeolite Granules. Benefits of UZ-MIN Clinoptilolite Zeolite Granular:
•  Absorbs ammonia nitrogen, organic matter, and heavy-metal ions in water.
•  Serves as a good micronutrient fertilizer.
•  Effectively reduces the toxicity of H2S at the pond bottom.
•  Regulates pH.
•  Removes ammonia, increases dissolved oxygen in the water, provides sufficient carbon for phytoplankton to grow, and increases the intensity of photosynthesis in the water.
•  Optimizes the breeding ecosystem and promotes the growth and development of aquatic animals.
Scope of application in aquaculture:
•  For open systems: regularly spray zeolite powder into the aquaculture water body when raising fish, shrimp, sea cucumber, crab, scallop, turtle, eel, etc., in both fresh and sea water.
•  For closed systems: in aquaculture wastewater circulation systems, clinoptilolite can be used as the filter media. (See below Figure)
•  For every 1 kg of zeolite applied to the pond, about 200 mL of oxygen can be brought in and slowly released in the form of microbubbles, helping to prevent deterioration of water quality and fish floating at the surface.
•  When the zeolite powder is used as a water quality improver, the dosage should be 13 kg per mu at 1 m water depth, spread over the whole pond.
•  When used alone, apply 6-8 kg per mu of water surface every 7-10 days; in high-temperature seasons or during serious disease outbreaks, double the dose.
•  Freshwater culture: 15-25 g of zeolite powder per cubic meter of water during the normal feeding period, preferably applied separately from quicklime for better effect; 25-35 g per cubic meter of water before freezing helps safe overwintering and increases winter survival rates. Mariculture: 75-90 g of zeolite powder per cubic meter of water.
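The dosing guidelines above can also be expressed as a small calculator. The sketch below, in Python, uses the rates quoted in the list (13 kg per mu at 1 m depth; 15-25 g or 25-35 g per cubic meter for freshwater; 75-90 g per cubic meter for mariculture) and assumes 1 mu = 666.7 square meters; the function names are hypothetical:

```python
# Hypothetical dosing helpers based on the rates quoted above.
MU_IN_M2 = 666.7  # one mu of water surface, in square meters

def water_improver_dose_kg(surface_m2, depth_m):
    """Water-quality improver: 13 kg per mu at 1 m water depth, scaled by depth."""
    return 13.0 * (surface_m2 / MU_IN_M2) * depth_m

def freshwater_dose_g(volume_m3, before_freezing=False):
    """Freshwater culture: 15-25 g/m3 normally, 25-35 g/m3 before freezing.
    Returns the (low, high) range in grams."""
    low, high = (25, 35) if before_freezing else (15, 25)
    return (low * volume_m3, high * volume_m3)

def mariculture_dose_g(volume_m3):
    """Mariculture: 75-90 g of zeolite powder per cubic meter of water."""
    return (75 * volume_m3, 90 * volume_m3)

# One mu of surface at 1 m depth needs 13 kg of improver.
print(water_improver_dose_kg(666.7, 1.0))  # 13.0
# 100 cubic meters of freshwater during normal feeding: 1500-2500 g.
print(freshwater_dose_g(100))  # (1500, 2500)
```

Ranges are returned rather than single numbers because the source gives each rate as a band; the actual dose within the band would depend on pond condition and season.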
Archive | January 2016 Summary #2 Tapestry Of Space: Domestic Architecture And Underground Communities In Margaret Morton’s Photography Of A Forgotten New York Gallery of Morton’s Pictures The article, Tapestry of Space: Domestic Architecture and Underground Communities in Margaret Morton’s Photography of a Forgotten New York, written by Irina Nersessova, discusses Margaret Morton’s photographs of the home lives of New York’s homeless population and their parallels with the lives of the housed. Morton’s photographs capture the everyday life of the displaced in New York. According to Nersessova, homelessness is not so different from being housed. Morton’s pictures show that the homeless have homes; the only difference between the housed and the homeless is the level of stability of their home. As Morton’s photographs illustrate, the homeless use their space as a creative outlet just as a homeowner would, and the decoration acts as an indicator that the space is theirs. For example, a homeowner might mark their home with their last name, and likewise a homeless person may put their name above their space. Unlike a homeowner, who can go out and purchase letters to put on a mailbox or a fence placed in front of the entrance, a homeless person will use material scraps that they find. Nersessova goes on to say that “…the displaced best represent the universal relationship between space and the splintered identity.” Ideally, a home is a place where you feel safe. A homeowner feels safe at home because it is unlikely that an unwanted guest will intrude; even more, a home can act as a refuge from the outside world. This idea applies to a homeless person as well. The public is usually unfamiliar with a developed homeless society like the one in New York’s tunnels, and consequently ends up ignoring it, making the place perfect protection from the outside world.
The only difference is that the homeless have a definite kind of security; it is a paradox, because a place so open, compared to a house, should be less safe. However, as Nersessova explains, “The absolute darkness of the tunnel prevents danger from entering it, which explains how it is possible to have the highest feeling of safety in a place that is perceived as most dangerous.” People often think that the homeless are undesirables who don’t contribute to society; however, they are mistaken. The displaced persons in Morton’s pictures live in the world of mass production and capitalism just as housed people do. The only difference is how their contribution affects their psychological attitude. A homeowner contributes by, say, owning a business or purchasing products from a privately owned business. Nersessova goes on to describe how consumerism can consume a person, driving them to demand excess. A homeless person contributes by reusing the products thrown away by the product-obsessed population. Some use those products to keep warm, to decorate their space, or to earn a little change in order to buy the supplies they need. Nersessova argues that their mentality can’t be reduced to commercialism, given their lack of resources. As described above, homeless life isn’t that much different from the lives of those whose homes are more stable. A homeless person’s space can feel just as safe as a house does to a homeowner, if not more so. A homeless person can certainly decorate their space to indicate ownership just as a landowner can. In addition, a homeless person’s contribution is very much needed in society, just as a homeowner’s is, even if society itself doesn’t realize it. Furthermore, the parallels between the housed and the homeless demonstrated in Morton’s photographs aren’t very noticeable at first, but as Nersessova breaks down these misconceptions in the article, a reader can begin to see Morton’s purpose. NERSESSOVA, IRINA.
“Tapestry Of Space: Domestic Architecture And Underground Communities In Margaret Morton’s Photography Of A Forgotten New York.” Disclosure 23 (2014): 26. Advanced Placement Source. Web. 26 Jan. 2016. Summary #1 Architectural Exclusion: Discrimination And Segregation Through Physical Design Of The Built Environment Sarah Schindler sets out to explain and evaluate the idea of architectural regulation of urban areas in her article, “Architectural Exclusion: Discrimination and Segregation through Physical Design of the Built Environment.” She describes architectural regulation as an unrecognized means of socioeconomic and racial exclusion from certain areas, and she explores how this idea is overlooked by lawmakers and the courts, providing strong examples along the way. As Schindler states, “Exclusion through architecture should be subject to scrutiny that is equal to that afforded to other methods of exclusion by law.” Her main explanation for why lawmakers aren’t addressing this issue is that they fail to see the effect of many architectural elements in city projects. Not all legal scholars are blind to the issue, but they tend to misinterpret it as a metaphor for hidden regulatory systems. Schindler, however, insists that it isn’t just a metaphor but an actual form of regulation as well. According to Schindler, regulation through architecture is powerful but also less identifiable, making it harder for legislators and ordinary people to call for change. Most architectural features of a place are seen as just that: features. One example she describes is a physical barrier such as a fence or bridge. She discusses how a ten-foot-high, 1,500-foot-long fence separates the suburb of Hamden, Connecticut, from the housing projects in New Haven. The divider makes it difficult for people living in the projects to reach the outside community.
A trip to the grocery store means housing residents “…have to travel into New Haven to get around the fence, a 7.7-mile trip that takes two buses and up to two hours to complete.” The stated reason for constructing the fence was to keep violence out; however, the underlying intention was to keep undesired people, such as poor citizens or people of color, from having access to the surrounding city. Another example is the placement of public transportation. She reminds us that most low-income citizens rely heavily on public transportation, whereas wealthier citizens have private automobiles. Some places, such as malls, refuse to have transit stops because they don’t want a certain kind of people having access to that location. The only people being denied access, therefore, are members of the lower socioeconomic class. She also points out that this issue can have a larger impact, because it makes it harder for those people to hold jobs in those areas. Employers would then have to pay a higher wage, because citizens of the surrounding community are not as likely to accept a minimum-wage job as a low-income citizen would be. She even notes that if a lower-income citizen did have a car, some communities require a parking permit; if that person did not live there or didn’t know a friend who could offer a guest pass, they would be out of luck, making their trip even more hectic. Schindler continues by saying that legal decision-makers fail to recognize this problem and address it. They are too busy attending to what the courts consider physical exclusion instead of conducting their own research. This isn’t to say that they are totally blind to the idea: they have shut down or put limitations on cities’ attempts to practice racial zoning, exclusionary zoning, and racially restrictive covenants. However, architecture is unlikely to be seen as a way to keep someone out, because it isn’t as obvious as a law.
To a common pedestrian, the different features of a place are just that: features. In conclusion, Schindler offers solutions that could be pursued through the judicial process, though she doesn’t believe they will do much good; she mostly recommends “forcing reformation of certain existing discriminatory infrastructure…” through the legislative process. Just a regular bench To anybody else this will seem like just a regular bench; however, that person is sadly mistaken. What if the designer made it this way so that a homeless person couldn’t sleep comfortably on it? What if it was meant to limit how many people could sit on the bench? These are the types of questions that won’t be asked by a regular pedestrian.
When Giving A Speech, It’s All About First Impressions In order to deliver a successful speech, you have to make a good first impression Image Credit: blendersfun As speakers, we know that if we want to give a good speech we need to take the time to prepare for it. This means that we need to create a theme for our speech, craft a great opening, work in some physical gestures, and include the proper amount of vocal variety. However, even after doing all of that, we might not be successful. It turns out that your audience will form an impression of you within the first 12 seconds of seeing you. Even worse, they probably won’t change that impression even after they get to know you a bit. How can a speaker make a good first impression? How Do You Approach The Stage? The first impression that any audience will have of you is how you choose to approach the stage. You have two basic options. The first is to remain in your seat while you are being introduced; once the introduction is complete, you get up from your seat and move to the front of the room. Alternatively, you can enter the front of the room from the side, perhaps from an adjoining room. However you choose to do it, if you put some energy into your stride as you come to the front, your audience will get the message that you are excited about addressing them. The Introduction When we give a speech, more often than not we are introduced by someone before we begin to speak. This will be the first time that your audience has had a chance to lay eyes on you. You will want to shake hands with the person who has just introduced you. Do this by extending your hand a few steps before you reach them. By doing this you will create the impression that you are in control of the moment and eager to begin your speech.
Make sure that you smile at your introducer in order to show that you are happy to be speaking. Where You Stand Often when we give a speech we are provided with a lectern. Lecterns can be handy if you are nervous, because they give you something to hide behind, and you can place any notes you have brought on top of the lectern and refer to them during your speech. However, if you really want to connect with your audience, you are going to have to move the lectern aside. You must be willing to stand before your audience and let them know that you are in control and don’t need its help. Just Before You Start Talking The time just before you start to speak is the most critical part of the window in which your audience makes up its mind about you. What should you do during this brief window? One key thing is to keep smiling. You want to communicate confidence to your audience; your smile tells them that you are preparing to deliver a great speech, and you can keep smiling because you know you have taken the time to practice. Use your facial expressions to communicate with your audience, and use eye contact to get and hold their attention. What All Of This Means For You It turns out that how your next speech turns out may be determined even before you start to speak. Your audience will be deciding whether they want to spend any time listening to you before you even open your mouth. What this means for you as a speaker is that you need to make the most of the time you have before you start to speak. You’ll have to consider how you want to approach the front of the room. You have a couple of ways to do this, but doing it with energy is important in order to send a positive message to your audience.
When you arrive at the front, greet the person who just introduced you. Do it with a big smile and let your audience know that you are happy to be there. Often we are provided with a lectern that stands between us and our audience; take steps to move it out of the way so that you can better connect with your audience. The final few seconds before you start to speak are some of your most important seconds, so make sure that you use them wisely. Once we know just how important the time before our speech starts is, we can begin to make the best use of it. What we want to do is find ways to make sure that our audience is excited about what we are going to say and wants to hear us. If we carefully manage the time before we speak, then every speech we give can provide a real opportunity to connect with our audience. – Dr. Jim Anderson Blue Elephant Consulting – Your Source For Real World Public Speaking Skills™ Question For You: If nobody is going to introduce you, how should your speech start? What We’ll Be Talking About Next Time Speakers know that there is power in laughter. If we hear our audience laughing (with us), it gives us a sense of confidence. Everyone likes to hear an audience start to laugh after the punchline of a joke; it provides a shot of energy and can launch you into the main point of your speech. However, the challenge that many of us face is getting our audience to laugh when we want them to. What we need are strategies for doing just that.
Insted - Tolerance, Equality, Difference Commission on British Muslims and Islamophobia Islamophobia and race relations Source: adapted and abbreviated from Islamophobia - issues, challenges and action, Trentham Books 2004. There is background information, as well as a copy of the full report, at This paper notes that Islamophobia has been present in western culture for many centuries. It has taken different forms, however, at different times and in different contexts. The current context in Britain includes the international situation, concerns about asylum and refugees, and widespread scepticism and agnosticism in relation to all religious beliefs. The paper then discusses the arguments for seeing Islamophobia as a form of racism and notes that most race equality organisations have not yet adequately responded to the challenges that Islamophobia poses. It closes by discussing the concept of institutional Islamophobia. At the end, there are notes on the sources of quotations. A new word for an old fear Hostility towards Islam and Muslims has been a feature of European societies since the eighth century of the common era. It has taken different forms, however, at different times and has fulfilled a variety of functions. For example, the hostility in Spain in the fifteenth century was not the same as the hostility that had been expressed and mobilised in the Crusades. Nor was the hostility during the time of the Ottoman Empire or that which was prevalent throughout the age of empires and colonialism. It may be more apt to speak of 'Islamophobias' rather than of a single phenomenon. Each version of Islamophobia has its own features as well as similarities with, and borrowings from, other versions. A key factor since the 1960s is the presence of some fifteen million Muslim people in western European countries.
Another is the increased economic leverage on the world stage of oil-rich countries, many of which are Muslim in their culture and traditions. A third is the abuse of human rights by repressive regimes that claim to be motivated and justified by Muslim beliefs. A fourth is the emergence of political movements that similarly claim to be motivated by Islam and that use terrorist tactics to achieve their aims. In Britain as in other European countries, manifestations of anti-Muslim hostility include:

Contextual factors

Islamophobia is exacerbated by a number of contextual factors. One of these is the fact that a high proportion of refugees and people seeking asylum are Muslims. Demonisation of refugees by the tabloid press is therefore frequently a coded attack on Muslims, for the words 'Muslim', 'asylum-seeker', 'refugee' and 'immigrant' become synonymous and interchangeable with each other in the popular imagination. Occasionally, the connection is made entirely explicit. For example, a newspaper recycling the myth that asylum-seekers are typically given luxury space by the government in five-star accommodation added on one occasion recently that they are supplied also with 'library, gym and even free prayer-mats'. A member of the House of Lords wishing to evoke in a succinct phrase people who are undesirable spoke of '25-year-old black Lesbians and homosexual Muslim asylum-seekers'. In 2003, when the Home Office produced a poster about alleged deceit and dishonesty amongst people seeking asylum, it chose to illustrate its concerns by focusing on someone with a Muslim name. An end-of-year article in the Sunday Times magazine on 'Inhumanity to Man' during 2003 focused in four of its five examples on actions by Muslims. 'We have thousands of asylum seekers from Iran, Iraq, Algeria, Egypt, Libya, Yemen, Saudi Arabia and other Arab countries living happily in this country on social security,' writes a journalist in January 2004.
Arabs, he says in the same article, are 'threatening our civilian populations with chemical and biological weapons. They are promising to let suicide bombers loose in Western and American cities. They are trying to terrorise us, disrupt our lives.' A second contextual factor is the sceptical, secular and agnostic outlook with regard to religion that is reflected implicitly, and sometimes expressed explicitly, in the media, perhaps particularly the left-liberal media. The outlook is opposed to all religion, not to Islam only. Commenting on media treatment of the Church of England, the Archbishop of Canterbury remarked in a speech in summer 2003 that the church in the eyes of the media is a kind of soap opera: 'Its life is about short-term conflicts, blazing rows in the pub, so to speak, mysterious plots and unfathomable motivations. It is both ridiculous and fascinating. As with soap operas, we, the public, know that real people don't actually live like that, but we relish the drama and become fond of the regular cast of unlikely characters with, in this case, their extraordinary titles and bizarre costumes.' At first sight, the ridiculing of religion by the media is even-handed. But the Church of England, for example, has far more resources with which to combat malicious or ignorant media coverage than does British Islam. For Muslims, since they have less influence and less access to public platforms, attacks are far more undermining. Debates and disagreements about religion are legitimate in modern society and indeed are to be welcomed. But they do not take place on a level playing-field. A third contextual factor is UK foreign policy in relation to various conflict situations around the world. There is a widespread perception that the war on terror is in fact a war on Islam, and that the UK supports Israel against Palestinians. 
In other conflicts too the UK government appears to side with non-Muslims against Muslims and to collude with the view that the terms 'Muslim' and 'terrorist' are synonymous. These perceptions of UK foreign policy may or may not be accurate. The point is that they help fashion the lens through which events inside Britain are interpreted - not only by Muslims but by non-Muslims as well. The cumulative effect of Islamophobia's various features, exacerbated by the contextual factors mentioned above, is that Muslims are made to feel that they do not truly belong here - they feel that they are not truly accepted, let alone welcomed, as full members of British society. On the contrary, they are seen as 'an enemy within' or 'a fifth column' and they feel that they are under constant siege. This is bad for society as well as for Muslims themselves. Moreover, time-bombs are being primed that are likely to explode in the future - both Muslim and non-Muslim commentators have pointed out that a young generation of British Muslims is developing that feels increasingly disaffected, alienated and bitter. It's in the interests of non-Muslims as well as Muslims, therefore, that Islamophobia should be rigorously challenged, reduced and removed. The time to act is now, not some time in the future. A further negative impact of Islamophobia is that Muslim insights on ethical and social issues are not given an adequate hearing and are not seen as positive assets. 'Groups such as Muslims in the West,' writes an observer, 'can be part of transcultural dialogues, domestic and global, that might make our societies live up to their promises of diversity and democracy. Such communities can ... facilitate communication and understanding in these fraught and destabilising times.' But Islamophobia makes this potential all but impossible to realise.
'The most subtle and for Muslims perilous consequence of Islamophobic actions,' a Muslim scholar has observed, 'is the silencing of self-criticism and the slide into defending the indefensible. Muslims decline to be openly critical of fellow Muslims, their ideas, activities and rhetoric in mixed company, lest this be seen as giving aid and comfort to the extensive forces of condemnation. Brotherhood, fellow feeling, sisterhood are genuine and authentic reflexes of Islam. But Islam is supremely a critical, reasoning and ethical framework. [It] cannot, or rather ought not to, be manipulated into "my fellow Muslim right or wrong".' She goes on to remark that Islamophobia provides 'the perfect rationale for modern Muslims to become reactive, addicted to a culture of complaint and blame that serves only to increase the powerlessness, impotence and frustration of being a Muslim.' Islamophobia and the race relations industry Hostile statements about Islam and Muslims are often reminiscent of racism. For example, there is the stereotype that 'they're all the same' - no recognition of debate, disagreement and variety amongst those who are targeted. There is the imagery, also, of 'them' being totally different from 'us' - no sense of shared humanity, or of shared values and aspirations, or of us and them being interdependent and mutually influencing. Indeed, they are so different that they are evil, wicked, cruel, irrational, disloyal, devious and uncivilised. In short, they do not belong here and should be removed. These highly negative views of the other are accompanied by totally positive views of the self. 'We' are everything that 'they' are not - good, wise, kind, reasonable, loyal, honest and civilised. It is sometimes suggested, in consequence, that a more appropriate term than Islamophobia is 'anti-Muslim racism'. An obvious objection to this suggestion is that Muslims are not a race. 
However, there is only one race, the human race, and there is an important sense in which black, Asian and Chinese people are not races either. In any case, race relations legislation in Britain refers not only to so-called race but also to nationality and national origins, and to the four nations that comprise the United Kingdom. Further, the legal definition of another key category in the legislation, that of ethnic group, makes no reference to physical appearance and is wide enough to be a definition of religion - if, that is, religion is seen as to do with affiliation and community background rather than, essentially, with beliefs. The United Nations World Conference Against Racism (WCAR) in 2001 summarised its concerns with the phrase 'racism, racial discrimination, xenophobia and related intolerance'. The equivalent phrase used by the Council of Europe is 'racism, xenophobia, antisemitism and intolerance'. Both phrases are cumbersome, but valuably signal that there is a complex cluster of matters to be addressed; the single word 'racism', as customarily used, does not encompass them all. In effect the WCAR argued that the term racism should be expanded to refer to a wide range of intolerance, not just to intolerance where the principal marker of difference is physical appearance and skin colour. For example, the term should encompass patterns of prejudice and discrimination such as antisemitism and sectarianism, where the markers of supposed difference are religious and cultural rather than to do with physical appearance. It is widely acknowledged that antisemitism is a form of racism and in Northern Ireland sectarianism is sometimes referred to as a form of racism. There are clear similarities between antisemitism, sectarianism and Islamophobia, and between these and other forms of intolerance. The plural term 'racisms' is sometimes used to evoke this point. 
A description of sectarianism developed by the Corrymeela Community in Northern Ireland is a helpful description of Islamophobia as well: 'Sectarianism is a complex of attitudes, actions, beliefs and structures, at personal, communal and institutional levels… It arises as a distorted expression of positive human needs, especially for belonging, identity and free expression of difference but is expressed in destructive patterns of relating: hardening the boundaries between groups; overlooking others; belittling, dehumanising or demonising others; justifying or collaborating in the domination of others; physically intimidating or attacking others.'

But in addition to similarities with other forms of intolerance and racism, Islamophobia has its own specific features. Action against it must therefore be explicit and focused - it cannot be left to chance within larger campaigns. Unfortunately, race equality organisations in Britain have been slow to recognise Islamophobia as something they ought to deal with. Already in the 1980s there were campaigns at local levels - one of the most sustained and influential was mobilised by the An-Nisa Society in north west London - to persuade race equality organisations to take action against anti-Muslim hostility and discrimination. The concern was in particular with discrimination and insensitivity in the provision of public services, and with the failure of race relations legislation to prevent such discrimination. Major representations were made by Muslims during the review of race relations legislation that took place in the early 1990s. The categories in race relations legislation, it was pointed out, derived from the colonial period, when Europeans made a simple distinction between themselves and 'lesser breeds', and when the principal marker of difference was skin colour. In Britain, not-white people were divided into two broad categories, 'black' and 'Asian'.
Little or no account was taken, in this colonial categorisation, of people's inner feelings, self-understandings, narratives, perceptions, ethics, spirituality or religious beliefs. Nor, it follows, was account taken of the moral resources on which people drew to resist discrimination and prejudice against them. Continual use of the category 'Asian' by the race relations industry, to refer to most not-white people who were not categorised as black, meant that Muslims were rendered invisible. Even local authorities which in other respects were at the forefront of implementing race equality legislation, for example Brent, subsumed Muslims under the blanket category of 'Asians'. They were insensitive and unresponsive, in consequence, to distinctive Muslim concerns. A third of all British Muslims are not Asians and a half of all Asians are not Muslims. The insensitivity was - and is - particularly serious in relation to the provision and delivery of services. The objections made by organisations such as An-Nisa in the 1980s and early 90s were ignored by the government. So was a series of articles and editorials throughout the 1990s in the Muslim magazine Q News. At the end of the decade, when the Stephen Lawrence Inquiry report was published, an article by a director of the An-Nisa Society in Q News observed that race equality legislation had 'reduced the Muslims, the largest minority in Britain, to a deprived and disadvantaged community, almost in a state of siege… Much as Muslims want to confront racism, they have become disillusioned with an antiracism movement that refuses to combat Islamophobia and which, in many instances, is as oppressive as the establishment itself.' A follow-up article declared that 'the Muslim community has little faith left in the race industry, at the helm of which is the CRE' and spoke of the CRE's 'mean-spirited hostility' towards Muslims.
In 1975/76, when the Race Relations Act was being drafted and agreed, there was discussion in parliament at committee stage about whether to include religion, along with nationality and ethnicity, in the legislation. The argument was made in particular by Conservative members, supported by some Labour members. The committee as a whole, however, decided to leave religion out, since at that time discrimination on grounds of religion was not considered to be a major harm that had to be addressed. Twenty-five years later, when the Act was amended, the discussion was renewed. But again the government decided not to include religion. Further, no explicit reference to religion appeared in the various codes of practice about the amended legislation issued by the Commission for Racial Equality. In the meanwhile it is relevant to note that since December 2003, due to legislative requirements at European level rather than to a principled decision by the UK government, discrimination on grounds of religion or belief in employment has been unlawful. For rather longer there has been an anomaly, due to developments in case law since 1976, whereby Jews and Sikhs are defined as ethnic groups and are therefore protected by race relations legislation. The anomaly has been a standing insult to Muslims for two decades and was only partly removed in December 2003. It is still the case that anti-Muslim discrimination is permitted in the provision of goods and services, and in the regulatory functions of public bodies. Public bodies have a positive duty to promote race equality but are not even encouraged, let alone required, to give explicit attention to religion.

Institutional Islamophobia

The failure of race equality organisations and activists over many years to include Islamophobia in their programmes and campaigns appears to be an example of institutional discrimination.
'The concept of institutional racism,' said the Stephen Lawrence Inquiry report, 'is generally accepted, even if a long trawl through the work of academics and activists produces varied words and phrases in pursuit of a definition.' The report cited several of the submissions that it had received during its deliberations and included a definition of its own. If the term 'racism' is replaced by the term 'Islamophobia' in the statements and submissions, and if other changes or additions are made as appropriate, the definitions are as shown below.

Reflecting and producing inequalities

'Institutional Islamophobia may be defined as those established laws, customs and practices which systematically reflect and produce inequalities in society between Muslims and non-Muslims. If such inequalities accrue to institutional laws, customs or practices, an institution is Islamophobic whether or not the individuals maintaining those practices have Islamophobic intentions.' (Adapted from a statement by the Commission for Racial Equality.)

Inbuilt pervasiveness

'Differential treatment need be neither conscious nor intentional, and it may be practised routinely by officers whose professionalism is exemplary in all other respects. There is great danger that focusing on overt acts of personal Islamophobia by individual officers may deflect attention from the much greater institutional challenge ... of addressing the more subtle and concealed form that organisational-level Islamophobia may take. Its most important challenging feature is its predominantly hidden character and its inbuilt pervasiveness within the occupational culture.' (Adapted from a statement by Dr Robin Oakley)

Collective failure

'The collective failure of an organisation to provide an appropriate and professional service to Muslims because of their religion.
It can be seen or detected in processes, attitudes and behaviour which amount to discrimination through unwitting prejudice, ignorance, thoughtlessness and stereotyping which disadvantage Muslims.' (Adapted from the Stephen Lawrence Inquiry report.)

Culture, customs and routines

The concept refers to systemic disadvantage and inequality in society as a whole and to attitudes, behaviours and assumptions in the culture, customs and routines of an organisation whose consequences are that (a) Muslim individuals and communities do not receive an appropriate professional service from the organisation (b) Muslim staff are insufficiently involved in the organisation's management and leadership and (c) patterns of inequality in wider society between Muslims and non-Muslims are perpetuated not challenged and altered. (Adapted from a statement by the Churches' Commission for Racial Justice.)

Notes and references

The claim that British Muslims must choose between Britishness and terrorism was made by Denis MacShane MP, minister of state at the Foreign and Commonwealth Office, in November 2003. It was compounded by the feebleness of his apology a few days later. The quotation about prayer mats is from the Daily Mail, 5 October 2001. The quotation about 'homosexual Muslim asylum seekers' is from Norman Tebbit, The Spectator, 27 April 2002. The story about the Home Office poster was in The Muslim Weekly, 5-11 December 2003, p.11. The text on the poster read 'Ali did not tell us his real name or his true nationality. He was arrested and sent to prison for 12 months.' This statement was translated into five languages, all of them connected with Muslim countries. A detailed legal reference was given in small print but in fact the case that was cited had nothing to do with asylum and nationality claims. The quotation about Arab people seeking asylum is from an article by Robert Kilroy-Silk, Sunday Express, January 2004.
The quotation from the Archbishop of Canterbury is from his presidential address at General Synod, York, 14 July 2003. The quotations from Muslim observers are respectively from Tariq Modood and Merryl Wyn Davies. The concept of racisms was discussed in the 2000 report of the Commission on the Future of Multi-Ethnic Britain, especially chapter 5. The quotations from Q News are from articles by Khalida Khan and Faisal Bodi.
Jawa Dwipa

Jawadwipa was the name of the island of Java in ancient times. All the islands of the South East Asian archipelago were called Sweta Dwipa in ancient Javanese. The Javanese elders named the group of islands in South and South East Asia the Islands of Jawata (in Javanese, Jawata means God or god). In the old days, India was named Jambu Dwipa, while the islands of Nusantara/Indonesia were named Sweta Dwipa. Both belong to the same region, so it is not surprising that their cultures share many similarities and influenced each other. A geographical change took place in South Asia 36 to 20 million years ago: the southern continent moved northward, lifting the land of north India and giving birth to the Himalayas. At that time, the land of China was still under the ocean. The sub-continent of South and South East Asia emerged as a chain of volcanic archipelagos, including present-day Indonesia and Java.

The Descendants of gods

According to ancient legends, the Javanese people are the descendants of gods. In common Javanese language (ngoko), the Javanese people are wong Jawa, from wahong Jawa; in refined Javanese (kromo inggil), they are tiyang Jawa, from ti hyang Jawa. Both have the same meaning: the descendants of gods. The old name of the South and South East Asian sub-continent was Jawata, which means Gusti, God, the teacher of the Javanese people. According to wayang folktales, the beauty of the island of Java lured the gods and goddesses. They left their domain in Kahyangan (the world of gods and goddesses) and came down to earth, to the island of Java, and built several kingdoms in Jawadwipa. Jayabaya, the king of the Kediri kingdom in East Java, was the god Wisnu, who came down from his swargaloka (heaven) domain. Jayabaya is very popular in Java and Indonesia for his accurate prophecies about the course of the country. He left behind valuable teachings consisting of: 1.
Guidance on how to be a good and qualified leader, for kings, queens, state officers etc. 2. Moral guidance for leaders as well as for ordinary people. The teaching on moral conduct has, in fact, universal value.

The First Kingdom in Jawadwipa

Another traditional source says that Jawadwipa was the first kingdom in Java, located at Mount Gede, West Java. The first king was Dewo Eso or Dewowarman, whose official title was King Wisnudewo. He descended from his heavenly domain, Kahyangan, manifested himself as a human being and became the king of Jawadwipa. He married a local woman, Dewi Pratiwi, who became his consort. Dewi Pratiwi (literally, the goddess of Earth) was the daughter of Lembu Suro, a famous ascetic and holy man of Jawadwipa. Lembu Suro had mastered a high level of spiritual knowledge and was able to live in seven different dimensions of life (Garbo Pitu in Javanese). His domain was at Mount Dieng, Central Java. The word Dieng itself comes from Adi Hyang, which means: The Perfect Spirit. The marriage between Wisnudewo and Dewi Pratiwi is the manifestation of a spiritual being (god) becoming a human being living on earth. They could live safely and comfortably on the planet thanks to the support of earth energy, as embodied by Lembu Suro.

Betara Guru

The beauty of Java also lured the King of gods, Betara Guru, who was then determined to establish his kingdom in Java. He descended from his swargaloka (heaven) domain to settle at Mount Mahendra (the old name of Mount Lawu, on the border of Surakarta and Madiun). Other names of Betara Guru are:
• Sang Hyang Jagdnata – The King of the Universe
• Sang Hyang Girinata – The King of the Mountains
In the kingdom of Mahendra, which means The Great Heaven, his title as king was Ratu Mahadewa, King of the Great God. The palace of Mahendra was built exactly like his palace in heaven.
In Mahendra's palace there were instruments and goods made precisely like those in his palace in heaven, such as: 1. A set of gamelan music instruments. The gods greatly enjoyed its melodious sound while they were dancing. Dancing is not merely moving the body to follow the melody and rhythm of the gamelan music; it is also an exercise in concentration of mind and, furthermore, a contemplation to know the real self, a way to spiritual unity with God the Creator. The name of the gamelan set was Lokananta. 2. Statues of Cingkarabala and Balaupata, placed on the left and right sides of the main palace gate. 3. Some heirlooms in the form of daggers (kris), cakra (wheels), spears and arrows, made by Empu Ramadi, a well-known heirloom maker.

The Other gods-kings

After a number of gods had settled comfortably in Java, married local women and had children, Betara Guru returned to Suralaya, the domain of gods. Some of his children then became kings of several kingdoms in Java, Sumatra and Bali. In Sumatra, Sang Hyang (god) Sambo was the king of the Medang Prawa Kingdom and lived at Mount Rajabasa. His title as king was Sri Maharaja Maldewa. (At present, near Ceylon, there is a country by the name of Maldives or Maldewa.) In Bali, Sang Hyang Bayu, whose title as king was Sri Maharaja Bimo, ruled the kingdom of Medang Gora at Mount Karang. (To this day, another name for Bali is Pulau Dewata, which means The Island of God.) In Java, Sang Hyang Brahma, whose title as king was Sri Maharaja Sunda, had his palace at Mount Mahera, Anyer, West Java. (That was how the people of West Java came to be called Sundanese, the people of Sunda, in honour of King Maharaja Sunda.) Sang Hyang Wisnu, whose title as king was Sri Maharaja Suman, was the king of Medang Puro and lived at Mount Slamet, Central Java. Sang Hyang Indra, whose title as king was Sri Maharaja Sakra, was the king of Medang Gora, at Mount Semeru, East Java.
The palaces on top of the mountains

It is interesting that the gods always built their palaces on mountaintops. This reflects their origin somewhere in the sky, in a high place. A high place is considered clean, free from dirt. It signifies that gods who have manifested themselves as human beings living on earth must keep to good behaviour, moral conduct and ethics.

Bumi Samboro

In Javanese, Bumi Samboro means a land that rises up to the sky. It is a symbol in Javanese spirituality; the example is Mount Dieng, Adhi Hyang. The teaching behind it is that people, as long as they live in the world, are expected to reach the peak of spiritual knowledge, the enlightened soul. This achievement is called Adhi Hyang or Bumi Samboro.

The manifestation of gods

Gods have light bodies, not physical bodies, yet they can appear as human beings. Our brothers and sisters who have gained high spiritual knowledge know the reality of life; with spiritual eyes, they can see, encounter and communicate with gods. From the spiritual point of view, the descent of the gods to planet Earth depicts spirits who are permitted by Gusti (God) to live on earth as human beings wearing subtle and physical bodies. All human beings, including the Javanese, are originally spirits, gods.

Suryo S. Negoro
Edited by Arie Suryo
The Poor Find Haven In Monrovia’s Cemeteries

Liberia has had a trying past couple of decades. Most recently, it was plagued by the Ebola virus, which killed thousands of people. Before this, it had suffered through a 14-year-long civil war, which had taken place just a few years after yet another civil war ended. Both wars killed hundreds of thousands of people, leaving many homeless and destitute. Lacking housing or money, many poverty-stricken Liberians have turned to living in cemeteries, many of which are in Monrovia, the capital. Most go to the Palm Grove Cemetery. Many of these dwellers arrived when they were just children, after their parents had been killed. Some had been child soldiers. They were taken there by friends from the street who used the relative peace and security of the cemetery to indulge in marijuana, cocaine and heroin. They used tombs for shelter after smashing them open and throwing out their long-dead inhabitants. Monrovians look upon the cemetery dwellers with distaste and fear. They are viewed as criminals and drug addicts who disrespect the graves of their families and are deprecatorily called “friends of the dead.” On Decoration Day, a public holiday when Liberians paint and adorn tombs, conflict always erupts between the tomb dwellers and the families of the tombs’ rightful owners. Rather than provide an area for the homeless to live in, President Johnson Sirleaf simply put up walls around the cemetery in 2007 to keep them out. Just a few months later, however, people had already breached the walls to live in the cemetery once again. Now the walls serve to better hide the dwellers and their activities rather than keep them out. Prostitution has also become commonplace behind the cemetery’s walls. Some women and girls are only able to survive through sex work. They are afforded no protection from the police, who often rape them themselves. Unwanted births are commonplace. Many diseases also run rampant.
Ebola was just another problem to add to a list of illnesses that included tuberculosis and diarrhea. Hope may yet be around the corner for these cemetery residents. Last year, the British charity organization Street Child began to work with them, setting up counseling sessions, schools and rehab centers. However, many roadblocks stand in the way of their progress. It is extremely difficult for many residents to even consider weaning themselves off their dependency on drugs. Sometimes, drugs make them aggressive and hostile, which makes it hard for people from Street Child to engage with them. The outbreak of Ebola also set back efforts. Schools were closed and public gatherings banned. Street Child also started redirecting efforts to the 2,000 children orphaned because of Ebola. Officials have been hostile to Street Child’s efforts in cemeteries, calling their residents a “lost cause.” Now that Ebola has largely disappeared in Liberia, Street Child is ready to make a renewed effort to help the cemetery dwellers. Small successes have boosted the charity’s belief that these people can be saved from a lifetime of poverty and dependency.

– Radhika Singh

Sources: Independent, BBC 1, BBC 2
Photo: Independent
Written by Blythe Davis
Created: July 23, 2021
Last updated: August 28, 2021

Exoplanet Discovery

For the past 25+ years, NASA has used ground- and space-based methods to identify exoplanets (planets outside of our solar system). In the past ten years in particular, campaigns like Kepler, K2, and TESS have produced an explosion of results. To date, approximately 4,400 exoplanets have been identified, and over 3,000 potential exoplanet candidates have been discovered. In this notebook, we will use Holoviews and Panel to make a dashboard visualizing the discovery of confirmed and candidate exoplanets over the years. We'll also include a scatterplot in our dashboard that reveals details about the relationship between mass and radius of exoplanets, as well as controls to filter the data based on whether the planets could support life, and if so, whether chemical rockets could be used to escape the planet.

In [1]:
import pandas as pd
import holoviews as hv
import panel as pn
import numpy as np
import colorcet as cc
import hvplot.pandas  # noqa
hv.extension('bokeh', width=100)
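The "chemical rockets could be used to escape the planet" filter mentioned above reduces to a simple escape-velocity calculation that can be sketched with plain pandas, independent of the dashboard widgets. The column names, the sample rows, and the twice-Earth cutoff below are illustrative assumptions, not the actual NASA archive schema or an official criterion:

```python
import numpy as np
import pandas as pd

# Small stand-in table; mass and radius are in Earth units
# (these column names are assumptions, not the real archive schema).
planets = pd.DataFrame({
    "name": ["earthlike", "super-earth", "sub-earth", "mini-neptune"],
    "mass_earth": [1.0, 5.0, 0.8, 17.0],
    "radius_earth": [1.0, 1.6, 0.95, 3.9],
})

EARTH_ESCAPE_KMS = 11.186  # Earth's escape velocity in km/s

# v_esc = sqrt(2GM/R); in Earth units this simplifies to
# v_esc = 11.186 km/s * sqrt(mass / radius).
planets["v_esc_kms"] = EARTH_ESCAPE_KMS * np.sqrt(
    planets["mass_earth"] / planets["radius_earth"]
)

# Illustrative rule of thumb (an assumption, not a hard limit): above
# roughly twice Earth's escape velocity, chemical rockets have great
# difficulty reaching orbit.
planets["rocket_escapable"] = planets["v_esc_kms"] < 2 * EARTH_ESCAPE_KMS

print(planets[["name", "v_esc_kms", "rocket_escapable"]])
```

A boolean column like this is exactly the kind of thing a Panel checkbox can filter on before the frame is handed to the scatterplot.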
Understanding The Importance Of Optometry

An optometrist is a health care provider who focuses on the health of one’s eyes. Whether it is eye health, visual performance, or optic lenses, an optometrist is the expert in those fields. In order to be an optometrist, one must have earned a degree in the field of optometry, be registered with the Optometry Board, and oftentimes obtain additional training for additional eye treatments or updates within the field. When one achieves the status of optometrist, they are able to diagnose and manage various diseases of the eye and treat any injuries or disorders that one may experience. An optometrist is also able to prescribe eyeglasses as well as contact lenses when a patient requires visual support. Patients will oftentimes schedule regular eye exams for check-ups regarding their vision or any possible disorders, or if they happen to be dealing with a noticeable and bothersome vision problem. In those cases, the optometrist examines the patient’s eyes and uses an array of vision tests to make a diagnosis. An optometrist is often the first level of care for people with eye or vision problems and will refer a patient to an ophthalmologist if the patient’s vision issues are more severe and beyond the optometrist’s scope of practice. While an optometrist can help a patient understand what may be wrong with their eyes, an ophthalmologist would be the medical professional to provide any required surgeries. Whether it is laser surgery, a serious condition such as macular degeneration or diabetic retinopathy, or any other type of surgery, an ophthalmologist is who one would require, and an optometrist would appropriately refer any of their patients to get the treatment they need. For any other issue, an optometrist is the medical professional that one would require.

Signs to See an Optometrist

Oftentimes if an individual is sick, they know that they have to see a doctor.
If an individual has a toothache, they understand that they need to see a dentist. But when does one need to see an optometrist? When it comes to requiring the services of an optometrist, the signs often focus on one’s vision. With symptoms ranging from double or blurred vision, itchy or dry eyes, or difficulty reading small writing, one can assume they need to see an optometrist to find out what is causing the issue. Individuals may also suffer headaches and dizziness, which is not as obvious a sign to see an optometrist; however, when headaches or dizziness are paired with vision issues, the type of medical professional that a client requires becomes more obvious. Optical shop Singapore experts can help you with a wide variety of eye services, including eye examinations, eye care, and providing you with the perfect eyeglasses for your eyes. When a client experiences this range of symptoms, an optometrist can diagnose a variety of eye disorders and vision problems. Some of the common conditions that optometrists diagnose include cataracts and glaucoma. The solution to the symptoms a client is experiencing may also be as simple as glasses or contact lenses, which an optometrist can prescribe. While the signs pointing to the need for vision aids, glaucoma, or cataracts are more obvious, not all eye disorders are as easy to diagnose. With yearly comprehensive eye exams, an optometrist can discover potential ailments of the eye that are less obvious and can work to correct the issue before it worsens. During a comprehensive eye exam, an optometrist will test one’s visual acuity using the Snellen eye chart. For patients who have a refractive error, or do not have 20/20 vision, the optometrist will run tests that allow them to determine the appropriate prescription, analyze the light reflex from one’s eye, and measure distortions or aberrations in the cornea and lens of the eye.
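Visual acuity scores like 20/20 or 20/40 are ratios, and clinicians commonly convert them to the logMAR scale so that scores can be averaged and tracked numerically over time. As a small sketch of that standard conversion (the function name here is ours, not from any clinical library):

```python
import math

def snellen_to_logmar(test_distance: float, letter_distance: float) -> float:
    """Convert a Snellen fraction to logMAR.

    A Snellen score of 20/40 means the patient reads at 20 feet what a
    person with normal acuity reads at 40 feet. The standard conversion
    is logMAR = log10(letter_distance / test_distance), so 20/20 maps
    to 0.0 and worse (larger) denominators give larger logMAR values.
    """
    return math.log10(letter_distance / test_distance)

print(round(snellen_to_logmar(20, 20), 3))  # 0.0
print(round(snellen_to_logmar(20, 40), 3))  # 0.301
```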
Patients may also have a test in which their pupils are dilated so that the optometrist can examine further back into one’s eye. Optometrist visits should not be delayed until a problem is present. Regular eye exams allow clients and optometrists to be proactive rather than reactive, potentially avoiding vision or eye issues altogether. Children should have their first comprehensive eye exam at six months old and should have follow-up exams at age 3 and age 6. After the age of 6, assuming there are no issues, optometrist appointments should take place every two years. Adults also require regular optometrist visits, with increasing frequency as they age. For any individual, child or adult, catching disorders of the eye early on can make a lasting difference to the rest of their life. If you are searching for an optometrist, Zoom Optics Macquarie Centre can help you answer any further questions that you may have about eye care.
placer mining in oregon and california

While I would normally relegate such postings to Nevada Nugget Hunters, under the historical tidbits heading, summer is around the corner and likely everyone with means is looking for a new twist on old works. Though the treatise below deals with flour gold, the tail end of the article has some fairly surprising news regarding gold, gems, and platinum values that may be your ticket to further success. The article is transcribed from a March 15, 1931 issue of the [AZ] Mining Journal.

VOL. XIV. No. 20 MARCH 15, 1931

The Origin of Flour Gold in Black Sands
By A. E. KELLOGG, Geologist, Medford, Oregon.

A discussion of the cause of flour gold, or very fine gold, with examples of its attrition in Southwestern Oregon and Northeastern California.

Flour gold, or gold too light, or in too fine particles to save, has long been the "bugbear" of the placer miner and prospector. All sorts of devices have been tried to save it, with usually poor success. It may be so light and fine as to float over riffles, or by adhering to accompanying black sand, or particles of magnetic iron, or, by a certain coating of iron or sulphur, refuse to amalgamate with quicksilver, and be lost. The origin and cause of this elusive gold may be various. In the first place, it may have been originally minutely disseminated through crystalline, or other rocks, not locally concentrated, or run into solution with quartz, in vein fissures, or wrapped up within, or chemically combined with other so-called gold-bearing ores, but is practically free. Such is probably the origin of much of the tantalizing gold in the gold sands of the Pacific Coast. Another cause for flour gold is undoubtedly its attrition by waves and other violent waters.
As a rule, the further we recede from the crystalline and igneous rocks of the mountains, with their quartz-fissure veins, which are assumed to be the proper habitat of gold in-place, and pass out to the plains, the finer, and more flour, does gold become. This is clearly the effect of the winnowing-grinding action of the streams and other bodies of water, flowing toward the low lands, from the mountains. A fine example of this grinding and natural milling is to be seen in the very ancient Siskiyou Island, situated in that large area of Southwestern Oregon and Northwestern California, designated as the Klamath Mountains. These lofty mountains were an island in the ocean during Cretaceous times, long before the Cascade Mountains rose above the surface of the water, and by geologists, termed the "Great Siskiyou Batholith." It is, perhaps, one of the oldest pieces of terra firma on the western continent, and compares favorably in age, with the Alps in Europe. The ancient island is located on the site of the Sierra Nevada, Cascade, and Coast Range, of mountains, a plexus of mountains, including the Klamath Mountain group. The Klamath mountains extend from the fortieth, to near the forty-fourth parallel, and include the Yallo Bally, Bully Choop, South Fork, Trinity, McCloud, Scott, and Salmon Mountains of California, and the Siskiyou, Rogue River, and others, in Oregon. The outline of the "Island in Cretaceous Sea" and the "Klamath" mountain group is indicated by heavy broken lines on accompanying map. The major part of the mass of the ancient island is granite, or granitic in character, accompanied with other intrusive igneous rocks, such as diorite, porphyry, and other intrusions of ancient origin, such intrusions having lifted from the depths of the ocean, the sediments that had settled there. These elevations have reached an altitude in some places of 7,000 to 8,000 feet.
These sediments thus lifted, were changed from their original horizontal character, to various angles of inclination, and accommodated easy erosion. Hence we find now only fragments of these early sediments, at the tops of the rather higher elevations, such as limestone, often metamorphosed into marble. These sediments were originally very deep, and erosion carried them away—some back to the ocean, while others were deposited in the valleys as the valleys were formed. These intrusions, and the necessary broken fissured condition in which they, and their beddings have been left facilitated the settling of such minerals as were held in position, or suspended, and all other matter susceptible of being carried downward into seams and fissures as the mass slowly rose. After the whole had arisen above the ocean water, and valleys were formed, or in process of forming, much of the residue resulting from this disintegration was deposited around the shore-lines of the old Siskiyou Island, resulting in the heavy placer deposits, which employed the pioneer placer miners. Observations will show that all of these deposits were formed along the shore-line of this old island, and that the gold came from it. Thus, into these mountains, as judgment dictates, we must look for the source of the gold. The searches thus prompted have proven correctly that the heavy mineral deposits are found in the seams and fissures of this ancient island. The Rogue and Klamath rivers drain, by their profound canyons, a vast area in the valley regions of the Klamath group of mountains. These valleys are largely composed of conglomerates of the Siskiyou Batholith. The pebbles, sand, or gravel components of these rocks are derived from many sources, but principally from granite or igneous rocks from the Klamath Mountains. 
These cementing sands doubtless carried the gold, which, having traveled far from its source, is naturally fine, and must have been reduced still finer by the angry waves and torrents that aided in laying down the Siskiyou batholith. From these cemented rocks, by a stupendous system of erosion and canyon cutting, the gold has, with sand and pebbles, found its way into the bottom of these rivers, and in some cases been deposited in bars and beaches, or along the banks of the streams. Of the kind of additional milling and comminuting treatment it has received for untold centuries in the depths of the rivers, we may have a vivid object lesson, which will leave no wonder in our minds as to the flour condition of the gold, or as to one of the causes of its particle sizing. The streams, at the time of their annual freshets, are on rampages and impassable. They have narrow passages through solid rock banks, where the boiling waters are forced between narrow walls, as through a pipe. The centers of these narrows become heavily charged with gravel, which piles up a temporary sand and gravel bank, then a sudden and terrific roar, and the roaring is over. In a few minutes, the obstacle is washed away, to re-form again in another point down stream, and the temporarily checked stream rushes on its course. During the floodwater times there are frequent repetitions of the same phenomena in these narrow gorges. Imagine this gravel to carry a percentage of gold and its ultimate grinding by such a terrific milling process is obvious. The gold found on the beaches of the Pacific Ocean near the deltas of these rivers, has received and is receiving daily, much the same treatment from the waves and tides, let alone what grinding it received in being washed from its source in crystalline rocks, far back in the Klamath Mountains, and on its way toward the sea. 
There it is deposited and redeposited in beaches, ultimately consolidated into conglomerates, and gradually raised above the sea, and from these again washed out seaward, and daily milled and turned over, deposited and covered, and re-deposited by the waves of the sea. Is it any wonder that such gold is very floury? What metal could withstand such a process and not become so? Gold, as has been said, is one of the most widely disseminated minerals in the world, despite its rarity in veins, or in concentrated form. It is safe to say that most of the flour gold found in the various river and sea sands of the world was never at any time in a concentrated form such as we understand it, in gold-bearing veins and the like, but is derived from gold particles minutely and widely disseminated through the rocks of the world: primarily in the igneous and crystalline series; secondarily, in the sedimentary rocks derived from these; and lastly in loosely consolidated modern placers and alluvial deposits. The analysis of almost every igneous rock bears evidence of this minute gold dissemination. In this respect some igneous rocks are richer in gold than others: as, for instance, the "pyritiferous porphyry" of Leadville, Colorado, in which the gold may have originally been contained within the pyrites, which are a constituent of that rock, just as mica and hornblende are of many igneous rocks. In some cases gold has been detected in combination with common rock-forming minerals, and again it appears to occur free, or freed, in certain porphyries, especially in their zones of brecciation and decomposition. Slates are notorious for carrying gold, sometimes disseminated within their mass, but generally concentrated within minute veinlets and lenses of quartz, probably segregated from the materials of the slates themselves, or derived from solution from outside, and deeper, sources. 
Gold disseminated in sandstone, conglomerates, and metamorphic quartzites is not uncommon, sometimes even in commercial quantities. These rocks being consolidated river or sea-beach placers, the presence of gold in them is naturally to be expected, though commonly in a fine or flour condition. In fact, in all cases, whether by original deposition, by combination, chemical or mechanical, with other rock-forming minerals, or by long travel and attrition, the gold is in the minute, floury state, and appears so when rocks are broken up and reduced to sand or gravel by wave or stream, and re-deposited in bars, beaches, placers, and river beds. In most placers there is a large proportion of this flour gold, and when coarser gold occurs, even up to small or large nuggets, it is traceable to concentration in veins or veinlets in the vicinity, which may be very small and inconspicuous, such as those traversing the schists of the Nome and Klondike areas, in which there is a notable absence of large workable gold-bearing veins. Others, as in the Breckenridge region of Colorado, are doubtless derived from concentration in brecciated and decomposed porphyries, or from peculiar veins in place carrying crystallized gold, as in some of our local mines, where leaf-gold is deposited in shales. With the exception of that derived from these sources of concentrated gold, the gold so universally found and so widely disseminated is in the flour or fine state, and there are many good reasons why it should be so. An indirect evidence of the wear and tear experienced by the flour gold in its travel from the parent source is shown by the character of the minerals associated with it in this region. These are usually the hardest, heaviest, and most insoluble known, and in the minute state in which they are found, they are the relics and sole survivors of the tremendous abrasion by water which has destroyed all other associated minerals. 
The common rock minerals associated with placer, beach, or alluvial gold are quartz sand, the hardest insoluble residuum of the disintegration of granite and igneous rocks, and of the sandstones and porphyries formed from these. Mingled with these are all sorts of heavy and hard silicates and metals, some of them known as gems, such as garnets, rubies, beryls, tourmalines, and even rarer and more valuable stones, from sapphire and topaz to the diamond. In the metalliferous series, the specimens accompanying placer gold are fragments of the hardest, heaviest, and most insoluble metallic minerals known, such as magnetite, wolframite, etc., and the platinum series: palladium, rhodium, iridium, ruthenium, osmium, and iridosmine. Iridium is a hard metal, about 20 percent heavier than gold, while osmium is also a very hard metal, quite infusible and twenty-two and one-half times heavier than water. Iridosmine is of such extreme hardness that it is used for pointing non-wearing pens. In such hard and tough company it is somewhat remarkable that gold actually exists or remains, considering its comparative softness and the fact that it is not absolutely insoluble. Soluble minerals are generally wanting in a placer, although at one time they may have been, in the placer rocks, the close associates of gold: lead, zinc, pyrite, and silver-bearing ores. The sorting down and reduction to the heaviest, hardest, and most insoluble class is somewhat wonderful, and is comparable, on a grand scale, with the comparatively insignificant processes of artificial jigging and separation in our concentration mills. Anyone who can invent a reliable means of saving this flour gold will be a benefactor to the mining world, and will have a wide area for his operations. On the coast beaches and the river bars, the irregular values of the deposits, the fineness of the gold, and the difficulty of separating its minute particles from the magnetic, or black, sands are the main difficulties. 
The sands are limited in extent and ephemeral in their nature. In river and dredging placer deposits, the flour gold is apt to be carried over and beyond the plates by the overwhelming amount of heavy black sands that often accompanies it, or else it is too fine to save and declines to amalgamate. In addition to its fineness, the microscope shows that the tiny grains are flat, boat, or cup shaped, so that they trap air and float, and they are consequently lost in the clean-up. An unusually severe storm raging along this Oregon and California coast lasts for several days. Huge breakers lash the shores, making sweeping changes in the beach modeling, tearing up and tossing about large areas of shingly beach, and piling up the sands in new places and shapes to suit the angry mood of the lashing sea. At last the angry waves, having spent their vengeance, lap peacefully along the shores. From South Slough, in Coos County, Oregon, to Gold Bluff, Humboldt County, California, the beach miners, following the storm, swarm on the newly made beaches to gather their golden harvest. The origin and causes of fine or flour gold may thus be summed up as, first, the original deposition of the metal in a minute and disseminated condition, and, secondly, travel and attrition. For more than 70 years the black sands of the Pacific Coast have been delved and tossed about by man as well as nature. At the beginning of beach mining, the reward sought was the golden contents of the sands. However, at an early date, platinum, iridium, palladium, and associated metals were known to exist in these beach placers. In those days platinum was given little attention by miners on account of its then small value in a limited market. Many old clean-up dumps have been re-worked for their platinum contents in recent years, with gratifying results. 
Government experts, making search for war-metals in this region during the war, said: "There is but little doubt that from the early days of placer mining in Southwestern Oregon, more values in platinum went through the sluice boxes, and was lost, than were ever taken out in gold." The first record of beach mining was at Gold Bluff, California. The '49ers from middle California were already drifting northward on new quests. Shortly afterward, the northward trek found the Oregon Coast equally rich. In 1854, an authority, Blake, called attention to the occurrence of platinum with the gold at Cape Blanco, Oregon, and stated that platinum was present at a ratio of from 10 to 80 percent of the gold. In the late 1850s and early 1860s, beach and placer mining was at its height along the Oregon coast, as far north as Coos Bay. The Rogue River Indian war, which was then raging, made it a very precarious undertaking, however. When the easy takings had been cleaned up, the advancing horde of miners moved northward to the Idaho diggings. The Old Man of the Sea, however, has since been, and still is, generous to the beach miners. Many ingeniously designed devices for extracting gold, platinum, and associated metals from the black sands of these beaches have been tried, with indifferent success. The old-time devices, the sluice box and the long tom, remain the tried and trusty methods used by successful beach miners. 
Just north of Gold Beach, OR, there's an inlet and an old island in the middle of the inlet...There's a good running stream coming in there and the sands are solid black...I've set up a sluice there and shoveled in....That black sand is loaded with gold, but there's so much black sand that your riffles get clogged within minutes...I tried magnetic separation, rather crudely, but it just sucks that flour gold right up with the magnetics...Very frustrating...I was told there was an old man back in the day who claimed the island and ran an amalgamation operation...Unfortunately, he died of mercury poisoning....We ended up just combing the beach for agates, of which there are some good ones...Cheers, Unc

Yeah Ron, pretty much my experience too. I tried sluicing along a lot of beaches. I did have better results with synthetic miner's moss and a fairly open riffle system, and flatter angles. It takes a special breed to deal with fine gold though, and a lot of patience. My better returns were near Reedsport, Brookings, and north of Crescent City, and parts around Eureka, but the water there is cold, along with the weather. Still, I got quite a mix of stuff, and after sending the lot out to someone else for cleanup, still made $4000 in 1980. Though I did a solo operation, this is one effort that would be better with 3 or 4 guys, each with their own particular job. I did a little better with a Keene's dredge prior (one of their first models that you couldn't get wet), up along the Umpqua watershed, east of Roseburg. One of the 'glamorous' jobs I sought right out of high school was logging in the woods. That 24-year-old greenie on AxMen pretty much mirrors my experience setting chokers, high-leading (yarder), and helicopter logging later on near Red Bluff, CA, and time in a redwood sawmill at Scotia. 
A lot of work, ornery boss and work partners, ticks, extreme accidents, and bad deals working with a cat skinner; not to mention not all that great paychecks. I never really knew about the whole scope of placer mining in southwestern Oregon and northern California till decades afterwards, and that there were sizeable nuggets and hydraulicking like there was in the Siskiyous and west of Grants Pass. Oh well...

I haven't posted in a while (had an electrical injury 6 months ago, so I've been offline). Nonetheless. I have used them since they came out, and used them on California beaches, and it works. Check out They make a very cool product. Some may know of it, but most do not. I use them for clean-ups, and end-stage dredge sluice, but mostly for beach placers. Cheers, John
Bernhard Berenson

Masaccio’s death left Florentine painting in the hands of three men older, and two somewhat younger, than himself, all men of great talent, if not of genius, each of whom felt his influence: the former to the extent that habits already formed would permit, the latter overwhelmingly. The older, who, but for Masaccio, would themselves have been the sole determining personalities in their art, were Fra Angelico, Paolo Uccello, and Andrea del Castagno; the younger, Domenico Veneziano and Fra Filippo. (Paolo Uccello, 1397-1475; influenced by Donatello.) Andrea del Sarto approached perhaps as closely to a Giorgione or a Titian as could a Florentine, ill at ease in the neighbourhood of Leonardo and Michelangelo. As an artist he was, it is true, not endowed with the profoundest sense for the significant, yet within the sphere of common humanity who has produced anything more genial than his “Portrait of a Lady”—probably his wife—with a Petrarch in her hands? (Alesso Baldovinetti, 1425-1499; pupil of Domenico Veneziano, influenced by Paolo Uccello.) (Domenico Ghirlandaio, 1449-1494; pupil of Baldovinetti, influenced slightly by Botticelli and more strongly by Verrocchio.) (Michelangelo, 1475-1564; pupil of Ghirlandaio, influenced by the works of Jacopo della Quercia, Donatello, and Signorelli.) Fra Angelico we know already as the painter who devoted his life to picturing the departing mediæval vision of a heaven upon earth. Nothing could have been farther from the purpose of Uccello and Castagno. Different as these two were from each other, they have this much in common, that in their works which remain to us, dating, it is true, from their years of maturity, there is no touch of mediæval sentiment, no note of transition. (Domenico Veneziano, about 1400-1461; probably acquired his rudiments at Venice; formed under the influence of Donatello, Masaccio, and Fra Angelico.)
The most secure period in locking history

The history of locks is an interesting one, and the practice of locking valuables away has been used since ancient Egypt. For thousands of years locks weren’t particularly strong and could usually be broken or unlocked quite easily. This changed in the 1770s when British inventor Joseph Bramah created his safety lock. The device was so complex, and Bramah had such confidence in it, that he offered a contest with a 200-guinea prize (approximately £20,000 by today’s standards) for anybody who could break it. The Bramah lock was followed by the Chubb detector lock, another complex product that seized up if someone attempted to open it without the actual key. Once seized, a second key needed to be used and rotated backwards to reset the lock. Both locks dramatically improved standards and ushered in an era of “perfect security” where people could have complete confidence in the safety of their valuables. The secure feeling lasted until 1851, when American locksmith A.C. Hobbs broke both locks. He opened the Chubb lock in just 25 minutes by using the mechanism against itself: when it seized, he picked it backwards to learn more about how it worked, eventually finding all of the information he needed to open it successfully. The Bramah lock was more challenging to crack, but after 52 hours of work spread over a 14-day period Hobbs finally succeeded. The security lock that had defeated countless people for 70 years was beaten, and with it came the loss of confidence in a “perfect” secure solution. The 160-plus years since have brought countless new locking techniques, but we have never been able to attain the same level of confidence. Locks still serve a vital purpose, but there remains the inherent risk that they could be tampered with or breached. Fortunately, newer safety systems are in place to work alongside them and increase protection. Society itself has also changed a great deal in this time. 
Modern locksmiths are trained to open all kinds of lock and give people access to their properties and vehicles if they find themselves locked out. They provide an important service and many of them can also offer security advice. If you need the services of experienced locksmiths in Northampton, we are here for you. From helping you boost security in your home to providing useful guidance, we can offer all the help you need.
How To Emulate 50s Recording Techniques

As musicians or music addicts, it’s hard not to fall in love with the unique sound of the ‘50s. Thanks to the introduction of electric guitars, electric bass guitars, and other instruments, the ‘50s defined music in their own way. Thus, it’s natural to be curious about how artists like Elvis Presley or Chuck Berry managed to record their music without today’s technology. In this article, we’ll be your guide to ‘50s recording techniques. Let’s dive in!

Overview of Music and Instruments in the ‘50s

A lot of things changed in various industries after World War II, including the music industry. People started to embrace and appreciate art and entertainment more, especially now that the tough reality of war was over. As a result, Rock and Roll was born. The appearance of this beloved genre, along with many new inventions, helped create the unique sounds that we love about the ‘50s. For example, that decade gave us Fender Strats and Teles, the forever-worshipped Les Pauls, and many more remarkable instruments. The P-Bass was also first introduced in the ‘50s. Then the Fender Tweed amp came out afterward. As for microphones, Electro-Voice’s collection of mics, with the model 950 as the flagship, was all the rage back then. As you can see, the ‘50s were a turning point in the world of music. Now, let’s take a look at how the top artists of that period recorded their unforgettable songs.

The 50s Recording Techniques and Gear

Ever wondered what makes the sounds of the ‘50s unique? Well, it all comes down to the gear and recording techniques that were used back then.

Defining ‘50s Gear and Sound

A lot of musicians, including Ricky Nelson, recorded their music using all-tube mics, recorders, and mixers. Even the routing technique had a major effect on the resulting sound. In the ‘50s, it was as simple as going from the mic, to the mixer, then straight to the recorder. 
Today, artists have a huge variety of routing possibilities thanks to DAWs. They can spend days in the studio perfecting their music by adding effects, enhancing different sounds to match their liking, and so on. We can’t say the same for the musicians of the ‘50s. Because their routing chain was so short, those artists could make only minimal changes to the sound. As a result, their music had more of a natural, raw touch to it. Another thing that had a huge impact on the Rock and Roll of the ‘50s is the fact that compression and EQ weren’t often used. This gave room for the records to have a large dynamic range, not to mention that the sounds stayed true to the original frequency spectrum. By contrast, music recorded in modern studios doesn’t feature such quirks or issues. In other words, the nuisances that today’s technology easily gets rid of are what gave ‘50s music its flavor. Proof of that last statement is the Ampex MX-10 tube mixer. This preamplifier was used in that period to raise the level of the microphone signal before it reached the tape. Back then, sound engineers didn’t like how it seemed to oversaturate the sound. However, musicians today would go to incredible lengths to get their hands on this classic mixer to achieve the desirable ‘50s vibes.

Understanding ‘50s Recording Techniques

Throughout the ‘50s, tape machines were responsible for recording music in every studio. Two of the most popular choices back in the day were the Ampex models 200 and 600, though some artists, like Ricky Nelson, chose the Ampex-C set. Ricky Nelson and Elvis Presley used a recording approach that defined music in that era: these artists used to Scotch-tape a mic to each musical instrument to catch the sound, including the drum kit. That technique was later used by the Beatles, too. Another thing that makes the sound of this decade unique is the effects that were used. For example, echo was achieved using units such as the Tubeplex Echoplex. 
Reverbs were also quite the discovery back then, created using the EMT 140 plate reverberator. These units could weigh up to 600 pounds, and they usually required entire rooms to house them.

Tips on How to Recreate the 1950s Sound Using Today’s Equipment

Just admit it: you’ve thought about emulating a ‘50s-sounding song at one point or another. Well, in this section, it’s time to channel your inner rocker and learn how to make that happen!

Step 1: Recreate the ‘50s Recording Atmosphere

The first thing you should do is focus on recording your band live instead of recording individual tracks to fuse later on. Next, you’ll have to follow the same steps that musicians of the ‘50s took to make their music. For example, the top studios back then, including Sun Studios and Abbey Road, hired top-notch artists who helped their songs become hits. Therefore, you’ll need to ensure that your musicians know exactly how to play and sing according to ‘50s standards.

Step 2: Choose Your Instruments Carefully

It’s a good idea to go for highly sensitive condenser mics, and even better if you can get your hands on classic tube amps. As for the guitar, we advise you to pick flatwound strings. These should easily capture the distinctively smooth vibes of the ‘50s. A lot of experts recommend using as few mics as possible when the band is playing together. This can help you achieve that scratchy sound quality that we all appreciate in a ‘50s song.

Step 3: Don’t Overproduce Your Song

Take care to record only one or two tracks to stick to the original recording techniques of the ‘50s. When the tracks are ready for post-production in your best DAW, be careful not to polish them too much. Instead, add only the effects that were popular back then, including reverbs and echoes.

Wrapping Up

Despite how primitive the musical equipment was compared to today’s technology, we all appreciate a good ‘50s song. 
The ‘50s recording techniques play a major role in that decade’s much-loved sounds. If you want to recreate the sounds of the ‘50s in your home studio, it may be easier than you think. Just focus on the live recording of your song, leave plenty of room for mic bleed, and keep post-production to a minimum.
Could water-saving "shade ball... [Image: The shade balls getting dispersed into the LA Reservoir. Source: Imperial College London] Fairly Reasoner They sound surprised. Ben Chernicoff They are completely missing the fact that they were most likely made somewhere that water was plentiful. The point wasn't to save a net amount of water nationally, it was to conserve water someplace it is scarce. Paul Muad'Dib Mistakes are part of progress. Roger Garrett It seems to me that the balls would have saved a lot more water from evaporation if they had been bright white or even silvery instead of black. The blackness of the balls no doubt absorbed a lot of solar energy, heating up the underlying water and making it evaporate even more. I'd like to know if any experiments were done comparing different colored balls to see which would be most effective in retaining water in reservoirs. I assume the balls are made somewhere else, and have a lifespan longer than one year... Ben, respectfully, if the point hadn't been to generate net water savings, water could have simply been transported to CA, avoiding the plastic and manufacturing expense altogether. This seems to be a clear case of someone not doing even a basic back-of-the-envelope materials balance calculation. Shouldn't the balls be white? I'd say they were black to be UV resistant and also still be "food safe". Some plastics might be UV resistant but then leach chemicals into the water etc... A food-safe plastic like PET in black would be the safest and cheapest option. @PaleDale I have heard the contrary, that many black plastics are not safe. Some of them are black because they contain recycled plastics of various colors, hidden in a black color. 
And with recycled plastics, it is hard to be sure none contains any dangerous product. First, like others said, they should have been white, to prevent evaporation due to added heat. Second, the water that was used to make the plastic doesn't just disappear into the ether... It either gets evaporated and rained down again, or dumped back into the water supply it came from.
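The disagreement in these comments really comes down to a materials balance, which is easy to sketch. Here is a back-of-the-envelope estimate in Python, using the approximate figures reported around the Imperial College study; treat every number as a rough, scenario-dependent estimate (the manufacturing water footprint in particular was reported as a wide range, with 2.9 million m³ at the upper end):

```python
# Back-of-the-envelope materials balance for the LA Reservoir shade balls.
# All figures are approximate, scenario-dependent estimates.
balls_deployed = 96_000_000          # shade balls released onto the reservoir in 2015 (context only)
evap_saved_m3_per_year = 1.15e6      # reported annual evaporation savings, m^3/yr
manufacturing_water_m3 = 2.9e6       # upper-end estimate of water consumed making the balls, m^3

# Years the balls must stay deployed before they save more water than they cost to make
breakeven_years = manufacturing_water_m3 / evap_saved_m3_per_year
print(f"Break-even at the upper estimate: about {breakeven_years:.1f} years")  # ~2.5 years
```

At the lower-end manufacturing estimates the balls pay for themselves within months, so whether they "waste" water depends almost entirely on where the plastic was made and how long the balls stay deployed, which is exactly the point being argued above.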
Glitches in the membrane

4 July 2016

Though the root causes of cancer may range from an infection to radiation exposure, it is the signalling behaviour of affected proteins and enzymes that directly produces the errors in replication control that eventually lead to tumours. Understanding these pathways, and the ways that proteins are activated, inactivated, or otherwise modulated, will give greater insight into the mechanics of cancer and, hopefully, provide more opportunities for treatment. At the University of California, Berkeley, Monatrice Lam is studying the activity of Ras proteins and their effect on cancer-related cell processes like proliferation and apoptosis, or cell death. Studies have shown that almost 30% of cancers have some sort of Ras mutation, highlighting the importance of understanding this protein in the fight against cancer. Ras proteins are ubiquitous in the body: they are found in all cell types and organs. Ras proteins belong to the group of proteins responsible for signal transduction, that is, they help send messages throughout the cell. If a mutation causes Ras to ‘switch on’ permanently, there will be no end to the signal for growth and division, leading to the growth of a tumour. “We’re looking at how early stages in cancer signalling can be mis-triggered.” To study the effects of Ras and better understand how best to combat potentially dangerous mutations, Lam and her team work with synthetic membranes on glass, allowing them to examine the Ras protein’s activities while protein-membrane interactions are preserved. By tethering proteins to the synthetic membrane on the glass plate, membrane proteins like Ras can be tagged for observation and studied with greater ease.

Son of Sevenless

Ras is activated by Son of Sevenless (or SOS), a guanine nucleotide exchange factor (or GEF) which drives Ras’s activation and subsequent conformational change. 
It appears that SOS is capable of activating thousands of Ras proteins, though the mechanism of this processive activation of Ras is still unclear. SOS is a guanine nucleotide exchange factor, meaning it mediates the exchange of guanosine diphosphate and guanosine triphosphate (GDP and GTP respectively, two molecules that act as an energy currency within the body). While there are a few guanine nucleotide exchange factors present within cells, SOS is the most ubiquitous, and its connection to Ras activity is critical to better understanding Ras-related mutations and their effects on oncogenesis. When SOS binds to Ras, it causes the guanine nucleotide bound to Ras to switch out, rapidly activating and re-activating Ras over and over again. While much research revealing the structure of SOS has been done, and its partnering with Ras is clear, the positive feedback mechanism displayed by SOS and Ras is still unclear.

Fluorescent imaging

To visualise the activity of Ras and SOS, Lam and her team have developed a fluorescent sensor that can specifically bind to activated Ras, thereby illuminating the activation patterns and behaviour with respect to SOS. Building on studies indicating that Ras nanoclusters are present on cell membranes, the team’s work suggests that such a nanocluster is likely to become activated all together by a single SOS molecule, resulting in a powerful signal. This signal even seems to be irreversible: once SOS binds and the processive action is set in motion, there appears to be no going back and undoing the signal, and it eventually triggers the entire cell. This activation is so thorough and rapid that it is thought of as a digital signal, with one SOS leading to rapid and complete activation, with no way to stop it. Because of this powerful activation sequence, it is the recruitment step of SOS that is most important. SOS has two Ras anchoring sites, the allosteric and the catalytic. 
When SOS binds to Ras at the allosteric site, the catalytic pocket of SOS is brought closer to other Ras molecules, allowing it to rapidly bind and activate one Ras protein after another. While the specifics are not completely understood, what is known is that once SOS binds, it has the potential to activate a huge number of Ras proteins on the cell membrane. In short, it is that first binding of Ras to SOS that needs to be controlled, as that is the tipping point for the eventual activation of possibly thousands of Ras proteins and the dangers that come with uncontrolled cell proliferation.

[Figure: a graphic visualisation of the lipid bilayer membrane]

Lam’s lab makes artificial membranes by creating supported lipid bilayers (or SLBs). The lipid bilayer forms the cell membrane in human cells. Lipids are a large group of molecules encompassing fats, waxes, some vitamins, and more. Each lipid is made up of two parts, a hydrophilic head and a hydrophobic tail. In forming a membrane, the lipids come together tail to tail, with the heads facing the exterior and interior of the cell. In the lab, Lam and her team create lipid bilayer vesicles, which are used to make SLBs on etched glass. The lipids can then easily be tagged with fluorescent molecules for easy identification and visualisation of the myriad processes that occur within, around, and across the cell membrane. While options for treatment are still speculative at this point, it is through a deeper understanding of our biological pathways that we can better arm ourselves in the fight against cancer.

After graduating from Sha Tin College, Monatrice attended Washington University in St. Louis (Wash U), where she majored in Chemistry and Physics and minored in Psychology. During the four years of her undergraduate studies, she was a freshman Resident Advisor (RA) for two years and participated in the St. Louis Area Dance Marathon to raise money for children’s hospitals. 
100 Island Challenge

RESEARCH GOALS

A priority of research and management is to improve our ability to forecast changes in coral reef ecosystems and to provide advice on means to slow or reverse the loss of corals and other reef builders. We will establish a rigorous, standardized approach for assessing the structure and dynamics of coral reefs across the Pacific and, in collaboration with local partners, build these efforts into a coherent, long-term, hierarchical, and regional data collection program. The specific goals of this research project are to determine:

1. What are the independent and interactive effects of biophysical conditions on the structure of coral reefs?
2. How is reef structure influenced by exogenous and endogenous conditions across spatial scales (from mm to km)?
3. How do scale-dependent factors influence reef dynamics through time (e.g., coral growth and reef accretion)?
4. How can these insights be applied to develop meaningful tools for forecasting regional reef change under future local management and global climatic scenarios?

PROJECT SUMMARY

Coral reefs cover less than 1% of the Earth’s surface, yet are estimated to support 25% of marine biodiversity. For the hundreds of millions of people living adjacent to coral reefs, this productive ecosystem provides important shoreline protection and critical food security. Despite this high societal value, a combination of local anthropogenic influences and global climatic changes is altering the structure and functioning of reef ecosystems. The goal of the 100 Island Challenge is to gain a holistic understanding of the current state and future trajectory of the world’s coral reefs by conducting a global assessment of coral reefs and the factors promoting or inhibiting their growth. This project is designed to provide a regional-scale perspective of coral reefs, investigating spatially explicit patterns in community organization through time. 
Coral reefs spanning multiple ocean basins will be studied, with islands chosen evenly across each subregion. Reef community organization will be assessed across spatial scales, including the individual scale (<1-10 m2), the site scale (100s of m2), the island scale (10s-100s of km2), and the basin-specific regional scale (1-10 million km2). Standard methods of in situ data collection will be complemented by novel photomosaic techniques providing spatially explicit and archivable records of reef benthic structure, from scales of mm2 to 100s of m2. An intensive field campaign will enable replicated imaging of reef community structure, and repeated sampling will provide insights into reef dynamics through time.
[Image: Coral taxonomy organized by color.]
[Image: Coral Reef Time Series from Palmyra Atoll.]
Ever since Angelina Jolie shared with the world her BRCA status and her choice to undergo a prophylactic mastectomy, a growing number of women have asked: “Should I have BRCA testing too?” BRCA1 and BRCA2 are the genes that cause the two most common hereditary breast and ovarian cancer syndromes. Currently, the recommendations for genetic testing for BRCA1 and BRCA2 are limited to those with a strong personal and family history of associated cancers, or to those of particular ethnicities known to be at increased risk of carrying a mutation, such as people of Ashkenazi (Eastern European) Jewish descent. JScreen, a non-profit, community-based public health initiative dedicated to preventing Jewish genetic diseases, is engaged in a study aimed at increasing awareness about and access to BRCA testing for Ashkenazi Jewish people. The study, referred to as the Program for the Evaluation of Ashkenazi Jewish Cancer Heritability (PEACH) BRCA study, is looking at hereditary breast and ovarian cancer risk specifically caused by a BRCA1 or BRCA2 mutation. The study is focused on the Jewish population because individuals of Ashkenazi Jewish background are about 10 times more likely to carry a BRCA mutation than the general population. This is significant, as having a BRCA mutation increases the lifetime risk for breast cancer to as high as ~70%, and ovarian cancer risk to as high as 44% with BRCA1 and 17% with BRCA2, compared to just a 1.3% lifetime risk of ovarian cancer in women without a mutation. The purpose of the PEACH study is to evaluate the utility of a BRCA education and screening program for men and women of Ashkenazi Jewish background who did NOT meet National Comprehensive Cancer Network (NCCN) guidelines at the time the study was initiated. NCCN is an alliance of leading cancer centers devoted to patient care, research, and education that publishes guidelines on best practices for genetic testing and medical management of hereditary cancer syndromes.
To be considered for the study, participants have to meet all of the following criteria:
• Have at least one Ashkenazi Jewish grandparent
• Be at least 25 years old
• Reside in the metro-Atlanta area
• Have not had BRCA testing in the past
In the PEACH study, genetic testing is done via gene sequencing, meaning that it looks for hundreds of disease-causing (pathogenic) mutations across the entire length of both genes. This is considered diagnostic testing and is more comprehensive than at-home genetic testing for BRCA. All individuals tested receive their results and post-test genetic counseling over the phone from a certified genetic counselor. Testing is free of charge for all those enrolled. The study hopes to enroll 500 participants before July 2020.
How Does this Relate To PGT?
Individuals who are found to have a BRCA1 or BRCA2 mutation have a 50% chance of passing it, and its associated cancer risks, along to each child. This risk is the same whether it is the mother or the father who carries the mutation. IVF with PGT, however, allows individuals to start or grow their families without the concern that their children will inherit the mutation, by selecting only embryos that do not carry the familial mutation. Genetic counselors at the PEACH BRCA Study mention PGT as a reproductive option to everyone who tests positive for a mutation.
“Participants with BRCA mutations who are in their reproductive years are often concerned about the health of their future children. Our genetic counselors take the time to speak with them about using IVF with PGT when planning for their future families.” – Esther Rose, LCGC, PEACH BRCA Study
The Sharing Healthy Genes Resources page may be a helpful place to start for anyone wanting to learn more about this option.
Future Directions
The PEACH BRCA Study wants to shed light on the true BRCA mutation rate for people with Ashkenazi Jewish ancestry. Current sources suggest that the rate is 1 in 40 (compared to 1 in 500 in the general population).
But those studies have mostly concentrated on people who meet genetic testing criteria, meaning those who are already at higher risk. This study hopes to get a better sense of what this number truly is for all individuals of Ashkenazi Jewish descent, whether or not they have a family history of BRCA-related cancers. The study is also addressing issues of scalability, with the hope of expanding a program like this nationwide. Finally, through post-test genetic counseling, the study wants to ensure that people who have a mutation understand all of their prevention, treatment, and reproductive options, including IVF with PGT. To learn more about The PEACH BRCA Study, check out their brochure.
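The carrier-rate and inheritance figures quoted in this article can be sanity-checked with a few lines of arithmetic. This is purely an illustrative sketch using the rates as quoted here (1 in 40, 1 in 500, and a 50% chance of transmission per child); the variable names are ours, not the study's.

```python
# Carrier-rate estimates quoted in the article.
ashkenazi_carrier_rate = 1 / 40   # ~1 in 40 for Ashkenazi Jewish ancestry
general_carrier_rate = 1 / 500    # ~1 in 500 for the general population

# Relative likelihood of carrying a BRCA mutation.
relative_risk = ashkenazi_carrier_rate / general_carrier_rate
print(f"{relative_risk:.1f}x")    # 12.5x, consistent with "about 10 times"

# Each child of a carrier has a 50% chance of inheriting the mutation,
# so the chance that at least one of n children inherits it is 1 - 0.5**n.
for n in (1, 2, 3):
    print(n, "child(ren):", 1 - 0.5 ** n)
```

Note that the quoted 1-in-40 versus 1-in-500 rates actually imply a 12.5-fold difference, which is why the article's "10 times" figure is a rounded-down approximation.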
Since women and men in Africa wear many kinds of beads around their waists, hips, necks, and arms, these beads are produced in different shapes and with charms, including glass stones and gemstones. The beads come in many colors, such as black, yellow, green, purple, and orange, and each color has its own meaning and effect on the wearer. African waist beads are treated as traditional accessories and jewelry, and the beads are strung on wire.
History of African waist beads
The history of beads stretches far into the past. In ancient Egypt, women wore these beads for the seduction of men; in the past, waist beads were also known as girdles. They were made famous by the Yoruba tribe of Nigeria. According to Google Arts and Culture, “Glass beads were introduced on the coast of East Africa by Arab and (from the 16th to the 18th centuries) Portuguese traders, and reached southern Africa in small quantities through trade.” In Africa, women and men wear these beads at every ceremony and festival, because it is now their custom. According to the African Facts organization, “Jewelry in Africa is seldom just ornamental; rituals, religion, and ceremonies play a large part.”
Black Beads Benefits
African people use black beads in their waist beads, bracelets, and malas. In African tradition, black is a magical color believed to have many benefits for health. Black beads in their different ornaments symbolize wisdom; black is seen as a mysterious color that shows power, authority, and seriousness. Black beads are prestigious to those who wear them.
Black Beads Meanings
Every color has its own powerful impact on emotions and feelings. When choosing beads, one must consider the color as well as the shape, because the color of the beads has a long-lasting effect. Like all colors, black is used in Africa for traditional jewelry. To African people, black beads are holy and mysterious.
The following are some meanings of black beads.
• Power: Black beads symbolize power, and are believed to increase the wearer's power. Intellectual people wear black beads to enhance and display their intellectual power.
• Protection: Black contains no color or glow; it is like water. According to African belief, black beads protect the wearer from bad spirits, negative emotions, and false deeds. Pregnant women also wear black beads during pregnancy to protect their children.
• Negative Energy and Confusion: Black beads in a mala are believed to repel negative energy and spread positive vibes. They also make everything clear and guard against distraction and confusion; in fact, a black-beaded mala is sometimes called a "confusion killer."
• Growth and Strength: Newborn babies in Africa also wear black beads around their waists and hands, because the beads are believed to enhance growth and increase strength; they are also said to block bad emotions for the newborn. Men wear black beads to show their strength.
• Self-control and Stability: Black beads are believed to increase self-control and keep the wearer from false deeds. They increase stability and resolve, and people use them to control and limit themselves.
• Fertility and Intimacy: Women wear black beads to attract their men, because black beads are believed to increase fertility and intimacy. Men also wear black beads to increase intimacy, and the beads are said to deepen love in relationships.
Usage of Black Beads
African people wear black beads around the waist, which is believed to increase intimacy, fertility, and stability. Some wear black beads around the neck in the form of a mala, which shows knowledge and power. Some wear black beads around the wrist as bracelets, which are said to enlighten and bring peace. Black beads can also symbolize death and evil; black is a very bold and heavy color.
Black beads are also considered holy, elegant, powerful, and heavy. They represent seriousness, luxury, and death. They are said to kill every false notion and deed, and to banish every bad spirit and evil. If a person wears black beads, the beads are believed to protect him from every misfortune and also to increase his power, strength, self-control, stability, resilience, growth, and happiness.
Glossary of the terms used.
Ampere (amps): A measurement of the amount of electric current.
Circuit: A circular path in which electricity travels.
Current: The movement or flow of electricity.
Distribution wires: Wires that carry electricity from power lines into homes and businesses.
Electrical Energy: The energy associated with electric charges and their movements.
Electricity: The flow of electrons.
Electron: The basic particle that orbits the nucleus of an atom. The flow of electrons produces electricity.
Energy: The ability to do work. People get energy from food. Your toaster and your washing machine get their energy from electricity.
Fluorescent bulb: A bulb that produces light by passing electricity through a gas-filled tube.
Fossil Fuels: Fuels, such as coal, oil, and natural gas, formed from the remains of ancient plants and animals.
Fuel cell: A device that converts the chemical energy of a fuel directly into electricity.
Generator: A machine that converts mechanical energy into electrical energy.
Generating Capacity: The amount of electrical power a power plant can produce.
Geothermal energy: Energy that is generated by converting hot water or steam from deep beneath the Earth's surface into electricity.
Grid: The layout of an electrical distribution system.
Hydroelectric Power Plant: A power plant that uses moving water to power a turbine generator to produce electricity.
Incandescent bulb: A bulb that produces light by heating a thin metal filament with electric current.
Kilowatt-hour: One kilowatt of electricity produced or used in one hour.
Load: The power and energy requirements of users on the electric power system in a certain area, or the amount of power delivered to a certain point.
Megawatt: 1,000,000 watts of power, or 1,000 kilowatts.
Nonrenewable fuels: Fuels that cannot be easily made or "renewed." We can use up nonrenewable fuels. Oil, natural gas, and coal are nonrenewable fuels.
Nuclear Energy: Energy released by splitting the nuclei of atoms.
Peak Load Plant: A power plant that runs only during periods of highest demand.
Power plant: A place where electricity is generated.
Renewable Energy Sources: Fuels that can be easily made or "renewed." We can never use up renewable fuels. Types of renewable fuels are hydropower (water), solar, wind, geothermal, and biomass.
Solar energy: Energy from the sun.
Transformer: A device used to increase or decrease electricity's voltage and current.
Voltage (volts): A measure of the pressure under which electricity flows.
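To make the units in the glossary concrete, here is a minimal worked example relating volts, amps, watts, and kilowatt-hours. The appliance numbers are made up for illustration; only the unit relationships (power = voltage × current, energy = power × time) come from the definitions above.

```python
# Power (watts) = voltage (volts) * current (amps).
voltage_v = 120   # typical household outlet voltage (example value)
current_a = 10    # current drawn by a hypothetical appliance

power_w = voltage_v * current_a   # 1200 watts of power

# Energy (kilowatt-hours) = power (kilowatts) * time (hours).
hours = 3
energy_kwh = power_w * hours / 1000   # 3.6 kilowatt-hours

print(power_w, "W;", energy_kwh, "kWh")
```

So a hypothetical 1,200-watt appliance running for three hours uses 3.6 kilowatt-hours, the unit a residential electricity bill is metered in.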
Biblical Paleoarcheology
The Divine Right of Kings: Dynasty
Article #41
What in the world is a dynasty, and where did the idea come from?
Dynasty: “a race or succession of kings of the same line or family, who govern a particular country.” –Webster’s 1828 dictionary
The United States of America was the first country to make a major break from the dynastic system, which had been in place for four thousand years. We elect a new president every four years; only the court system has life tenure. King George III, who ruled over the American colonies at the time of their independence, was on the throne for 59 years, even though near the end he was insane and unfit to rule. George III took the throne after his grandfather George II died. He was one of five kings who ruled during the Georgian era, which lasted 116 years. These all received their authority to rule from the Electorate of Hanover in Germany, which was part of the Holy Roman Empire. It was definitely a top-down hierarchy, with the Vatican claiming ultimate authority over the Holy Roman Empire.
“And when Athaliah the mother of Ahaziah saw that her son was dead, she arose and destroyed all the seed royal” –2 Kings 11:1
This story from the Old Testament illustrates how deeply the idea of dynasty was accepted by Jews as well as gentiles. The Davidic covenant, in 2 Samuel 7:11-16, was the promise of a dynasty: “And when thy days be fulfilled, and thou shalt sleep with thy fathers, I will set up thy seed after thee, which shall proceed out of thy bowels, and I will establish his kingdom.” (vs 12) Good kings reigned in Jerusalem for 162 years, from the time of David until Jehoram married Athaliah, the daughter of Ahab and Jezebel. Her son then ruled for one year, until he died. She thought that she had killed all the other royal descendants of David when she took over the throne herself.
“And he brought forth the king’s son [Joash], and put the crown upon him, and gave him the testimony; and they made him king, and anointed him; and they clapped their hands, and said, God save the king.” –2 Kings 11:12
Faithful people, however, saved Joash alive, the last remaining descendant of David, who was able to regain the dynasty for the Lord. So we see that God can work through a godly dynasty, but the danger of its corruption is great, and it is almost impossible to restore goodness to an evil dynasty. Ungodly, pagan dynasties have assured the continuation of evil for most of history. Egyptian history is divided into 31 dynasties that were ruled by pharaohs. Eleven pharaohs, for instance, bore the family name of Ramesses. Assyria, Babylon, Persia, Greece (after Alexander the Great), and Rome were all dynastic for the most part. As we saw in article 40, twenty dynasties are supposed to have existed before the first dynasty of Babylon. The first of all these dynasties was Eridu (Babel), ruled by Gilgamesh (Nimrod). The fact that the Sumerian King List is divided into dynasties shows that the dynastic concept was strong even at the time that Abraham left Ur of the Chaldees. I must ask the question, “Why would the first king think it religiously important to pass the throne to his son?” A king could very well reign without setting up the next generation. Dynasty became an institution at Babel, and I can guess two reasons why it was established. First, it grew out of profound respect for ancestors. Genesis 5 gives a lineage of ten ancestors from Adam to Noah. Since these patriarchs were highly respected, Ham, having been rejected by his father, probably had a strong motivation to make his lineage successful and famous. Secondly, Satan has used the dynastic practice to keep evil rulers in power for as long as possible. The term Lord of lords, referring to Jesus Christ, has even more meaning when we consider the dynastic system.
Satan has attempted to assure the permanence of his rule through kings, tyrants, and dictators. Even today, repressive regimes such as North Korea, Iran, China, Russia, and many Muslim countries are anti-Christian. That will soon end. It is interesting that in the Book of the Revelation, the appearance of Babylon precedes that of Jesus Christ. In Revelation 17:5, we see a woman with the words “MYSTERY, BABYLON THE GREAT, THE MOTHER OF HARLOTS AND ABOMINATIONS OF THE EARTH” written on her forehead. This represents the long-enduring institution of human kingdoms. In Revelation 19:16, we see Jesus Christ returning on a white horse, with the words “KING OF KINGS, AND LORD OF LORDS” written on his thigh. In Revelation 19:19, the kings of the earth make war against our Lord, and He defeats them. Unlike the kingdoms of this world, Jesus will continue the dynasty of David, and “of his kingdom there shall be no end” (Luke 1:33, Isaiah 9:7).
2 thoughts on “The Divine Right of Kings: Dynasty”
1. Never considered that Babel and Nimrod were the beginning of kings and dynasties. Interesting. The Revelation Babylon isn’t the last ‘Kingdom’ or reign prior to Jesus’ reign … between the two is the reign of the antichrist.
1. I’m finding new things even as I write. Interesting point, but wouldn’t you consider the fall of Babylon in Rev. 18 to be the end of the reign of the antichrist, and Rev. 19 and 20 as the beginning of Christ reigning for a thousand years? “They came to life and reigned with Christ for a thousand years” (Rev 20:4). Or are you referring to Satan’s release after a thousand years?
Farmer Cooperatives, Not Monsanto, Supply El Salvador With Seeds
In the face of overwhelming competition skewed by the rules of free trade, farmers in El Salvador have managed to beat agricultural giants like Monsanto and DuPont to supply local corn seed to thousands of family farmers. Local seed has consistently outperformed the transnational product, and farmers have helped develop El Salvador’s own domestic seed supply, all while outsmarting the heavy hand of free trade. This week, the Ministry of Agriculture released a new round of contracts to provide seed to subsistence farmers nationwide through its Family Agriculture Program. Last year, over 560,000 family farmers across El Salvador planted corn and bean seed as part of the government’s efforts to revitalize small-scale agriculture and ensure food security in the rural marketplace. Drought conditions across the country made access to seed all the more vital for rural livelihoods; the seed packets supplied through the government program became the primary means for thousands of families to put food on the table. In 2015, rural cooperatives and national associations will produce nearly 50% of the government’s corn seed supply, with 8% coming from native seed, a record high. In the Lower Lempa, where seven farmer organizations have produced corn seed since 2012, this means over 4,000 jobs and income for rural households, primarily employing women and young adults. The public procurement of seed, that is, the government’s purchasing power through contracts, signifies over $25 million for a rural economy still struggling to diversify and gain traction. The success of locally bred seed varieties, combined with their low production costs, allowed the Family Agriculture Program to contribute to historically high nationwide yields for corn and beans. Last year, more farmers produced more corn and beans at the most efficient yield per acreage than in any other year over the last decade.
This has also led to a significant adjustment in El Salvador’s trade balance on corn: imports of white corn in 2014 were a full 94% less than in 2011. Producing seed locally was no small feat. It involved savvy farming techniques, better business practices, and advocacy. It also required a government willing to take a critical look at the transnational agribusiness model that dominates the farming sector the world over. The previous administration under Mauricio Funes understood this model and its impact on a relatively small agricultural market like El Salvador’s. It also understood how to break these cycles of dependence on foreign agribusiness, and simultaneously build a more robust private sector, through the power of public procurement. In answering this call, growers’ associations, categorized as small or medium-sized enterprises, faced a steep learning curve in providing seed that met government standards, including germination rates, yield rates, and packaging. They also had to conform to government contracting guidelines, a task that proves difficult to navigate for many small and medium-sized enterprises. Throughout this process, EcoViva and partners at the Mangrove Association labored to prepare local cooperatives to successfully bid for and execute these contracts for corn seed. Our efforts paid off: in 2014, El Salvador successfully sourced quality seed from 16 national enterprises. Over 20% of corn seed originated in local cooperative fields in the Lower Lempa region, and participating families saw their annual income double, while saving the government hundreds of thousands of dollars by providing affordable seed. In 2015, that number has risen to nearly 50%. Despite these successes, some questioned the validity of Salvadoran businesses providing seed.
In 2013 and 2014, the United States Trade Representative and the Interagency Trade Enforcement Center circulated an annual report that cited concerns about government purchases, including seed, under the Central American Free Trade Agreement (CAFTA). Coinciding with these reports, the American Chamber of Commerce in San Salvador complained in the press that its members were being denied contracts for seed, and that Salvadoran farmers were being denied a superior product. These members included Monsanto, DuPont, and Pioneer, whose affiliates had provided seed to the Salvadoran government in the past. CAFTA Chapter 9 outlines the standards for how contracting governments, such as El Salvador’s, can purchase goods and services. It sets the rules for open, competitive, and transparent contract approval. It also stipulates that governments cannot discriminate against foreign businesses. During the period questioned by the USTR, the government of El Salvador ironically conducted a contract process that allowed more businesses to provide a better product at a cheaper price. Prior to 2013, the Salvadoran government bought 70% of its annual demand from a Monsanto affiliate, purchasing a seed variety with no field trial validation and at a price over double that offered by local seed producers. In 2014, EcoViva and allies proved that the Salvadoran government denied this affiliate a contract because its seed was expensive and lacked proper field trials, not because it was a foreign company. Nevertheless, in 2014, the United States threatened to deny foreign aid to El Salvador unless it opened its seed contracts to foreign businesses, then stepped back when its power to use foreign aid as leverage on free trade standards was publicly questioned. Today, the United States says that it supports El Salvador’s current contract process on seed, a process in which national seed producers continue, as before, to offer a better, more competitive product.
Local seed producers like the Mangrove Association and cooperatives in the Lower Lempa can guarantee the government of El Salvador seed varieties that have better yields and lower prices than what is found in the transnational agribusiness market. Salvadoran businesses have learned to compete for and win government contracts, which allows small and medium sized enterprises to innovate and employ hundreds of people in rural communities. Improving the rural economy is critical for these areas, such as the Lower Lempa, that have high rates of unemployed young adults fleeing to the United States in search of jobs and opportunities. National cooperatives and businesses have also helped to protect El Salvador’s own seed lineage, and reduce the quantities of harmful chemicals applied daily to Salvadoran soil. It’s initiatives like these that provide a way forward for El Salvador and its domestic economy in a globalized trade environment.