10 Tips For New Runners
Running is one of the most popular forms of exercise across the world, including in the United Kingdom, where estimates suggest that around 6 million people run at least once a week. There are many reasons for which running is the exercise activity chosen by these people – from the fact that it doesn’t cost anything to participate, to being able to run outside, inside, on your own or in a group. It’s also a great way to lose or maintain weight, build muscle and improve your overall health without the need for extreme diets or difficult or expensive exercise plans.
You’ve probably heard of the ‘Couch to 5k’ app – a mobile app designed for beginner runners, containing training plans that gradually progress towards a 5 kilometre run over the course of nine weeks. The last few years have seen a 92% increase in the number of downloads of the Couch to 5k app, showing just how many people are incorporating running into their weekly physical activity. The app is regularly promoted by the NHS as a great tool for getting people running and improving their physical and mental health.
Jump to:
Why running is important
Running may not seem particularly fancy or exciting to anyone watching, but it is one of the best forms of exercise for both physical and mental health. Here are just a few of the reasons why running is important.
Running for cardiovascular health
The benefits of running for cardiovascular health are widely reported, particularly by charities and organisations that focus on cardiovascular disease, such as the British Heart Foundation. Cardiovascular disease is a general term used to describe conditions affecting the heart and blood vessels, and includes:
- Coronary heart disease
- Angina
- Heart attack
- Hypertension (high blood pressure)
- Stroke
- Vascular dementia
Running for at least 10 minutes every day has been shown to significantly lower your risk of cardiovascular disease. Research reported by the BHF and published in the British Journal of Sports Medicine found that participants who ran regularly could have their risk of early death from any cause cut by a quarter, while their risk of early death from heart and circulatory conditions dropped by 30%.
Sleep better
Getting a good night’s sleep is essential for your overall health, and regular aerobic exercise such as running can help you fall asleep more quickly and sleep more deeply. Sleep matters because your body repairs itself while you sleep. Failing to get enough sleep can lower your immune system, delay recovery from illness or injury and cause cognitive problems such as difficulty concentrating and poor memory. It can even lower your mood, making you more prone to anxiety and depression.
Reduce your stress levels
Aerobic exercise is just as good for your head as it is for your heart and overall health. Exercise reduces the amount of the stress hormone, cortisol, in the body, while also prompting the release of endorphins – chemicals that are natural painkillers and mood elevators. It’s the endorphins that create the ‘post-exercise’ high that many people experience after physical activity, including running.
Tips for beginners to start running
Here are our top ten tips for beginners to start running.
1. Invest in the right equipment
Granted, you don’t need any actual equipment to start running, but you do need a good pair of running shoes. While there’s nothing to stop you starting out in your usual trainers, if you’re going to be running regularly, it’s essential that you invest in running shoes suited to your foot shape and running style. Getting the right pair can prevent unnecessary pain and even injury. Comfortable, breathable and supportive running gear can be hugely beneficial, and women should also invest in a good-quality sports bra to make their running sessions more comfortable.
2. Create a training plan
Whether you want to download Couch to 5K or run your own way, having a training plan in place will keep you on track, both with your goals and with the prevention of any injuries. In the early stages it can be tempting to push yourself as hard as you can, but it’s important to build up slowly to avoid injury or the need for prolonged recovery periods between runs. You can always adjust your training plan as you progress.
3. Choose a routine that you can stick to
When you create your training plan, make sure that you choose a routine that you can stick to. This will help you to stay consistent with your running, which is important for making it part of your day-to-day life. Before you know it, your runs will be a habit that you can’t imagine life without.
4. Start with a combination of running and walking
The Couch to 5k app starts with something called interval running. Interval running is particularly good for running beginners who are trying to build their stamina and who cannot run for more than a short distance without experiencing pain or severe breathlessness. During interval running, brisk walking is interspersed with short bursts of jogging/running, starting at 30 seconds. For example, 60 seconds of walking followed by 30 seconds of running, followed by 60 seconds of walking, and then repeat for the duration of your run.
5. Track your runs
Nothing is more motivational than seeing your progress, so we strongly recommend that you track all of your runs. There are many different apps that enable you to do this including Strava and MapMyRun. Many of these apps enable you to link up with others and see their runs as well, meaning that later into your running journey, you can get competitive if you choose to.
6. Practise your breathing
Breathlessness is a common problem among running beginners, and learning the correct breathing technique could totally transform your performance. Focus on your breathing when you run, training yourself to breathe from your belly rather than your chest. Inhale through your nose and breathe out through your mouth. Remember your posture too, as the straighter your spine, the more oxygen you can take into your lungs.
7. Listen to music
Running is as much a mental activity as a physical one. If you’re finding running particularly hard, it can be tempting to give up. However, listening to music can distract you from the physical exertion and help to keep you focused. Upbeat music can be particularly motivational. However, be careful not to turn your music up too loud and ensure that you’re alert and aware of what’s around you.
8. Stay hydrated
You should make sure that you drink plenty of water before, during and after your run. Water regulates your body temperature, helps bring energy to your cells, removes waste and cushions your joints – all things that are essential for running. Drinking enough water can improve the rate at which you recover, minimise your risk of injury and even maximise your performance.
9. Warm up and cool down for every run
When time is tight, it can be tempting to overlook the need to warm up and cool down before and after each run. However, failing to do so could increase your risk of injury, compromise your performance and slow down your overall recovery, meaning it could take longer before you’re ready for your next run. More on this shortly.
10. Consider joining a running club
There are countless running clubs all over the UK, many of which welcome or are dedicated to beginner runners. They can provide you with a supportive network of friends of all ages and backgrounds who will champion you through your running journey. Having people to run with regularly could also help to keep you motivated and striving for your personal best!
The importance of warming up and post-run recovery
No matter what distance you’re running, all of your runs should begin with a warmup and end with a cool down, and it’s crucial that you make time for these in your running training plan. These simple exercises take just a few minutes but could make a big difference to your running.
Why do I need to warm up before running?
Warming up properly dilates your blood vessels, enabling oxygenated blood to flow to your muscles, raising their temperature and improving the efficiency with which they’ll work, as well as your overall flexibility and range of motion. Warming up your muscles will also reduce the likelihood of experiencing damage, such as overextension or tearing. Raising your heart rate slowly is also much safer than a rapid acceleration as it puts less stress on your cardiovascular system.
Why do I need to cool down after running?
Cooling down properly is just as important as warming up. When you perform cooling down exercises, it enables your heart rate and blood pressure to drop slowly and safely down to normal levels. It also helps your body to eliminate any lactic acid that may have built up in your muscles, and bring your body back into a natural, balanced state.
Running recovery aids
While cooling down forms an essential part of the recovery process after a run, there are also techniques that can be extremely beneficial to post-run recovery. One of these is the use of recovery aids like those offered by Pulseroll. Two of our most popular recovery aids for runners include our Vibrating Foam Roller and our Pro Massage Gun.
Our Vibrating Foam Roller uses performance-enhancing vibrations to improve general muscle health and recovery time following moderate to intense physical activity. The handheld roller makes it easy to perform self-massage, while the vibrations boost the flow of oxygenated and nutrient-rich blood, sending it to the muscles so that they can heal and recover faster. Regular use of a foam roller over time can reduce the risk of injury and improve long-term muscle health and performance. Here’s how to use the Pulseroll Foam Roller for recovery after a run.
Meanwhile, our Pro Massage Gun, which is trusted by some of the world’s greatest athletes, uses percussion therapy to support your muscle recovery. Our massage gun is a handheld tool that delivers strong pulses to the muscles it is placed on, replicating the benefits of conventional massage at a place and time of your choosing. Moving it around the muscles will reduce muscle tension and improve blood flow, helping you to recover more quickly. Here’s our guide on how to use your Pulseroll Pro Massage Gun.
Ready to get running and need more support with your recovery? Check out the Pulseroll website for more information.
|
In the United States, rates of poverty have varied over time and are higher for some groups of people than others. Imagine that you are in charge of reducing rates of poverty. You must first provide an evidence-based argument of the key sociological explanations for differences in poverty rates. Because you have limited resources, you must then pick one key sociological explanation to justify an intervention to reduce poverty rates. Provide an evidence-based argument for why your intervention focus will have the greatest impact on lowering poverty.
|
Important Teddy Roosevelt Letter of January 1918
PRESENTING A Very Important Teddy Roosevelt Letter of January 1918.
On ‘Sagamore Hill’ letterhead. Fully handwritten and personally signed by President Theodore Roosevelt.
Dated January 28th 1918.
With its original envelope, stamp and postage marks.
What makes this letter so important is the author, the office and the content.
It is addressed to Eliza Calvert Hall/Obenchain, who was a well-known author at the end of the 19th century and the early 20th century. In 1905, Teddy referred to her book ‘Aunt Jane of Kentucky’ in a speech and recommended that every man in America should read it to understand ‘the plight of their womenfolk’. He regularly corresponded with her, and we have a number of those letters in our collection. It appears that both he and Edith became big fans of Mrs Hall/Obenchain, who was also heavily involved in the Suffragist movement.
The letters also provide a fascinating and historic record of Roosevelt’s personal beliefs and feelings on female empowerment.
This letter reveals Teddy’s desire to meet the author in New York and refers to another important female author of the time, Alice French (Octave Thanet).
The letter is a folded double folio with writing on the outside and inside fold.
The Letter Reads:
Sagamore Hill
Jan 28th, 1918
“My dear “Eliza Calvert Hall”
(I can’t write you – or Octave Thanet – except as I naturally think of you!), if I had known you were in Washington I would have made all other engagements bend so that I could have seen you. I don’t know where you live in Texas; are you never coming to New York, so that we may have you out here?
Always yours,
Theodore Roosevelt
The envelope is addressed to: Mrs. Lida Calvert Obenchain, The Woodley, Washington, D.C.
It is stamped on the front as posted from Oyster Bay, New York Post Office on January 28th 1918.
It has a purple 3 Cent ‘George Washington’ Postal Stamp.
It has a return address noted on the front: “Please forward, or return to Theodore Roosevelt, Oyster Bay, N.Y.”
To read more about Mrs Obenchain, click on the following link:
Theodore Roosevelt Jr. (/ˈroʊzəvɛlt/ ROH-zə-velt;[b] October 27, 1858 – January 6, 1919), often referred to as Teddy or by his initials, T. R., was an American politician, statesman, conservationist, naturalist, historian, and writer who served as the 26th president of the United States from 1901 to 1909. He previously served as the 25th vice president under William McKinley from March to September 1901, and as the 33rd governor of New York from 1899 to 1900. Having assumed the presidency after McKinley’s assassination, Roosevelt emerged as a leader of the Republican Party and became a driving force for anti-trust and Progressive policies.
Roosevelt was a sickly child with debilitating asthma but partly overcame his health problems by embracing a strenuous lifestyle. He integrated his exuberant personality and a vast range of interests and achievements into a “cowboy” persona defined by robust masculinity. He was home-schooled and began a lifelong naturalist avocation before attending Harvard. His book The Naval War of 1812 (1882) established his reputation as a learned historian and popular writer. Upon entering politics, he became the leader of the reform faction of Republicans in New York’s state legislature. His wife and mother both died in the same night and he was psychologically devastated. He recuperated by buying and operating a cattle ranch in the Dakotas. He served as Assistant Secretary of the Navy under President William McKinley and in 1898 helped plan the highly successful naval war against Spain. He resigned to help form and lead the Rough Riders, a unit that fought the Spanish army in Cuba to great publicity. Returning a war hero, he was elected governor of New York in 1898. The New York state party leadership disliked his ambitious agenda and convinced McKinley to make Roosevelt his running mate in the 1900 election. Roosevelt campaigned vigorously, and the McKinley–Roosevelt ticket won a landslide victory based on a platform of victory, peace and prosperity.
Roosevelt assumed the presidency at age 42 after McKinley was assassinated in September 1901. He remains the youngest person to become President of the United States. Roosevelt was a leader of the progressive movement and championed his “Square Deal” domestic policies, promising the average citizen fairness, breaking of trusts, regulation of railroads, and pure food and drugs. He prioritized conservation and established national parks, forests, and monuments intended to preserve the nation’s natural resources. In foreign policy, he focused on Central America, where he began construction of the Panama Canal. He expanded the Navy and sent the Great White Fleet on a world tour to project American naval power. His successful efforts to broker the end of the Russo-Japanese War won him the 1906 Nobel Peace Prize. Roosevelt was elected to a full term in 1904 and continued to promote progressive policies. He groomed his close friend William Howard Taft to succeed him in the 1908 presidential election.
Roosevelt grew frustrated with Taft’s brand of conservatism and belatedly tried to win the 1912 Republican nomination for president. He failed, walked out, and founded the Progressive Party. He ran in the 1912 presidential election and the split allowed the Democratic nominee Woodrow Wilson to win the election. Following the defeat, Roosevelt led a two-year expedition to the Amazon basin where he nearly died of tropical disease. During World War I, he criticized Wilson for keeping the country out of the war; his offer to lead volunteers to France was rejected. He considered running for president again in 1920, but his health continued to deteriorate. He died in 1919. He is generally ranked in polls of historians and political scientists as one of the five best presidents.
Important Teddy Roosevelt Letter of January 1918
Condition: Very Good. Some discoloration (yellowing of paper) through passage of time.
Dimensions: Envelope is 6 inches wide and 4 inches tall
The Letter is: 5.75 inches wide and 7.65 inches tall.
PRICE NOW: $7,000
|
The Current State of Texas
Angelo Ledda ’24
Over this past weekend, something drastic happened to the state of Texas. A terrible snow storm hit Texas, causing deaths, the destruction of homes, the pollution of water, and many other disasters. Texas is usually an extremely warm state, so its houses were not built for cold conditions and its pipes are not insulated against the cold. This is why homes were destroyed, pipes were demolished, and water became polluted. The polluted water, the demolition of homes, and the breaking of pipes are leading to many injuries and deaths in the state of Texas.
The damaging of pipes led to an urgent need for help. Plumbers were constantly being called for and there was a lot of work to be done. Some houses actually needed to be re-piped completely or had immense amounts of damage done to them.
The New York Times stated in the article, What a Texas Plumber Faces Now: A State Full of Burst Pipes, “Some other companies had gotten so swamped that they stopped answering the phone at all. . . . But some houses will need major work, and may even have to be re-piped completely.”
(The New York Times)
This statement shows that the need for plumbers was immense and that some houses were horribly damaged by the winter storm that recently hit Texas. The destruction of pipes in Texas also led to polluted water. The pipes had burst because they had no insulation and the cold destroyed them. Millions of residents in Texas woke up on Monday without safe water to drink.
“Millions of Texans were waking up without safe drinking water Monday. . . . As of 8 a.m. ET, nearly 8.8 million people were still under boil water notices. . . . The commission said 260 boil water notices across the state have been rescinded as of Monday morning, but 120,000 people still had no water service at all.” This quote shows that clean water could not be provided due to the demolition of pipes and that many people had no water service at all. The winter storm that hit Texas was absolutely devastating and caused millions of people’s pipes to be damaged, which led to polluted and unhealthy water.
Millions of Texans Wake up Without Safe Drinking Water After Winter Storm
(NBC News)
|
In statistics, what is the difference between Bias and Error?
Can you say that bias is a type of error, or that bias is an error with some tendency?
•
$\begingroup$ I think they are different things. $\endgroup$
– SmallChess
Feb 2, 2015 at 10:24
• $\begingroup$ Bias relates to expectation while error measures deviation in the sample to population. $\endgroup$
– SmallChess
Feb 2, 2015 at 10:25
• $\begingroup$ Bias doesn't go to 0 asymptotically, $\endgroup$
– user603
Feb 2, 2015 at 11:29
• $\begingroup$ I have often seen "bias" used to describe differences from expectation that are systematic, and "error" to describe differences that are (or appear) random. $\endgroup$
– D L Dahly
Feb 2, 2015 at 14:25
• $\begingroup$ I learned that BIAS is how far off on the average the model is from the true. $\endgroup$
– Darwin PC
Feb 12, 2015 at 3:02
5 Answers
We can talk about the error of a single measurement, but bias is the average of errors of many repeated measurements. Bias is a statistical property of the error of a measuring technique. Sometimes the term "bias error" is used as opposed to "root-mean-square error".
•
$\begingroup$ RMS error inevitably includes any bias. $\endgroup$
– Nick Cox
Feb 2, 2015 at 11:39
The term error appears in several related (but not identical) contexts throughout science in general and statistical science in particular.
Error still carries the flavour of mistake (something erroneous), at least in the context of measurement error and particularly when scientists are thinking about their data. But its primary meaning in statistical science has long since been simply that of more or less uncontrolled variation (something erratic or errant). Sampling error, for example, refers to sampling variation, the uncontrolled and uncontrollable fact that different samples, responsibly taken, will include different data; hence in general any statistics (such as means, correlations, fraction blue) based on those samples will differ from sample to sample.
In simple regression-type models, error refers to individual disturbances in specifications such as
response variable $=$ function of predictors $+$ stochastic error
and error can refer more generally to the conditional distribution of the response variables given the predictors.
Bias refers to the difference between the true or correct value of some quantity and a measurement or estimate of that quantity. In principle it cannot be calculated therefore unless that true or correct value is known, although this problem bites to varying degrees.
• In the simplest kind of problem, the true value is known (as when the centre of a target is visible and the distance of a shot from the centre can be measured; this is a common analogy) and bias is then usually calculated as the difference between the true value and the mean (or occasionally some other summary) of measurements or estimates.
• In other problems, some careful method is regarded as the state of the art and so yielding the best possible measurements, and so other methods are regarded as more or less biased according to their degree of systematic departure from the best method (in some fields termed a gold standard).
• In yet other problems, we have one or more methods all deficient to some degree and assessment of bias is then difficult or impossible. It is then tempting, or possibly even natural, to change the question and judge truth according to consistency between methods.
The two terminologies can be made consistent with the idea that systematic measurement errors have non-zero means (hence their summary quantifies bias) and random errors have zero mean. (Equivalently, that is how we label error as systematic or random.)
In mathematical statistics, standard analyses examine whether particular estimators are biased in small samples, asymptotically, etc., either in general or under particular circumstances.
This sketch at times implies that error is defined additively, so that
measured value $=$ true value $+$ error
but that is just the simplest situation. Nothing here rules out the idea that error may be multiplicative rather than additive, or defined on more complicated scales (e.g. in measuring proportions or percents, error may be better considered on something like a logit scale).
Comments on erroneous and erratic here were inspired by discussions in Jeffreys, Harold. 1939/1948/1961. Theory of probability. London: Oxford University Press.
The difference between the two is not only semantic; one can also express the difference in a formula: the bias–variance tradeoff.
The following is the bias-variance decomposition as in Elements of Statistical Learning or the wikipedia page on bias-variance tradeoff:
$$ \text{MSE}(\hat{\theta}) = \text{Var}(\hat{\theta}) + \text{Bias}^2(\hat{\theta},\theta).$$
Where $\hat{\theta}$ is the estimator for $\theta$, $\text{MSE}(\hat{\theta}) = \mathbb{E}(\hat{\theta}-\theta)^2$ is the mean square error, $\text{Var}(\hat{\theta})=\mathbb{E}(\hat{\theta}-\mathbb{E}\hat{\theta})^2$ is the variance of $\hat{\theta}$ and $\text{Bias}^2(\hat{\theta},\theta)= (\mathbb{E}\hat{\theta} - \theta)^2$ is the bias (systematic deviation) of the estimator.
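The decomposition can be checked numerically. The sketch below uses my own illustrative choice of estimator (a shrunken sample mean, not one discussed in the answer) and confirms that the empirical MSE equals the empirical variance plus the squared empirical bias:

```python
import numpy as np

# Monte Carlo check of MSE = Var + Bias^2 for a deliberately biased
# estimator: the shrunken sample mean 0.9 * xbar of N(theta=2, 1) data.
rng = np.random.default_rng(0)
theta, n, reps = 2.0, 10, 200_000

samples = rng.normal(theta, 1.0, size=(reps, n))
theta_hat = 0.9 * samples.mean(axis=1)

mse = np.mean((theta_hat - theta) ** 2)          # E[(theta_hat - theta)^2]
bias_sq = (theta_hat.mean() - theta) ** 2        # (E[theta_hat] - theta)^2
var = theta_hat.var()                            # Var(theta_hat)

print(f"MSE={mse:.4f}  Var+Bias^2={var + bias_sq:.4f}")
# Analytically: Var = 0.81/10 = 0.081, Bias^2 = (-0.2)^2 = 0.04, MSE = 0.121
```

Note that the identity holds exactly for the sample moments (with the population-style variance `ddof=0`), while the agreement with the analytic values 0.081, 0.04 and 0.121 is up to simulation noise.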
From this identity we can see that, in the context of estimators,
• bias is an error because it is a component of the mean square error.
• not every error is a bias (unfortunately)
• (this is not related to the question) there might be biased estimators that can have a lower MSE than unbiased estimators although it is a nice property for an estimator to be unbiased.
What I present here is about the terms error and bias for estimators but I think the principles hold true for the words as they are used in statistics in general:
One can decompose error into a systematic and an unsystematic component. Bias is a name for the systematic error.
To put it succinctly, bias is the difference of the expected value of your estimate (denote as $\hat{\theta}$) with the true value of what you are estimating (denote as $\theta$).
$$E[\hat{\theta}] - \theta$$
Error is the difference of your estimate with the true value of what you are estimating.
$$\hat{\theta} - \theta$$
You can have a fantastic estimator which is unbiased, but still have error because your observed value of the estimator didn't get it exactly right.
• $\begingroup$ isn't error the absolute value of the difference between estimated and true value? $\endgroup$
– Agile Bean
Apr 17, 2020 at 8:30
Error means wrong, e.g. type 1 & 2 errors. Bias means shifted or straying from a true value, e.g. underreporting of alcohol consumption.
Sometimes error is used to refer to fundamental or unmeasured randomness, such as the error term in a regression model, or measurement error. In some cases, but not always, such error causes bias, but they are not exchangeable terms. Error will increase variance, however.
As an example, suppose families above median household income are 60% likely to vote Republican, and families below are 30% likely to vote Republican. The odds ratio is (0.6/0.5) / (0.3/0.5), or 2. However, suppose respondents on the survey misreport their income, so that 10% misclassify from low to high, and 10% misclassify from high to low - a typical problem when non-working household members respond to these surveys. The odds ratio becomes (0.6×0.9 + 0.3×0.1)/0.5 / ((0.3×0.9 + 0.6×0.1)/0.5), or about 1.7.
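The arithmetic in the example can be reproduced directly, using the same figures:

```python
# Misclassification example from the text: 60% of high-income and 30% of
# low-income families vote Republican; 10% of each group misreports income.
p_high, p_low = 0.6, 0.3

true_ratio = (p_high / 0.5) / (p_low / 0.5)        # 2.0

# Observed rates in the *reported* income groups after misclassification:
obs_high = 0.9 * p_high + 0.1 * p_low              # 0.57
obs_low = 0.9 * p_low + 0.1 * p_high               # 0.33
attenuated = (obs_high / 0.5) / (obs_low / 0.5)    # ~1.73

print(round(true_ratio, 2), round(attenuated, 2))
```

The measurement error biases the estimated association toward the null (2 shrinks to about 1.7) even though the misreporting itself is symmetric.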
• $\begingroup$ Time to bust out 2×2 table thinking: Both estimators and measures may have small bias and have small error, may have small bias and large error, may have small error but large bias (i.e. 'precisely incorrect'), and may have both large bias and error. $\endgroup$
– Alexis
Mar 1 at 16:57
•
$\begingroup$ @Alexis The main reason for this answer is that no answer considered Type 1 and Type 2 errors. "Error" actually does not have any rigorous definition because it's used in many senses, even in your case there are many forms of "noise as error" in modeling and measuring, and even that is only a snippet. Bias is more rigorously defined. $\endgroup$
– AdamO
Mar 1 at 16:59
•
$\begingroup$ You got my +1. :D Say, biostatistician, did you see my recent question about the sampling distribution of RR? Wanna keep up your habit of enlightening me? <3 $\endgroup$
– Alexis
Mar 1 at 17:04
• $\begingroup$ I agree with you insofar as I didn’t imagine that a question about error and bias was really about Type 1 and Type 2 errors, or indeed about other types of errors in inference. $\endgroup$
– Nick Cox
Mar 2 at 8:32
|
What is ATTyC?
ATTyC is a command line application which you can use to check the expressions (i.e. the bits in curly braces, directive attrs, ng-if etc.) in AngularJS template files are correct. It checks that each expression has valid syntax, and obeys all the type constraints you specify. For example, say you specify that variable n is of type number (we’ll cover how to do that shortly). If the expression n.toFixed(3 is found in your template, ATTyC will tell you it isn’t valid because of the lack of a closing bracket. Also, if ATTyC finds the expression n.concat([2]), it will tell you that you can’t call .concat on a number.
ATTyC needs to know about every variable you use in your template to effectively type check it. It needs to know, at minimum, the name and type of each variable. If you’re using the "controller as" syntax (and you should be), you should just need to give ATTyC the name and type of your controller. The way you do this is (loosely) inspired by the MVC Razor @model syntax. ATTyC looks for a comment at the top of the template, which we call the template’s "metadata". This metadata contains an EDN vector of maps, each containing the keys :name and :type. You can also add an :import key, which is the path relative to the template that ATTyC should import the type from. This comment block will probably look something like this:
<!-- [{:name "ctrl" :type "Controller" :import "./controller"}] -->
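For the metadata above to resolve, the `:import` path needs to point at a module exporting the named type. A minimal sketch of what such a controller module might contain (the fields and method here are hypothetical, not taken from ATTyC's documentation):

```typescript
// Hypothetical ./controller.ts matching the metadata comment above.
// ATTyC would check template expressions such as {{ctrl.count.toFixed(2)}}
// or an ng-click like ctrl.addItem('x') against these declarations.
export class Controller {
  count: number = 0;
  items: string[] = [];

  addItem(item: string): void {
    this.items.push(item);
    this.count = this.items.length;
  }
}
```

With a declaration like this in place, an expression such as `ctrl.count.concat([2])` would be rejected, because `.concat` does not exist on `number`.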
One thing to note is that although ATTyC does its best to find any bugs in templates, it isn’t intended to completely prove that your template is correct. As far as I’m aware, this is impossible with AngularJS because scopes are so dynamic – for example, a child scope can add properties to the parent dynamically. Nevertheless, we’ve found that ATTyC will catch 90% of bugs with your templates.
How does ATTyC work?
ATTyC is written in ClojureScript, which compiles to JS and runs on Node. This allows you to install it through NPM – just run npm install -g attyc, easy peasy. The basic flow is:
1. ATTyC reads your template file
2. It extracts the metadata
3. It then extracts all the expressions in the template
4. The expressions and the type declarations from the metadata are combined into a TypeScript source string
5. This string is run through the TypeScript compiler
Predictably, this is harder than it looks on paper. There are several reasons for this. Firstly, ng-repeat and ng-options have their own "special" syntax, which aren’t documented well. I found lots of very strange edge cases with ng-repeat. Just to give you one example, x in (y = 1) && [1,2,3] actually works, and you can use y in bindings both within and outside the ng-repeat block (wat).
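To make that edge case concrete, here is a hypothetical template fragment (my own illustration, not from the post) showing the behaviour described above:

```html
<!-- The assignment (y = 1) is evaluated as part of the repeat expression,
     so y becomes a binding that is usable both inside the ng-repeat block
     and elsewhere in the template. -->
<li ng-repeat="x in (y = 1) && [1,2,3]">{{x}} – {{y}}</li>
<p>Outside the repeat, y is still bound: {{y}}</p>
```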
Another problem was how different scopes interact in templates. Child scopes can add or mutate properties on their parent scopes, and vice versa. You can also write expressions that declare variables anywhere in a template, and these expressions can be run at any time. This can lead to completely unpredictable behaviour, which is in general impossible to statically analyse. To get around this, ATTyC does not allow variable declaration anywhere outside of ng-init, ng-repeat and ng-options attributes, even though this is technically possible in AngularJS. Also, all initialised variables are treated as global, even though they’re actually not.
An aside about Instaparse
ATTyC uses Instaparse to verify and extract expressions from templates. Originally, ATTyC used regexes for this, but ng-repeat and ng-options have their own grammars, and extracting TypeScript expressions from them is very difficult using regexes. Instaparse parsers are far easier to write, extend, and understand than regexes for this use case.
ATTyC also uses Instaparse to verify that the expressions themselves are correct. This basically involves writing a parser for the entirety of JavaScript, which is about 15 lines of code. This allows ATTyC to detect that expressions like [{y: 1] are wrong. (Note that if an expression in curly braces can’t be parsed, ATTyC errs on the side of caution and just ignores it).
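As a drastically simplified illustration of that kind of structural check – my own sketch, nowhere near ATTyC’s real Instaparse grammar – even a plain bracket-balancing pass is enough to reject the `[{y: 1]` example above:

```typescript
// Toy stand-in for a real expression parser: verify only that
// parentheses, brackets and braces nest correctly. The real grammar
// covers full JavaScript expression syntax.
function bracketsBalanced(expr: string): boolean {
  const open = "([{";
  const close = ")]}";
  const stack: string[] = [];
  for (const ch of expr) {
    if (open.includes(ch)) {
      stack.push(ch);
    } else if (close.includes(ch)) {
      const last = stack.pop();
      // Mismatched or missing opener means the expression is malformed.
      if (last === undefined || open.indexOf(last) !== close.indexOf(ch)) {
        return false;
      }
    }
  }
  return stack.length === 0;
}

console.log(bracketsBalanced("[{y: 1}]")); // true
console.log(bracketsBalanced("[{y: 1]"));  // false – '{' closed by ']'
```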
Does using ATTyC really help avoid bugs?
Future Work
• Performance – ATTyC could definitely be faster.
|
Cyber Liability
Cyber liabilities describe the risk exposures to a business which arise from the use of its computer network, the internet, data storage and any electronic data exchange. Contrary to common belief, it is not purely large multinational organisations or banks which are targeted, but also an ever-increasing number of private, small and medium-sized enterprises. Cyber risks are real and need to be taken seriously by the board of directors of every company.
Cyber insurance is designed specifically to help an organisation manage and control the impact of a cyber attack or data breach and get it back to business as usual as quickly as possible.
As well as protecting the firm financially if someone brings a claim arising from use of the internet, email, intranet, extranet or company website, cyber insurance can cover loss of customer data and the consequential losses and fines that may arise from this.
It can also cover the company’s own IT systems if they are attacked or damaged (for example by a virus, malware, ransomware, etc.), as well as the resultant damage to reputation and potential loss of profits associated with a cyber breach.
An important element of cyber insurance is the provision of immediate access to professional IT consultants and advisors in an emergency, as well as paying for forensic costs, public relations costs, and regulatory defence and penalty costs arising from cyber extortion.
|
The Surprising Mental Health Benefits of Swimming You Never Knew About
Last Update on April 2, 2021 : Published on April 2, 2021
Mental health benefits of swimming
When we talk of visualization for mental health, we often ask a person to visualize the seaside, a beach, or flowing water. The reason is that looking at water makes us feel calm, peaceful, and relaxed. If a mere vision of water can be so therapeutic for our mental wellness, imagine what submerging ourselves in water can do! Yes, I am talking about one of my favorite summer activities – SWIMMING!
In fact, I was recently exploring different activities that are found to boost mood, increase wellness, and restore calm, and the one activity that kept showing up on every list was swimming! This surely proves that swimming is a great activity to make a part of your life. But what is the magic behind swimming? Let us find out!
Surprising Mental Health Benefits of Swimming
1. Swimming Makes You FEEL HAPPY
The first mental health benefit of swimming is that it reduces the levels of stress and pain hormones in our body. That is not all – swimming also increases the level of happiness hormones (endorphins) in our body. This in turn reduces our perception of pain, giving us a sense of positivity and happiness.
2. Swimming Relieves STRESS
The major cause underlying stress is that we are never in the moment. Either we are constantly evaluating our future or regretting our past, and our technology and devices never help us break out of this cycle! Luckily, while you are submerged in water, engaged in swimming, you are simply in the moment. The focus remains on your movements and the rhythm of your body, and there is no device to distract you. This helps you break out of the loop that causes stress.
Scientifically, it is proven that swimming reduces the level of stress hormones in our body and helps in the growth of new brain cells which break down during chronic stress.
3. Swimming Boosts BRAIN BLOOD FLOW
Swimming improves brain health: a small study found that when the body is submerged in water, blood flow to the brain improves. This in turn has a positive impact on an individual’s mental well-being. Carter, in their study, found that immersion in water improves blood flow and hence memory, mood, and cognitive functioning.
4. Swimming Improves SLEEP
Exercise is known to improve your sleep. Swimming is a full-body workout that can help you sleep a lot better. Another reason swimming improves sleep is that it cuts down our stress levels; with stress levels going down, the quality of sleep improves. Swimming laps is likely to help you ward off the tossing and turning caused by bedtime anxiety.
5. Swimming Offers RELAXATION
Can there be a more relaxing activity than swimming? I bet there isn’t one! Swimming is nothing less than meditation: it cuts off contact with external stimulation, takes your mind off the worry zone, and has you practicing deep breathing. When you swim, you focus on the moment, putting a pause on daydreaming and helping yourself cool down.
6. Swimming Works as a SOCIAL ACTIVITY
Swimming also works as a social activity. It can be a great way to meet like-minded individuals, share experiences, and learn new swimming techniques together (trust me, it does happen). It is a sport with a social element to it. You can also make new friends at the pool or even meet them outside it, making swimming a social hub for you.
7. Swimming Reduces ANXIETY and DEPRESSION
Several studies and surveys have found a positive impact of swimming on the symptoms of depression and anxiety. In a study in which 4,000 swimmers across the world were analysed, three-quarters of them agreed that water-based activities helped release tension, and 68% of them said that being in water made them feel happier. Not only this, people suffering from depression who engaged in water-based activities like swimming reported an increased sense of self-esteem. Further studies in this direction have shown that swimming helps in reducing symptoms of depression, anxiety, and even dementia.
8. Swimming is Great For YOUR BODY Too
Your body also benefits from swimming. There are many research studies that have shown the positive side of swimming on the physical health of an individual. The infographic shared below shares some of the key benefits of swimming on your body.
9. Research on Mental Health Benefits of Swimming
Multiple extensive studies have been carried out on the positive relationship between swimming and mental wellness. We have picked our favorites to share with you.
Swimming and Mental Health: Research 1
In 2018, a YouGov poll by Swim England found that:
• 4 million adults in Britain significantly reduced the symptoms of anxiety or depression through swimming
• 3 million British adults with mental health problems swim at least once every 2-3 weeks.
When these groups were analyzed further it was found that:
• 43% feel happier.
• 26% feel more motivated to complete daily tasks
• 15% believe life feels more manageable
Swimming and Mental Health: Research 2
In a study conducted by Carter, participants were immersed in water up to their hearts. Data recorded showed that blood flow to the brain was higher compared to on land; blood flow to middle cerebral arteries increased by 14 percent and blood flow to posterior cerebral arteries increased by around 9 percent.
Swimming and Mental Health: Research 3
There is more research that supports a positive relationship between swimming and mental health. That is why we would recommend you to make swimming a part of your lifestyle.
Swimming, Mental Health, and FAQs
Q1. How often should I swim to benefit my mental health?
To enjoy the mental health benefits of swimming, we recommend swimming for 20-30 minutes two or three times a week. If you are comfortable with it, you may swim for longer than 30 minutes and on more days of the week. The best advice is to start slow and gradually increase the time you spend on this exercise.
Additionally, you may also consult your therapist or mental health professional before making swimming a part of your lifestyle.
Q2. What things to avoid when swimming with a mental health condition?
There are a few things that you should take care of if you opt for swimming as a mental health exercise to bring relief to your mental health condition. This includes:
1. Avoid triggering situations:
If you are uncomfortable with large crowds, or the opposite (a fear of being alone), avoid the respective trigger. To do this, you must first become aware of your triggers and then plan your swimming so that you can avoid them.
2. Avoid excessive swimming:
Anything in excess does more harm than good to both our body and brain. The same applies to swimming. Excessive swimming can have damaging effects like addiction, burnout, self-harm, and more.
3. Check for medication:
Check with your mental health professional about the medications you are on and ensure that they won’t impact your safety while swimming, to avoid any negative outcomes resulting from their side effects.
Q3. How to make swimming helpful when you have a mental health condition?
We understand that picking any activity and making it a part of your lifestyle especially with a mental health condition can be a huge task to work on.
To help you with this, we are sharing a guide that will help you make swimming a part of your life:
1. Ask yourself if you are ready to move out of the house and engage in swimming. It is only when you are prepared that other things will line up as well.
2. Instead of giving one hour each day of the week and getting exhausted, start small. Ease yourself in and get comfortable with the pool, the people around you, and the new routine. Gradually increase your time and effort.
3. Talk to your coach about your condition and see how they can help. By doing so you let them understand your progress and efforts.
4. Buddy up if you have someone to tag along. Having company will make you a lot more comfortable with the pool and boost your confidence as well.
Q4. Why Do I Love Swimming?
When I first went to my swimming class, I had several physical health conditions: PCOD, rheumatoid arthritis, and a thyroid disorder. All these invisible health conditions had a hidden impact on my mental health.
It was when I dipped myself into water that I found my calm after years, I felt liberated, free, and loved!
So it was exactly 6 years back that I explored my love of swimming and cherished its benefits. To date every 2 months, I join swimming classes and never miss a chance to dive in a pool whenever possible.
Do you have a swimming story to share? Share it with us in the comments section below.
Now it is time for you to take out your swimming costume, dive deep into the water, and give your mental wellness a treat!
Happy swimming to you…
About The Author
Anjali Singh
|
A Guide To Sleep Disorders
Sleep disorders are some of the most common conditions all across the world. A study conducted in India showed that about 1/5th of a healthy, productive age group of the Indian population had SRD or Sleep-Related Respiratory Disorders and about 30% of them suffered from occasional insomnia. Let us go through the types of sleep disorders and how you can address them.
Insomnia
Insomnia is the term used when a person experiences difficulty falling asleep or staying asleep. It is one of the most common types of sleep disorders. There are two kinds of insomnia.
• Transient insomnia is when a stressful situation happens and the person’s sleep gets affected. For example, losing a loved one. It could also happen due to jet lag or if you work shifts. Right now, many people may be experiencing stress because of COVID-19, and this may affect their sleep patterns.
• Chronic insomnia is when a person has difficulty falling asleep for at least a month. The person may get good sleep for a few nights and terrible sleep for some other nights.
Signs of insomnia could include the following-
1. You experience restless sleep and you cannot get enough sleep to keep you refreshed for the day.
2. You have difficulty sleeping even if you are tired
If you think that you are exhibiting the signs of insomnia, you must consult a psychiatrist. The treatment for Insomnia can include either medication prescribed by a psychiatrist, or non-medical techniques such as cognitive behavior therapy, relaxation techniques, etc.
Sleep Apnea
In this disorder, your airways get blocked and therefore you are unable to breathe properly in your sleep. The signs of sleep apnea could include –
1. Snoring loudly
2. Waking up at night with a sore throat
3. Waking up at night choking
4. Feeling tired during the day
Treatment for a person who exhibits the signs of sleep apnea could include CPAP (Continuous Positive Airway Pressure) therapy, in which a machine is used to keep the person’s airways open, as well as surgery, a dental appliance, or positional therapy.
Narcolepsy
This condition often leads you to fall asleep at any time, and it does not matter where you are. People experiencing this disorder are not able to regulate their wake and sleep cycle. The symptoms of narcolepsy include –
1. Falling asleep without any warning
2. Feeling drowsy during the day
3. Experiencing temporary loss of muscle control that makes you feel weak
4. Disturbed sleep at night
The first step in treating someone who shows the symptoms of narcolepsy is to contact a general physician, who can advise on the next steps and assess how severe the case is.
Restless Legs Syndrome (RLS)
RLS refers to an uncontrollable desire to move your legs when you are sleeping or resting. You could have RLS if you experience
1. A strong urge to move your legs
2. Crawling sensation or ache in your legs
3. The symptoms mentioned above get worse at night
4. Relief when you move or stretch
Medications and behavioral therapy are generally used to treat Restless Legs Syndrome, so it is important to reach out to the nearest doctor as soon as possible.
These disorders are very common and if you find yourself experiencing any of the symptoms of these types of sleep disorders, there is no need to panic. The good news is that there is always help and many patients have overcome such disorders. To stay healthy and safe, you must contact the respective doctors to make sure your condition is treated before it gets worse.
About The Author
Goutam Singh is the Digital Marketing Manager at MFine, an AI-powered healthcare company in India. He has experience working in the IT and services industry and is passionate about SEO, SEM, and marketing strategy.
|
Essential University Physics: Volume 1 (4th Edition)
Published by Pearson
ISBN 10: 0-134-98855-8
ISBN 13: 978-0-13498-855-9
Chapter 13 - For Thought and Discussion - Page 245: 1
A possible answer is below. There are a variety of answers.
Work Step by Step
First, the particles are extremely small compared to macroscopic systems. Because more energy is required to move more massive objects as opposed to less massive ones, a far greater amount of power would be needed to move a macroscopic system than a microscopic system in a given time interval. Furthermore, the electrostatic force is large compared to the size of the molecule, making fast oscillations possible.
|
Which is the Best Angle for Solar Panels?
How to Angle Solar Panels?
Solar panels work by converting the sun’s energy into electricity for a home or business. The more of the sun’s rays a panel receives, the more energy it can create. Solar panels work at optimum efficiency when they are pointed directly at the sun. However, the sun is constantly moving, changing its angle in the sky throughout the day and as the seasons change. One option is to invest in a solar panel tracker, which moves the panels to follow the sun throughout the day, but these can be expensive, and the increased energy they generate may not offset the purchase price. The other option is to calculate the angle at which to mount your panels to receive the maximum amount of sun in a day.
Which is the Best Angle for Solar panels?
The best angle for your solar panels will depend upon your location and when you want your solar panels to be most efficient. For most installations, this will be during the summer months, and your calculations will be based on the position of the sun in summer. At solar noon, which is exactly halfway between sunrise and sunset, the energy from the sun is at its greatest, so the solar panels should be positioned to take advantage of this time. At solar noon in the UK, the sun is due south, which is why solar panels are installed on south-facing walls or roofs. Installers can then use online calculators to work out the angle at which the panels should be mounted to maximise the energy generated at solar noon. It varies across the UK: in London in June it is typically around forty degrees off vertical, but this will be different in other cities. In Manchester, it is more in the region of thirty-five degrees.
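As a rough first approximation (my own illustration, not a figure from the article), the sun’s elevation at solar noon is about 90° minus the latitude plus the solar declination, and a panel facing the sun directly at that moment is tilted 90° minus that elevation from horizontal. Note that real online calculators optimise over the whole day and season, so their recommendations differ from this noon-only rule of thumb:

```typescript
// Noon-only rule of thumb: tilt (from horizontal) that points a panel
// straight at the sun at solar noon. Simplifies to latitude − declination.
function noonTiltFromHorizontal(latitudeDeg: number, declinationDeg: number): number {
  const noonElevation = 90 - latitudeDeg + declinationDeg; // sun's altitude at noon
  return 90 - noonElevation;
}

// London (≈51.5°N) at the June solstice (declination ≈ +23.4°):
console.log(noonTiltFromHorizontal(51.5, 23.4)); // ≈ 28.1° from horizontal
```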
Solar Irradiance Meters
A more accurate alternative to using an online calculator is to invest in a solar irradiance meter. Solar irradiance is a measure of how much of the sun’s energy is hitting the earth at a given point. The advantage of solar irradiance meters is that they can monitor the levels at your exact site. The meters can then also be used to calculate the optimum pitch for your panels based on measurements taken on site. Solar irradiance meters are available to buy online for between sixty and three hundred pounds, depending on the sophistication of the model you require.
About the author
View all posts
|
Press release
UK public continues to flush wet wipes, condoms and nappies despite fatberg warnings
20 March 2019
Nearly 40% of people in the UK remain unaware of fatbergs, with significant percentages of people flushing fatberg-causing items, such as wet wipes, tampons and condoms, according to new research by the Institution of Civil Engineers (ICE).
‘Don’t Feed the Fatberg’ is a key message behind ICE’s new exhibition on water, opening to the public on World Water Day (Friday 22 March), which explores how civil engineers help provide clean water and sanitation and tackle issues such as drought and flooding. The public survey was commissioned by ICE to help highlight the need for greater public education on fatbergs – major sewer blockages caused by build-up of non-biodegradable items, such as wet wipes and congealed fats.
The research shows that just under a third of people (29%) have flushed wet wipes, with nearly a fifth of people (17%) flushing wet wipes some or all of the time. People also admitted to flushing other items, including tampons (29%), condoms (19%), plasters (15%), sanitary pads (13%) and nappies (5%). The main reasons people gave for flushing these items were that it’s more convenient or because they thought it was fine to do so.
A third of people (33%) reported that they pour fats and oils down the drain some or all of the time, with the majority doing so because they find it easy or more convenient than bin disposal.
Martyn Harvie, a principal civil engineer and ICE member, who appears in ICE’s water exhibition as the superhero character Drainage Dyno, said:
“Fatbergs are a growing problem for society today and urban areas are particularly affected, with their older infrastructure and dense populations. People are prone to ‘flush and forget’, not thinking about the environmental consequences. But responsible water management is vital for a sustainable future.
“By revealing the secrets beneath the sewers, ICE hopes to warn people ‘Don’t Feed the Fatberg’ and raise awareness of all the behind-the-scenes work that civil engineers do to manage our precious water resources. The public can play their part by binning rather than flushing items such as wet wipes. Oils and fats should be binned, or recycled where possible, rather than poured down the drain.”
Among the 61% of people who are aware of fatbergs, the research found that nearly 75% are also aware that wet wipes, including those which are currently branded flushable, do not break down in sewers and can contribute to fatbergs.
However, despite the overwhelming majority (95%) believing that they have a personal responsibility to help prevent fatbergs, only half would be willing to pay more to purchase ‘fine to flush’ wet wipes that will break down in sewers.
• Vienn McMasters, communications business partner for ICE
|
The NYTimes carried a story on March 10 about a controversy over plans to build a very large home in Berkeley, CA. The plans which have been approved show a total area of about 10,000 square feet, of which 3,500 are for a garage. The owner, Mitch Kapor, is the founder of Lotus and has used his ample wealth for many philanthropic ends including many concerned with the environment. Perhaps he lost so much of his money in the crash that he plans to operate a public parking lot.
The controversy here rose from the designation by a city board that the house qualified as being “green.” Such designation comes via an evaluation scheme that gives points to green features of a building, for example, the use of low-flow faucets and low-volatility paint. The Kapor plan received a score of 91 points, far above the minimum of 30 needed to qualify for a green designation.
The architect noted Kapor’s environmental largess but offered no details on the process. Neighbors and others are appealing the decision to approve the plans. Another architect, William Harrison who builds big houses for wealthy clients is quoted as defending the practice.
Harrison misses the point entirely. It’s not at all about goodness or intention. It is simply a matter that such large houses create enough negative impacts to overcome the benefits by implementing green features whether according to the LEED or any other scoring system such as is used in Berkeley. What this has to do with socialism is beyond me.
In a 2005 article on the environmental impact of house size in the Journal of Industrial Ecology (Disclosure: I am one of the editors of this journal), the authors, Alex Wilson and Jessica Boehland say:
As house size increases, resource use in buildings goes up, more land is occupied, increased impermeable surface results in more storm-water runoff, construction costs rise, and energy consumption increases. In new, single-family houses constructed in the United States, living area per family member has increased by a factor of 3 since the 1950s. In comparing the energy performance of compact (small) and large single-family houses, we find that a small house built to only moderate energy-performance standards uses substantially less energy for heating and cooling than a large house built to very high energy-performance standards.
The article continues with data showing that the impact of house size is not linear; the impact increases disproportionately with size. A house twice the size of the average dwelling (about 2,500 square feet) would typically have about three times the impact, based on the materials used in construction. Heating and cooling energy use depends on the details of the design and cannot be compared in a general way.
There’s another very important lesson here besides the substantive issues of the actual environmental impact. Green scores simply do not tell the whole or even enough of the environmental story to be meaningful. There is always an “other things being equal,” qualifier in the background. In this case it would be another 10,000 square foot house using less effective features. The billionaire’s Prius sounds good compared to a Hummer, but can’t come close to a bicycle’s low impact. I say this not as a value judgment on the choice of a large house, hybrid vehicle, or anything for that matter, but as a criticism of the utility of scores as valid indicators of greenness. Quantity or volume almost always trumps lower scores.
One Reply to “The “Green” House Effect”
1. If Mr. Harrison’s Los Angeles client opened up his 25,000 square foot home to house the homeless, that would be a different story. It is more likely, however, that five or six people will occupy the house; four family members, and two servants. If everyone in the world occupied 4,000 square feet of living space (nicely-appointed no doubt), then the Earth would be toast already.
|
Weather Alert
Alaska air pollution holds clues for other Arctic climates
FAIRBANKS, Alaska (AP) — In the pristine expanse of Alaska’s interior lies a dirty secret: some of the most polluted winter air in the United States can be found in and around Fairbanks.
The Fairbanks North Star Borough, which includes Alaska’s second largest city, routinely exceeds limits set by the U.S. Environmental Protection Agency for particle pollution that can be inhaled and cause myriad health problems.
Over seven weeks this winter, nearly 50 scientists from the U.S. and Europe descended on Fairbanks to study the sources of air pollution, how the contaminants interact in the city’s cold and dark climate and to come up with a list of best practices for people living across the circumpolar north.
What they find could help city planners make better decisions on where to place power plants or smelters in northern climates and guide lawmakers on how to regulate chemicals in fuel oil or other sources to reduce the harm.
The task becomes even more important as climate change is driving people away from places that are getting hotter toward northern areas, even though climate change is warming the Arctic twice as fast as the rest of the planet. In Fairbanks, the average winter temperature rose 2.7 degrees F (1.5 degrees C) since 1992, according to the National Oceanic and Atmospheric Administration.
Like Salt Lake City and other cities surrounded by mountains, Fairbanks suffers from winter inversions, layers of warmer air that trap cold, dirty air and keep it from dissipating. Even though wind is blowing aloft, the cold air prevents the wind from getting down to ground level.
“Just like an open top freezer in an old grocery store, that cold air just pools into the bottom of that freezer and air can just go right over the top,” said Bill Simpson, an atmospheric chemistry professor at the University of Alaska Fairbanks Geophysical Institute and the UAF College of Natural Science and Mathematics.
“It’s calm down here, and the pollution that’s emitted down here stays down here, unfortunately,” added Simpson, the project leader.
Youtube video thumbnail
The problem isn’t unique to cold climates in the United States. The study is of interest to researchers in northern European cities because of the similar problems with inversions.
In Fairbanks, a major source of pollution comes from wood-burning stoves, which are common in this area where wood is plentiful and cheap, temperatures routinely reach minus 40 degrees F (minus 40 C) or colder and heating fuel is expensive. Other sources are vehicle exhaust systems, power plant emissions and heating oil.
Owen Hanley practiced pulmonary medicine in Fairbanks for about 35 years. The retired doctor says the air pollution problem in Fairbanks can permanently harm respiratory function and cause many other problems.
The mixture of pollutants from smoldering wood fires, cars, coal and other sources releases additional chemicals that can be more harmful than cigarette smoke.
“We know with air pollution, there’s more dementia in adults, there’s more kidney failure and young pregnant women have more miscarriages and preterm births, and little kids don’t get full lung development,” said Hanley.
Power plants in Fairbanks emit plumes of smoke into the air, and researchers in the Alaskan Layered Pollution and Chemical Analysis project are trying to understand whether these remain up high, at the level of smokestacks, or drift down to ground level, where people live.
Seven French teams made detailed measurements of the air in downtown Fairbanks in efforts to better understand how small particles and droplets are formed. Meanwhile, a Swiss team used a tethered balloon, equipped with specialized instruments, to measure characteristics of aerosols and different trace gases at 1,200 feet (365 meters) above the ground. Another instrument allowed them to measure vertical profiles of the atmosphere.
“We are trying to understand what is happening higher up” because ground level data can be different, said Roman Pohorsky, a doctoral student at the EPFL, a science and technology institution in Switzerland.
Another experiment led by Sarah Johnson, a graduate student and researcher at the University of California, Los Angeles, used a special device to measure trace gases or pollutants at different heights in the atmosphere. The instrument, called a Long Differential Optical Absorption Spectrometer, collects information by beaming light from a parking garage to reflectors set at different heights in Fairbanks, and then studying the information that comes back.
“What we’re really looking for is information about where the pollution is accumulating as well as where it’s going,” she said, adding that she hopes the research can benefit other areas with similar weather and dirty air.
Another goal of the research came from members of the Fairbanks community: People wanted to know what the air is like inside their homes.
Researchers took over a house in Fairbanks, setting up shop in the garage with tubes running from both inside the house and outside to study the air.
Ellis Robinson, a post-doctoral researcher at Johns Hopkins University in Baltimore, noted that most public health information about the dangers of air pollution comes from studying outdoor air.
“But we really need to be studying indoor air, just as much if not more,” said Robinson.
Sulfur can be a major pollutant for people who use heating oil in their houses or live near coal-fired power plants.
Scientists are working to better understand how the sulfur that’s emitted, mostly as a gas, sulfur dioxide, turns into particles in colder and darker locations.
While the research is not a formal regulatory project, Simpson, the project leader, said the team would be willing to share the results with the EPA, the agency charged with determining Clean Air Act violations.
The Fairbanks area has been out of compliance with air quality standards since 2009. The EPA is reviewing the state of Alaska’s latest plan to bring the borough into compliance.
The researchers are expected to deliver the findings back to the university by late summer. The results will be shared with the Alaska Department of Environmental Conservation, Fairbanks’ air quality division and with residents, who will have the chance to weigh in on possible solutions.
“We can compare and contrast those situations and try and build a set of kind of best practices for understanding how pollution works in cold and dark places,” Simpson said.
|
About Archives and Archivists
What is an archives?
Archives hold historical records of organizations and individuals. In the case of the Seattle Municipal Archives, it is the records of City government: records created by or for City agencies and elected officials. Archives differ from libraries in that their collections do not circulate and may not be checked out. Additionally, their records are unique; a letter or photograph held by an archives may be the only one in existence. The size of a collection in the archives can range from a single item to hundreds of boxes.
What kinds of records are in an archives?
The records may be in many different formats, including documents, photographs, audio, film, maps, and architectural drawings. Records reflect the everyday activities of the agency or person who created them, but may be used by researchers for an entirely different purpose than the one for which they were created. For example, a letter to the City Council in 1893 complaining about the noise from cows (and their bells) running loose in the City tells us about the City's transition from a rural to an urban environment, even though the individual wrote the letter simply to get action on a noise problem.
What types of archives are there?
Archival repositories are diverse. They can be located in federal, state, and local governments; schools, colleges, and universities; religious institutions; businesses; hospitals; museums; labor unions; and historical societies.
What does an archivist do?
Archivists preserve and provide access to these historical records. This involves identifying and selecting permanent records due to their enduring value, then arranging those records to make them usable and housing them in acid-free materials in order to aid in their long-term preservation. Archivists often create guides to collections to provide information about the scope of the records as well as contextual information about why, how, and by whom they were created. They also provide reference services and help researchers find records that are relevant to their area of interest. Additionally, archivists create exhibits, publications, and other outreach programs to increase awareness about the records in the archives.
See the Society of American Archivists' online publication Using Archives to learn more about archives and how to use them.
|
Moslavina Gastronomy Heritage
Updated: Mar 2, 2019
In all parts of Kutina, in the orchards and house yards, and near schools and public buildings, quince trees are a visible presence. For as long as anyone can remember, quince has grown along the walls separating Kutina's households, alongside other fruit trees. A tourist initiative therefore proposed that the quince (dunja) become the symbol of Kutina, drawing on the old name Kotonja and the word chain Kutina-Kotonja-Dunja. Family farms began producing quince jams, and brandy and quince liqueurs have traditionally been made in Moslavina. This commitment to the fruit grew into the preparation of quince dishes, presented at Hotel Kutina together with the County Chef Association and the Sisak-Moslavina Tourist Board. More than 400 quince seedlings are now visible around the town, an effort attributed in part to the well-known ethnographer Slavica Moslavac.
We talked with her at the Moslavina Museum, where she tirelessly prepares books and records on the past and present of Moslavina's folk life. She introduced us to the traditional diet of Moslavina, which was tied mostly to flour and bread. Meat appeared only in traces, as did fish, which is a bit odd given that the rich Lonjsko polje is just around the corner. Fish was available only to the fishermen of Krapje and the other Lonjsko polje villages; in Kutina it was eaten only by the wealthy, who bought it dried or fresh. Carp and perch make up today's gastronomic offer of Lonjsko polje, especially as fish stew or baked on wooden sticks.
But the fish offer ceases as soon as one leaves Lonjsko polje. Beyond it lies the land of flour, bread, squash soups, pastries, and anything else based on flour and water. A specialty of Moslavina is the production of numerous breads. Bread was usually baked in a baking oven, which almost every family owned; loaves were large, and a fresh one was not baked until the old one was eaten. Bread was kept in special dry chambers on specially made wooden stands (križanice). One can imagine the taste of the bread in the last days before a new baking - it was usually soaked in water to soften it.
Where there is flour, there are mills. The historical landscape of Moslavina is dotted with mills on the rivers Lonja, Česma, Ilova and Kutinica, and on various smaller streams and brooks. In Kutina the most famous was the Hafner mill, named after the family that owned it. Numerous remains and ruins of mills across Moslavina outline the history of milling in this Croatian region. The locals also ground grain for their own needs. As among other Slavic peoples, the Moslavina folk tied their festivals to sowing and harvest, so folk customs were closely bound to the field. The field was once ploughed with a wooden plough (plug), and seed was sown by hand. In spring, around St George's Day and St Mark's Day (April 25), the people of Moslavina held processions through the fields, performed the blessing of the fields, and decorated the house with blessed green grain. Harvesting was carried out with sickle, scythe or machines, and it gave rise to many harvest songs, which are still kept alive by a dozen cultural and artistic societies in Moslavina.
Everyday bread was baked with corn flour, which is becoming popular again today. Bread was also baked with barley and rye flour, while wheat flour was reserved for festive bread. The ceremonial breads, ritual and sacred honey breads prepared for Easter, Christmas and births, were decorated with various motifs that can still be seen at the annual Bread Days. Festive wheat bread was brought into homes as a gift for newborns. Special breads and cakes were also prepared for the main Christian holidays and for numerous festivals.
Outside the holidays, the inhabitants of Moslavina had a very simple diet. In the morning they ate yellow (maize) polenta with sour milk and fat, as well as potato stews and porridges. Sometimes white potato polenta was made instead of maize. Plentiful potatoes and flour enriched the table in the form of various dumplings, often stuffed with plums. From ordinary flour they cooked tačkrle dumplings, and if the dough was torn, čipanci were made. Lunch, and dinner too, consisted mostly of beans, cabbage, dumplings, potatoes, cauliflower and green beans.
In addition to kuglof and other sweet doughs, the people of Moslavina are known for rich gibanica cakes. They are not as big and plentiful as those in the north of the country, but they are made with cheese, poppy seed and carob. Numerous strudels have also been left as heritage.
Moslavina traditions built on life in the fields, the mills and bread can be found in Slavica Moslavac's book "The Bread Story", published by the Moslavina Museum.
Photos by: Andrea Seifert, Mint Media
#Moslavina #Flour #Bread #Harvest #Tradition #SlavicaMoslavac #English
|
Plastic pollution: a global issue
Ms. Urooj Fatima, Green Blogger, Environmental Science, GCWUS
Plastic pollution is a global issue that is causing damage. Plastic is a substance the earth cannot digest.
“Plastic” is actually a shortened form of “thermoplastic,” a term that describes polymeric materials that can be shaped and reshaped using heat. China contributes the highest share of mismanaged plastic waste, with around 28 percent of the global total, followed by 10 percent in Indonesia and 6 percent each in the Philippines and Vietnam. One million plastic bottles are bought every minute around the world, and that number will top half a trillion by 2021. Less than half of those bottles end up getting recycled. Eight million metric tons of plastic wind up in our oceans each year. According to 2014 WHO data, Pakistan ranked first in the list of countries with the most polluted urban areas (cities).
More than 5 million people die in Pakistan each year due to waste-related diseases. Plastic waste is one of many types of waste that take a very long time to decompose. Plastic items can normally take up to 1,000 years to decompose in landfills: the plastic bags we use in everyday life take 10 to 1,000 years, while plastic bottles can take 450 years or more. The first plastic based on a synthetic polymer was made from phenol and formaldehyde, with the first viable and cheap synthesis methods invented in 1907 by Leo Hendrik Baekeland, a Belgian-born American living in New York state.
Plastic debris in the oceans was first observed in the 1960s, a decade in which Americans became increasingly aware of environmental problems. Plastic is harmful because it is 'Non-Biodegradable'. When thrown on land it makes the soil less fertile. When thrown in water it chokes our ponds, rivers and oceans and harms the sea life because the bacteria in their stomach cannot break the plastic up into smaller pieces. The biggest problem with plastic bags is that they do not readily break down in the environment, with estimates for the time it takes them to decompose ranging from 20 to 1000 years.
Plastic bags also clog drains and waterways, threatening not only natural environments but urban ones as well. Trash Travels estimates that plastic bags can take 20 years to decompose, plastic bottles up to 450 years, and fishing line 600 years; but in fact, no one really knows how long plastics will remain in the ocean. With exposure to UV rays and the ocean environment, plastic breaks down into smaller and smaller fragments. Only a few types of plastic are recyclable, and plastic that is recyclable can usually be recycled only once. Plastic bags consumed by animals, birds and marine life accumulate in their guts, causing disease and leading to slow and painful deaths. Humans who consume the meat or milk of an animal that has ingested plastic are also affected by various diseases, including cancer. Approximately 85% of the plastic in our environment is microplastic. Microplastics are tiny pieces of plastic that are formed when bigger pieces of plastic break down into smaller bits.
Sources of microplastic
Microplastics can break off from certain synthetic clothes, such as fleece, when we wash them. Microplastic is also present as microbeads in everyday household items, including some toothpastes, face washes, scrubs and shampoos.
Microplastic ingested by fish and other animals enters our food chain. It has been scientifically proven that microplastic contaminates our food and drinking water. Researchers have found microplastic in bottled water, salts, beer and, recently, honey. Styrofoam contains highly toxic chemicals that can leach into food and drinks and adversely affect our nervous system, lungs and reproductive organs.
What can we do?
- Stop using plastic straws, even in restaurants, or use a reusable stainless steel or glass straw.
- Use a reusable produce bag: purchase one or make your own.
- Give up gum. Gum is made of a synthetic rubber, aka plastic.
- Buy boxes instead of bottles. Often, products like laundry detergent come in cardboard, which is more easily recycled than plastic.
- Purchase food, like cereal, pasta, and rice, from bulk bins and fill a reusable bag or container. You can save money by avoiding unnecessary packaging.
- Reuse containers for storing leftovers or shopping in bulk.
- Use a reusable bottle or mug for your beverages.
- Bring your own container for take-out or your restaurant doggy-bag, since many restaurants use Styrofoam.
- Use matches instead of disposable plastic lighters, or invest in a refillable metal lighter.
- Avoid buying frozen foods, because their packaging is mostly plastic. Even packages that appear to be cardboard are coated in a thin layer of plastic.
- The EPA estimates that 7.6 billion pounds of disposable diapers are discarded in the US each year. Use cloth diapers to reduce your baby's carbon footprint and save money.
- Make fresh squeezed juice or eat fruit instead of buying juice in plastic bottles. It's healthier and better for the environment.
- Make your own cleaning products, which will be less toxic and eliminate the need for multiple plastic bottles of cleaner.
- Pack your lunch in reusable containers and bags.
- Use a razor with replaceable blades instead of a disposable razor.
Urooj Fatima
Dept. of Environmental Science, GCWUS
The Earth Needs Love
|
Climate Smart Agriculture
Climate change is turning the lives of farmers upside down. Unpredictable weather patterns, shorter growing seasons, droughts, extreme temperatures, and increased exposure to pests and crop diseases pose daunting problems to smallholder farmers around the world, especially in the tropics, where people tend to be more reliant on natural resources. Climate-smart agriculture techniques can help farmers adapt to and prepare for these impacts in order to preserve and even improve their livelihoods.
With a population expected to balloon to 9.8 billion by 2050, climate-smart agriculture is crucial to global food security, as well: Smallholder farmers currently provide more than 80 percent of the food consumed in large parts of the developing world, particularly South Asia and sub-Saharan Africa.
Climate-smart agriculture isn’t distinct from sustainable agriculture; rather, it’s a way of combining various sustainable methods to tackle the specific climate challenges of a particular farming community. The first step is to assess the particular climate risks, since a farm facing prolonged water shortages will need different strategies than one confronting frequent flooding, for example. We use a variety of tools to assess the climate risk and vulnerability of a landscape, taking the local ecosystems and the specific crop into account. Finding the right combination to manage a specific farm’s climate challenges and to build resilience to future impacts is what makes climate-smart agriculture “smart.”
“Where drought and prolonged dry seasons are the main risks, a climate-smart approach might focus on planting cover crops or mulching to improve soil structure, water infiltration and retention, and overall soil fertility,” Rainforest Alliance environment director Martin Noponen explains. “In places where the risks are heavy rain and flooding, a climate-smart approach would likely focus on trenching, planting cover crops, and controlling surface water runoff with activities like vegetation barriers.”
“In other words,” Noponen adds, “climate-smart agriculture is not a one-size-fits-all approach.”
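The risk-to-practice matching Noponen describes can be pictured as a simple lookup from assessed risks to candidate practices. The sketch below is purely illustrative (it is not a Rainforest Alliance tool, and the practice lists merely echo the examples quoted in this article):

```python
# Toy mapping from an assessed climate risk to candidate practices,
# echoing the examples quoted above. A real assessment also weighs
# local ecosystems and the specific crop.

CLIMATE_SMART_PRACTICES = {
    "drought": ["cover crops", "mulching"],
    "heavy_rain": ["trenching", "cover crops", "vegetation barriers"],
    "flooding": ["drainage systems", "trenches", "contour planting"],
}

def recommend(risks):
    """Collect candidate practices for every assessed risk, without duplicates."""
    seen, plan = set(), []
    for risk in risks:
        for practice in CLIMATE_SMART_PRACTICES.get(risk, []):
            if practice not in seen:
                seen.add(practice)
                plan.append(practice)
    return plan

print(recommend(["drought"]))                 # ['cover crops', 'mulching']
print(recommend(["heavy_rain", "flooding"]))  # six distinct practices
```

The point of the sketch is exactly the "not one-size-fits-all" idea: the same procedure yields a different plan for every combination of local risks.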
The 3 Pillars of Climate-Smart Agriculture
Any climate-smart program aims to:
• Improve farmer productivity, and as a result, livelihoods;
• make farms more resilient to climate impacts they’re facing now, and to those likely to hit in the future;
• and, where feasible, curb greenhouse gas emissions associated with growing food.
Here are some of the areas in which we help implement climate-smart methods:
Crop Management
Once an assessment of climate impacts and risks has been conducted, climate-smart strategies tailored to a particular landscape, farming community, or even individual farm can be determined. In cocoa, for example, pruning is essential, but it has to accord with the local climate risks: Where there is extreme rainfall, pruning should be done more often to ensure stronger trees that recover faster, whereas in prolonged dry periods, a farmer needs to avoid pruning so much that primary branches and trunks are exposed to too much sunlight.
Soil Management
Heavy rainfall can wash away fertile topsoil, especially on sloping land. Planting ground cover helps keep soil in place in the event of heavy rains, and it’s extremely beneficial in drought-prone regions, too, because it helps retain moisture in the soil. In flood-prone areas, farmers can build drainage systems to keep nutrient-rich topsoil from being washed away; trenches can also help control excess water and keep soil where it needs to be. Planting on contours, such as hills or natural terraces, is an effective way to cut down on soil erosion as well. Mulching (applying organic matter from crop residues to the soil) can also help.
Pest And Disease Management
Global warming can give rise to pests and diseases that can reduce yields drastically and even destroy entire farms. Rising temperatures have helped the roya fungus, for example, to proliferate and wipe out coffee farms all over Central America. In a changing climate, the tried-and-true ways of battling pests and diseases often fail; desperate farmers may be tempted to increase the amount of pesticides, but over-application will only increase costs, harm beneficial insects, and increase the risk of contaminating people and the environment.
Water Conservation
Our commitment to climate-smart agriculture
The Rainforest Alliance has long been at the forefront of developing and implementing climate-smart agriculture solutions. Climate-smart methods are a key part of our 2020 Sustainable Agriculture Standard. In collaboration with the World Cocoa Foundation and our research partners CIAT and IITA, we created science-based training materials for specific cocoa-growing regions and made them available to the public online in 2018; we’re continuing to create more such guides for other landscapes and crops. For smallholder farmers, learning to adapt to climate changes now and to prepare for climate shocks in the future can mean the difference between surviving and perishing.
About the Author: Sidra Sarwer is an environmental science student connected with many organizations. She is zealous about bringing change.
Editor: Muhammad Nazim
|
6 ways a Therapist can help your Child to deal with Separation Anxiety
Why is my child distressed when we are away from them? 6 ways a therapist can help your child overcome Separation Anxiety Disorder (SAD).
Children often start throwing tantrums and get distressed when separated from their parents or caregivers. While it is normal for a child to be slightly intimidated in the absence of their parents, some children face extreme trouble in coping without their parents being around them constantly. It is technically a mental health problem called Separation Anxiety Disorder (SAD).
Separation Anxiety | Symptoms | Self Help | Treatment | Take Away
This disorder is characterized by the child’s fear and constant worry about being away from family or other caregivers.
They are constantly distressed about something happening to the people they love while being away from them. Feeling some sort of anxiety during childhood or teen years is usually a normal part of growing up. This disorder is more common in toddlers. Usually, Separation Anxiety Disorder is a result of a genetic or biological imbalance of neurotransmitters in the brain. It can also result from a traumatic event a child has endured in the past. Children whose parents have an anxiety disorder are more likely to develop separation anxiety themselves.
Here are a few symptoms to look out for.
1. Refusing to go to bed alone.
2. Nightmares of being away from family.
3. Extreme distress when away from family.
4. Constant fear of getting lost and separated from family.
5. Refusing to go to school or play.
6. Always fearful of being left alone.
7. Frequent headaches and stomach problems.
8. Muscle tension.
9. A constant fear of being harmed if left alone.
10. Being extremely clingy even while being in the same room.
11. Temper tantrums at times of being separated from caregivers.
In some cases, the symptoms could be the result of momentary physical problems or of a recent traumatic event. It is important to get an expert diagnosis for separation anxiety before concluding anything. A therapist or a child counselor can successfully diagnose separation anxiety disorder in kids. A physical and mental health evaluation of the child is conducted in a professional environment for an accurate diagnosis. You should only be worried if the symptoms last for more than 4 weeks at a time. If the distress or temper tantrums are momentary, or tied to a certain person or event, there isn’t much to worry about, since they could be the result of an environmental factor.
A therapist can help diagnose and treat separation anxiety using a mix of therapies; the approach depends entirely on the severity of the disorder. If your child is also facing social problems due to this condition, it is even more important to help them get rehabilitated so they can function normally in society. A combination of therapies like Cognitive Behavior Therapy (CBT) and family therapy can be used in the treatment of Separation Anxiety Disorder. Only a registered mental health practitioner can carry out this process successfully and accurately.
There isn’t much you can do as a parent to prevent separation anxiety in your child. However, if you notice any signs of SAD in your child, you should seek a diagnosis as soon as possible. Timely treatment can lessen the impact of the disorder and enhance the child’s development. In some cases, if separation anxiety is left untreated, the child can develop problems with social interaction too. They might not be able to adjust to peers in school, may have trouble interacting with teachers, and may face difficulty expressing themselves later on.
Untreated SAD can also result in various mental health problems at later stages. Bedwetting, for example, is a result of the child feeling insecure and anxious. SAD can progress into Generalized Anxiety Disorder after the child grows up. It can also hamper your relationship with your child: children often develop resentment towards their parents, assuming that the parents don’t care for them, and may stop bonding with you altogether.
There are some things you can do on an individual level to help your child with separation anxiety disorder.
1. Give your child unconditional support and reassurance.
2. Help your child in gaining independence while doing basic tasks like going out to play with friends.
3. Encourage them to be independent.
4. Understand situations that trigger your child. Planning and preparing your child can reduce their distress and make them feel more comfortable even in a new environment.
5. Make sure your child’s primary caregivers and teachers know about your child’s condition.
6. Seek therapy and rehabilitation for your child from a professional mental health practitioner.
Here are a few next steps that will help you and your child get the most benefit from visiting a child counsellor or therapist.
1. Get a clear understanding of the disorder and know what to expect.
2. Take note of the questions and problems you have so that you can seek a solution to them during the session.
3. Make sure you write down the key takeaways from the session.
4. If your child is being prescribed any medicine for the physical symptoms, find out the side effects of those medicines too.
5. Know what to expect from the medicines and the session. Take a clear understanding of the pace of recovery for your child and understand that a proper recovery will take time.
6. Make sure you know how to get in touch with the mental health practitioner in case you face an avoidable problem outside of the therapy session.
Key takeaways.
1. Separation anxiety disorder is a mental health problem that can be treated. It’s not very difficult if the diagnosis is done on time and therapy is started.
2. Expect a few other symptoms of SAD. Social withdrawal, lack of communication, and difficulty in expressing oneself are a few common symptoms of SAD. Eventually, after treatment is done the child will get rehabilitated and significant improvements will be seen.
3. The symptoms of SAD should be present for at least four weeks before you seek a diagnosis.
4. Only a mental health practitioner (doctor, therapist, or counselor) is qualified to diagnose and treat SAD.
5. Both medicines and therapy are used, usually in combination for the treatment of separation anxiety disorder.
Read more here:
Parental Neglect: Leading To Early Signs Of Depression In Children
Child Anger Management – How to Deal With Your Child’s Anger
5 Tips on Building Healthy Self-esteem in Children
|
Urban Forest
Cosumnes CSD takes a proactive approach to managing its urban forest of 46,000 trees. The District maintains a regular pruning and inspection schedule for trees and works closely with an arborist to promote the best possible health of trees at its facilities. Arborists are tree specialists who use their education, knowledge, training, and experience to examine trees, recommend measures to enhance the beauty and health of trees, and attempt to reduce the risk of living and recreating near trees.
Trees can be managed, but they cannot be controlled. To live and recreate near trees is to accept some degree of risk. The District strives to mitigate that risk to the fullest extent possible with regular pruning and inspections. Occasionally, the only way for the District to comfortably mitigate the risk of tree or limb failure is to remove the tree.
As part of the District's urban forestry management program, regular plantings occur between fall and early spring each year in anticipation of future tree failures, and in response to past failures. While the replacement trees will be smaller initially, they will continue to add to the enjoyment and health of our community for generations to come.
A tree in bloom at Del Meyer.
|
These processes continue until the temperature of the blood bathing the hypothalamus reaches the new set point. The hypothalamus, which sits at the base of the brain, acts as the body's thermostat.
Fever is triggered by pyrogens, which flow through the bloodstream to the hypothalamus from sites where the immune system has identified potential trouble. When the hypothalamus detects them, it signals the body to generate and retain more heat, thus producing a fever. Children typically get higher and quicker fevers, reflecting the effect of pyrogens on an inexperienced immune system.
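The set-point mechanism described above is a negative-feedback loop: the hypothalamus compares blood temperature to a target, and pyrogens simply move the target. A minimal numerical sketch makes this concrete (the gain and step count are made-up illustrative values, not physiological data):

```python
# Toy negative-feedback model of the hypothalamic "thermostat".
# All numbers here are illustrative, not physiological measurements.

def regulate(temp, set_point, gain=0.3, steps=50):
    """Nudge body temperature toward the set point, step by step."""
    for _ in range(steps):
        error = set_point - temp   # distance from the target
        temp += gain * error       # heat generated (or shed) in proportion
    return temp

# Normal regulation settles near 37 C...
print(round(regulate(36.0, set_point=37.0), 2))  # 37.0

# ...and the same loop, with the set point raised by pyrogens to 39 C,
# produces and then maintains a fever.
print(round(regulate(37.0, set_point=39.0), 2))  # 39.0
```

Nothing in the loop itself changes when a fever develops; only the target value moves, which is why fever is a regulated state rather than runaway heating.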
Human beings live in a very narrow survival zone. Significant deviations from average in this tightly controlled physiological range can quickly become life-threatening.
Internal body temperatures in excess of 105 degrees F expose proteins and body fats to direct temperature stressors. Cellular damage, infarctions, necrosis, seizures and delirium are among the potential consequences of prolonged, severe fevers. The receptor environment of the hypothalamus maintains limitations on high fevers. Postoperative fever is a common occurrence after all types of surgery.
Because there are so many causes of fever, the condition is usually managed by an interprofessional team of healthcare professionals. The nurse is usually the first person who monitors the patient and discovers the fever. In frail older adults, infection is less likely to cause fever, and when the temperature is elevated, it is often lower than the standard definition of fever.
Inflammatory symptoms, such as focal pain, may also be less prominent. Fever management should take into consideration the protection of the brain from secondary insults as well as the capacity to fight infections. Fever should most likely be treated aggressively in the first days after TBI, SAH, or stroke. A fever is an attempt to create an environment that is not conducive to viral replication, while the increase in sleep lets the body devote most of its energy to fighting the virus instead of focusing it on wakeful tasks.
The hypothalamus induces a need for sleep during a fever. Sleep deprivation is a no-no when it comes to fighting the flu. Sleeping lets the body focus its energy on fighting the virus, allowing it to clear away the waste products lodged between brain cells.
Fever of unknown origin (FUO) was first defined in 1961. Diagnosing FUO requires a thorough history, repeated physical examinations, and selective diagnostic testing.
References: Brain temperature: physiology and pathophysiology after brain injury (Anesthesiology Research and Practice); Fever of Unknown Origin (FUO).
Updated: 15 Sep 2021, 09:18 AM IST Livemint
In Firozabad, 60 children have died due to dengue, and 465 children are still admitted to the child ward of the medical college in the district. Several states in the country are grappling with an outbreak of dengue fever.
However, Uttar Pradesh is the state worst affected by dengue. On Wednesday, the northern state's Prayagraj district reported 97 dengue cases so far. Of these 97 cases, around nine dengue patients were admitted to hospitals at present. The surge in dengue cases is seen mainly among children in Uttar Pradesh.
In Firozabad, 60 children have died due to dengue, and 465 children are still admitted to the child ward of the medical college in the district. In UP's Agra district, 35 people have been infected with dengue, while six dengue cases have been confirmed in another district. Recently, ICMR Director-General Dr Balram Bhargava informed that the D2 strain of dengue, found in Mathura, Agra, and Firozabad districts, was fatal and might cause hemorrhaging.
According to experts, dengue virus serotype 2 (DENV-2 or D2) is known to be the most virulent strain and can cause severe disease. Mumbai has reported 305 cases of dengue since January 2021, including 85 in September. So far, no death due to the mosquito-borne disease has been reported this year. There were 85 dengue cases in Mumbai between September 1 and 12 this year, while 144 cases were reported last month.
Mumbai's pest control department inspected 4,46,077 houses and detected and destroyed mosquito-breeding spots as a preventive measure against the disease. Haryana's Chilli village, in the Hathin area of Palwal district, has reported cases of high fever and two deaths.
As a result, health department teams have rushed to the village to contain the spread of the fever. Vijay Kumar, Senior Medical Officer, said an Out Patient Department (OPD) is being run.
"Spraying is being done. We have taken samples of 80 people who are having a fever. No malaria cases have been reported." A total of 139 dengue cases have been reported in the city so far. The district administration has started a larva survey and is also conducting fumigation to kill the larvae. Additionally, Madhya Pradesh Home Minister Narottam Mishra directed officials to carry out an anti-mosquito fogging drive to contain dengue in the state.
In the national capital Delhi, 158 cases have been reported so far.
|
Make 2021 the Year You Go Vegan
2020, most of us will agree, was a terrible year. Who knows what awaits us in 2021? We hope for the best, but we can be proactive about some aspects of life, like our health. Let’s do what we can to improve our health while making the world a more compassionate place. Let’s make this the year you go vegan.
A vegan diet is best for your heart. The leading cause of death of both men and women in the United States is heart disease. Every day, nearly 2,600 Americans die of some type of heart disease, the most common form being coronary heart disease, also known as coronary artery disease or atherosclerosis. Atherosclerosis occurs when hard layers of plaque, usually cholesterol deposits, accumulate in major arteries and begin constricting flow of blood and oxygen to the heart. Arterial plaque is also a leading cause of stroke, the fourth greatest killer of Americans each year.
While other factors can affect cholesterol levels and heart disease (including smoking, exercise, blood pressure, and body weight), one of the most significant causes of heart disease is dietary cholesterol. Our bodies make all the cholesterol we need, so consuming animal products adds excess cholesterol. Animal products are also loaded with saturated fats, which, unlike unsaturated fats, cause the liver to produce more cholesterol.
Fortunately, for most people, preventing coronary heart disease is as simple as eliminating animal products, eating a healthy plant-based diet, exercising, and avoiding cigarette smoking. But beyond prevention, a plant-based diet is the only treatment that has been scientifically proven to reverse heart disease. There is no cholesterol in plant foods.
Vegan diets have also repeatedly been shown to reduce levels of LDL, or “bad” cholesterol. According to a study published in the American Journal of Cardiology, a low-fat vegetarian diet reduces LDL by 16 percent, but a high-nutrient vegan diet reduces LDL cholesterol by 33 percent. The high fiber content of plant-based foods also helps to slow the absorption of cholesterol. Animal products contain no fiber.
In addition to providing for a healthy heart, a whole foods, plant-based diet can prevent and in some cases even reverse many of the worst diseases. Leading U.S. health care provider Kaiser Permanente published an article in its medical science journal suggesting that physicians consider recommending a plant-based diet to all their patients. The article notes, “Healthy eating may be best achieved with a plant-based diet, which we define as a regimen that encourages whole, plant-based foods and discourages meats, dairy products, and eggs as well as all refined and processed foods … Physicians should consider recommending a plant-based diet to all their patients, especially those with high blood pressure, diabetes, cardiovascular disease, or obesity.”
Oh, but the naysayers insist that humans are “meant to eat meat.” “Humans,” they will tell you, “have a carnivore’s or omnivore’s teeth.” No, we don’t. Check this chart and see where your teeth are in relation to the rest of the world’s mammals.
“Consider again the anatomy of the carnivore and the omnivore, including an enormous mouth opening, a jaw joint that operates as a hinge, dagger-like teeth, and sharp claws. Each of these traits enables the lion or bear to use her body to kill prey. Herbivorous animals, by contrast, have fleshy lips, a small mouth opening, a thick and muscular tongue, and a far less stable, mobile jaw joint that facilitates chewing, crushing, and grinding. Herbivores also generally lack sharp claws. These qualities are well-adapted to the eating of plants, which provide nutrients when their cell walls are broken, a process that requires crushing food with side-to-side motion rather than simply swallowing it in large chunks the way that a carnivore or omnivore swallows flesh.
Herbivores have digestive systems in which the stomach is not nearly as spacious as the carnivore’s or omnivore’s, a feature that is suitable for the more regular eating of smaller portions permitted with a diet of plants (which stay in place and are therefore much easier to chase down), rather than the sporadic gorging of a predator on his prey. The herbivore’s stomach also has a higher pH (which means that it is less acidic) than the carnivore’s or omnivore’s, perhaps in part because plants ordinarily do not carry the dangerous bacteria associated with rotting flesh.
The small intestines of herbivores are quite long and permit the time-consuming and complex breakdown of the carbohydrates present in plants. In virtually every respect, the human anatomy resembles that of herbivorous animals (such as the gorilla and the elephant) more than that of carnivorous and omnivorous species. Our mouths’ openings are small; our teeth are not extremely sharp (even our “canines”); and our lips and tongues are muscular. Our jaws are not very stable (and would therefore be easy to dislocate in a battle with prey), but they are quite mobile and allow the side-to-side motion that facilitates the crushing and grinding of plants.” — Read the full excerpt on comparative anatomy by Sherry F. Colb, from her book, Mind if I Order the Cheeseburger? and Other Questions People Ask Vegans
It has been estimated that 98% of our harm to animals comes from our food choices. Yet science has irrefutably demonstrated that humans do not need meat, dairy or eggs to thrive. Once we understand that eating animals is not a requirement for good health, and if we have access to nutritious plant-based foods, then the choice to continue consuming animal products anyway is a choice for animals to be harmed and killed for our pleasure — simply because we like the taste. But harming animals for pleasure goes against core values we hold in common.
The only way for our values to mean anything — the only way for our values to actually be our values — is if they are reflected in the choices we freely make. And every day, we have the opportunity to live our values through our food choices. If we value kindness over violence, if we value being compassionate over causing unnecessary harm, and if we have access to plant-based alternatives, then veganism is the only consistent expression of our values.
Best wishes for a happy and healthy 2021, and Peace to ALL the animals with whom we share this planet.
|
Cosmological Argument
First published Tue Jul 13, 2004; substantive revision Wed Feb 3, 2021
The cosmological argument is less a particular argument than an argument type. It uses a general pattern of argumentation (logos) that makes an inference from particular alleged facts about the universe (cosmos) to the existence of a unique being, generally identified with or referred to as God. Among these initial facts are that particular beings or events in the universe are causally dependent or contingent, that the universe (as the totality of contingent things) is contingent in that it could have been other than it is or not existed at all, that the Big Conjunctive Contingent Fact possibly has an explanation, or that the universe came into being. From these facts philosophers and theologians argue deductively, inductively, or abductively by inference to the best explanation that a first cause, sustaining cause, unmoved mover, necessary being, or personal being (God) exists that caused and/or sustains the universe. The cosmological argument is part of classical natural theology, whose goal is to provide evidence for the claim that God exists.
On the one hand, the argument arises from human curiosity as to why there is something rather than nothing or than something else. It invokes a concern for some full, complete, ultimate, or best explanation of what exists contingently. On the other hand, it raises intrinsically important philosophical questions about contingency and necessity, causation and explanation, part/whole relationships (mereology), possible worlds, infinity, sets, the nature of time, and the nature and origin of the universe. In what follows we will first sketch out a very brief history of the argument, note the two basic types of deductive cosmological arguments, and then provide a careful analysis of examples of each: first, three arguments from contingency, one based on a relatively strong version of the principle of sufficient reason and two others based respectively on a very strong and on a weak version of that principle; and second, an argument from the alleged fact that the universe had a beginning and the impossibility of an infinite temporal regress of causes. In the end we will consider an inductive version of the cosmological argument and what it is to be a necessary being.
1. Historical Overview
Although in Western philosophy the earliest formulation of a version of the cosmological argument is found in Plato’s Laws, 893–96, the classical argument is firmly rooted in Aristotle’s Physics (VIII, 4–6) and Metaphysics (XII, 1–6). Islamic philosophy enriches the tradition, developing two types of arguments. Arabic philosophers (falasifa), such as Ibn Sina (c. 980–1037), developed the argument from contingency, which was taken up by Thomas Aquinas (1225–74) in his Summa Theologica (I,q.2,a.3) and in his Summa Contra Gentiles (I, 13). Influenced by John Philoponus (6th c) (Davidson 1969), the mutakallimūn—theologians who used reason and argumentation to support their revealed Islamic beliefs—developed the temporal version of the argument from the impossibility of an infinite regress, now referred to as the kalām cosmological argument. For example, al-Ghāzāli (1058–1111) argued that everything that begins to exist requires a cause of its beginning. The world is composed of temporal phenomena preceded by other temporally-ordered phenomena. Since such a series of temporal phenomena cannot continue to infinity because an actual infinite is impossible, the world must have had a beginning and a cause of its existence, termed Allah or God (Craig 1979: part 1). This version of the cosmological argument entered the medieval Christian tradition through Bonaventure (1221–74) in his Sentences (II Sent. D.1,p.1,a.1,q.2).
Enlightenment thinkers, such as Gottfried Wilhelm Leibniz and Samuel Clarke, reaffirmed the cosmological argument. Leibniz (1646–1716) appealed to a strengthened principle of sufficient reason, according to which “no fact can be real or existing and no statement true without a sufficient reason for its being so and not otherwise” (Monadology, §32). Leibniz uses the principle to argue that the sufficient reason for the “series of things comprehended in the universe of creatures” (§36) must exist outside this series of contingencies and is found in a necessary being that we call God. Samuel Clarke likewise employed the principle of sufficient reason in his cosmological argument (Rowe 1975: chap. 2).
The cosmological argument came under serious assault in the 18th century, first by David Hume and then by Immanuel Kant. Hume (1748) attacked both the view of causation presupposed in the argument (that causation is an objective, productive, necessary power relation that holds between two things) and the Causal Principle—every contingent being has a cause of its existence—that lies at the heart of the argument. Kant contended that the cosmological argument, in identifying the necessary being, relies on the ontological argument, which in turn is suspect. We will return to these criticisms below.
More recently, Michael Almeida constructed a new version of the argument based on modal realism. In short, in contrast to the first half of the last century, contemporary philosophers contribute increasingly detailed, complex, and sophisticated arguments on both sides of the debate.
2. Typology of Cosmological Arguments
3. Complexity of the Question
It is said that philosophy begins in wonder. Thus it was for the ancients, who wondered what constituted the basic stuff of the world (κόσμος) around them, how this basic stuff changed into the diverse forms they experienced, and how it came to be. These origination questions related to the puzzle of existence that, in its metaphysical dimensions, is the subject of our concern.
Rutten (2012, 13–15) develops an a priori reductio ad absurdum argument for the impossibility of there being nothing. Suppose nothing exists. If nothing exists, then no actual states of affairs exist, and if no actual states of affairs exist, no merely possible states of affairs exist, since there is nothing to actualize or bring them about. Hence, there are no possible states of affairs, since to be possible, something must either be actual or merely possible. However, one can conceive of a possible world with at least one actual and hence possible state of affairs S, for example, a world with one atom. But if S is possible, then by S5, necessarily, S is possible, that is, S is possible in all possible worlds. However, this contradicts the earlier conclusion that if total nothingness is metaphysically possible, there are no possible states of affairs in that possible world. Hence, the reductio succeeds against the original thesis that there could be nothing. One might counter this reductio by contending that the argument trades on a confusion between metaphysical necessity, as evidenced by appeal to an Aristotelian principle regarding the relationship between actuality and possibility, and logical necessity, which in invoking S5 addresses logical possibility across possible worlds (for this distinction, see Burgess 1999, 81).
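The modal core of Rutten’s reductio can be set out schematically in S5 (a compressed reconstruction for illustration, not Rutten’s own notation; \(N\) abbreviates “total nothingness obtains” and \(S\) the sample state of affairs of a one-atom world):

\[
\begin{aligned}
1.&\ N && \text{assumption for reductio}\\
2.&\ N \rightarrow \neg\exists s\,\Diamond s && \text{no actual states of affairs, hence none possible (Aristotelian link)}\\
3.&\ \Diamond S && \text{a one-atom world is conceivable}\\
4.&\ \Diamond S \rightarrow \Box\Diamond S && \text{characteristic S5 axiom}\\
5.&\ \Box\Diamond S && \text{from 3 and 4: } S \text{ is possible in every world, including the } N\text{-world, contradicting 1 and 2}
\end{aligned}
\]

On this rendering the counter-objection targets the join between lines 2 and 4: line 2 trades on a metaphysical modal principle, while line 4 is a thesis of logical modality, so the derivation goes through only if the two modalities are identified.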
Second, why are there these particular contingent beings? The starting point here is the existence of particular things, and the question posed asks for an explanation for there being these particular things. If we are looking for a causal explanation and accept a full explanation (in terms of contemporary or immediately prior causal conditions and the relevant natural laws that together necessitate the effect), the answer emerges from an analysis of the relevant immediate causal conditions present in each case. Hume argues that an explanation in terms of immediately conjoined factors is satisfactory.
Heil suggests that the answer depends on how one understands the Big Bang (2013: 178). If it was spontaneous, the question has no answer. If not spontaneous, there might be an answer. Theists broaden the explanatory search to include final causes or intentions appropriate to a personal cause. It leads us to ask the question, “Supposing that God exists, why did God bring about contingent beings?” This assumes that God exists and now inquires about the reasons for creation. On the one hand, we might argue that this question is unanswerable in that only God would know his reasons for bringing the universe into existence (O’Connor 2008). On the other hand, God acts out of his nature; Swinburne (2004: 47, 114–23) emphasizes God’s goodness, from which we can infer possible reasons for what God brings about (although at this point the problem of evil has bite). God also acts from his intentions (Swinburne 1993: 139–45; 2007: 83–84), so that God could reveal his purposes for his act of creating (Richard Swinburne, The Evolution of the Soul: 309).
Fourth, if the universe has a beginning, what is the cause of that beginning? This is the question that is addressed by the kalām cosmological argument, given its central premise that everything that begins to exist has a cause. Many, however, deny the antecedent in the conditional, that the universe had a beginning.
Fifth and fundamentally, why are there contingent beings? This may be asked about particular finite beings and, if the universe is contingent, the universe. Several responses have been given. One is that particular things exist because of their causes, and their causes because of their causes, and so on. Had those causes not existed, the effect in question would not exist. If one speaks about the universe, then either it exists because it is caused (e.g., brought about by the intentional act of a supernatural being) or it is inexplicable (the universe just exists; its existence is a brute fact; it has always existed, though perhaps through many phases). This is the question that traditional cosmological arguments address.
4. Argument for a Non-contingent Cause
4.1 A Deductive Argument from Contingency
1. A contingent being (a being such that if it exists, it could have not-existed) exists.
2. All contingent beings have a sufficient cause of or fully adequate explanation for their existence.
3. The sufficient cause of or fully adequate explanation for the existence of contingent beings is something other than the contingent being itself.
4. The sufficient cause of or fully adequate explanation for the existence of contingent beings must either be solely other contingent beings or include a non-contingent (necessary) being.
5. Contingent beings alone cannot provide a sufficient cause of or fully adequate explanation for the existence of contingent beings.
6. Therefore, what sufficiently causes or fully adequately explains the existence of contingent beings must include a non-contingent (necessary) being.
8. The universe, which is composed of only contingent beings, is contingent.
9. Therefore, the necessary being is something other than the universe. (For a Thomistic version of this argument, see Siniscalchi 2018: 690–93).
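Read collectively, the premises admit a compact first-order rendering (a schematic reconstruction for illustration, not drawn from the cited authors; \(C(x)\) abbreviates “\(x\) is contingent”, \(E(y,x)\) abbreviates “\(y\) sufficiently causes or fully adequately explains the existence of \(x\)”, and \(c\) names the universe as the totality of contingent beings):

\[
\begin{aligned}
&C(c) && \text{premises 1 and 8: the totality of contingent beings is itself contingent}\\
&\exists y\, E(y,c) && \text{premise 2: every contingent being has a sufficient cause or explanation}\\
&\forall y\,(E(y,c) \rightarrow y \neq c) && \text{premise 3: nothing causes or explains its own existence}\\
&\forall y\,(C(y) \rightarrow \neg E(y,c)) && \text{premise 5: contingent beings alone cannot do the explaining}\\
&\therefore\ \exists y\,(\neg C(y) \wedge E(y,c)) && \text{conclusions 6 and 9: a necessary being distinct from the universe}
\end{aligned}
\]

So rendered, the inference is valid, and the philosophical weight falls on the premises themselves, especially 2 and 5.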
Over the centuries philosophers have suggested various instantiations for the contingent being noted in premise 1. In his Summa Theologica (I,q.2,a.3), Aquinas argued that we need a causal explanation for things in motion, things that are caused, and contingent beings.[1] Others, such as Richard Swinburne (2004), propose that the contingent being referred to in premise 1 is the universe. The connection between the two is supplied by John Duns Scotus, who argued that even if the essentially ordered causes were infinite, “the whole series of effects would be dependent upon some prior cause” (Scotus [c. 1300] 1964: I,D.2,p.1,q.1,§53). Richard Gale (1999) calls this the “Big Conjunctive Contingent Fact”. Whereas the contingency of particular existents is generally undisputed, not least because of our mortality, the contingency of the universe deserves serious defense (see section 4.2). Premise 2 invokes a moderate version of the Principle of Causation, according to which there must be a sufficient cause for any contingent being or event, or of the Principle of Sufficient Reason, according to which there must be a fully adequate explanation for any contingent being or event. Applied here, the Principles of Causation or of Sufficient Reason lead to the contention that if something is contingent, there must be a sufficient cause of its existence or a fully adequate reason or explanation why it exists rather than not. The point of premise 3 is simply that something cannot cause or explain its own existence, for this would require it to already exist (in a logical if not in a temporal sense). Premise 4 is true by virtue of the Principle of Excluded Middle: what explains the existence of the contingent being either is solely other contingent beings or includes a non-contingent (necessary) being. Conclusions 6 and 7 follow validly from the respective premises.
For many critics, premise 5 (along with premise 2) holds the key to the argument’s success or failure. The truth of 5 depends upon the requirements for an adequate explanation. Using the Principle of Sufficient Reason (PSR), what is required here is an account in terms of sufficient conditions that provides an adequate explanation why the cause had the effect it did, or alternatively, why this particular effect and not another arose. Swinburne (2004: 75–79), and Alexander Pruss (2006: 16–18) after him, note diverse kinds of explanations. In a full explanation the causal factors—in scientific causation, contemporary or immediately precedent causal conditions and natural laws; in personal causation, persons and their intentions—are sufficient for the occurrence of an event. They “together necessitate the occurrence of the effect” (Swinburne 2004: 76).
One worry with understanding the PSR in this way is that it may lead to a deterministic account that not only may bode ill for the success of the argument but on a libertarian account may be incompatible with the contention that God created freely. Pruss, however, envisions no such difficulty, for giving reasons neither makes the event deterministic nor removes freedom.
What gives sufficiency to explanation is that mystery is taken away, for example, through the citing of relevant reasons, not that probability is increased.... Once we have said that \(x\) freely chose \(A\) for \(R\), then the only thing left that is unexplained is why \(x\) existed and was both free and attracted by \(R\). (Pruss 2006: 157,158)
One might reply that an explanation needs to be given for why \(x\) was attracted to \(R_1\) rather than to \(R_2\), and that if that explanation is given, \(x\)’s choice is not free but determined by the degree to which \(x\) is attracted to different reasons. However, Pruss might reply that being “attracted by” is not to be understood in any deterministic sense. One might freely consider an option to be the best without being necessitated to choose it. The debate hinges on how one understands how reasons function in human agency.
Whether 8 and 9 are an intrinsic part of the cosmological argument is debated. Kant argued that the argument had two parts: the first establishing the existence of an absolutely necessary being, the second identifying this being as the most real being (1787, B633–40). Without the second part, the concept of a necessary being was empty. The issue achieves significance when the question arises whether the argument has religious significance, that is, whether the necessary being to which the argument concludes is God. Some contend that from the concept of a necessary being flow properties appropriate to a divine being (Siniscalchi 2018, 693). Timothy O’Connor (2004) argues that being a necessary being cannot be a derivative emergent property, otherwise the being would be contingent. Likewise, the connection between the essential properties must be necessary. Hence, the universe cannot be the necessary being since it is mereologically complex. Similarly, the myriad elementary particles cannot be necessary beings either, for their distinguishing distributions are externally caused and hence contingent. Rather, he contends that a more viable account of the necessary being is as a purposive agent with desires, intentions, and beliefs, whose activity is guided but not determined by its goals, a view consistent with identifying the necessary being as God. Koons, like Craig and Sinclair (2009: 192–94), is also willing to identify the necessary being as God, constructing corollaries regarding God’s nature that follow from his construction of the cosmological argument. Oppy (1999, 381–84), on the other hand, in critiquing Koons’s presentation “of seven corollaries to his (cosmological) proof which are intended to establish that the First Cause has at least some of the attributes which are traditionally attributed to God,” expresses significant skepticism about Koons’s arguments and the possibility of such a deductive move to determine its properties.
4.2 Objection 1: The Universe Just Is
Swinburne replies that
We do not need to experience every possible referent of the class of contingent things to be able to conclude that a contingent thing needs a cause. “To know that a rubber ball dropped on a Tuesday in Waggener Hall by a redheaded tuba player will fall to the ground”, I do not need a sample that includes tuba players dropping rubber balls at this location (Koons 1997: 202).
Morriston (2002a: 235) responds that although it is true that we do not need to experience every instance to derive a general principle, the universe is a very different thing from what we experientially reference when we say that things cannot come into existence without a cause. Tuba players are not “anything remotely analogous to the ‘initial singularity’ that figures in the Big Bang theory of the origin of the universe”.
Defenders of the argument respond that there is a key similarity between the cosmos and its content, namely, both are contingent. However, why should we think that the cosmos is contingent? Defenders of the view contend that if the components of the universe are contingent, the universe itself is contingent. Russell replies that the move from the contingency of the components of the universe to the contingency of the universe commits the Fallacy of Composition, which mistakenly concludes that since the parts have a certain property, the whole likewise has that property. Hence, whereas we legitimately can ask for the cause of particular things, to require a cause of the universe based on the contingency of its parts is mistaken.
It is worth noting that on the one hand, “universe” can refer to what is spatio-temporally connected to us. On the other hand, “universe” can refer to the totality of contingent beings (Oppy 1999: 384). This argument for the contingency of the universe from its component, contingent parts coalesces these two understandings in the cosmological argument.
Whether this argument for the contingency of the universe is similar to that advanced by Aquinas in his Third Way depends on how one interprets Aquinas’s argument. Aquinas holds that “if everything can not-be, then at one time there was nothing in existence” (ST I,q.2,a.3). William Rowe (1975: 160–67) argues that what looks like a similar argument in Samuel Clarke for the contingency of the universe is fallacious, for even if every contingent being were to fail to exist in some possible world, it may be the case that there is no possible world that lacks a contingent being (on Aquinas, see Plantinga 1967: 5–6; Kenny 1969: 56–66). That is, although no being would exist in every possible world, every possible world could possess at least one contingent being. In such a case, although each being is contingent, something must exist. Rowe gives the example of a horse race.
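Rowe’s point is an instance of a modal quantifier-shift fallacy: from the claim that each contingent being possibly fails to exist, it does not follow that possibly everything fails to exist together. Schematically (notation mine, with \(Ex\) for “\(x\) exists”):

\[
\forall x\,(C(x) \rightarrow \Diamond\neg Ex) \;\not\models\; \Diamond\,\forall x\,(C(x) \rightarrow \neg Ex)
\]

The horse-race analogy is standardly glossed the same way: any given horse may lose, but it does not follow that possibly every horse loses, since some horse must win.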
4.3 Objection 2: Explaining the Individual Constituents Is Sufficient
4.4 Objection 3: The Principles of Causation and Sufficient Reason Are Suspect
Second, some suggest a pragmatic-type argument to show that the Causal and Sufficient Reason principles are true, namely, that the principles are necessary to make the universe intelligible. Without such principles, Pruss argues, science itself would be undercut. “Claiming to be a brute fact should be a last resort. It would undercut the practice of science” (Pruss 2006: 255). Utilization of the principles best accounts for the success of science, indeed, for any investigatory endeavor (Koons 1997; see also Koons 2008: 111–12, where he argues that it is “a subjectively required presumption needed for immunity to internal defeaters”). The best explanation of the success of science and other such rational endeavors is that the principles are really indicative of how reality operates.
Critics reply that the principles then only have methodological or practical and not ontological justification. As John Mackie argues, we have no right to assume that the universe complies with our intellectual preferences for causal order. We can simply work with brute facts; beginning with them, science would work just as well.
The problem with the claim of self-evidence is that it is a conversation ender, not a starter. One who denies its self-evidence might think that those who hold to the principle are the ones who experience conceptual blindness. In contrast to analyticity, self-evidence holds in relation to the knowers themselves, and here intuitions vary, perhaps according to philosophical or other types of perspectives. Furthermore, if the principle truly is self-evident, it would be strange to respond to skeptics by attempting to give reasons to support that contention, and were such demanded, the request would itself invoke the very principle in question.
However, as Pruss notes (2006: chaps. 6 & 7), “The word sufficient can be read in two different ways: the reason given can be logically sufficient for the explanandum, or it can sufficiently explain the explanandum” (2006: 103). According to Pruss, we need not hold to the strong claim of logical sufficiency about the relation between explaining and entailment in cases where the explanation is brought about by libertarian free agency. Although God is a necessary being, his connection with the world is through his free agency, and free actions explain but do not entail the existence of particular contingent states.
Clearly, the soundness of the deductive version of the cosmological argument hinges on whether principles like that of Causation or Sufficient Reason are more than methodologically true and on the extent to which these principles can be applied to things, events, and facts. Critics of the argument will be skeptical regarding the universal application of the principles; defenders of the argument generally not so, at least as limited to contingencies. Perhaps the best one can say, with Taylor, is that even those who critique the PSR (understood broadly as the claim that every contingent thing, event, or fact must have a sufficient cause, reason, or ground) invoke it when they suggest that defenders of the principle have failed to provide a sufficient reason for thinking it is true.
It might be denied that the request for a sufficient reason for the truth of principle itself invokes the PSR on the grounds that “reason” has two senses, explanation and evidence. However, in requesting and giving a sufficient reason for the truth of the principle, the explanation of why it is true should provide sufficient evidence to support the claim. (See the introduction to the entry Principle of Sufficient Reason).
Finally, critics have argued that an argument for the application of the Causal Principle to the universe cannot be drawn from inductive experience. Even if the Causal Principle applies to events in the world, we cannot extrapolate from the way the world works to the world as a whole (Mackie 1982: 85). The type of causation we experience in the empirical world is different from the kind of causation proposed to hold between a necessary being and the cosmos (Kant 1787: B638).
4.5 Objection 4: Problems with the Concept of a Necessary Being
Kant argued that the cosmological argument introduced an empirical premise to evade the difficulties of the ontological argument. Although in the ontological argument the perfect being is allegedly determined to exist through its own concept, in fact nothing can be determined to exist in this manner; one has to begin with existence (see entry on Ontological Arguments). The cosmological argument, on the other hand, proceeds from an empirical premise about my existence to the existence of an unconditioned, absolutely necessary being, a being whose nonexistence is “impossible”, “absolutely inconceivable” (1787: B621). This concept has the same status as geometrical concepts, which though necessary do not establish the existence of anything corresponding to the concept. So, when we think of an absolutely necessary being, are we thinking about anything at all? What this absolutely necessary being is, what properties it has, can be determined not through experience but only through reason, that is, from a priori concepts alone. Since the only concept that suffices to determine its properties is that of a most real being, the concept of an absolutely necessary being presupposes that concept. However, that the most real being necessarily exists is the burden of the ontological argument. Hence, the CA depends on the ontological argument to determine the absolutely necessary being. But since the ontological argument is defective for the above and other reasons, the cosmological argument that depends on or invokes it likewise must be defective (1787: B634; for an alternative interpretation of Kant’s argument see Proops 2014).
Mackie replies that if God has mere metaphysical or factual necessity, God’s existence is logically contingent, such that some reason is required for God’s own existence (Mackie 1982: 84). As Swinburne notes, God is a logically contingent being, and so could have not-existed (2004: 79, 148). Why, then, does God exist? The PSR can be applied to the necessary being.
5. Argument from a Strong Principle of Sufficient Reason
Michael Almeida (2018) builds on the critical arguments of van Inwagen and others regarding the PSR. He contends that the version of the PSR used by defenders of the cosmological argument is inadequate because it fails to provide the best explanation for the universe. The best explanation, and hence the one required of a sound cosmological argument, is an absolute explanation, where everything is explained completely. There are no brute or contingent facts. He notes that in constructing their respective cosmological arguments, Pruss and Swinburne reject absolute explanation for complete explanations, where the effect is explained fully by the cause operating at a given time but where no explanation of the cause at the time of the occurrence is required. According to him, traditional defenders of the cosmological argument cannot invoke the requirement of an absolute explanation because if they did, given their metaphysic of actualist realism, they would incur a host of problems. Since all is determined on an absolute explanation, they would face the problems of the impossibility of libertarian free will, of indeterministic quantum effects, of modal imagination about lawless worlds where things pop into existence, and the collapse of modal distinctions. These problems, he says, arise not from an absolutist PSR per se but from its conjunction with actualist realism (only the actual is real).
The way around this, he contends, if one is going to defend the cosmological argument, is to opt for a different ontology, namely, genuine modal realism (mere possibilities are also real), which he claims not only can legitimize the cosmological argument but avoids the above problems. According to Almeida, modal realism makes libertarian free will compatible with necessitarianism in that two possible worlds can have the same history H up to time t, but at t, A occurs in one world and not in another world. The two histories do not determine whether A or -A occurs, but all possibilities necessarily occur. To make this work Almeida fudges on the principle of the identity of indiscernibles. Although the two series H and H* up to t are identical, there is not one series H that forks at t. Rather, there are two series, such that at t A can occur in one series and -A can occur in another. The past does not necessitate the future. Similarly, lawless or chaotic worlds, i.e., worlds lacking relations following a causal principle, are possible, so that it is possible and hence necessary that causeless events occur. In such a world the cosmological argument would still hold, he claims, because the principle of sufficient reason, compatible with the falsity of the causal principle, still holds. This analysis, he thinks, frees the defender of the cosmological argument from problems that trouble traditional formulations.
We cannot digress here into modal realism (for discussion of possible worlds, see entries on Possible Worlds and David Lewis: Modal Metaphysics), but turn specifically to Almeida’s cosmological argument. He argues that whereas cosmological arguments in the past commenced with an initial premise that was taken to express a contingent fact known a posteriori, “facts about change, causation, contingency, and objective…becoming are not usefully characterized as a posteriori facts” (2018: 3). He advances a cosmological argument with what he takes to be an a priori fact: the pluriverse and everything in it, including all actualia and all possibilia, exist necessarily. This, he claims, is knowable a priori and according to the PSR requires an absolute explanation. Part of his novel approach is his contention that every proposition in the argument expresses a necessary fact known a priori, and that a priori propositions also require an explanation.
Since Almeida does not advance a detailed version of the cosmological argument, we might attempt to reconstruct his view.
1. “Possible worlds are composite concrete objects… [that] necessarily coexist” (2018: 75). This is a central contention of his Lewisian modal realism (75).
2. The pluriverse exists as “the collection of all possible worlds” (7, 75).
3. Everything that exists has an absolute explanation for its existence. (Strong PSR)
4. “Therefore, there is an absolute explanation for the pluriverse” (75).
5. An absolute explanation is possible “only if there are no contingent facts,” that is, only if everything exists necessarily (78–79).
6. Therefore, there are no brute or contingent facts.
7. This absolute explanation is found in the fact that God necessarily exists (75, 82).
The pluriverse is the necessary, creative manifestation of the necessarily existing God (5).
Although contingent propositions cannot follow from necessary propositions, necessary propositions can. That is, from God’s necessary existence we can conclude that the pluriverse necessarily exists. This avoids the van Inwagen objection to the PSR as employed in the cosmological argument. Almeida holds that it also avoids the other problems associated with the cosmological argument in that it allows for contingency within absolute explanation. He contends that contingency is protected by lowering the standards of similarity between worlds; that is, contingency is possible where we do not require exact identity between things held to exist in different worlds. He gives the example of his speaking Finnish, something he cannot do in the actual world. If someone who is identical to Almeida exists in another world, metaphysically he must have identical properties. However, it makes sense to say that in another possible world Almeida could speak Finnish and still be Almeida. We lower the standards of similarity in our everyday consideration of existence in alternate worlds to allow for such possibilities and hence for the contingency of his not speaking Finnish in the actual world.
Several objections might be raised against this version of the cosmological argument. Perhaps most basic is the question why one would accept modal realism. It is, as Almeida and others note, “ontologically extravagant”. Second, whereas necessity characterizes the metaphysical world, for Almeida contingency appears to be a subjective, epistemic contribution. That is, metaphysically, everything necessarily is what it is, has all its properties essentially, and is not something else. Epistemically, we can lower the standards of similarity, so that two things with somewhat differing essential properties can be similar (named the same), although strictly or metaphysically speaking, they are not the same. Similarity is an epistemically expansive concept to allow for contingency, but it does not allow for metaphysical contingency. (Conversely, as noted above, two things can have identical properties and yet not be identical.) Third, he contends that there are no brute facts on his theory. However, if there must be an absolute explanation for everything, what is the explanation for God’s existence? He gives God as an absolute explanation for the necessary existence of the pluriverse, but no absolute sufficient reason for God’s existence. He might reply that God’s existence is explained by being metaphysically necessary. However, if this explains God’s existence, since every component of the pluriverse and the pluriverse itself necessarily exist, why could not their metaphysical necessity be a sufficient reason or absolute explanation for their existence? Could they, like God, simply be necessary?
6. Argument from a Weak Principle of Sufficient Reason
Several objections have been raised about the argument from the weak principle of sufficient reason. Almeida and Judisch (2002) construct their objection via two reductio arguments. They note that, according to Gale’s argument, \(q\) is a contingent proposition in the actual world that reports the free, intentional action of a necessary being. As such, since the actual world contains the contingent proposition \(q\), non-\(q\) is possible. That is, there is a possible world \(W_{2}\) that contains \(p\), non-\(q\), and the proposition that \(q\) does not explain \(p\). However, by Gale’s own reasoning, \(W_{2}\) is identical to the actual world. But the actual world cannot contain both \(q\) and non-\(q\). Thus, \(q\) cannot be a contingent proposition.
Graham Oppy (2000) argues along similar lines: suppose \(p_1\) is the BCF of some possible world, and \(p_1\) has no explanation. Then, given \(r\) (namely, that \(p_1\) has no explanation), there is a conjunctive fact \(p_1\) and \(r\). Since by hypothesis the conjunctive fact \(p_1\) and \(r\) is true in some world, on Gale’s account it is true in the actual world. Then by the weak PSR there is a world in which this conjunction of \(p_1\) and \(r\) possibly has an explanation. If there is an explanation for the conjunction of \(p_1\) and \(r\), there is an explanation for \(p_1\). Thus, we have the contradiction that \(p_1\) both has and does not have an explanation, which is absurd. Hence, no world exists where the BCF lacks an explanation, which is the strong principle of sufficient reason that Gale allegedly circumvented. Since accepting the weak PSR would commit the nontheist to the strong PSR and ultimately to a necessary being, the nontheist has no motivation to accept the weak PSR.
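Oppy’s reductio can be set out semi-formally (the symbolization below is our gloss, not Oppy’s own notation), writing \(E(x)\) for “\(x\) has an explanation” and \(\Diamond\) for possibility:

\[
\begin{aligned}
&\text{(i)}\quad \Diamond(p_1 \wedge r), \text{ where } r \text{ says that } p_1 \text{ has no explanation} \\
&\text{(ii)}\quad \Diamond\, E(p_1 \wedge r) \quad \text{[weak PSR applied to (i)]} \\
&\text{(iii)}\quad E(p_1 \wedge r) \rightarrow E(p_1) \quad \text{[explaining a conjunction explains its conjuncts]} \\
&\text{(iv)}\quad \Diamond\bigl(E(p_1) \wedge \neg E(p_1)\bigr) \quad \text{[from (ii), (iii), and what } r \text{ says]}
\end{aligned}
\]

Since (iv) is impossible, hypothesis (i) must be rejected: no possible world contains an unexplained BCF, which is just the strong PSR.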
Jerome Gellman has argued that the Gale/Pruss conclusion to a being that is not necessarily omnipotent likewise fails: this being is essentially omnipotent and, if omnipotence entails omniscience, essentially omniscient. This too Gale and Pruss concede, which means that the necessary being they conclude to is not significantly different from the one arrived at by the traditional cosmological argument, which appeals to the moderate version of the PSR (that contingent beings need a sufficient reason or explanation for their existence).
7. The Kalām Cosmological Argument
A second type of cosmological argument, contending for a first or beginning cause of the universe, has a venerable history, especially in the Islamic mutakallimūn tradition. Although it had numerous defenders through the centuries, it received new life in the recent voluminous writings of William Lane Craig. Craig formulates the kalām cosmological argument this way (Craig and Sinclair 2009; Craig and Smith 1993: chap. 1).
1. Everything that begins to exist has a cause of its existence.
2. The universe began to exist.
3. Therefore, the universe has a cause of its existence.
4. No scientific explanation (in terms of physical laws and initial conditions of the universe) can provide a causal account of the origin (very beginning) of the universe, since such are part of the universe.
5. Therefore, the cause must be personal (explanation is given in terms of a non-natural, personal agent).[2]
The kalām argument has been the subject of much recent debate, only some of which can be summarized here. (For greater bibliographic detail, see Craig and Sinclair 2009 and Copan and Craig, eds. 2017 & 2019.)
7.1 The Causal Principle and Quantum Physics
To the objection that quantum physics shows that subatomic events occur without causes, Craig responds, for one thing, that

not all physicists agree that subatomic events are uncaused…. Indeed, most of the available interpretations of the mathematical formulation of [Quantum Mechanics] are fully deterministic. (Craig and Sinclair 2009: 183. Jean Bricmont 2017, chap. 5, argues that Bohm’s causally deterministic interpretation of quantum phenomena is superior to the nondeterministic interpretation.)
For another, Craig argues, a difference exists between predictability and causality. It is true that, given Heisenberg’s principle of uncertainty, we cannot precisely predict individual subatomic events. What is debated is whether this inability to predict is due to the absence of sufficient causal conditions, or whether it is merely a result of the fact that any attempt to precisely measure these events alters their status. The very introduction of the observer into the arena so affects what is observed that it gives the appearance that effects occur without sufficient or determining causes. However, we have no way of knowing what is happening without introducing observers into the situation and the changes they bring. For example, we simply are unable to discern the intermediate states of an electron’s existence apart from introducing conditions of observation. When Heisenberg’s indeterminacy is understood as describing not simply the events themselves but these events relative to our knowledge of them, the Causal Principle still holds and can still be applied to the initial singularity, although we cannot expect to achieve any kind of determinate predictability about what occurs in particular cases on the sub-atomic level given the cause.
Supporting the Causal Principle, Andrew Loke (2017: chapter 5) offers a Modus Tollens argument that he thinks is immune to the criticisms in 4.4 and that responds to the suggestion that only the initial state of reality began to exist uncaused (Oppy 2015). Loke argues that (a) if x begins to exist without a causally antecedent condition, then other kinds of things that can begin to exist can do so without a causally antecedent condition, because (b) there would be no causally antecedent condition that would make it the case that only x (rather than these other kinds of things) begins to exist, and (c) the properties of x and the properties of other kinds of things that differentiate between them would be had by them only when they had already begun to exist. (b) and (c) jointly imply that there would be no essential difference between x and other kinds of things where beginning to exist uncaused is concerned. However, (d) it is not the case that other kinds of things that can begin to exist would also begin to exist without a causally antecedent condition. Therefore, (e) it is not the case that x begins to exist without a causally antecedent condition. For the critic, the critical question concerns the grounds on which (d) is true (see the discussion of Quantum Physics above).
7.2 Impossibility of an Actual Infinite
6. An actual infinite cannot exist.
7. A beginningless temporal series of events is an actual infinite.
8. Therefore, a beginningless temporal series of events cannot exist.
Since conclusion 8 follows validly, if premises 6 and 7 are true the argument is sound. In defense of premise 6, he defines an actual infinite as a determinate totality that occurs when a part of a system can be put into a one-to-one correspondence with the entire system (Craig and Sinclair 2009: 104). Craig argues that if actual infinites that neither increase nor decrease in the number of members they contain were to exist in reality, we would have rather absurd consequences. For example, imagine a library with an actually infinite number of books. Suppose that the library also contains an infinite number of red and an infinite number of black books, so that for every red book there is a black book, and vice versa. It follows that the library contains as many red books as the total books in its collection, and as many red books as black books, and as many red books as red and black books combined. However, this is absurd; in reality the subset cannot be equivalent to the entire set. Likewise, in a real library by removing a certain number of books we reduce the overall collection. However, if infinites are actual, a library with an infinite number of books would not be reduced in size at all by removal of a specific number of books (short of all of them or all but a specific number), for example, all the red books or those with even catalogue numbers (Craig and Smith 1993: 11–16). The absurdities resulting from attempting to apply basic arithmetical operations, functional in the real world, to infinities suggest that although actual infinites can have an ideal existence, they cannot exist in reality.
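The arithmetic behind the library example can be made explicit (a standard summary of Cantorian cardinal arithmetic, added here for illustration; it is not Craig’s own formulation). Writing \(\aleph_0\) for the cardinality of the natural numbers:

\[
\aleph_0 + \aleph_0 = \aleph_0, \qquad \aleph_0 + n = \aleph_0, \qquad \aleph_0 - n = \aleph_0 \quad (n \text{ finite}),
\]

while \(\aleph_0 - \aleph_0\) is left undefined: removing infinitely many books can leave infinitely many (remove only the red ones), finitely many (remove all but three), or none at all, so subtraction of infinite cardinals has no unique answer. It is this inapplicability of the inverse arithmetical operations that the library example trades on.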
Critics fail to be convinced by these paradoxes of infinity. For example, Rundle (2004: 170) agrees with Craig that the concept of an actual infinite is paradoxical, but this, he argues, provides no grounds for thinking it is incoherent. The logical problems with the actual infinite are not problems of incoherence but arise from the features that are characteristic of infinite sets. When the intuitive notion of “smaller than” is replaced by a precise definition, finite sets and infinite sets just behave somewhat differently, that is all. Cantor and all subsequent set theorists define a set \(B\) to be smaller than set \(A\) (i.e., has fewer members) just in case \(B\) is the same size as a subset of \(A\), but \(A\) is not the same size as any subset of \(B\). The application of this definition to finite and infinite sets yields results that Craig finds counter-intuitive but which mathematicians see as our best understanding for comparing the size of sets. They see the fact that an infinite set can be put into one-to-one correspondence with one of its own proper subsets as one of the defining characteristics of an infinite set, not an absurdity. Say that set \(C\) is a proper subset of \(A\) just in case every element of \(C\) is an element of \(A\) while \(A\) has some element that is not an element of \(C\). In finite sets, but not necessarily in infinite sets, when set \(B\) is a proper subset of \(A\), \(B\) is smaller than \(A\). However, this does not necessarily hold for infinite sets—as above where \(B\) is the set of squares of natural numbers and \(A\) is the set of all natural numbers. The proper subset \(B\) might be “smaller” than \(A\). What is crucial is that \(B\) is not smaller in the sense of having a smaller number of members than \(A\) (i.e., a smaller cardinality).
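Cantor’s criterion can be illustrated concretely with the squares example (an illustrative sketch added here, not part of the source): the perfect squares form a proper subset of the natural numbers, yet the pairing \(n \mapsto n^2\) matches every natural number with exactly one square, so by the cardinality definition the squares are not “smaller”.

```python
from math import isqrt

def square(n: int) -> int:
    """The Cantorian pairing n -> n^2 from the naturals onto the perfect squares."""
    return n * n

# On any finite prefix of the naturals we can check the two defining facts:
prefix = range(100)
images = [square(n) for n in prefix]

# 1. The pairing is one-to-one: no two naturals share an image.
assert len(set(images)) == len(images)

# 2. Every image lies in the proper subset: each really is a perfect square.
assert all(isqrt(m) ** 2 == m for m in images)

# Conversely, every square k^2 is hit by exactly one natural (namely k),
# so the pairing is a bijection: the subset has the same cardinality as the whole.
```

The finite checks only exhibit the pattern; the mathematical point is that the pairing succeeds for every natural number, which is precisely what Cantor’s definition counts as sameness of size.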
Loke (2017: 55–61; see Craig and Sinclair, 2009: 105–6) replies to the above objections by arguing that what is mathematically possible is not always metaphysically possible. For example, the quadratic equation \(x^2=4\) can have two mathematically consistent results for \(x\): 2 or −2, but if the question is “how many people carried the box home”, the answer cannot be −2, for in the concrete world it is metaphysically impossible that −2 people carry a box home. Thus, the conclusion of 2 people rather than −2 people derives not from mathematical equations alone but also from metaphysical considerations. Loke proceeds to argue that concrete infinities violate metaphysically necessary truths concerning causal powers.
Craig is well aware of the fact that he is using actual and potential infinite in a way that differs from the traditional usage in Aristotle and Aquinas [Craig and Sinclair 2009: 115. For Aristotle, all the elements in an actual infinite exist simultaneously, whereas a potential infinite is realized over time by addition or division. Hence, the temporal series of events, as formed by successively adding new events, was a potential, not an actual, infinite (Aristotle, Physics, III, 6)]. For Craig, however, an actual infinite is a timeless totality that cannot be added to or reduced. “Since past events, as determinate parts of reality, are definite and distinct and can be numbered, they can be conceptually collected into a totality” (Craig, in Craig and Smith 1993: 25). The future, but not the past, is a potential infinite, for its events have not yet happened.
Turning to premise 7, why should one think that it is true that a beginningless series, such as the universe up to this point, is an actual rather than a potential infinite? For Craig, an actual infinite is a determinate totality or a completed unity, whereas the potential infinite is not. Since the past events of a beginningless series can be conceptually collected together and numbered, the series is a determinate totality (1979: 96–97). And since the past is beginningless, it has no starting point and is infinite. If the universe had a starting point, so that events were added to or subtracted from this point, we would have a potential infinite that increased through time by adding new members. The fact that the events do not occur simultaneously is irrelevant.
Bede Rundle rejects an actual infinite. His grounds for doing so (the symmetry of the past and the future), if sustained, make premise 7 false. He argues that the reasons often advanced for asymmetry, such as those given by Craig, are faulty. It is true that the past is not actual, but neither is the future. Likewise, that the past, having occurred, is unalterable is irrelevant, for neither is the future alterable. The only time that is real is the present.
For Rundle, the past and the future are symmetrical; it is only our knowledge of them that is asymmetrical. Any future event lies at a finite temporal distance from the present. Similarly, any past event lies at a finite temporal distance from the present. For each past or future event, beginning from the present, there can always be either a prior past event or a subsequent future event. Hence, for both series an infinity of events is possible, and, as symmetrical, the infinity of both series is the same. Since the series of future events is not an actual but a potential infinite (or, better, an “indefinitely extendible” series, 2004: 168; Craig and Sinclair 2012, 104–5), the series of past events is also indefinitely extendible. It follows that although the future is actually finite, it does not require an end to the universe, for there is always a possible subsequent event (2004: 180). Similarly, although any given past event of the universe is finitely distant in time from now, a beginning or initial event can be ruled out; for any given event there is a possible earlier event. However, since there is a possible prior or possible posterior event in any past or future series respectively, the universe, although finite in time, is temporally unbounded (indefinitely extendible); both beginning and cessation are ruled out. [How Rundle (2004: 176–78) gets from the possibility of a subsequent event to actually ruling out cessation and beginning is unclear.] Since there is no time when the material universe might not have existed, it is not contingent but necessary. Hence, although the principle of sufficient reason is still true, it applies only to the components of the material universe and not to the universe itself. No explanation of the universe is possible. 
The universe, as matter-energy, is neither caused nor destructible, not in the sense that it could have been caused or could cease, but in the sense that “the notions of beginning and ceasing to exist are inapplicable to the universe” (2004: 178).
However, one might wonder, are the past series and future series of events really symmetrical? It is true that one can start from the present and count either forward or backward in time. Rundle thinks that …\(x_5\), \(x_4\), \(x_3\), \(x_2\), \(x_1\), \(t_0\), \(y_1\), \(y_2\), \(y_3\), \(y_4\), \(y_5\)… are all on the same continuum, so that we cannot distinguish ontologically the time dimension of the future and past series. The two series, going into the past and into the future, would be the same in that however far we count from the present \(t_0\), the count remains finite although indefinitely extendible. However, is it true that, as he claims, with regard to the past, “any movement currently terminating can be redescribed as extending back”, that counting backward from the present is the same as counting from the past to the present (2004: 176)?
One cannot just reverse the temporal sequence of the past, for we do not ontologically engage the sequence from the present to the past. Rundle’s two movements are quite disparate, such that the two sequences—of the past and of the future—are not symmetrical, which leaves intact Craig’s claim that a beginningless past would result in an actual and not a potential infinite.
Morriston (2010) constructs an argument to show that, contrary to Craig, there is no relevant difference between a beginningless past and a determinate, endless future, such that if one is impossible because of absurdities so is the other, and if one is possible so is the other. He creates a fictional scenario in which God commands the angels Gabriel and Uriel to praise God alternately for all eternity.
If you ask, “How many distinct praises will be said?”, the only sensible answer is, “infinitely many.… Each of infinitely many distinct praises will be said, precisely because there will be no future time at which all have been said ”. (Morriston 2010: 443–44)
However, an actually infinite number of future events is not impossible; it can be envisioned and determined by God.
Morriston proceeds to note that puzzles or absurdities parallel to those Craig finds in the concept of an actual infinite of past events also occur in the infinite series of future events. Suppose that
God could instead have determined that Gabriel and Uriel will stop after praise number four. Infinitely many praises would be prevented, and the number of their future praises would be only four. Alternatively, God could have determined that Gabriel be silent during all the celestial minutes between Uriel’s future praises. In this case too, infinitely many praises would be prevented, but the number of future praises would instead be infinite. (Morriston 2010: 444)
Although this shows that an infinite future can have inconsistent implications, God could still bring it about that these angels utter distinct praises, one after another, ad infinitum. But then, Morriston concludes, since these inconsistent implications do not count against an actual infinity of future events, the puzzles Craig poses do not count against the possibility of an actual infinity of past events, i.e., a beginningless universe. If an infinite future is possible, as Craig concedes, so is an infinite past.
Morriston contends that Craig’s reply that in the one case the events have occurred and in the other they have not, and hence that the number of future praises is indefinite, is a distinction without a difference. God can determine that an infinite number of praises will be sung.
The non-existence of past events does not prevent us from asking how many have occurred. Nor should the non-existence of future events prevent us from asking how many will occur. In neither case will “indefinitely many” do as an answer. (2010: 449)
Craig’s defense is that Morriston has ignored the difference between a potential and an actual infinite. According to Craig, an actual infinite is a collection of definite and discrete members whose number is greater than any natural number, whereas a potential infinite is a collection that is increasing toward but never arriving at infinity as a limit (Craig 2010; Craig and Sinclair 2009: 116).
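Craig’s distinction can be pictured with a simple program (our illustration, not Craig’s): a potential infinite behaves like a generator that never terminates — at every stage only finitely many members have actually been produced — whereas an actual infinite would be the completed totality of all its outputs, given at once.

```python
from itertools import count, islice

def praises():
    """Yield the angels' praises one after another, without end."""
    for n in count(1):
        yield f"praise {n}"

# At any stage of the process, only finitely many praises have occurred:
first_ten = list(islice(praises(), 10))
assert len(first_ten) == 10  # a growing but always-finite collection

# The collection increases without limit, yet no stage contains infinitely
# many members -- a potential, not an actual, infinite.
```

The point of contention between Craig and Morriston is whether the full, never-completed output of such a process may nonetheless be counted as an actually infinite collection.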
Morriston objects to Craig’s definition of the potential infinite. For one thing, there is no limit to which the future praises grow. The collection of praises continues to grow as the praises are sung, but it does not approach a limit, for always one more praise can be sung. The series of future praises is actually infinite.
Craig responds that Morriston is really attacking his notion of a potential infinite by claiming that no relevant distinction exists between a potential and an actual infinite. But this, he says, rests on confusing an A-theory with a B-theory of time. An infinite directed toward the future would be actual only on a B-theory of time, but not on an A-theory (Craig 2010: 452–53). On an A-theory of time, a change of tense makes a difference. That something actually has happened differs significantly from what may (even if determined) happen.
Cohen (2015: 177) continues Morriston’s argument, insisting that Craig invokes an unmotivated principle that Cohen terms “The Actuality-Infinity Principle: In order for x to be actually infinite in quantity, x must be actual”. Cohen argues that this begs the question. However, Craig’s principle is different: In order for x to be actually infinite in quantity, x must be or have been actual or actualized (Craig 2010: 455–56). Cohen might respond, “Why not then say that for x to be actually infinite in quantity x must be, have been, or will be actual or actualized?” Cohen argues that Craig’s presentism does not assist him here, since neither past nor future events are present and hence do not exist. Craig thinks otherwise (Craig and Sinclair 2009: 126), tacitly defending the principle in that temporal becoming sees to it that what has not occurred or is not occurring but is future is merely potential, even if determined or foreseen by God.
7.3 Successive Addition Cannot Form an Actual Infinite
Craig’s second argument addresses this very point.
9. The temporal series of events is a collection formed by successive addition.
10. A collection formed by successive addition cannot be an actual infinite.
11. Therefore, the temporal series of events cannot be an actual infinite (Craig 1979: 103; Craig and Sinclair 2009: 117).
Morriston argues that premise 10 presupposes what is to be shown, namely, that there is a beginning point. He asks,
Why couldn’t there have been an infinite series of years in which there was no first year? It’s true that in such a series we never “arrive” at infinity, but surely that is only because infinity is, so to speak, “always already there”. At every point in such a series, infinitely many years have already passed by.... Each event in a beginningless series terminating in the present could have been “added” to the infinitely many prior events. (2003: 290)
Thus, we don’t need a starting point to form an actual infinite by successive addition. Infinity is already present in the series.
Craig responds that we cannot just add “arriving at the present” to an already existing infinite.
Before the present event could occur, the event immediately before it would have to occur; and before that event could occur, the event immediately before it would have to occur; and so on ad infinitum. One gets driven back into the past, making it impossible for any event to occur. Thus, if the series of events were beginningless, the present could not have occurred, which is absurd. (Craig and Sinclair 2009: 118)
In other words, why this moment rather than another?
Finally, it is objected that Craig’s argument presupposes an \(A\) view of time, where time flows from past to present to future and not all events tenselessly coexist. It seems that Craig’s argument cannot be sustained if time is understood in the \(B\) sense, where all members of the series tenselessly coexist, being equally real (Grünbaum 1994). On a \(B\) view of time there is no beginning, and it would seem that on this view the argument would collapse.
7.4 The Big Bang Theory of Cosmic Origins
One picture, then, is of the universe beginning in a singular, non-temporal event roughly 13–14 billion years ago. Something, perhaps a quantum vacuum, came into existence. Its tremendous energy caused it, in the first fractions of a second, to expand or inflate and explode, creating the four-dimensional space-time universe that we experience today. How this all happened in the first \(10^{-35}\) seconds and subsequently is a matter of serious speculation and debate. What advocates of premise 2 maintain is that since the universe and all its material elements originate in the Big Bang, the universe is temporally finite and thus had a beginning. (For a detailed consideration of cosmogenic theories from the kalām perspective, see Craig and Sinclair 2009: 125–182; for the counter discussion see Grünbaum 1991). By itself, of course, this reasoning, even if accurate, leaves it the case that premise 2 and hence conclusion 3 are only probably true, dependent on accepted cosmogenic theories.
Several replies to this argument can be made. First, questions have been raised about the adequacy of the theory of inflation to explain the expansion of the universe. One problem is predictability, for on this view anything that can happen will happen, an infinite number of times (Steinhardt 2011: 42). Further, the argument presupposes that the General Theory of Relativity applies to the beginning of the universe, but some doubt that this is so, given that it cannot adequately account for the quantum gravity involved.
Second, some have suggested that since we cannot “exclude the possibility of a prior phase of existence” (Silk 2001: 63), it is possible that the universe has cycled through oscillations, perhaps infinitely, so that Big Bangs occurred not once but an infinite number of times in the past and will do so in the future. The current universe is a “reboot” of previous universes that have expanded and then contracted (Musser 2004).
Responding to these issues, recently proposed cosmologies based on string theory have given new life to a cyclic view. For example, Paul Steinhardt and Neil Turok have proposed a cyclic cosmological model where the universe repeatedly transitions from a big bang to a big crunch to a big bang, and so on. They contend that
the Universe is flat, rather than closed. The transition from expansion to contraction is caused by introducing negative potential energy, rather than spatial curvature. Furthermore, the cyclic behavior depends in an essential way on having a period of accelerated expansion after the radiation and matter-dominated phases. During the accelerated expansion phase, the Universe approaches a nearly vacuous state, restoring very nearly identical local conditions as existed in the previous cycle prior to the contraction phase. (Steinhardt and Turok 2002: 2)
Dark energy becomes a key player in all of this. On the kalām view, the amount of dark energy in the universe makes a return to its original state impossible. The universe is not cyclical but will die a cold death. On a cyclic view, dark energy accelerates the expansion of the universe needed “to dilute the entropy, black holes and other debris produced in the previous cycle so that the universe is returned to its original pristine vacuum state before it begins to contract, bounce, and begin a cycle anew” (Steinhardt and Turok 2001: 1436).
This specific cyclic theory has been challenged, and other cyclic cosmological theories have been proposed. What this shows is that any attempt to support the second premise of the kalām argument by accepting or refuting scientific cosmologies will encounter an ever changing scene, given the speculative nature of cosmology. Thus, while Craig and Sinclair (2009: 150–74) critically evaluate current contenders as not being viable, changes in and development of these theories and the inevitable development of others make for unending point-counterpoint.
7.5 The Big Bang Is Not an Event
One critical response to the kalām argument from the Big Bang is that, given the General Theory of Relativity, the Big Bang is not an event at all. An event takes place within a space-time context. However, the Big Bang has no space-time context; there is neither time prior to the Big Bang nor a space in which the Big Bang occurs. Hence, the Big Bang cannot be considered as a physical event occurring at a moment of time. As Hawking notes, the finite universe has no space-time boundaries and hence lacks singularity and a beginning (Hawking 1988: 116, 136). Time might be multi-dimensional or imaginary, in which case one asymptotically approaches a beginning singularity but never reaches it. And without a beginning the universe requires no cause. The best one can say is that the universe is finite with respect to the past, not that it was an event with a beginning (Rundle 2004: chap. 8).
2. The universe has a finite past.
4. The universe includes space-time.
5. Therefore, the cause of the universe transcends space-time (in the sense that it existed aspatially and, when there was no universe, atemporally).
6. If the cause of the universe’s existence transcends space-time, no scientific explanation can provide a causal account of the origin of the universe.
8. Therefore, a personal cause of the universe exists.
7.6 Personal Explanation
Finally, something needs to be said about premise 3 and conclusion 5, which asserts that the cause of the universe is personal. Defenders of the cosmological argument suggest two possible kinds of explanation.[3] Natural explanation is provided in terms of precedent events, causal laws, or necessary conditions that invoke natural existents. Personal explanation is given “in terms of the intentional action of a rational agent” (Swinburne 2004: 21; also Gale and Pruss 1999). We have seen that one cannot provide a natural causal explanation for the initial event, for there are no precedent natural events or natural existents to which the laws of physics apply. The line of scientific explanation runs out at the initial singularity, and perhaps even before we arrive at the initial singularity (at \(10^{-35}\) seconds). If no scientific explanation (in terms of physical laws) can provide a causal account of the origin of the universe (premise 4), the explanation must be personal, that is, in terms of the intentional action of an intelligent, supernatural agent.
Morriston (2000: 163–68) questions whether Craig’s argument for the cause being personal goes through. Craig argues that if the cause were an eternal, nonpersonal, operating set of conditions, then the universe would exist from eternity. Below freezing temperatures will always freeze whatever water is present. Since the universe has not existed from eternity, the cause must be a personal agent who chooses freely to create an effect in time. However, notes Morriston, if the personal cause intended from eternity to create the world, and if the intention alone to create is causally sufficient to bring about the effect, then the universe would also exist from eternity, and there would be no reason to prefer a personal cause of the universe over a nonpersonal cause. For a timeless eternal being before creation, which is Craig’s view, “There can be no temporal gap between the time at which it does the willing and the time at which the thing willed actually happens” (2000, 167). So the distinction in this respect between a personal and a nonpersonal eternal cause disappears. Craig (2002) replies that it is not intention alone that must be present, but the personal agent must also employ or exercise its personal causal power to bring about the world. However, Morriston retorts, exercising personal causal power is an action in time, a view that is unavailable to Craig, for there is no time when God would restrain his causal powers.
Paul Davies argues that one need not appeal to God to account for the Big Bang. Its cause, he suggests, is found within the cosmic system itself. Originally a vacuum lacking space-time dimensions, the universe “found itself in an excited vacuum state”, a “ferment of quantum activity, teeming with virtual particles and full of complex interactions” (Davies 1984: 191–92), which, subject to a cosmic repulsive force, resulted in an immense increase in energy. Subsequent explosions from this collapsing vacuum released the energy in this vacuum, reinvigorating the cosmic inflation and setting the scenario for the subsequent expansion of the universe. However, what is the origin of this increase in energy that eventually made the Big Bang possible? Davies’s response is that the law of conservation of energy (that the total quantity of energy in the universe remains fixed despite transfer from one form to another), which now applies to our universe, did not apply to the initial expansion. Cosmic repulsion in the vacuum caused the energy to increase from zero to an enormous amount. This great explosion released energy, from which all matter emerged. Consequently, he contends, since the conclusion of the kalām argument is false, one of the premises of the argument—in all likelihood the first—is false.
a sea of continually forming and dissolving particles that borrow energy from the vacuum for their brief existence. A quantum vacuum is thus far from nothing, and vacuum fluctuations do not constitute an exception to the principle that whatever begins to exist has a cause. (Craig, in Craig and Smith 1993: 143–44)
One might wonder, as Rundle (2004: 75–77) does, how a supernatural agent could bring about the universe. He contends that a personal agent (God) cannot be the cause because intentional agency needs a body and actions occur within space-time. However, acceptance of the cosmological argument does not depend on an explanation of the manner of causation by a necessary being. When we explain that the girl raised her hand because she wanted to ask a question, we can accept that she was the cause of the raised hand without understanding how her wanting to ask a question brought about her raising her hand. As Swinburne notes, an event is “fully explained when we have cited the agent, his intention that the event occur, and his basic powers” that include the ability to bring about events of that sort (2004: 36). Similarly, theists argue, we may never know why and how creation took place. Nevertheless, we may accept it as an explanation in the sense that we can say that God created that initial event, that he had the intention to do so, and that such an event lies within the power of an omniscient and almighty being; not having a body is irrelevant.
8. An Inductive Cosmological Argument
Swinburne is correct that if someone believes that a deductive cosmological argument (proof) for God’s existence is sound, then it would be incoherent for that same person to then deny that God exists. However, in their respective proofs defenders of the deductive cosmological arguments make a claim about incoherence, namely, that it would be contradictory for the same person to affirm the premises of the argument and to claim that God or a personal necessary being does not exist. And they believe both that the respective premises have the intuitiveness that Swinburne deems necessary and that the argument has not committed some “elementary error of logic”.
Swinburne begins his discussion with the existence of a physical universe that (a) contains odd events that cannot be fitted into the established pattern of scientific explanation (e.g., miracles, the appearance of conscious beings), (b) is too big in that science cannot explain why there are states of affairs at all or why the fundamental natural laws to which science appeals to explain things hold, and (c) is complex (its matter-energy has relevant powers) (2004: 74, 150).
He holds that we are looking for a complete explanation, where
we may reasonably conclude that the criteria for supposing that factors have no further explanation (scientific or personal) in terms of factors acting at the time and so that any explanation is a complete explanation over all (not just a complete explanation within scientific or within personal explanation) are that any attempt to go beyond the factors that we have would result in no gain of explanatory power or prior probability. (2004: 89)
Swinburne holds that the appeal to God as an explanation is simpler in all of these ways.[4] Not only is there one entity and that entity is simple, the explanation effectively has no organization of the features. The explanation itself is simple. The appeal to God’s causal activity satisfies understanding or interpretation 6 in that it involves no extraneous entities to do the explaining and requires no intermediaries. God can bring about the effect by himself alone.
9. Necessary Being
For him necessary existence is necessarily tied up with a particular nature (otherwise the existence would be contingent) but not derivative from it; God’s existence entails his nature (2008: 88). God’s necessity is not logical (for there is no contradiction in denying that such a being exists) but made possible on explanatory grounds (the cosmological argument). However, we might inquire, if God could not have failed to exist, how does an absolutely necessary being differ from a logically necessary being? O’Connor goes on to argue that God’s absolute necessity does not invoke the ontological argument. He agrees that by S5, if it is possible that a necessary being exists, it necessarily exists (2008: 71), but denies that this invokes the ontological argument, since it “gives no reason to think that the nature in question is genuinely possible, and not merely logically consistent”. However, one might wonder, what would one have to establish to show that the existence of a necessary being understood in this sense is genuinely possible? (see Plantinga, God, Freedom, and Evil, 1967: 112). Gale himself admits that, given this view of necessity and S5, the ontological argument works although we don’t know how to properly construct it (Gale and Pruss 1999: 462).
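The S5 step O’Connor appeals to can be made explicit. In S5, possibility and necessity do not stack: possible necessity collapses into necessity. A standard sketch of the inference, with \(p\) standing for “such a being exists”:

```latex
\begin{align*}
1.\ & \Diamond\Box p && \text{premise: possibly, such a being necessarily exists}\\
2.\ & \Diamond\Box p \rightarrow \Box p && \text{theorem of S5: possible necessity entails necessity}\\
3.\ & \Box p && \text{from 1 and 2 by modus ponens}\\
4.\ & \Box p \rightarrow p && \text{axiom T: what is necessary is actual}\\
5.\ & p && \text{from 3 and 4 by modus ponens}
\end{align*}
```

This makes vivid why everything turns on line 1: as the text notes, showing that the nature in question is genuinely possible, rather than merely logically consistent, is precisely what remains to be established.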
Given this reading of “necessary being”, God as the necessary being possesses metaphysical or factual necessity and logical contingency (Hick 1960; Swinburne 2004: 79). If the necessary being exists at any time, then necessarily it exists at all times. If the necessary being does not exist, it cannot come into existence. Nothing can bring it into existence or cause it to cease to exist. Thus, if God exists now, it is not coherent to suppose that any agent can make it false that God exists (Swinburne 2004: 249, 266). O’Connor objects that if the necessary being is contingent, it just happens to exist (2008: 70; see White (1979) for further objections). However, one might reply that God does not just happen to exist; God exists because of his nature (although his nature does not precede his existence).
Further considerations beyond the scope of the cosmological argument are in order to discern the relationship between a necessary being and the properties often associated with a religious Ultimate. While defenders of the cosmological argument point to the relevance and importance of connecting the necessary being with natural theology, critics find themselves freed from such endeavors.
After all is presented and developed, it is clear that every thesis and argument we have considered, whether in support or critical of the cosmological argument, is seriously contested. Perhaps that is as it should be when trying to answer the difficult questions whether the universe is contingent or necessary, caused or eternal, and if caused, why it exists or what brought it into being.
• Almeida, Michael, 2018, Cosmological Arguments, Cambridge: Cambridge University Press.
• Almeida, Michael and Neil D. Judisch, 2002, “A New Cosmological Argument Undone”, International Journal for Philosophy of Religion, 51: 55–64.
• Bricmont, Jean, 2016, Making Sense of Quantum Mechanics, Cham: Springer Nature.
• Burgess, John P., 1999, “Which Modal Logic Is the Right One?”, Notre Dame Journal of Formal Logic, 40(1): 81–93.
• Cohen, Yishai, 2015, “Endless Future: A Persistent Thorn in the Kalām Argument”, Philosophical Papers, 44(2): 154–187.
• Copan, Paul, 2017, 2019, The Kalām Cosmological Argument (2 vols), New York: Bloomsbury.
• Craig, William Lane, 1979, The Kalām Cosmological Argument, London: Macmillan.
• –––, 2018, “The Kalām Cosmological Argument”, in Jerry Walls and Trent Dougherty (eds.), Two Dozen (or so) Arguments for God, New York: Oxford University Press.
• Davidson, Herbert A., 1969, “John Philoponus as a Source of Medieval Islamic and Jewish Proofs of Creation”, Journal of the American Oriental Society, 89(2) (April–June): 357–91. doi:10.2307/596519
• Dvali, George, 2004, “Out of the Darkness”, Scientific American, 290(2) (Feb.): 68–75.
• Harvey, Ramon, 2021, Transcendent God, Rational World, Edinburgh: Edinburgh University Press.
• Hawking, Stephen W., 1987, “Quantum Cosmology”, in Stephen W. Hawking and Werner Israel (eds.), Three Hundred Years of Gravitation, Cambridge: Cambridge University Press, pp. 631–51.
• –––, 2008, “Epistemological Foundations for the Cosmological Argument”, in Jonathan L. Kvanvig, (ed.), Oxford Studies in the Philosophy of Religion, Oxford: Oxford University Press, pp. 105–133.
• –––, 2017, God and Ultimate Origins: A Novel Cosmological Argument, New York: Palgrave Macmillan.
• –––, 2002a, “Causes and Beginnings in the Kalam Argument: Reply to Craig”, Faith and Philosophy, 19(2): 233–44. doi:10.5840/faithphil200219218
• –––, 2002b, “Craig on the Actual Infinite”, Religious Studies, 38: 147–166.
• –––, 2003, “Must Metaphysical Time Have a Beginning?”, Faith and Philosophy, 20(3): 288–306. doi:10.5840/faithphil200320338
• Musser, George, 2004, “Four Keys to Cosmology”, Scientific American 290(2) (February): 42–43.
• –––, 2013, “The Cosmological Argument”, in Chad Meister and Paul Copan (eds.), The Routledge Companion to Philosophy of Religion, London: Routledge, pp. 401–10.
• –––, 2015, “Uncaused Beginnings Revisited”, Faith and Philosophy, 32(2): 205–10.
• Ostrowick, John, 2012, “Is Theism a Simple, and Hence, Probable, Explanation for the Universe?”, South African Journal of Philosophy, 31(2): 354–68. doi:10.1080/02580136.2012.10751781
• –––, 2018, Infinity, Causation, and Paradox, Oxford: Oxford University Press.
• Puryear, Stephen, 2014, “Finitism and the beginning of the Universe”, Australasian Journal of Philosophy, 92: 619–29.
• Rundle, Bede, 2004, Why There Is Something Rather than Nothing, Oxford: Clarendon Press.
• Scotus, John Duns, [c. 1300] 1964, Ordinatio, in Philosophical Writings: A Selection, Allan Wolter (trans.), Indianapolis: Bobbs-Merrill, 1964.
• Siniscalchi, Glenn B., 2018, “Contemporary Trends in Atheistic Criticism of Thomistic Natural Theology”, Heythrop Journal, 59: 689–706.
• Steinhardt, Paul J., 2011, “The Inflation Debate”, Scientific American, 304(4): 36–45.
• Steinhardt, Paul J. and Neil Turok, 2001, “A Cyclic Model of the Universe”, Science, 296: 1436–1439.
• –––, 2002, “Cosmic Evolution in a Cyclic Universe”, Physical Review D, 65(12): 1–53. doi:10.1103/PhysRevD.65.126003
• –––, 2010, “God as the Simplest Explanation of the Universe”, European Journal for Philosophy of Religion, 2(1): 1–24.
• –––, 2012, “What Kind of Necessary Being Could God Be?”, European Journal for Philosophy of Religion, 4(2), 1–18.
• Tolman, Richard C., 1934, Relativity, Thermodynamics, and Cosmology, New York: Dover.
• Waters, Ben, 2013, “Methuselah’s Diary and the Finitude of the Past”, Philosophia Christi, 15(2): 463–69.
Copyright © 2021 by
Bruce Reichenbach
|
The Superbug Era
By Kathleen Misovic

Not too long ago, if people became ill with an infection caused by bacteria, they simply took a medication prescribed by their physician to kill the germ. Professionals tasked with cleaning physicians’ offices, hospitals, and other facilities teeming with pathogens gave the buildings a thorough cleaning and disinfection to stop the spread of germs.
Today, some bacteria have become immune to the antibiotics that once killed them, making these pathogens a bigger threat to the general population. Germs are also becoming more difficult to eliminate with previous cleaning methods, allowing them to spread and infect more people. This is the era of the superbug.
|
Does Russia have recycling?
Is recycling popular in Russia?
Russia does not have a tradition of recycling, waste separation, or incinerating waste. … Moreover, many of Russia’s landfills are outdated, leading to a number of challenges for the local population and the environment, such as bad smells, pollution of ground water and even the release of toxic gases.
How much of Russia waste is recycled?
Every year, Russia loses two billion euros’ worth of potential revenue from recyclables and energy generation. The country’s recycling rate is four percent, compared to an EU average of 47 percent, and landfills occupy over 4 million hectares, or nearly as much land as Switzerland.
What country does not recycle?
At the bottom of the list are Turkey and Chile, which each recycle an abysmal 1% of total waste, according to the report. They are also the only countries to have become worse at recycling since 2000, with 33% and 78% declines, respectively. In Turkey, waste management is not a priority issue.
Are there any recycling programs in Moscow?
Not surprisingly, since the city of Moscow has no recycling program (despite an impending garbage crisis), they had trouble finding a recycling center that would take their paper.
How does Russia recycle?
Russia does not currently have a tradition of recycling, waste separation, or waste incineration. Meanwhile, the existing landfills are increasingly reaching their capacity limits.
How many landfills are in Russia?
In Russia there are a total of about 1000 municipal solid waste landfills, 5500 authorized landfills, and 17000 unauthorized landfills. Waste disposal sites cover 4 million hectares and grow by 10% annually. Most landfills are outdated and pose challenges to the environment and the surrounding population.
How much waste does Russia produce?
The Ministry of Natural Resources and Environment of the Russian Federation has estimated that around 60 million metric tons of MSW (Municipal solid waste) is generated each year, amounting to more than 400 kg per capita. The volume of MSW in Russia has been steadily increasing in recent years.
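The per-capita figure quoted above can be sanity-checked with a line of arithmetic. A minimal sketch in Python, assuming a Russian population of roughly 146 million (the population figure is not stated in the text):

```python
# Cross-check: 60 million metric tons of MSW per year, spread over an
# assumed population of ~146 million people.
total_msw_kg = 60_000_000 * 1000   # metric tons -> kilograms
population = 146_000_000           # assumption, not from the text

kg_per_capita = total_msw_kg / population
print(round(kg_per_capita))        # ~411, consistent with "more than 400 kg per capita"
```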
Which country has zero percent garbage?
Published: Thursday 26 December 2019. Kamikatsu, a small town situated approximately 40 kilometres from Tokushima city in the mountains of Shikoku island in Japan, is fast moving towards becoming the country’s first fully zero-waste habitation by next year.
Which country has most waste?
Why is Germany so good at recycling?
Germany has been very successful in its fight against growing garbage heaps. … This clever system has led to less paper, thinner glass and less metal being used, thus creating less garbage to be recycled. The net result: a drastic decline of about one million tons less garbage than normal every year.
How do you define recycling?
|
Which places have dry climate?
Which climates are dry?
There are two dry climate types: arid and semiarid. Most arid climates receive 10 to 30 centimeters (four to 12 inches) of rain each year, and semiarid climates receive enough to support extensive grasslands. Temperatures in both arid and semiarid climates show large daily and seasonal variations.
Where are dry places?
Is Philippines a dry climate?
Where are the driest climates?
The Atacama Desert in Chile, known as the driest place on Earth, is awash with color after a year’s worth of extreme rainfall. In an average year, this desert is a very dry place.
What city has a dry climate?
Many cities in the western United States have dry climates. Las Vegas and Phoenix get so little rain or snow, less than 10 inches (250 millimetres) a year on average, that they’re considered deserts. The rainfall at Riverside and San Diego amounts to just over the desert threshold.
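The thresholds quoted above can be folded into a tiny classifier. This is an illustrative sketch only: it uses the 250 mm desert cutoff from the paragraph above, while the 500 mm upper bound for semiarid climates is an assumption (real schemes such as the Köppen classification also weigh temperature and evaporation):

```python
def classify_annual_precip(mm: float) -> str:
    """Rough dry-climate label from annual precipitation in millimetres."""
    if mm < 250:      # "less than 10 inches (250 millimetres)": desert
        return "arid (desert)"
    if mm < 500:      # assumed upper bound: enough rain for grasslands
        return "semiarid"
    return "not a dry climate"

# Las Vegas averages only about 100 mm of rain a year.
print(classify_annual_precip(100))   # arid (desert)
print(classify_annual_precip(300))   # semiarid
```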
Which two locations have the driest climate?
Is Korea a tropical country?
|
Spanish Expressions: Echar Ojo (to throw eye)
Spanish mini-lesson of expressions and Mexican cultural insights by Scott Thompson of Livit Immersion Center
Echar – to toss/throw… an object somewhere or into something; …out the trash; …a person out; …to fire someone; …to pour liquid on something; …some clothing on oneself; and many more meanings, especially in expressions like, «echar aguas» (to be on the lookout).
We’ve already seen the expression, “Echar la sal”, meaning to bring bad luck or jinx.
In this mini-lesson I want to talk about “Echar Ojo” (tossing/throwing the eye).
In Latin American culture, you can throw the eye in SO many ways. It’s fascinating! Here are a few highlights:
Echar Ojo (a): When you throw your eye on someone else’s belongings, it sounds like you would be staking them out, but in fact it means you’re watching over them for protection.
Perhaps you have young children at home and you need to head out to complete a sudden and crucial errand for a bit. You would rather leave the kids at home to go quicker and you may call on a neighbor to “echar ojo a los niños un ratito” (watch over the kids for a little while).
Or in Mexico, when I park on a “public” street I look for a professional parking attendant. As I walk away from the car, I give him a head nod and ask him to “echar ojo al carro”. This “professional” will watch over the car, and when I return to find my car and all its parts intact, I will “echarle una propina” (throw him a tip – anywhere from $2-20 pesos depending on the venue and duration).
(Image: pulsera mal de ojo, an evil-eye protection bracelet)
Echar (Mal de) Ojo: To throw a bad eye at someone. You may know this as, “Giving someone the ole’ stink eye”, intended to cause some future harm. Many times, you will not know that you’ve been stink eyed until you have symptoms, which include loss of appetite, fever, diarrhea, nightmares, rashes, among others. Moctezuma may not have taken his revenge on you, but rather someone else.
So how can you protect yourself so that others cannot “echar mal de ojo” on you? For young babies and children, you can buy a red hand-braided bracelet with eye beads and other protective charms, such as the medallion of San Benito, which will overcome temptations, cast away evil spirits and even work small miracles. Prevention in adults includes purification rituals consisting of extracting the evil out of one’s body by rubbing it with a chicken egg, placing a glass of pure water under the bed where he/she sleeps, and otherwise strengthening one’s positive energy fields.
Echar Taco de Ojo (to throw an eye taco): This is a great expression used to express hungry eyes that are attracted to someone they’re checking out. Devouring eye tacos is a specialty of men, but women admit to doing it too. The Chippendales come through Puebla every once in a while and women go a little locas while they echan unos tacos de ojo. At a crowded beach in summer, there is sure to be plenty of occasion to “echar taco de ojo”. In Puebla, we consume many types of real food tacos, in addition to some taco de ojo.
If you missed the previous mini-lessons, you can catch up here:
|
Expand description
Process Subplot input files.
Capture and communicate acceptance criteria for software and systems, and how they are verified, in a way that’s understood by all project stakeholders.
Resources for subplot.
An abstract syntax tree representation of a Markdown file.
A binding of a scenario step to its implementation.
Set of all known bindings.
A data file embedded in the document.
A collection of data files embedded in document.
A parsed Subplot document.
A code block with Dot markup.
Resources used to configure paths for dot, plantuml.jar, and friends
A scenario that has all of its steps matched with steps using bindings.
A matched binding and scenario step, with captured parts.
A list of matched steps.
Metadata of a document, as needed by Subplot.
A code block with pikchr markup.
A code block with PlantUML markup.
An acceptance test scenario.
A scenario step.
The text of a part of a scenario step.
Typesetting style configuration for documents.
A template specification.
A list of warnings.
Part of a scenario step, possibly captured by a pattern.
The kind of scenario step we have: given, when, or then.
Define all the kinds of errors any part of this crate can return.
A warning, or non-fatal error.
A code block with markup for a graph.
Generate code for one document.
Generate a test program from a document, using a template spec.
Get the base directory given the name of the markdown file.
Load a Document from a file.
Load a Document from a file.
Parse a scenario snippet into logical lines.
Type Definitions
A result type for this crate.
|
Uranus and Neptune: Two Different Worlds
Strange things happened in the outer Solar System when it was first forming. The ice giants, Uranus and Neptune, are the two outermost major planets of our Sun’s family, and in size, bulk, composition, and great distance from our Star, they are very much alike. Both distant worlds are clearly different from the quartet of small rocky inner planets–Mercury, Venus, Earth, and Mars–as well as from the duo of gas-giant planets, Jupiter and Saturn. Ice giants are planets that contain elements heavier than hydrogen and helium, such as oxygen, carbon, nitrogen, and sulfur. Although the two planets should be nearly identical twins, they are not. In February 2020, a team of planetary scientists from the University of Zurich in Switzerland told the press that they believe they have discovered why.
“There are… striking differences between the two planets that require explanation,” commented Dr. Christian Reinhardt in a February 2020 PlanetS Press Release. Dr. Reinhardt studied Uranus and Neptune with Dr. Alice Chau, Dr. Joachim Stadel and Dr. Ravit Helled, who are all PlanetS members working at the University of Zurich, Institute for Computational Science.
Dr. Stadel commented in the same PlanetS Press Release that one of the striking differences between the two planets is that “Uranus and its major satellites are tilted about 97 degrees into the solar plane and the planet effectively rotates retrograde with respect to the Sun”.
In addition, the satellite systems of the distant duo are different. Uranus’s major satellites are on regular orbits and tilted with the planet, which suggests that they formed from a disk, much like Earth’s Moon. In contrast, Triton–Neptune’s largest moon–is highly inclined, and is therefore considered to be a captured object. Triton also displays important similarities to the distant ice-dwarf planet Pluto, suggesting that the two may have been born in the same region–the Kuiper Belt, located beyond Neptune’s orbit, the frigid, dimly lit home of myriad comet nuclei, small minor planets, and other frozen bodies. Planetary scientists predict that Triton’s orbit will eventually decay to the point that it crashes down into its adopted parent planet.
Among other differences, Uranus and Neptune may also differ in respect to heat fluxes and internal structure.
In astrophysics and planetary science the term “ices” refers to volatile chemical compounds that possess freezing points above around 100 K. These compounds include water, ammonia, and methane, with freezing points of 273 K, 195 K, and 91 K, respectively. Back in the 1990s, scientists first came to the realization that Uranus and Neptune are a distinct class of giant planet, very different from the two other giant denizens of our Sun’s family, Jupiter and Saturn. The constituent compounds of the duo of ice giants were solids when they were primarily incorporated into the two planets during their ancient formation–either directly in the form of ices or encased in water ice. Currently, very little of the water in Uranus and Neptune remains in the form of ice. Instead, water mostly exists as a supercritical fluid at the temperatures and pressures inside them.
The overall composition of the duo of ice giants is only about 20% hydrogen and helium by mass. This differs considerably from the composition of our Solar System’s two gas giants: Jupiter and Saturn are both more than 90% hydrogen and helium by mass.
Modelling the formation history of the terrestrial and gas-giant planets inhabiting our Solar System is comparatively easy. The quartet of terrestrial planets are generally thought to have been born as the result of collisions and mergers of planetesimals within the protoplanetary accretion disk. The accretion disk surrounding our newborn Sun was composed of gas and dust, and the extremely fine dust motes possessed a natural “stickiness”. The tiny particles of dust collided with one another and merged to form bodies that gradually grew in size–from pebble size, to boulder size, to moon size, and ultimately to planet size. The rocky and metallic planetesimals of the primordial Solar System served as the “seeds” from which the terrestrial planets grew. Asteroids are the lingering relics of this once-abundant population of rocky and metallic planetesimals that ultimately became Mercury, Venus, Earth, and Mars.
In contrast, the two gas-giant planets of our own Solar System, as well as the extrasolar gas giants that circle stars beyond our Sun, are believed to have evolved after the formation of solid cores weighing in at about 10 times the mass of Earth. On this view, the cores of gas giants like Jupiter and Saturn formed as a result of the same process that produced the terrestrial planets–while accreting heavy gaseous envelopes from the ambient solar nebula over the passage of a few to several million years. However, various models of core formation based on pebble accretion have been proposed more recently. Alternatively, some of the giant exoplanets may have emerged as a result of gravitational instabilities in the accretion disk.
The birth of Uranus and Neptune through a similar process of core accretion is far more complicated–and problematic. The escape velocity for the small primordial protoplanets (still-forming baby planets) located about 20 astronomical units (AU) from the center of our own Solar System would have been comparable to their relative velocities. Such bodies, crossing the orbits of Jupiter or Saturn, may well have been sent on hyperbolic trajectories that shot them howling out of our Sun’s family altogether, and into the frigid darkness of interstellar space. Alternatively, such bodies, if snared by the duo of gas giants, would likely have been accreted into Jupiter or Saturn–or hurled into distant cometary orbits beyond Neptune. One AU is equal to the average distance between Earth and Sun, about 93,000,000 miles.
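The escape-velocity figure behind this argument is easy to estimate with the standard formula v = sqrt(2GM/r); a quick sketch using the Sun’s gravitational parameter and the conventional value of the AU:

```python
import math

# Escape velocity from the Sun's gravity at 20 AU, the distance at which
# the text says protoplanet speeds rivalled escape speed.
GM_SUN = 1.32712440018e20   # m^3/s^2, heliocentric gravitational parameter
AU = 1.495978707e11         # m, one astronomical unit

r = 20 * AU
v_esc = math.sqrt(2 * GM_SUN / r)
print(f"{v_esc / 1000:.1f} km/s")   # about 9.4 km/s
```

Circular orbital speed at that distance is v_esc divided by the square root of 2, roughly 6.7 km/s, so even modest gravitational kicks from Jupiter or Saturn could unbind such bodies.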
Since 2004, regardless of the problematic modelling of their formation, many alien ice large candidates have been noticed orbiting distant stars. This implies that they might be widespread denizens of our Milky Means Galaxy.
Bearing in mind the orbital challenges of protoplanets located 20 AU or extra from the middle of our Photo voltaic System, it’s doubtless that Uranus and Neptune had been born between the orbits of Jupiter and Saturn, earlier than being gravitationally scattered into the extra distant, darkish, and frigid domains of our Solar’s household.
Two Different Worlds
"It is generally assumed that both planets formed in a similar way," Dr. Alice Chau noted in the February 2020 PlanetS press release. This would likely explain their similar compositions, mean orbital distances from our Sun, and their kindred masses.
But how can their differences be explained?
Our primordial Solar System was a "cosmic shooting gallery", where impacts from crashing objects were frequent occurrences, and the same is true for alien planetary systems beyond our Sun. For this reason, a catastrophic giant impact was previously proposed as the source of the mysterious differences between Uranus and Neptune. However, earlier work either studied only impacts on Uranus or was limited by strong simplifications in the impact calculations.
For the first time, the team of planetary scientists at the University of Zurich studied a range of different collisions on both Uranus and Neptune using high-resolution computer simulations. Starting with very similar pre-impact ice giants, they demonstrated that an impact by a body with 1-3 times the mass of Earth on each of Uranus and Neptune can explain the differences.
In the case of Uranus, a grazing collision would tilt the planet but not affect its interior. In dramatic contrast, a head-on collision in Neptune's past would affect its interior, but not form a disk. This is consistent with the absence of large moons on regular orbits at Neptune. Such a catastrophic crash, which churned the deep interior of the traumatised planet, is also suggested by the larger observed heat flux of Neptune.
Future NASA and European Space Agency (ESA) missions to Uranus and Neptune can provide new and important constraints on these scenarios, improve our understanding of Solar System formation, and also give astronomers a better understanding of exoplanets in this particular mass range.
"We clearly show that an initially similar formation pathway for Uranus and Neptune can result in the dichotomy observed in the properties of these fascinating outer planets," Dr. Ravit Helled commented to the press in February 2020.
This research was published in the November 22, 2019 issue of the Monthly Notices of the Royal Astronomical Society (MNRAS) under the title "Bifurcation in the history of Uranus and Neptune: the role of giant impacts".
How accurate is nutrition info?
It depends on the food matrix and the nutrient, but in general NIST’s measurements are accurate to within 2% to 5% for nutrient elements (such as sodium, calcium and potassium), macronutrients (fats, proteins and carbohydrates), amino acids and fatty acids.
What is the most reliable source for nutrition information?
A USDA-sponsored website offers credible information to help you make healthful eating choices. It serves as a gateway to reliable information on nutrition, healthy eating, physical activity, and food safety for consumers.
What is the best nutrition database?
The Best Nutrition Apps of 2020
• Nutrients.
• MyFitnessPal.
• MyNetDiary.
• MyPlate Calorie Counter.
• Nutrition Facts.
• Calorie Counter & Diet Tracker.
• Protein Tracker.
• SuperFood.
Do nutrition facts lie?
What are sources of nutrition information?
Common sources of nutrition information identified in the literature include the internet, family members and friends, television, and books [4, 6, 13, 15, 22].
What is the most reliable source of nutrition information quizlet?
In general, registered dietitian nutritionists are reliable sources of nutrition information. A person with a PhD has the proper training to be a registered dietitian nutritionist.
Who verifies nutrition facts?
The U.S. Food and Drug Administration (FDA) has updated the Nutrition Facts label on packaged foods and drinks. FDA is requiring changes to the Nutrition Facts label based on updated scientific information, new nutrition research, and input from the public. This is the first major update to the label in over 20 years.
Can companies lie about their nutrition facts?
Things like this are just there to catch the shopper’s eye. Most of the time it’s a trick to get the consumer to buy the product. The average grocery shopper does not look at the nutrition facts.
Why do nutrition labels lie?
Labels provide a number that likely overestimates the calories available in unprocessed foods. Food labels ignore the costs of the digestive process – losses to bacteria and energy spent digesting. The costs are lower for processed items, so the amount of overestimation on their labels is less.
What are the consequences of inaccurate nutrition information?
Nutrition fraud may lead to loss of money, failure to seek correct medical care, and/or lack of money for proper treatment. Substituting poor nutritional practices for sound ones, or disease itself, can also occur.
How accurate are nutrition labels Canada?
Consumers who scan nutrition labels on pre-packaged foods for fat content might be surprised to learn that some of the information may not be accurate, according to a Canadian researcher who has tested hundreds of products.
Is it illegal to not have nutrition facts?
Restaurants must provide nutritional information
Thanks to a new law enacted by the U.S. Food and Drug Administration (FDA), any restaurant with more than 20 locations must provide customers with a calorie count on their food items. … Although calorie counts are required to be on the menu, all other nutritional facts are not.
Is it true that the nutritional claims in food products can be misleading Why?
This claim is misleading because it doesn’t mean the product has no sugar. It just means that there are no ‘refined’ sugars like white table sugar, but it may still contain sugar from ‘natural’ sources like honey, maple syrup, coconut sugar or dried fruit.
When you go to the Internet to research on nutrition topics or issues How do you determine if the information is correct or accurate?
If available, read the “About” section of the site to help determine the reliability of the information on the site. Look for the evidence. Health decisions are best based on medical and scientific research, not on opinion. Look to see the sources of information for the website.
Which of the following are ways to be a careful consumer of nutrition information from the internet?
Which of the following are ways to find reliable nutrition info on the internet? Search multiple websites, do not trust info on websites that don’t indicate valid sources, and avoid websites that provide online diagnosis and treatments.
The Worst Team in History: the Gallipoli Failure Research Paper
Updated: Sep 29th, 2020
Introduction/Short history
The Gallipoli Campaign is one of the greatest failures by the Allied forces during the First World War. Despite the superiority of the Allied forces in the war, a sequence of events occasioned by systemic failures and missed opportunities led to the premature withdrawal of the invading armies on 9 January 1916, thus, granting the Ottoman Empire a surprising victory. The main aim of the Gallipoli invasion by France and Britain was to prevent Turkey, which was a Germany ally, from participating in the First World War.
The Allied forces reasoned that capturing Turkey's capital, Istanbul [known as Constantinople at the time], would destabilize the entire country and cripple its attempts to help Germany during the war. The campaign started by sending British battleships to invade Istanbul, but this move failed miserably, as the ships could not navigate the Dardanelles straits. On 18 March 1915, Britain lost a third of its battleships to the enemy (Cameron, 2011).
After this failure, the British army under the command of Sir Ian Hamilton was instructed to invade the Gallipoli peninsula where it would annihilate the Turkish land and shore war machinery. This annihilation would make Dardanelles accessible for the passage of the British navy. In the new plan of eliminating the Turkish forces, the British troops would invade and capture the peninsula’s tip on April 25, 1915, and head northwards (Cameron, 2011).
At the same time, the Australian and New Zealand Army Corps (ANZAC) would launch attacks from the western coast, on the northern side of Gaba Tepe. Unfortunately, the British and ANZAC troops succeeded only marginally in their mission, securing a small and insignificant region of the peninsula. Contrary to the Allied forces' assumptions, the Turkish troops were strong, strategic, tactical, and well prepared. This paper explores some of the events and tactical decisions that led to the infamous Gallipoli failure.
The main players
Lord Fisher
Lord Fisher became the First Sea Lord in October 1914 after the controversial resignation of Prince Louis of Battenberg (Heathcote, 2002). Prince Louis' resignation is termed controversial because he was forced to step aside for having a German name. Lord Fisher did not fully support the Gallipoli Campaign, which led to unnecessary arguments with Winston Churchill during the campaign.
General Sir Ian Hamilton
Hamilton became the commander of the Allied Mediterranean Expeditionary Force (MEF) in March 1915 (Heathcote, 2002). This force was tasked with the sole duty of conquering the Dardanelles straits to open the way for the capture of Constantinople. Unfortunately, Hamilton was excluded from the decision-making process concerning the Gallipoli campaign, and thus he only received instructions from Lord Kitchener.
Commodore Sir Roger Keyes
Keyes was the "Naval Chief of Staff to Vice-Admiral Carden, who was the commander of the Royal Navy squadron off the Dardanelles" (Heathcote, 2002, p. 145). Keyes was one of the masterminds of the Dardanelles campaign. However, he rued Admiral Carden's lack of foresight, which led to numerous British casualties after the troops fell for the Turkish minefields. After the resignation of Admiral Carden, Keyes led the minesweeping exercise.
Field Marshal Lord Kitchener
Kitchener was the Secretary of State for War during the Gallipoli campaign (Heathcote, 2002). Kitchener was an intelligent person and perhaps the only individual who could anticipate the Turkish moves in the war. Unfortunately, his decisions were opposed on many occasions. For instance, France was reluctant to accept Kitchener's proposal of having additional troops to back the Western Front. Besides, Kitchener came up with the ANZAC idea. Despite his intelligence and tactical acumen, the Gallipoli campaign failed, perhaps due to the many centers of power and decision-making.
Reasons for failure
Poor communication
The first reason for failure during the Gallipoli campaign was the lack of proper and coordinated communication between the involved players. Signalers were used to carry orders to the front line and deliver feedback to the headquarters. A Divisional Signal Company was tasked with the duty of providing timely communication platforms (Hart, 2011). The company decided to use signal lamps and heliographs, which were damaged or lost during the landing mission.
People carrying information to and from the front line died at the hands of Turkish snipers, who had taken up strategic positions. In short, communication was a problem, and thus different teams could not relay important information in time. For instance, upon landing, Hamilton thought that the navy would unleash unparalleled attacks as part of the strategy. However, Hamilton could not communicate this expectation, and thus he ended up relying on assumptions.
On the other side, the navy had foreseen the likely losses from such attacks. Besides, the navy was opposed to the general feeling that any tactical loss of ships was bearable. Therefore, the navy did not launch attacks upon Hamilton's landing, which led to a significant loss of British forces (Carlyon, 2002). In another example, after the soldiers involved in the first and second waves of attack were killed at the Nek, Lieutenant Colonel Noel Brazier tried to stop the third wave of attack. However, he could not reach Colonel Hughes or convince Colonel John Antill to abort the attack. Sadly, Colonel Antill walked into an open trap and lost 80 of his men to the Ottoman forces (Carlyon, 2002).
Supply issues, not enough trained soldiers
By the time Britain embarked on the Gallipoli campaign, it had suffered a huge loss of trained soldiers, ammunition, and weaponry following the five-month confrontation in 1914. By the start of 1915, the British Army had lost close to 40% of its pre-war strength (Hart, 2011). Unfortunately, the New Armies needed time to train and master the art of war. Besides, there was a shortage of artillery for all soldiers.
The combination of these factors derailed the campaign by giving the Ottoman forces time to reorganize and strategize. Prior (2010) posits, "…substandard weaponry, finding itself short of machine guns, deploying artillery of Boer War vintage, and equipping its soldiers with a bewildering variety of inadequate hand grenades… these material weaknesses were compounded by the near-universal inexperience of officers and men" (p. 102).
For instance, when the MEF landed in March 1915, it had 70,000 soldiers, which was the same number fighting them from the Ottoman side. In most cases, the attacking side needs the superiority of numbers to be in a position to conquer the enemy especially in amphibious attacks (Laffin, 1980). The British strategists understood this proposition, but they assumed that the Fifth Ottoman Army would be weak, albeit the same in numbers. The Allied forces incurred many casualties and by August the same year, the Ottoman side had 250,000 men in the war as opposed to the MEF’s 190,000 (Laffin, 1980).
Faulty maps
The MEF and the ANZAC were unfamiliar with the local terrain, and thus they relied on maps for directions. However, the provided maps were faulty, which misled the forces into landing on the wrong spots. For instance, instead of landing at Gaba Tepe on 25 April 1915, the ANZAC forces ended up in Anzac Cove (Doyle & Bennett, 1999). Navigating the cove in the dark became an impediment for the invading soldiers, and thus they remained silent for over an hour without launching any attack. Later on, the Ottoman forces came to inspect the cove, and the ANZAC forces opened fire, killing over 70 rival soldiers (Fewster, Basarin, & Basarin, 2003).
However, even though the Allied forces did not suffer any loss of soldiers during this attack, the Ottoman counterparts seized huge amounts of supplies and stores. Besides, the failure by the ANZAC to land successfully compromised the Allied force’s initial plan, which required regrouping and fresh strategizing. By abandoning the initial plan, the entire ANZAC camp was thrown into confusion compounded by the issuance of mixed orders.
In the ensuing melee, some soldiers navigated to their designated areas, while others ended up in different locations. The divisional commanders wanted evacuation, but the Royal Navy advised otherwise due to the impracticability of such a venture. Ultimately, the ANZAC failed to achieve the set objectives, due to faulty maps.
The Battle of Nek
The battle at the Nek was planned to start at 0430hrs on August 7, 1915. However, timing instructions were not followed: the artillery preparation stopped at 0423hrs, while the battle started seven minutes later (Laffin, 1980). The seven-minute lapse before the attack gave the Ottoman side time to prepare sufficiently. The gap between the ceasing of the artillery preparation and the launch of the attack eliminated the element of surprise, which disadvantaged the Allied forces.
The Ottoman force knew that an attack was coming, and thus it prepared adequately. Therefore, when Colonel White launched the first wave of attack, he was killed together with his 150 men from the 8th Light Horse Regiment (Burness, 1990). The second and the third waves of attack suffered the same fate leaving hundreds of Australian soldiers dead.
Overconfidence in British abilities
One of the greatest blunders that the Allied forces did was to underestimate the fighting capabilities of the enemy, the Turkish forces. The British troops were overconfident of their abilities having won other challenging battles before (Fewster et al., 2003). Therefore, even when tactical and logical examination pointed out that the Allied forces needed to withdraw, leaders from London, and especially Winston Churchill, pushed harder to win the war (Gariepy, 2014).
For instance, after the failed landing at Anzac Cove, the commanding officers wanted to evacuate, but the Royal Navy declined the request. Similarly, after losing the August offensive at the Nek, which led to the death of many soldiers, evacuation would have worked, but pride in British mightiness overruled logic and the war continued. Ultimately, the Allied forces lost 130,000 soldiers, with close to 300,000 invalided due to sickness from dysentery and typhoid (Gariepy, 2014).
The Gallipoli Campaign failed miserably and granted the Ottoman forces unprecedented victory during the First World War. Some of the contributing factors to the failure included the assumption that British troops were superior to their Turkish counterparts, and thus, the battle would be a walkover. Additionally, poor communication impeded the progress of the attacks coupled with faulty maps and substandard artillery.
Besides, coordination glitches at the battle of Nek allowed the Ottoman forces time to prepare and inflict massive casualties on the Allied forces. Ultimately, the Allied forces were evacuated on 9 January 1916, having lost over 130,000 men.
Burness, P. (1990). White, Alexander Henry (1882–1915): Australian Dictionary of Biography. Carlton, VIC: Melbourne University Press.
Doyle, P., & Bennett, M. (1999). Military geography: the influence of terrain in the outcome of the Gallipoli Campaign, 1915. The Geographical Journal, 165, 12-13.
Cameron, D. (2011). Gallipoli: The Final Battles and Evacuation of Anzac. Newport, NSW: Big Sky Publishing.
Carlyon, L. (2002). Gallipoli. New York, NY: Pan Macmillan.
Fewster, K., Basarin, V., & Basarin, H. (2003). Gallipoli: The Turkish Story. Crows Nest, NSW: Allen and Unwin.
Gariepy, P. (2014). Gardens of Hell: Battles of the Gallipoli Campaign. Lincoln, NE: Potomac Books.
Hart, P. (2011). Gallipoli. London, UK: Profile Books.
Heathcote, T. (2002). The British Admirals of the Fleet 1734 – 1995. Barnsley, UK: Pen & Sword Ltd.
Laffin, J. (1980). Damn the Dardanelles!: The Story of Gallipoli. London, UK: Osprey.
Prior, R. (2010). Gallipoli: The end of the myth. New Haven, CT: Yale University Press.
Between 2010 and 2018, the number of compute instances in the world increased by 550%. The global energy consumption of data centres also increased—but only by 6%. This means that data centres managed to deliver an annual decrease in their energy intensity of 20%, a huge achievement.
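The 20% figure follows directly from the two growth numbers above. A quick sanity check, assuming the 2010 to 2018 span is annualised over eight years:

```python
# Verify the claim: compute instances grew 550% (a factor of 6.5) between
# 2010 and 2018, while data-centre energy use grew only 6% (a factor of
# 1.06). Energy intensity is energy per compute instance, so:

instances_growth = 6.5   # +550% over the period
energy_growth = 1.06     # +6% over the period
years = 8                # 2010 -> 2018

intensity_ratio = energy_growth / instances_growth  # ~0.163 of 2010 level
annual_factor = intensity_ratio ** (1 / years)      # ~0.797 per year

print(f"Energy intensity fell to {intensity_ratio:.1%} of its 2010 level")
print(f"Annualised change: {annual_factor - 1:.1%} per year")  # ~ -20%
```

Running this gives an annualised decline of roughly 20% per year, matching the article's headline figure.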
This achievement was delivered through technological advances and innovations in both the IT itself (the servers and hardware used in data centres) and improved data centre infrastructure. For example, one of the most obvious changes to make is to reduce the reliance on air conditioning. Instead, 80% of new projects now use outside air cooling, also known as free cooling.
To enable all data centres to become as good as the best, France Datacenter, the trade association for French data centres, has gathered together and published a selection of best practice. Here are the highlights.
Committed participants
France Datacenter notes that one of the most important aspects of reducing environmental impact is the commitment of data centre operators. This cannot be treated as a ‘box-ticking’ exercise. Data centre operators have demonstrated their commitment by taking responsibility for measuring their own environmental impact, for example, through models like those provided by DATA4. They have also implemented ISO standard frameworks and other European standards frameworks, to ensure that their data centres follow best practice.
Many have also embraced elements of the circular economy and biodiversity. For example, CIV repurposes its servers at the end of their lives, using them as computers to provide heating to other buildings. Schneider has started to recycle its product components much more extensively. The final element in this section is training staff on best practice in energy conservation, to increase awareness and commitment.
Improvements to data centre design
There are many ways in which data centre design has contributed to improve energy efficiency. These include:
• Innovative cooling systems, including one company that draws water from an underground canal, which is therefore cool all year round, even during summer. The water is then allowed to cool to reduce the impact of returning it to the sea.
• Some companies now harvest the heat generated by their data centres, and use it to heat offices and homes. One site in Paris, run by Dalkia, uses the heat generated during the cooling process to heat the local business park.
• Separate hot and cold aisles in the data centre, with physical partitioning to improve efficiency.
Better operational practices
Similarly, better operational practices can also contribute to improved environmental performance. Small changes can have a much bigger impact. Options include increasing airflow temperatures, and maintaining constant air pressure to stop any hot spots developing. Energy audits can also help to highlight areas for improvement.
Some data centre operators are also starting to use artificial intelligence to control their ventilation. AI-powered systems use sensors to measure temperature in server rooms, and then use models to predict how best to optimise the performance of ventilation systems. This can eliminate over 90% of the hot spots in a server room, hugely increasing efficiency.
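As a rough illustration of the logic such systems automate, here is a minimal sketch. The sensor names and the fixed 27 °C cutoff (a commonly cited recommended upper bound for server inlet air) are assumptions for illustration; real AI-driven systems predict hot spots from learned models rather than a fixed threshold:

```python
# Minimal hot-spot detection sketch: flag any sensor reading above a
# temperature limit so ventilation can be adjusted for that location.
# Sensor ids and the 27.0 C limit are illustrative assumptions.

INLET_LIMIT_C = 27.0

def find_hot_spots(readings, limit=INLET_LIMIT_C):
    """Return the ids of sensors whose temperature exceeds the limit."""
    return [sensor for sensor, temp in readings.items() if temp > limit]

readings = {"rack-a1": 24.5, "rack-a2": 28.1, "rack-b1": 26.9, "rack-b2": 29.3}
for sensor in find_hot_spots(readings):
    print(f"{sensor}: increase airflow")  # flags rack-a2 and rack-b2
```

The predictive step the article describes would replace the fixed threshold with a model that anticipates where a hot spot will form before it does.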
Several operators have invested in renewable energy, often consuming the energy directly. One operator, for example, has installed a solar farm that generates enough energy to provide for 3% of the site’s needs. This may not seem much, but the company plans to expand its generation capacity—and over 10% of its employees have also decided to install solar panels as a result of the scheme.
Using energy efficient equipment
The final area for improving energy efficiency is the equipment used in data centres. Many of these solutions are relatively simple—but have a disproportionately large impact on energy use. For example, there have been a number of changes to IT hardware over time, making newer hardware much more efficient. Similarly, replacing older chillers with newer, more efficient designs can make a big difference to energy consumption. One operator found that it could triple its energy efficiency by using an air cooling system.
Another option includes evaporative cooling, by spraying water droplets outside to evaporate, which cools the air. This air can then be used for cooling purposes. Using speed variators on pumps can reduce energy consumption so much that data centre operators will achieve a full return on their investment in less than 2 years. Similarly, the use of a flywheel can help to reduce the demand for energy to create uninterruptible power supplies.
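To see how such a payback works out, here is a minimal sketch. All the figures below (equipment cost, energy saved, electricity price) are hypothetical assumptions; the article only states that payback typically takes under two years:

```python
# Simple payback-period sketch for a pump speed variator.
# The numbers are illustrative assumptions, not figures from the article.

def payback_years(capex, annual_kwh_saved, price_per_kwh):
    """Years to recoup the investment from energy savings alone."""
    annual_saving = annual_kwh_saved * price_per_kwh
    return capex / annual_saving

# Hypothetical example: a 5,000 EUR variator saving 30,000 kWh/year
# at 0.12 EUR/kWh.
years = payback_years(capex=5_000, annual_kwh_saved=30_000, price_per_kwh=0.12)
print(f"Payback: {years:.1f} years")  # 5000 / 3600 = ~1.4 years
```

Any combination where annual savings exceed half the purchase cost lands under the two-year payback the article mentions.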
Responsive discipline
There is no control, only connection
What is responsive discipline?
What comes to mind when you think about discipline? Do you think of discipline as something that is negative? Maybe you think of punishment or withdrawal of privileges? Or is discipline something that can be done gently and responsively? What makes responsive discipline different from mainstream discipline?
Discipline is crucial when you’re a parent. Often, people equate responsive parenting with permissive parenting, where parents are on an equal footing with their children, parents allow them to make all their own decisions or to “rule the roost”. However, deep down, our children want us to be in charge, they want us to guide and direct them. This makes them feel secure and safe. Of course, there is sometimes pushback and resistance, which I’ll discuss in more detail further on. But the bottom line is that our children need to know we are in charge. This doesn’t mean that we control our children, though. There is a subtle difference.
Are you raising a puppy or a child?
If you’ve ever trained a puppy, you’ll know that you don’t punish them for peeing indoors, but you reward them for peeing outside instead. With time, they stop peeing indoors. This approach will work (to a certain extent) with children. Indeed, this behavioural approach (which seeks to control behaviour) has been the underlying principle for most child rearing/discipline books of the last few hundred years. We could be forgiven for thinking that positive discipline is when we don’t punish our children, but instead ignore the “bad” behaviour and reward the “good” behaviour. Is this really an approach that we want for our kids? Dogs are pack animals and need to know who the boss is (us, hopefully!) Children aren’t puppies though. We don’t want blind obedience, or life long dependence, from our children. We want them to grow up to be well rounded individuals who can make good decisions for themselves, who have good emotional stability, and have healthy relationships, based on respect, rather than control. And this is where discipline comes in. The sort of discipline that we aspire to as responsive parents.
When we discipline our children as responsive parents, this discipline is quietly authoritative and empathetic. We don’t focus on the behaviour, but instead, we focus on the emotion that drives the behaviour (Neufeld, Intensive 1 Making Sense of Kids). We empathise with our children and gently guide them from a place of connection, not control. We can’t control our children’s emotions, and therefore, ultimately we can’t really control the behaviour either, although we can set limits on the behaviour that we see. It’s a paradigm shift, and a really hard mental shift to make, especially if the only way we know how to deal with kids is to try to control their behaviour. With responsive discipline, there is no place for physical punishment, separation, threats, bribes, rewards, coercion or white lies. Instead, we hold boundaries with love and compassion and sit with the big emotions that our children display, rather than try to get rid of their frustration or anger, or suppress those emotional outbursts.
It’s ok for our kids to get upset
It is ok for our kids to have big emotions! When they are babies, we often do everything we can to avoid crying and upset. It is 100% appropriate to meet our babies’ needs. They can’t manipulate us or become spoilt. Therefore, when they cry, it is always important to act promptly and respond. As children get older, however, it is really important for them to learn how to navigate big emotions. It can be hard to see our children get upset, and often, our impulse is to step in and pacify them. But we have to understand that there are no “negative” emotions, there are just emotions. In order for us to live full and meaningful lives, we have to experience a full range of emotions, and also learn how to process those emotions safely. Our role as adults is to help our children process those emotions. This can be really difficult, especially if as adults we struggle to deal with our own big emotions, and we’re trying to stay calm. So many of us, as children, didn’t have a safe place to express our own emotions. So, when our children experience big emotions, we have to make a shift as adults. We have to learn how to sit with the discomfort, and perhaps examine how those big emotions show up for us. (Markham, 2012) Are we able to process them appropriately, or do we avoid or suppress them? Often, in order to be a responsive parent, we have to grow emotionally too. We have to learn how to deal with big emotions that show up for us, and that make us feel uncomfortable.
So what should you do when you need to hold a boundary?
Start with connection
The first thing you should do is focus on that connection you have with your child. Children under the age of six aren’t able to focus on more than one thing at a time. Deborah MacNamara talks about “collecting” your child (2016). This just means that you have to shift their focus from what they are doing back onto you, so that you are the focus of their attention. Then it becomes much easier to gain their cooperation. For example, if your child is watching tv, and they need to get ready to go out, don’t just turn the tv off. Instead, sit down beside them, talk to them about the show they are watching and get them to make eye contact with you as part of the conversation. When their attention is fixed on you, you can prepare them for the next transition. When children feel connected to us, they are more likely to cooperate.
Use empathy, not logic
Don’t reason with your child, empathise instead. If they get upset because they can’t have a cookie before dinner, or because they have to leave the park, arguing rationally with them won’t help. If they are upset, they are being ruled by their emotions, not their logical thinking brain. When you empathise with them instead, you connect with the emotion behind the behaviour. They feel heard, understood and validated. (Markham, 2016) “I know, cookies are so yummy. I’d love to eat a cookie right now too.” Or “You’re having so much fun at the park and it’s hard to leave, isn’t it?”
Help your child move from mad to sad
Gordon Neufeld talks about moving from "tears of frustration" to "tears of futility" (Neufeld, Intensive One, Understanding Kids). This is how we know that children have accepted our guiding role as parents, and that they've processed the big emotions that they're feeling when we hold a boundary. So, if your child is angry because you won't let them do something, but you empathise and hold the boundary, then they have a little cry because they are sad, this is GOOD! This means they have worked through those big emotions in a way that is healthy and cathartic. They are learning that it's ok to not get what they want, that they can survive disappointment and frustration, because you are there to support them through the process. When children feel like we are in charge but on their side, and they do what we ask without us controlling or coercing them, that is responsive discipline.
D. MacNamara (2016), Rest, Play, Grow: Making Sense of Preschoolers (Or Anyone Who Acts Like One), Aona Books, Vancouver
L. Markham (2012), Peaceful Parent, Happy Kids: How to Stop Yelling and Start Connecting, Penguin Group, New York
G. Neufeld, Neufeld Intensive I: Making Sense of Kids, accessed online
|
Image Jellyfish Bloom © L. Gershwin
Toxic Bullets
Salps are a strange kind of creature. They're clear. They're squishy. They have no bones or shell. They don't sting or bite. They look kind of like jellyfish, but they aren't true jellyfish. In fact, salps are actually more closely related to humans than they are to jellyfish.
Gobsmacking growth
Under the right conditions, salps become ridiculously numerous. They've been clocked growing at 10% of their body length per hour and are capable of multiplying at a rate of two generations per day [1, 2]. These incredible individual and population growth rates are fuelled by their enormous appetite for phytoplankton, or microscopic single-celled algae. And unfortunately, phytoplankton blooms are one of the primary outcomes of excess nutrient waste, such as from salmon farming.
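The arithmetic behind those rates is striking. Here is a back-of-envelope sketch; the 10%-per-hour and two-generations-per-day figures are from the sources cited above, while the assumption that each generation merely doubles the population is ours, for illustration only (salps clone asexually, so the real per-generation multiplier can be far higher):

```python
# Back-of-envelope salp growth arithmetic.
# Reported figures: ~10% body-length growth per hour,
# up to two generations per day [1, 2].

# Individual growth: 10% per hour, compounded over one day.
hourly_rate = 0.10
length_factor_per_day = (1 + hourly_rate) ** 24
print(f"Body length after 24 h: x{length_factor_per_day:.1f}")  # ~9.8x

# Population growth: assume (hypothetically) that each generation
# only doubles the population.
generations_per_day = 2
pop_factor_per_week = 2 ** (generations_per_day * 7)
print(f"Population after one week: x{pop_factor_per_week}")  # 16384x
```

Even under that deliberately conservative doubling assumption, a bloom can multiply more than ten-thousand-fold in a week.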
Lethal and sub-lethal concentrations of toxins
The stomach contents of 134 mackerel were examined from a mass mortality event in Argentina; all dead specimens were found to have contained salps [3]. This occurred during a bloom of the toxigenic dinoflagellate Alexandrium tamarense, leading the study's authors to hypothesise that salps may have acted as 'vectors of toxicity', leading to the death of the mackerel.
Mackerel aren't the only species affected by salps. In fact, lots of species consume salps [4, 5], and from time to time dead predators, including dolphins and seabirds [unpublished notes] as well as farmed salmon [6], are discovered at autopsy with stomachs filled with salps. It appears that salps may act as toxic bullets of algae.
The exact cause of death in these events has not been conclusively linked to toxic algae; however, the potential of salp-mediated toxic algae ingestion in farmed salmon is particularly worrisome. What becomes of the dead fish? If near harvest size and found freshly dead, they would look fine: would they be harvested anyway, on the assumption that the toxins would not have had time to penetrate to the meat? Would they be used for human food or pet food? And what about the fish with a sub-lethal dose of toxins? How long would these toxins remain in the fish, and what would be the effect on people who consume them? To our knowledge, these questions have not yet been researched, but we believe that they should be.
Bioaccumulation of heavy metals
Perhaps even more worrisome than algae concentration is the role that salps may play in biomagnification of heavy metals. Simply, bioaccumulation is where toxic substances build up in the tissues of an organism, and can be lethal. Biomagnification is even more dangerous, in that toxic substances build up in prey, and then compound faster in predators that repeatedly consume contaminated prey. Through this mechanism, species higher on the food chain have the highest concentrations of heavy metals.
Phytoplankton, as bad luck would have it, concentrate heavy metals from the surrounding water [7, 8]. And as further bad luck would have it, salps concentrate these metals [9]. Therefore, any species consuming contaminated salps is likely to biomagnify these heavy metals... like salmon.
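The compounding described above can be sketched in a toy model. Every concentration factor below is a hypothetical placeholder chosen only to show the mechanism, not a measured value:

```python
# Toy biomagnification model: concentration multiplies at each
# trophic step. The factors below are HYPOTHETICAL, for illustration.
water_conc = 1.0  # arbitrary baseline units of a heavy metal in water

steps = [
    ("phytoplankton", 100.0),  # algae concentrate metals from water [7, 8]
    ("salp", 10.0),            # salps concentrate their algal prey [9]
    ("salmon", 5.0),           # predator magnifies contaminated prey
]

conc = water_conc
for organism, factor in steps:
    conc *= factor
    print(f"{organism}: {conc:,.0f}x water concentration")

# Each step multiplies the last, so the top predator ends up
# 100 * 10 * 5 = 5000x the ambient water concentration.
```

The point of the sketch is the multiplication: whatever the true factors are, each trophic step compounds the one below it, which is why species high on the food chain carry the highest burdens.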
Click here to go to Salp References
|
Please STOP drinking the Kool-Aid
How can Square Holes help?
How can we help?
Drinking Kool-Aid is dangerous to economic and social health and well-being. Side effects include herd-like behaviour, myopic thinking, getting stuck in a rut, blindness to reality and failing to see and avoid dire situations. It reduces critical individualism as people, segments of the population, businesses and governments seek social proof in order to overcome their uncertainty.
(Read the shorter version here)
Social proof comes from the psychological tendency for people to seek conformity to ensure they behave in a socially acceptable manner. It makes people feel confident, a sense of belonging and sharing commonality. Organisations able to harness social proof can build strong cultures, and even communities, as a shared vision is developed towards an end goal or mission.
When people are not sure how to behave in a given situation, social proof guides the way from the information and behaviours of others. Restaurant and other online reviews provide social proof, and people typically seek multiple reviews from people like themselves, in order to define normal. What should we do? Where should we go? How should we think?
WARNING: Drinking too much Kool-Aid can have negative repercussions.
Norms can be set with very little information or actual evidence. Attitudes and behaviours are formed by what people believe to be true and pass on to others. The Internet makes it ever easier to share information that is perceived as true but cannot be proven so, that seems plausible, and that leverages inherent social vulnerabilities or entrenched perceptions. Such information cascades through the Internet and beyond, creating false norms.
Social pressures can be so strong that people are swept up in the current, and act and think in a way they may even know is irrational, but view as the most acceptable and least risky or embarrassing way to think and behave. People can also be moved in a way that information affirms their confirmation bias, and pushes them away from what evidence clearly defines.
WARNING: Kool-Aid can be fatal.
People can converge too quickly towards a perceived norm or default as they seek comfort from ambiguity and uncertainty. Politicians such as Donald Trump and Pauline Hanson proclaim their divisive attitudes and behaviours as normal and acceptable. They absorb and share fake news aligned with the vulnerabilities of segments of the population, which supporters adopt and propel to their friends, family and others, creating growing circles of social proof and norms that are more fear than fact.
Mass suicide cults, and toxic corporate, political and sporting club cultures, all come from drinking Kool-Aid: social proof warping into groupthink, making socially unacceptable attitudes and behaviour acceptable to those who wish to believe what the leaders proclaim. Copycat suicides occur when people seek to emulate the behaviour of others they identify with in publicised suicides, celebrity or otherwise.
WARNING: Kool-Aid skews the conversation away from reality.
In a social media context, social proof typically comes from likes, forwards, comments and followers. A post with five likes, has far less social proof than one with hundreds or thousands, even if hacked with a plethora of hashtags (e.g. #tonyrobbins #garyvee #startuplifestyle #startupbusiness #dropshipper #laptoplifestyle #entrepreneurship #health #fitness #workout #bodybuilding #sixpack #gym #training #photooftheday #healthy #instahealth #instagood #travel #yawn).
Hashtags seek out people with the same interests (and vulnerabilities), and often people seeking to build their own followers and likes. Our social media insecurity is growing as attention seekers, and an increasing volume of paid influencers, swamp feeds with carefully manicured posts that seem to be normal (e.g. the perfect life et cetera). Those with fewer followers and less ability to attract likes feel intimidated and are encouraged to remain quiet, allowing those with a more endorsed perspective to set the norm.
WARNING: Our leaders’ thinking can be biased from drinking the Kool-Aid.
We need to start a public awareness campaign to warn people not to drink the Kool-Aid. The problem is that the most prolific consumers of Kool-Aid are our government and business leaders, all seeking to be liked and to comply with category and other norms: to follow the herd, not to chart their own route. It is hard not to converge, given shared study pathways and work colleagues with similar perspectives. There is a right and wrong way in most professions, and different can seem awkward, uncomfortable and risky.
Politicians and their support teams can start believing their own rhetoric, with non-aligned staff asked to leave. Innovation in business sectors is generally not from within the sector, but from entrants seeking to disrupt false norms and mismatches between social proof and the reality of consumers and evidence.
Start-up entrepreneurs, their advisers and ‘ecosystems’ can drink their own Kool-Aid, believing their idea is GENIUS, because those around (e.g. their mum, family, friends et cetera) think so. There is likely no confirmation of any product-market fit, or sense of reality as to the world beyond the carefully chosen inner circle. Unfortunately, often those there to support, fund and guide entrepreneurs are blindly drinking their own rhetoric. It is high-fives all around, the sales do not eventuate, money washes away, panic grows and the business dies.
WARNING: Kool-Aid can look pretty, but have little nutritional value.
Even many of those who are charged with communicating to the masses are caught up in their own conformity. News media can be seen as sharing more biased fiction than fact, and audience trust is declining. Digital marketing specialists can seek evidence to prove their way is best, and that traditional is dead. Social media managers can ignore the fact that no one is listening, or caring or acting on what they share. Traditional marketers can seek and share information or spokespeople to back their case as to why they are still the best. Media buying agencies can seem at times to sell what is on trend, has social proof and is therefore easier to sell, rather than what actually has an impact on brand and other targets.
Some marketers can have a tendency to seek and share experts, academics and influencers supporting their case, more so than taking a balanced perspective on the evidence. The volume of research we conduct that shows ‘little if any measurable impact on awareness, sales or otherwise’ is alarming.
WARNING: Kool-Aid biases our information filtering and seeking.
Generally, categories behave as they do with little actual robust evidence. Confirmation bias (the tendency to search for, interpret, favor, and recall information in a way that confirms one’s preexisting beliefs or hypotheses) means people seek information that provides social proof for their norms: the evidence that supports an agenda, strategic preference or perspective.
Studies illustrate gaps in perceptions of business and government and those of real people (e.g. Adland out of touch with ‘real Australians’ and doesn’t understand their media habits), and politicians are well recognised to be out of touch (e.g. More voters see Malcolm Turnbull as out of touch and arrogant, Guardian Essential poll shows).
Rather, their perceptions of how real people behave, use media and what products they need are based on their own lives, families, workmates and staff. Discussions with colleagues skew the groupthink towards this view of normal, yet such government and business leaders and workers are likely paid above average and have quite different lifestyles, stages of life, complexities and needs from those of their target audience.
WARNING: Leaders such as Donald Trump are trusted brands of their own popular, yet unhealthy, Kool-Aid.
Donald Trump has strong and vocal opposition, yet he has taken controversial positions on everything from abortion to trade barriers, and has put international relations at risk. He has been polarising; however, his approval rating is up on last year and only just behind that of Barack Obama at the same point in his election cycle (via Gallup). Clearly, Trump has gained social proof amongst a significant proportion of voters, even if his controversy (and insanity) is second to none.
If we can learn anything from Trump, it is that conforming to social norms is not the only way. It is okay for people to stand up for what they believe in, to follow a different path and to question the status quo. It is a scary world if everybody simply follows the herd to avoid uncertainty and confusion, fearing stepping outside the track or sharing a post that only attracts a few likes. The need to be confident and speak out is critical, even if it is outside the norm and the acceptable.
Because in many ways, what may be viewed as the default or correct way is more so groupthink. A political party, business or even a group of friends and family are likely simply conforming to what they believe to be true, with little if any evidence of its validity or a wider perspective. In reality, the evidence from market research and other data and analysis will often prove perceived norms otherwise; what people actually think, do and want is often counter to what social proof indicates.
Moving forward, it is critical that business, government and the wider community keep an open, beginner’s mind and be ever curious. The future will be created not by those who conform, but by those who question the norm. And, please, do not drink the Kool-Aid.
I will leave the last words to Apple on Steve Jobs’ return in 1997 …
In case you didn’t know or haven’t worked it out, “drinking the Kool-Aid” is an expression used to refer to a person or group who goes along with a doomed or dangerous idea because of peer pressure. It is also an actual product (Kool-Aid), which is presumably pleasant enough to drink.
Thank you so much for reading. You rock.
|
Nowadays, data breaches are a grim reality and a part of the daily grind for organizations – no matter what sector or size. Enterprises are exposed to hackers and malicious insiders daily. As ubiquitous as they are today, threats and vulnerabilities are only advancing and becoming more serious as technology matures. Not only because of the advances in tech, but also because bad actors are enlisting increasingly sophisticated ways to compromise information systems, while behaving more like crime syndicates with increasingly complex organizational structures.
To top that, data breaches are a costly issue. According to the Global Overview from IBM Security and the Ponemon Institute, the global average cost of a data breach is $3.86 million, up 6.4 percent from the previous year. The average worldwide cost (from the 2018 report) for each stolen record containing classified information is $148 per record, which represents a 4.8 percent increase from 2017.
Why Is the Cost of Data Breaches on the Rise?
The Ponemon Institute revealed that 36% of the cost of a data breach comes from the loss of business that occurs from a loss of consumer trust after a cyber-attack. This is the equivalent of $1.44 million. The average total cost of a breach now stands at $3.92 million, the average breach size is 25,575 records, and the average cost per record is $150.
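As a rough consistency check on those figures (the numbers are as quoted by the report; the small mismatches simply reflect rounding in the source):

```python
# Rough check of the quoted IBM/Ponemon figures.
avg_breach_cost = 3.92e6    # average total cost of a breach (USD)
lost_business_share = 0.36  # share of cost attributed to lost business

lost_business_cost = lost_business_share * avg_breach_cost
print(f"Lost business: ${lost_business_cost:,.0f}")  # ~$1.41M (quoted as $1.44M)

# Records-based estimate: average breach size x average per-record cost.
avg_records = 25_575
cost_per_record = 150
print(f"Records estimate: ${avg_records * cost_per_record:,}")  # $3,836,250, close to $3.92M
```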
Since the damage from a breach is rarely restricted to just one part of a corporation’s procedures, the cost of losing sensitive data unavoidably harms a business in various areas, generating liabilities and constraints that can take years to overcome. The actual cost of a breach is in practice higher than the Ponemon calculation, because it entails lost opportunities and competitive disadvantages that are difficult to quantify. When assessing its risks, though, a company should consider every one of the costs it might encounter after a data breach.
According to the IBM-Ponemon study, certain factors contributed to the increased cost of a data breach, including failure to achieve compliance, extensive cloud migration, and IT infrastructure and system complexity. Additionally, the involvement of third-party contractors and consultants significantly increased the cost; the total increase was $370,000.
Airbnb Reported a Net Loss of $322 Million for the First Nine Months
Case in point with Airbnb, whose customers complained that their accounts were hacked, services booked in their names to the tune of thousands of dollars and more. Airbnb reported a net loss of $322 million for the first nine months of 2019, compared to turning a $200 million profit the year prior. Brian Chesky, CEO and Head of Community for Airbnb, announced a security initiative that costs $150 million to implement.
What Are the Consequences of a Data Breach?
Data breaches instigated by cybercriminals occur most often and are the most expensive. More than half of all cases are caused by a malicious attack, and it takes more than 10% longer to remediate a breach of this sort. This explains why a breach caused by a malicious attack costs up to 25% more than one caused by human error: an average of $4.5 million vs. $3.5 million.
Location and Industry Determining Factors in Data Breach Cost
The location and industry of targeted organizations can also influence the cost of a data breach. The country with the highest data breach costs is the United States, where the average cost is $8 million. The industry with the highest costs is healthcare, with an average of $6 million per breach. Ransomware is a perfect example of a cyberattack frequently used by hackers. In 2020, malicious actors are predicted to target victims such as users of medical devices, knowing that these targets will pay out the ransom quickly to protect the safety of patients.
Learn more about Collateral damage of a privileged information data breach here
Threat Modeling Could Save You Millions a Year
There are measures companies can take to reduce the costs resulting from a data breach. These measures include the implementation of a threat modeling tool that can integrate security into the software development lifecycle. Along with this, it is important to have the right team of security professionals, able to take a proactive approach with an incident response plan. The combination of these two components can decrease the cost of a data breach by up to $1.5 million.
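One way to see why such measures pay for themselves is a simple expected-loss calculation. The breach probability and average cost below are the industry figures cited in this article; treating the $1.5 million as a straight per-breach reduction is our simplifying assumption:

```python
# Expected-loss sketch using figures cited in this article.
breach_prob_2yr = 0.30    # ~30% chance of a breach within two years
avg_breach_cost = 3.92e6  # average cost per breach (USD)
mitigated_saving = 1.5e6  # assumed per-breach cost reduction

expected_loss = breach_prob_2yr * avg_breach_cost
expected_loss_mitigated = breach_prob_2yr * (avg_breach_cost - mitigated_saving)

print(f"Expected 2-year loss, unmitigated: ${expected_loss:,.0f}")          # $1,176,000
print(f"Expected 2-year loss, mitigated:   ${expected_loss_mitigated:,.0f}")  # $726,000
```

Under these assumptions, the expected saving alone is several hundred thousand dollars over two years, before counting reputational and compliance benefits.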
No matter which industry you work in, no organization wants to cover the cost of a data breach. Worse, the damage goes beyond finances: harm to a company’s reputation can generate a huge loss. To avoid these damages, it is vital that you have thorough control over the sensitive data you are managing. This is where threat modeling can help your company execute a proactive strategy and put protective measures in place.
ThreatModeler to Help Your Business Reduce the Likelihood of a Data Breach
The chance of suffering a data breach in the next two years is nearly 30%. With ThreatModeler, you can radically reduce that risk. ThreatModeler provides a great way to visualize your attack surface and map out the various threats and attack vectors that it may contain. Threat modeling traditionally uses process flow diagrams to lay out the different components, user behaviors and communication flows. Now, with internet, mobile and IoT-embedded devices, the attack surface grows, and with it the number of attack vectors.
Threat modeling used to be a manual process that took many hours to complete. ThreatModeler is a leader in the threat model creation space and has automated key tasks to save organizations up to 80% in time-cost. ThreatModeler comes out-of-the-box integrated with trusted threat libraries and security guidelines as outlined by AWS, OWASP, the NVD and others. ThreatModeler lends itself to IT project management with its Jira integration, enabling DevSecOps teams to assign tasks, track progress and communicate as needed. To learn more about how ThreatModeler can help you, schedule a live demo with our team. You can also contact us directly to speak with a threat modeling expert.
|
Neuropsychiatry: Innovative and Effective Treatment Options
Neuropsychiatry is in constant evolution. The fields of neurology and psychiatry have always been intertwined, reflecting the correlation between mental and physical health. Despite this, mental illness was treated exclusively through psychotherapy for many years. Excitingly, that is changing. Current treatments are combining innovative technology with talk therapy to address mental disorders and addiction at their source: the brain. Certified by the American Board of Psychiatry and Neurology in psychiatry, Dr. Jonathan Beatty uses his expertise to shed light on how these treatments work, how they are beneficial, and what they mean for the future of neuropsychiatry.
What Is Neuropsychiatry?
Neurology refers to the function of the brain while psychiatry focuses on the behavioral aspects or outcomes that result from the brain’s inner-communications. The brain communicates through electric signals and hormones. These signals activate or inactivate certain channels, impacting the travel of neurotransmitters. These neurotransmitters cause the symptoms associated with mental disorders and are essentially the “driving force” behind the behaviors initiated by them.
What Are Common Misconceptions About Neuropsychiatry Treatment?
Mental illness and addiction are often misunderstood, which causes unfortunate stigmatization. Many people believe that addiction is a choice and that those struggling with it are weak or even bad. These offensive stereotypes could not be farther from the truth. Dr. Beatty explains that there is a science to addiction. When a new substance is introduced to the brain, a pleasure pathway is ignited. Your brain favors this new substance, such as alcohol, drugs, or even caffeine. When that substance runs out, an anti-reward pathway is ignited, spurring withdrawal symptoms. Essentially, the brain is then programmed to crave those substances, causing a physiological dependency.
Additionally, there are misconceptions about treatments for mental and substance use disorders. Popular movies and news outlets have inaccurately portrayed treatment. They show seizure-inducing electric shocks and demonize psychiatric staff members. In reality, these treatments are not extreme and people working with patients are compassionate. This misconception is dangerous as it sparks fear in those needing treatment.
How Has Treatment Improved Over the Years?
Over the course of history, the treatment of mental illness and substance use disorders has drastically improved. The more people understand the human brain and are empathetic to the behaviors that result from mental disorders and addiction, the more accurate treatment becomes. In addition, intertwining treatment options has been beneficial for patients. Dr. Beatty explains that combining medication with talk therapy is an excellent way to ensure patients understand why they are receiving said medication and how it is going to help.
What Is TMS Therapy?
Transcranial magnetic stimulation (TMS) is a form of brain stimulation that can be administered on an outpatient basis. In TMS, a device is placed over the patient’s head. First, a baseline is established, identifying areas in the brain that contribute to certain behaviors. Once these are identified, they can be targeted by magnetic stimulation in order to promote proper function. The procedure is quick, non-invasive, and painless.
What Is MAT Therapy?
Medication-assisted therapy (MAT) uses medication to make detox easier for many addicts. There are a few different types of medication now available to help those battling alcohol and drug addiction. Some medications reduce withdrawal symptoms while others take away the desirable effects of these substances—even replacing them with an undesirable result.
What Is Psychedelic Psychiatry?
Unlike MAT, which uses more maintenance medications, psychedelic psychiatry uses psilocybin, LSD, ketamine, and ecstasy to treat mental disorders. Though much of this is still being experimented with and refined, a form of ketamine is available to treat depression. Called SPRAVATO, or esketamine, it is a nasal spray that can be inhaled by patients to improve their symptoms of depression.
Would These Treatments Work for Me?
Overall, it is a revolutionary time in the treatment of mental disorders and addiction. With options that treat these disorders at their source multiplying, help is more available than ever. If you or a loved one is struggling with mental illness or a substance abuse disorder the time to get help is now. Contact Wave Treatment Centers at (215) 242-0420 to find the right course of treatment for you.
One Step Can Change Your Life
Start your path of mental health recovery today.
Copyright © 2021 Wave Treatment Centers · Privacy Policy
Philadelphia Mental Health and Substance Abuse Treatment
|
Discarded Plastic Garbage Bags DIY Floral Decoration Crafts
1. Cut the garbage bags of your favorite color into long strips.
2. Roll up each long strip to form a flower, and tie a thin wire around the base of each rolled flower.
3. Wrap the roots of flowers and thin iron wire with green tape.
4. Use scissors to trim the flowers neatly, then flick the flowers to make them fluffy.
5. Repeat the above steps to prepare many small flowers.
6. Use green tape to tie the small flowers to a thick wire.
7. Sort out the tied flowers to make the flowers on a wire more natural. Finally twist the wire to make the wire bend, which will make it more like a real bouquet and look more artistic.
1. The third and fourth steps can be swapped so that the flowers will be fluffier.
2. Prepare fluffy blooming flowers, and don’t forget to prepare some flowers that are not yet in full bloom.
3. More flowers on the thick wire will look more beautiful; too few will look monotonous.
1. Be careful when using scissors.
2. Children need to operate under the supervision of adults to avoid accidental injury by scissors, wire, etc.
|
36 engaging argumentative essay topics
Home » Essay Examples » 36 engaging argumentative essay topics
An argumentative essay is the most common essay given to students for assessment. Teachers assign them to check the critical thinking skills of their students, and they help a lot in improving students' critical and analytical skills. You should be precise and to the point when writing such essays.
The argumentative essay must have solid pieces of evidence to prove the arguments, and it should be written with smooth transitions. Such essays are not easy to write. I also used to hire a professional essay writer to write essays for me and guide me on some advanced argumentative strategies. Using those write-ups, I practiced and trained myself. It is advised that you follow the same path.
Argumentative write-ups require fine skills and a lot of practice. You should try to practice daily. Get yourself prepared for writing such essays. Some of the Important argumentative essay topics are as follows. You should practice these and be prepared for them.
1. Is a smart lockdown a viable way to tackle Covid-19?
2. Has the internet positively or negatively affected human society?
3. Why are psychologists not labeled as doctors?
4. Is gun control an effective way to control crime?
5. Positive and negative outcomes of feminism?
6. Should student textbooks be replaced by notebook computers?
7. Should wealthy nations provide economic assistance to the Covid trodden states?
8. Should engines be converted to hybrid technologies?
9. Should the forthcoming generations be encouraged to practice religion?
10. Was the US election fair?
11. Will the world counter the economic turmoil post-Covid-19?
12. Has the world faced negative Political impacts from the pandemic in 2021?
13. How is social integration affected after the lockdown?
14. Impact of social distancing on human psychology?
15. What are the challenges in the post-Covid-19 world?
16. How is the potential of technologies essential for the future?
17. Is the conspiracy of biowarfare real?
18. The importance of emerging technologies and their impacts?
19. What are the impacts of the corona vaccine and the changing strain of the virus?
20. What are the changing dimensions of warfare: a case study of the colonized world?
21. Should children be provided with electronic gadgets?
22. Is there a suitable way of waste management for underdeveloped states?
23. Why are we still lacking environmental control?
24. Is there a need for human colonies on Mars?
25. Should there be effective control over drug use like marijuana?
26. Are we heading towards economic repression?
27. Is there a chance of more covid waves?
28. Will the pandemic be controlled after the covid vaccine?
29. Is Pfizer's corona vaccine effective?
30. Should there be a check and balance on digital currency internationally?
31. Will we witness a covid-19 free world in 2021?
32. Should the lockdown violators be punished?
33. Will China surpass the US economy?
34. Is there a universal rule of men being the breadwinners?
35. Are nuclear arms guarantees of peace?
36. Will we ever witness a nuclear-free world?
All these topics are relevant to contemporary affairs. Each one of them is a potential topic to be given in assessments by teachers. Make yourself ready and start writing. You will learn with time. But as you are starting out, it is not possible for you to write with perfection, so it is advised to contact a paper writing service. Look closely at how they write and then compare it with your writing. Try to find the loopholes and fill them. It is not a one-day process; you will need time to master the right skills.
So, learn as much as you can because this is the best time. You will be writing many such argumentative essays in your student life, so it is better to learn the required skills now. You can also ask someone to write your paper.
|
Continuous or discrete variable
In mathematics and statistics, a quantitative variable may be continuous or discrete if its values are typically obtained by measuring or counting, respectively. If it can take on two particular real values such that it can also take on all real values between them (even values that are arbitrarily close together), the variable is continuous in that interval. If it can take on a value such that there is a non-infinitesimal gap on each side of it containing no values that the variable can take on, then it is discrete around that value.[1] In some contexts a variable can be discrete in some ranges of the number line and continuous in others.
Continuous variable
A continuous variable is a variable whose value is obtained by measuring, i.e., one which can take on an uncountable set of values.
For example, a variable over a non-empty range of the real numbers is continuous if it can take on any value in that range. The reason is that any range of real numbers between a and b, where a < b, is uncountable.
Methods of calculus are often used in problems in which the variables are continuous, for example in continuous optimization problems.
In statistical theory, the probability distributions of continuous variables can be expressed in terms of probability density functions.
In continuous-time dynamics, the variable time is treated as continuous, and the equation describing the evolution of some variable over time is a differential equation. The instantaneous rate of change is a well-defined concept.
Discrete variable
In contrast, a variable is a discrete variable if and only if there exists a one-to-one correspondence between this variable and ℕ, the set of natural numbers. In other words, a discrete variable over a particular interval of real values is one for which, for any value in the range that the variable is permitted to take on, there is a positive minimum distance to the nearest other permissible value. The number of permitted values is either finite or countably infinite. Common examples are variables that must be integers, non-negative integers, positive integers, or only the integers 0 and 1.
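A small Python sketch (illustrative values only) makes the distinction concrete: a discrete variable's permitted values have a positive minimum gap, while a continuous variable can take values arbitrarily close together.

```python
# Discrete: the permitted values have a positive minimum gap;
# here, the face values of a die.
die_faces = [1, 2, 3, 4, 5, 6]
min_gap = min(b - a for a, b in zip(die_faces, die_faces[1:]))
assert min_gap > 0  # a non-zero gap around every permitted value

# Continuous: between any two real values the variable can take on,
# it can also take on their midpoint, so values can be arbitrarily close.
a, b = 0.1, 0.2
midpoint = (a + b) / 2
assert a < midpoint < b
```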
Methods of calculus do not readily lend themselves to problems involving discrete variables. Examples of problems involving discrete variables include integer programming.
In discrete time dynamics, the variable time is treated as discrete, and the equation of evolution of some variable over time is called a difference equation.
In econometrics and more generally in regression analysis, sometimes some of the variables being empirically related to each other are 0-1 variables, being permitted to take on only those two values. A variable of this type is called a dummy variable. If the dependent variable is a dummy variable, then logistic regression or probit regression is commonly employed.
References
1. K.D. Joshi, Foundations of Discrete Mathematics, New Age International Limited, 1989, p. 7.
|
What is a cannabinoid?
February 17, 2022
Cannabinoids are a fundamental part of the cannabis plant. These naturally occurring chemical compounds contribute to the myriad effects cannabis consumers experience when they light up a joint, eat an edible, or drop a cannabis-infused tincture under their tongue.
While it’s true that cannabis represents a rich source of cannabinoids, cannabinoids actually encompass any compound capable of influencing the body’s endocannabinoid system. Cannabinoids are also found within the human body, in several other plant species, and can even be formulated synthetically.
Distinct cannabinoids can induce effects as far-ranging as euphoria, pain relief, paranoia, sleepiness, and even increased appetite—yes, certain cannabinoids can even cause the munchies.
In this guide to cannabinoids, we’ll explore different kinds of cannabinoids, unpack how these potent compounds can affect the body, and explain the role cannabinoids play in plants.
Types of cannabinoids
Cannabinoids can be categorized into three groups:
• Phytocannabinoids are found in the cannabis plant and a handful of other plants
• Endocannabinoids, or endogenous cannabinoids, can be found in the bodies of mammals, including humans
• Synthetic cannabinoids are formulated in laboratories
Cannabis represents the most abundant and diverse source of phytocannabinoids, or plant cannabinoids, on the planet. More than 150 different cannabinoids are found in the cannabis plant.
However, the cannabis plant doesn’t directly produce cannabinoids. Rather, it produces cannabinoid acids, like THCA and CBDA, that must be activated to become the cannabinoids consumers know and love, like THC and CBD.
Heating or leaving cannabis to dry out over time activates these acids, thanks to a process known as decarboxylation. For example, when you light a joint or heat weed before making cannabutter, THCA is converted into THC: the psychoactive, intoxicating cannabinoid that so many covet. THCA can’t get you high—but THC can.
While the consumption of raw cannabinoid acids is beginning to garner a following, most people prefer to consume cannabinoids that have been activated.
Until recently, it was believed that cannabinoids were unique to the cannabis plant, but new findings suggest otherwise. Black pepper, cacao, echinacea, rhododendrons, and black truffles also contain compounds that interact with the body’s endocannabinoid receptors. These compounds are not the same as the cannabinoids found in cannabis, but are cannabimimetic—they can induce effects similar to cannabis’ cannabinoids.
Endocannabinoids (endogenous cannabinoids)
Endocannabinoids form part of the body’s endocannabinoid system. These compounds, also known as endogenous cannabinoids—”endo” or “endogenous” means “within”—are produced by different organs and tissues in the body and have a similar structure to cannabinoids found in cannabis.
The body can synthesize endocannabinoids to help regulate processes as diverse as pain, memory, mood, immunity, sleep, and responses to stress. The two main endocannabinoids are anandamide (AEA) and 2-AG (2-arachidonoylglycerol). In short, endocannabinoids help to keep critical bodily functions running smoothly.
Synthetic cannabinoids
These compounds don’t occur naturally in plants or people but are synthesized using chemical processes. There are more than 200 synthetic cannabinoids, nearly all of which are designed to exert powerful effects on the body’s cannabinoid receptors.
One such example, AMB-FUBINACA, is reported to be 75 times stronger than THC, the main psychoactive cannabinoid found in cannabis. However, the safety of certain synthetic cannabinoids is questionable as they can have detrimental effects on consumers, causing anxiety, paranoia, and impaired brain function.
Synthetic cannabinoids can also be made by chemically manipulating CBD, which can be extracted from industrial hemp. Another example of synthetic cannabinoid production is the “pharming” of cannabinoids using brewer’s yeast, which acts as a medium for growth—bacteria and algae have been used as mediums too.
While cannabinoids grown from yeast are structurally and chemically the same as those that appear in cannabis, they’re technically synthetic because they are the product of genetic engineering.
The effects of cannabinoids on the body
Ultimately, everyone’s endocannabinoid system, or endocannabinoid tone, is unique. Different bodies vary in their responses to phytocannabinoids. While some of us feel chill when sharing a joint with friends, others can be plunged into a state of anxiety. Researchers are still uncovering how specific cannabinoids influence our bodies.
The endocannabinoid system
Cannabinoids interact with the body’s endocannabinoid system, or the ECS. The endocannabinoid system helps to maintain equilibrium in bodily processes such as sleep, memory, mood, appetite, and pain.
In the simplest terms, the ECS is a signaling network that extends throughout the body. This extensive network is made up of cannabinoid receptors, endocannabinoids—cannabinoids that the body produces—and enzymes that help to create and break down endocannabinoids after they’ve been used.
Cannabinoid receptors form a fundamental part of this system. There are two known types in the body: CB1 and CB2. These receptors are located in the brain, spinal cord, organs like the gastrointestinal tract, and peripheral parts of the body. Endocannabinoids can stimulate these cannabinoid receptors, provoking responses such as feelings of sleepiness, relaxation, or hunger.
How cannabinoids interact with the endocannabinoid system
However, endocannabinoids aren’t the only substance that can stimulate endocannabinoid receptors. The cannabinoids found in cannabis—phytocannabinoids—are structurally very similar to those produced by the body. These plant cannabinoids can bind with the cannabinoid receptors in our endocannabinoid system, triggering responses throughout the body.
THC, for example, can induce euphoria, relieve pain, slow perception of time, and stimulate appetite. CBD can alleviate inflammation, ease anxiety, and suppress seizures.
However, evidence is growing that cannabinoids don’t just interact with cannabinoid receptors. Cannabinoids also appear to work on other receptors in the body, such as serotonin 5-HT receptors. The vast array of effects that cannabinoids can trigger are determined by how strongly they can bind to these receptors.
Consumption method
The method of cannabis consumption also influences effects. Different delivery methods, such as smoking, oral consumption, or transdermal application, can strongly influence the bioavailability of cannabinoids. Bioavailability refers to the extent to which a substance enters the bloodstream and can deliver an active effect.
The bioavailability of inhaled THC averages 30%, for example, and the effects can kick in within as few as ten minutes. On the other hand, when THC is eaten in a brownie, the bioavailability is 4-12%, with effects taking an hour or more to kick in.
This reduced bioavailability and delayed onset occur because cannabinoids must make their way through the gastrointestinal tract and into the liver, where they are processed. The majority of the THC is broken down in the liver and converted into other products, resulting in low bioavailability.
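As a rough illustration of what these percentages mean in practice, here is a quick calculation for a hypothetical 10 mg dose, using only the bioavailability figures quoted above:

```python
# Rough effective-dose comparison for a hypothetical 10 mg THC dose,
# using the bioavailability figures above (inhaled ~30%, oral 4-12%).
dose_mg = 10.0

inhaled_mg = dose_mg * 0.30   # amount reaching the bloodstream when inhaled
oral_low   = dose_mg * 0.04   # oral route, low end of the range
oral_high  = dose_mg * 0.12   # oral route, high end of the range

print(f"inhaled: {inhaled_mg:.1f} mg, oral: {oral_low:.1f}-{oral_high:.1f} mg")
```

The same nominal dose delivers several times more active THC when inhaled than when eaten, which is one reason edibles are dosed so differently from flower.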
The role of cannabinoids in the plant
Cannabinoids also have a highly functional role in safeguarding the well-being of the cannabis plant. Cannabinoids accumulate in the sticky, resinous trichomes of cannabis, which are most readily found on female buds.
According to recent research, cannabinoids act as a sunscreen, absorbing harmful UV-B radiation that may damage the plant’s growth. What’s more, studies have shown that increased cannabinoid production occurs in cannabis flowers when they are exposed to extra UV-B radiation.
It’s likely that cannabinoids hold a range of other defensive roles too. For example, trichomes, where cannabinoids are mostly found, are common to many plant species and help to protect against predatory insects and pests, water loss, and overheating.
Cannabis also appears to produce more cannabinoids when exposed to certain stressors, like heat, low soil moisture, or even soil that lacks nutrients. Ironically, it looks like a little bit of stress may be a good thing for cannabinoid production.
Major cannabinoids
THCA and CBDA are by far the best known cannabinoid acids produced by cannabis. These two cannabinoids occur in much higher concentrations than other cannabinoids present in the plant. THC potency has increased over time, suggesting that cannabis aficionados have deliberately bred plants that yield increasingly high THC content.
Both THC and CBD are psychoactive cannabinoids, meaning they can alter nervous system function and temporarily change perception, mood, cognition, and behavior. THC is intoxicating and can get you high, while CBD is not and cannot. Both cannabinoids also boast a range of other physical and mental effects, and there's a vast repository of research exploring their therapeutic uses.
These two major cannabinoids also form the basis for defining cannabis. Nowadays, cannabis strains, or chemovars, are often categorized by their major cannabinoid content. There are three main types:
• Type I: High concentrations of THC
• Type II: Equal levels of THC and CBD
• Type III: High levels of CBD
Minor cannabinoids
More than 150 cannabinoids have been identified in cannabis, and counting. However, the vast majority of these are minor cannabinoids, which make up less than 1% of the cannabis bud. Nevertheless, consumer and expert interest in minor cannabinoids has risen in recent years as many are curious about the untapped potential of these little-known cannabinoids.
Certain intoxicating minor cannabinoids, like delta-8 THC and delta-10 THC, are fast developing a reputation for providing a high. These minor cannabinoids occur naturally in very low concentrations and are often synthesized from cannabinoids in hemp plants to sidestep legal issues.
Researchers are also beginning to delve into the therapeutic properties of prominent minor cannabinoids, like THCV (tetrahydrocannabivarin), CBG (cannabigerol), and CBN (cannabinol). In the future, we may see minor cannabinoids combined with terpenes, flavonoids, and other compounds to formulate personalized cannabis healthcare that targets individual issues and conditions.
Emma Stone
Emma Stone is a journalist based in New Zealand specializing in cannabis, health, and well-being. She has a Ph.D. in sociology and has worked as a researcher and lecturer, but loves being a writer most of all. She would happily spend her days writing, reading, wandering outdoors, eating and swimming.
|
Functional inks: what are they and why use them in your project?
Functional inks used to create printed and flexible circuits offer a cost-effective alternative to conventional methods such as etched copper flex circuits and printed circuit boards (PCBs). While the latter options are still widely used, functional inks make it possible for manufacturers to print on flexible substrates for mass-scale circuit manufacturing.
What are functional inks, and how do they work? Keep reading to learn everything you should know about this handy technology and harness its potential in your next project.
What are functional inks?
Functional inks are inks that can be applied to a broad range of rigid and flexible surfaces using different printing processes:
• screen printing (sheet-fed and roll-fed)
• aerosol jet printing
• gravure printing
The choice of the printing technique depends on the ink type and ultimate product use. Functional inks are generally more environmentally friendly than traditional methods. Why is that? Because etching copper on PCBs requires the use of acid baths, while the additive process of employing functional inks generates no waste and uses no harmful chemicals.
We can divide functional inks into two categories: conductive and non-conductive. Let’s take a closer look at each to discover their benefits and drawbacks.
Conductive inks
Conductive inks are inks that conduct electricity. You can find them in a broad range of applications – for example:
• capacitive and membrane switches,
• RFID tags,
• touch screens
• biological and electrochemical sensors,
• Positive Temperature Coefficient (PTC) heaters,
• electromagnetic interference/radio frequency interference (EMI/RFI) shielding,
• wearable electronics (stretchy conductive inks).
What factors determine the selection of conductive ink over other options? It largely boils down to cost and conductivity, although substrate compatibility, ink molecular structure, ultimate product use, and power efficiency requirements also influence the decision. Conductive inks come in several variants.
Silver and silver chloride inks
Silver inks are highly conductive and have low resistance. They work with a variety of substrates, including polyester, polycarbonate, glass, and vinyl, and are resistant to abrasion, folds, and wrinkles. Their strong adhesion, flexibility, and printability make them suitable for medical electrodes and membrane circuits.
Carbon inks
In comparison to silver inks, carbon inks have higher resistance, lower conductivity, and longer durability. They prevent circuits from shorting, protect silver inks from silver migration, and are less expensive. They also have comparable qualities to silver inks in terms of adhesion, printability, and substrate compatibility. To achieve the required mix of resistivity, conductivity, and cost, carbon inks are frequently combined with silver inks.
Gold and platinum inks
Due to the high cost of metals such as gold and platinum, these inks are typically manufactured and used in very limited amounts when the performance benefits outweigh the cost. Gold, for example, is used in applications that need a high level of oxidation resistance, whereas platinum is utilized in applications that need a high level of conductivity.
Metal-based inks
Because of its strong conductivity, copper ink may be used as a less expensive alternative to silver inks. However, the ink’s low stability limits its application. Nickel inks are extremely durable, but also more costly than carbon inks.
Non-conductive inks
Non-conductive inks don’t conduct electricity but are used in a variety of important functional and ornamental applications: sensors, membrane switches, graphic overlays, and labels. Here are a few examples of non-conductive inks:
Graphic design inks
Graphic inks are used in a variety of components and brand identification products, including nameplates, labels, ornamental signs, decals, placards, elastomer keypads, and graphic overlays. Solvent-based inks, water-based inks, UV curable inks, epoxy inks, and air-dry inks are a few examples of graphic inks.
How to select the right graphic ink? Consider factors such as the substrate’s surface energy and surface tension, ambient conditions, and cost. At Melrose, we use techniques such as screen printing, digital printing, lithographic printing, UV inkjet printing, and UV flexographic printing.
Dielectric inks
Dielectric inks are electrically insulating inks that preserve conductive inks as they operate together. Dielectric inks prohibit the various layers of conductive ink from interacting with one other in a multi-layer circuitry design. They prevent electrical shorting and silver migration by forming insulating barriers.
Dielectric inks can be used on both stiff and flexible substrates, including bare or print-treated polyester, polycarbonate, and glass. They have excellent adhesive qualities, exceptional flexibility, moisture and abrasion resistance, and are unaffected by folds and bends. Membrane switches, Radio Frequency Identification (RFID) tags, antennas, and electrodes all employ dielectric inks.
Specialized inks
Specialized inks are slowly creeping their way into common product development processes as well. Here are the most common types of specialized inks used today:
• Thermochromic inks – temperature-sensitive inks that change color when the ambient temperature rises over a set point. They come in a variety of colors, including neon, blue, purple, and other hues. Labels, print advertising, textiles, biomarkers, and sensors are all common use cases for thermochromic inks.
• Photochromic inks – when exposed to UV light, these inks change color immediately. These photochromic inks, like thermochromic inks, come in a variety of hues. They can be found in light-sensitive eyeglasses, body patches that sense sunlight exposure, and clothes.
• Hydrochromic inks – when this type of ink interacts with or is immersed in water, it changes color. Packaging, ornamental umbrellas, and apparel are all examples of common usage.
Melrose uses high-quality inks to deliver the most demanding components
We work with industry-leading ink compounders, which helps us to cut manufacturing lead times and decrease pricing volatility. Our engineers can match a broad range of production requirements related to unique ink and printing techniques while remaining flexible throughout product development. Schedule a meeting with our consultant to learn more about our offer and get specialized advice on inks and printing.
|
Palme School: useful articles about teaching children the Russian language
Russian spleen or why we long for our homeland
All immigrants are familiar with homesickness. Some have it stronger, some weaker, but according to psychologists, everyone who has left their homeland goes through it.
Why does this happen? We leave for opportunities, achievements and a better life. We are glad that our children will get a good education, get good jobs, will be able to do what they love. But still, sometimes we want to go back to our childhood, walk in familiar streets, or at least stop feeling lonely.
We are "one of us" in the homeland
Psychologists explain it this way: in the process of adaptation, immigrants seem to lose their role, as they have to start life anew, from scratch. In the home country everything was clear and understandable, close and familiar. But in another country there is no basic feeling of safety. Everything is strange, unfamiliar, alien, and sometimes there is a feeling of inferiority. And that is why immigrants instinctively want to go back, even for a day, to the homeland where "everything is simple and familiar."
Children's impressions are the strongest
The first years of a child's life have a very important influence on them. One mother told a story: "When I saw port and shipbuilding cranes in the port of Savannah, I would freeze in admiration and always think of Russia. For a long time I looked for the reason, until my father once told me that he often took me on picnics on the Yenisei, where similar cranes stood. I wasn't even three years old at the time, but I remembered it!"
We involuntarily absorb everything around us. That's how the brain and psyche work. And when we immigrate to Canada or the U.S., those memories are not erased. They continue to live inside, sometimes stirring the soul.
And we miss our homeland. For the feelings we had back then. The memories, the places, the smells, the impressions. And we will always miss it.
|
What would you say if someone asked you about an electron's spin? Well, the first line of our answer would be, "an electron is a spin half particle." Isn't it? Not only the electron but all the fermions have a half-integral spin, while all the bosons possess an integral spin. But what does this spin actually represent? Are these subatomic particles actually spinning around their axis, with their spin a measure of how much they spin? Or does the concept of spin have something completely different to offer? These questions have been embedded in our minds for a long time, but we are still not very clear about what the term "spin" actually represents.
So, this article is a compact attempt made in the direction to touch upon the basics of an electron’s spin and to understand it in the real sense! However, I’ll restrict myself to only an electron’s spin in this article to simplify things.
Do electrons spin around their axis?
Our Earth spins around its axis, but it would be a mistake to view electrons as little billiard balls spinning in space like a planet. Although electrons have a property called spin, it is a lot fuzzier than the name suggests. An electron is a spin half particle, with a spin angular momentum of (1/2)ħ (we will discuss this later). If we took this value literally, then given the electron's mass and size, it would have to spin with a surface velocity exceeding the speed of light, which is not possible. So the only conclusion is that an electron can't literally spin about its own axis, and thus "spin" is just a representative term.
A popular meme explaining the concept of an electron’s spin (Image: starecat.com)
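The faster-than-light claim can be checked with a back-of-the-envelope calculation, under the explicitly classical (and physically naive) assumptions that the electron is a uniform sphere of the classical electron radius and that its rotation carries angular momentum ħ/2:

```python
# Back-of-the-envelope check: how fast would the "equator" of a
# classical, uniform-sphere electron have to move to carry spin
# angular momentum hbar/2? (This model is an assumption for
# illustration, not a real description of the electron.)
hbar = 1.0546e-34   # reduced Planck constant, J*s
m_e  = 9.109e-31    # electron mass, kg
r_e  = 2.818e-15    # classical electron radius, m
c    = 2.998e8      # speed of light, m/s

# Uniform sphere: L = I*omega = (2/5)*m*r^2*omega, and the surface
# speed is v = omega*r, so L = (2/5)*m*r*v. Setting L = hbar/2:
v_surface = 5 * hbar / (4 * m_e * r_e)
print(f"required surface speed: {v_surface:.2e} m/s "
      f"(about {v_surface / c:.0f}x the speed of light)")
```

The required surface speed comes out more than a hundred times the speed of light, which is why the literal spinning-ball picture has to be abandoned.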
So what does an electron’s spin actually stand for?
Each particle in our universe has some intrinsic properties and some extrinsic properties. Intrinsic properties are the inherent properties of a particle, while extrinsic properties are acquired depending upon external factors. For example, our blood group is our intrinsic property, while our behavior is an extrinsic one. An electron has three main intrinsic properties, and the spin is one of them, with the other two being mass and charge.
An electron’s spin is related to the electron’s inherent angular momentum, i.e., the spin angular momentum. The spin multiplied by ħ (the reduced Planck’s constant) gives the value of the electron’s intrinsic spin angular momentum. It is independent of all other properties of the electron, even its orbital angular momentum. So, spin is just the angular momentum that a particle has just because it’s that particle!
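For reference, the standard quantum-mechanical relations behind these statements (with spin quantum number s = 1/2 for the electron) can be written as:

```latex
% Magnitude of the intrinsic (spin) angular momentum vector:
|\vec{S}| = \sqrt{s(s+1)}\,\hbar = \frac{\sqrt{3}}{2}\hbar
  \quad \text{for } s = \tfrac{1}{2},
% and its projection onto a chosen measurement axis:
S_z = m_s \hbar = \pm\tfrac{1}{2}\hbar .
```

Strictly speaking, the familiar value (1/2)ħ is the projection S_z of the spin along a measurement axis; the magnitude of the full spin vector is slightly larger, (√3/2)ħ.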
The discovery of electron’s spin
In 1922, Otto Stern and Walther Gerlach carried out an experiment that unintentionally led to the discovery of the electron’s spin. In the revolutionary “Stern-Gerlach experiment,” they put a bunch of silver atoms in an oven and vaporized them. The resulting beam of silver atoms was then passed through a non-uniform magnetic field, where it split into two beams.
To explain these observations, it was stated that an electron has a magnetic field due to its intrinsic spin. Whenever the electrons having opposite spins are put together, there is no net magnetic field. However, the silver atom used in the experiment had 47 electrons, 23 of one spin type and 24 of the opposite type. The electron pairs of the opposite spins canceled each other out, but one unpaired electron was left that gave the atom its spin, thereby leading the atoms to behave in an observed manner.
Since there was a high likelihood for either spin to exist due to many electrons in the beam of atoms, so when the atoms went through the magnetic field, they got split into two beams. And these two orientations of spin were termed +1/2 and -1/2.
Stern Gerlach Experiment (Image: mriquestions)
Moreover, In 1925, Samuel Goudsmit and George Uhlenbeck claimed that some of the mischievous features of the hydrogen spectrum could be successfully explained by assuming that electrons act as if they actually have a spin.
Electron’s spin vs its spin quantum number
We often treat the electron’s spin and the electron spin quantum number as the same thing. Although these two entities are very closely related, they are not the same. While the electron’s spin is denoted by s, the spin quantum number is denoted by ms. The spin has only one value, 1/2 in the case of electrons. The spin quantum number ms, however, represents the spin’s possible orientations, such as -1/2 or +1/2, i.e., the spin-down or spin-up configuration.
The concept of spin undoubtedly plays a noteworthy role in quantum mechanics. It contributes to computing the characteristics of elementary particles like electrons and even governs the behavior of large atoms as a whole. I have tried to keep this article as simple as possible, without delving into the complex mathematics involved, to make the basics clear. Maybe we can discuss the more complex formulations related to spin in future articles.
Till then, Happy learning!
|
What is Hardware Security?
By Anirudh Menon - Published on February 22, 2022
Hardware security means the protection provided to physical devices, in order to prevent any sort of unauthorized access to enterprise systems.
In everyday operations, it is just as critical to protect hardware devices as it is to protect software. Lately, however, the security of physical devices has often been neglected. This article shares insights on potential threats to hardware and the best practices that can be incorporated to secure it.
What is hardware security?
Protecting your physical devices to ensure that no one tries to access these devices without permission is termed hardware security. Hardware security falls under the domain of enterprise security, primarily targeting the protection of machines, peripherals, and physical devices. The protection can take many forms such as deploying security guards, CCTV cameras, and even locked doors.
The other way of securing hardware components is by creating cryptographic or encryption-based functions using an integrated circuit, which protects the devices from security vulnerabilities and locks out attackers. To put it simply, hardware security is about securing devices physically or through operational methods, rather than by deploying an antivirus.
When we talk about physical security, it points in the direction of securing on-premise devices from any sort of human tampering or destruction. In today’s scenario, this is far more necessary considering that there is a potential threat to machine-to-machine (M2M) devices or IoT (internet of things) devices.
A very typical example of hardware security will be a physical device that scans employee access points or tracks network traffic; for instance a hardware firewall or probably a proxy server. Another way of achieving hardware security is through hardware security modules, also known as HSM. HSMs are basically devices that encrypt and secure enterprise systems by generating and managing cryptographic keys used for authentication.
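The core idea of an HSM can be loosely illustrated in software. The sketch below uses an invented ToyKeyModule class and is purely illustrative: a real HSM holds its keys inside tamper-resistant hardware and never exposes them, whereas here the key merely lives in a private attribute. The module generates a key, keeps it internal, and exposes only sign/verify operations for authenticating messages.

```python
import hmac
import secrets
from hashlib import sha256

class ToyKeyModule:
    """Software mock of an HSM's sign/verify interface (illustration only)."""

    def __init__(self):
        # The key is generated inside the module and never handed out.
        self._key = secrets.token_bytes(32)

    def sign(self, message: bytes) -> bytes:
        # Produce an authentication tag (HMAC-SHA256) for the message.
        return hmac.new(self._key, message, sha256).digest()

    def verify(self, message: bytes, tag: bytes) -> bool:
        # Constant-time comparison to avoid timing side channels.
        return hmac.compare_digest(self.sign(message), tag)

hsm = ToyKeyModule()
tag = hsm.sign(b"open pod bay doors")
assert hsm.verify(b"open pod bay doors", tag)      # authentic message
assert not hsm.verify(b"open pod bay doorz", tag)  # tampered message rejected
```

Callers can authenticate data without ever seeing the key, which is exactly the property that makes hardware-backed key storage valuable.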
Yes, there are software-based methods available to secure almost all kinds of enterprise environments. When it comes to hardware, however, it is advisable to have dedicated hardware security for the architectures that are responsible for connecting multiple hardware devices.
Potential security gaps can be exploited by attackers when a hardware device is engaged in an operation or executing a code or probably receiving an input. Any physical device that gets connected to the internet, needs protection from attackers.
Critical hardware devices such as servers and employee endpoints require strong security measures and protection to ensure that there is no hurdle in day-to-day operations. These devices also face threats from internal users, making it imperative for organizations to create a strong and robust internal hardware security policy.
10 Threats to Enterprise Hardware Today
If we talk about various sources of threats to enterprise hardware we can talk about firmware, BIOS, network cards, Wi-Fi cards, motherboards, graphic cards, and the list is a never-ending one.
An organization consists of a multitude of hardware devices and components and each one of them has its own share of vulnerabilities. This makes hardware security not only critical but also a complicated process. Let us look at the top 10 enterprise hardware threats:
1. Outdated firmware
Let us accept the fact that not every manufacturer provides a foolproof smart device. There may be local manufacturers who provide IoT devices, such as HVAC and RFID devices among others, whose firmware is full of bugs. Moreover, if organizations don't deploy security patches properly, the hardware device can be compromised.
2. Lack of encryption
We are seeing a large number of hardware devices moving towards being IP-enabled. However, a considerable number of devices are still not connected to the internet using proper encryption protocols. Note that encryption for data at rest and data in transit is vital. Any information that is not encrypted with the right set of protocols can be collected by attackers and used to force access to your enterprise environment.
3. Unsecured Local Access
Usually, hardware devices such as IoT and IIoT devices are accessed either through a local network or via an on-premise interface. Small organizations may tend to neglect the level of access and end up with the improper configuration of the local network or local access points, rendering the devices vulnerable.
4. No change in default passwords
Almost all enterprise devices come with a default password, which can be changed and must be changed. However, many organizations, even those who are technologically far advanced and secure, may end up compromising the devices by ignoring this fundamental factor.
5. Customized hardware
Many organizations, because of the nature of their business operations, rely on customized hardware. For instance, corporate data centers and custom-built applications for heavy engineering and for scientific purposes. Since the chips used in these devices are tailor-made, sometimes the manufacturers tend to overlook the security aspects of these chips, exposing them to vulnerabilities.
6. Backdoors
A backdoor is a vulnerability deliberately inserted into a hardware device and kept hidden. Manufacturers usually insert it with the intention of accessing the enterprise environment the moment the device is connected to it, without the consent of the device's owner.
7. Modification Attacks
These attacks interfere with the normal operation of a hardware device and allow bad actors to override any restrictions placed on it. A modification attack typically works by tampering with the device's communication protocol.
8. Eavesdropping
This type of attack happens when an unauthorized party intercepts the data a hardware device stores or transmits. An eavesdropping attack can succeed even if the attacker does not have a continuous connection to the device.
9. Counterfeit Hardware
This threat has been around for ages and makes enterprises easy targets: organizations are sold devices that are not authorized by the original equipment manufacturer (OEM), creating opportunities for backdoor vulnerabilities.
10. Trigger faults
Here, attackers deliberately induce faults in a hardware device, disrupting its normal behavior. Fault attacks can compromise system-level security and cause data leakage.
Best Practices for Hardware Security
While threats constantly hover around hardware security, there are best practices that can help protect your hardware devices. Here are seven such best practices that organizations can follow:
1. Study the hardware supplier
2. Encrypt all possible hardware devices
3. Implement sufficient electronic security
4. Minimize the attack surface
5. Ensure strong physical security
6. Have a real-time monitoring mechanism
7. Perform periodic and regular audits.
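The monitoring and audit practices above often reduce to a baseline comparison: keep an approved inventory of device serials and expected firmware versions, and diff it against what is actually observed on the network. A minimal sketch; the serial numbers and version strings are made up for illustration:

```python
def audit_inventory(approved: dict, observed: dict) -> dict:
    """Diff observed devices (serial -> firmware version) against an approved baseline.

    Returns rogue devices (seen but not approved), missing devices
    (approved but not seen), and devices running unexpected firmware.
    """
    rogue = set(observed) - set(approved)
    missing = set(approved) - set(observed)
    stale = {
        serial
        for serial in approved.keys() & observed.keys()
        if observed[serial] != approved[serial]
    }
    return {"rogue": rogue, "missing": missing, "stale_firmware": stale}


approved = {"SN100": "v2.3", "SN101": "v2.3"}
observed = {"SN100": "v2.1", "SN999": "v1.0"}
print(audit_inventory(approved, observed))
```

In this example, `SN999` would be flagged as rogue (potentially counterfeit), `SN101` as missing, and `SN100` as running stale firmware.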
Final Thoughts
With these measures, organizations can do much to secure their hardware against potential threats. Attackers will, of course, keep finding innovative ways to breach devices, but these best practices also evolve continuously, making attackers' lives harder.
Anirudh Menon | I have adorned multiple hats during my professional journey. My experience of 14 years comes in areas like Sales, Customer Service and Marketing. ...
The role of the Standardized Patient
Standardized Patients (SPs) are highly trained actors who simulate the concerns of a medical patient. These actors are assigned different profiles spanning all ages, races, sexual orientations, and gender identities, each with a specific medical history and set of problems. It is the role of the medical student to interact with the SP to develop a diagnosis or next step for that patient. After the simulation, both the SP and the student evaluate the experience.
SPs are used in medical schools all over the country and are considered an invaluable part of the learning experience for medical students. It is critical that SPs are well-trained, excellent communicators, and professional at all times.
Gynecological/Male Urological Teaching Associates (GTA/MUTAs)
Gynecological/Male Urological Teaching Associates (GTA/MUTAs) are similar to SPs but specialize in training medical students about accurate pelvic, rectal and/or breast examination techniques. Given the sensitive nature of these subjects, it is vital that medical students have practice and training in communicating with these types of patients. If you are interested in becoming a GTA/MUTA, please Contact Us.
|
Understanding Harm Reduction: From Set and Setting to Policy
This year millions of people will take psychedelics. For many, this will produce entirely positive and potentially life changing experiences. For some, however, the experience will be challenging.
The power of psychedelics to alter one’s mental state creates a situation where a vast range of outcomes are possible, depending on one’s set and setting – from transformative healing to intoxicated accidents. For anyone who turns to psychedelics, one concept in particular should be at the forefront of their minds: harm reduction.
The term “harm reduction” refers to attempts to minimize any negative outcomes that might be possible once one has consumed a substance. It consists of a set of principles, ideas and strategies that people can employ in order to have safer experiences of altered states. These efforts as a whole are based on the idea that people will always consume mind-altering substances and so, rather than deterring people from using any substance or ignoring this behaviour, we can instead promote safe, responsible use.

When LSD burst onto the scene in the 1960s, it was legal to buy from pharmacies. This meant that it was free from any form of government regulation, leaving the responsibility for reducing harm to the psychedelic community itself. As news of the powerful effects of LSD leaked out from the research institutions where it was being studied into the wider culture, it was Timothy Leary who provided the community with its most enduring harm reduction concept: set and setting.

By the late 60s, LSD consumption had been criminalized in order to suppress the anti-Vietnam war counterculture that had emerged [1] and this criminalization was justified through the logic of harm reduction. A moral panic was created that led to false stories appearing in papers (such as the classic: “Girl gives birth to a frog: Doctors blame LSD”) [2] and the wide publication of junk science [3]. The mainstream public tolerated prohibition because they thought it was keeping people safe. In the US, criminalization has not been successful at eliminating drug consumption. In 2016, a report from the American Civil Liberties Union (ACLU) found that every 25 seconds someone is arrested for possessing an illegal substance for personal use [4].
Pushing this activity underground actually increases the possibility of harm, as drugs on the black market can be adulterated with other dangerous substances, and as the fear of prosecution can deter people from seeking assistance when they need it—not to mention the psychological and physical harm that can come from arrest and criminal punishment. In the midst of this lack of effective harm reduction from the government, it was the psychedelic community itself that again took up the challenge of reducing harm. In the mid-to-late 90s, nonprofits dedicated to harm reduction and drug education began to emerge. The websites Erowid and Dance Safe were founded at this time with the intention of providing educational resources to those who were considering taking a substance. Dance Safe also facilitates drug checking by offering testing kits on their website that can be used in order to test the chemical content of any substance. In 2003, then-senator Joe Biden sponsored the Reducing Americans’ Vulnerability to Ecstasy (RAVE) Act in the US Senate. The act made it possible to prosecute bar and club owners if drugs were consumed in their establishment; punishable with a fine of up to a quarter million dollars or 20 years in prison. This act introduced guidance for law enforcement on how to go about targeting institutions where drugs might be being consumed. Is a club providing free water and a space for people to cool down so they don’t overheat? Such a club would now be considered suspicious in the eyes of the law. This resulted in club owners being disincentivized from actively supporting the safety of their clientele. Providing medical help of any kind could be construed as profiting from a business model based on the use of drugs, leaving the business owners legally responsible. Such laws increase the challenge of reducing harm amongst those who recreationally use drugs in public venues. 
One public “venue” in particular looms large when imagining places where drugs are consumed in the US: Burning Man. This nine-day art, music, and community-oriented gathering of approximately 70,000 people, held every year in the Nevada desert, was the birthplace of one of the most prominent contemporary harm reduction organizations, the Zendo Project. In 2012, Zendo set up a space for people undergoing difficult psychedelic experiences at Burning Man. Since then, they have provided support at many other festivals. The volunteers at the Zendo sit with people who require their services for the duration of their trip, talking them through it and helping them to surrender to the experience and feel safe.

Helping to calm someone’s mindset and offering a safe setting is not only a good idea before a psychedelic experience, but even in the midst of a difficult experience. Psychedelics do not have a single, specific, repeatable effect – the effects are highly influenced by context, both internal and external. The “set” in “set and setting” refers to the mindset of the person who is considering undergoing a psychedelic experience—their internal context. One’s mood, stress levels and expectations around what the experience will be like can all influence the course of the trip. If you feel safe and relaxed and you know why you’re about to undergo the experience, you’ve increased your chances of having a very positive experience. If you feel unsafe and throw yourself into the experience without much thought, there’s a greater chance your experience might be one of overwhelming disorientation and fear. This brings us to the “setting”—their external context. Set and setting are intimately linked; whether you are around people you trust or alone in a safe and supportive setting, your mindset is more likely to be in the right place to undergo the experience. A physically or socially unsafe or unpredictable setting is likely to push your mindset in a more fearful direction.
Having a sober friend or a guide sit with you for the duration of your experience can be another highly effective form of harm reduction. Not only can a well-selected trip-sitter be a source of reassurance if one feels fearful during the trip, they can also look out for the physical safety of the individual undergoing the experience. If no one ever took drugs, there would be no harm caused by drugs. This logic is often used to justify the illegality of drugs, but in reality, the threat of punishment doesn’t scare people into total abstinence and actually increases the risk of potential harm by pushing drug use underground. Certain laws do exist, however, that truly support harm reduction.
The 911 Good Samaritan Law is a prime example of this. If you were consuming an illicit substance with someone else who got into some kind of trouble where you might need to call the police, you would be faced with the issue that you might get arrested for drug possession. In order to avoid people being scared to call the cops when necessary to help someone, the 911 Good Samaritan Law makes a provision that the person reporting the issue cannot be prosecuted for drug possession. All 50 states and Washington DC have Good Samaritan laws of some kind but the details vary by state [5].

When four decades of authoritarian rule came to an end in 1974 with the Carnation Revolution, Portugal opened up to the world. Over the coming years, heroin and marijuana started to flow through the port towns of the south coast. By 1999, opiate addiction was widespread and Portugal had the highest rate of HIV amongst those who injected drugs in the entire European Union. In 2001, Portugal took a seemingly radical step to deal with their drug issue: they decriminalized all drugs. Individuals could no longer be arrested for possession and consumption of a personal supply of any substance. Decriminalization is not the same as legalization however. Individuals who were caught with a substance that was not legal might be given a warning, a fine, asked to perform community service or directed towards harm reduction services or treatment facilities. Each case is decided by a commission of legal professionals, medical professionals and social workers although, in the majority of cases, the individual is given no penalty. Decriminalization made it possible for a wide range of harm reduction efforts to be rolled out, from needle exchange programs, so that people could access clean needles in order to reduce the spread of HIV and other blood-borne diseases, to substitution treatment with substances such as methadone.
In the 11 years following decriminalization, the rate of HIV diagnoses among those who inject drugs dropped from 1,016 in 2001 to 56 in 2012, and new cases of AIDS fell from 568 to 38 [6].

Boom Festival, a biennial psytrance music festival held in Portugal and mainstay of the European psychedelic scene, has been at the forefront of psychedelic harm reduction in Europe since 2002. This has been possible as a result of Portuguese decriminalization. Kosmicare, Boom’s harm reduction program, provides drug testing areas, seating areas, and public alerts when they are needed. It is staffed by therapists, psychologists and volunteers and works in collaboration with the festival’s medical services as well as health services outside the festival in the region. Decriminalization has made it possible for festival-goers to know the exact chemical composition and strength of the substances they are considering taking, which also allows them to more accurately consider their dosages, reducing the chances of unintentional over-consumption. After taking the substance, they have access to medical provisions should things go wrong, without fear of the harms that come with arrest and the associated criminal punishment.

Harm reduction begins with education. In the age of the internet, it’s never been easier to stay informed about substances that you might intend to consume. Information is now widely available on the likely effects of any substance by dose. For example, Reality Sandwich provides both Substance Guides and Dosage Guides to help educate readers on various substances. Beyond this, you can be even more informed by purchasing testing kits to test the chemical composition of the substance. Educating yourself on what provisions for support might be available to you, whether it’s Good Samaritan laws or chill-out spaces at festivals, can only help you to stay safer. None of this is a replacement for giving serious consideration to one’s set and setting, however.
Once you’ve taken the necessary steps to ensure you’re likely to have an amazing experience, there’s nothing left but to follow Leary’s other enduring piece of advice: “turn off your mind, relax and float downstream”.

Dr. James Cooke is a neuroscientist, writer, and speaker, whose work focuses on consciousness, with a particular interest in meditative and psychedelic states. He studied Experimental Psychology and Neuroscience at Oxford University and is passionate about exploring the relationship between science and spirituality, which he does via his writing and his YouTube channel, YouTube.com/DrJamesCooke. He splits his time between London and the mountains of Portugal where he is building a retreat centre, The Surrender Homestead, @TheSurrenderHomestead on Instagram. Find him @DrJamesCooke on Instagram, Twitter and Facebook, or at DrJamesCooke.com.

References:
[1] https://edition.cnn.com/2016/03/23/politics/john-ehrlichman-richard-nixon-drug-war-blacks-hippie/index.html
[2] https://dangerousminds.net/comments/girl_gives_birth_to_frog_lsd_to_blame
[3] https://pubmed.ncbi.nlm.nih.gov/4994465/
[4] https://www.aclu.org/report/every-25-seconds-human-toll-criminalizing-drug-use-united-states
[5] https://recreation-law.com/2014/05/28/good-samaritan-laws-by-state/
[6] European Monitoring Centre for Drugs and Drug Addiction (2014) ‘Data and statistics’. https://transformdrugs.org/drug-decriminalisation-in-portugal-setting-the-record-straight/
Mixed English Questions Set 26 (New Pattern)
Practise our New Pattern English questions for upcoming exams like SBI PO, IBPS PO, Clerk and insurance exams. This set contains questions from Sentence Filler and Odd Phrases.
Direction (1-5): In each of the following questions a short passage is given with one of the lines in the passage missing and represented by a blank. Select the best out of the five answer choices given, to make the passage complete and coherent.
1. The gradual subsidence of most cities has several causes, both man-made and natural. It starts with the hundreds of millions of people migrating into urban centres in search of better jobs and higher standards of living. (…………………………………………………………………..).The indiscriminate use of groundwater, a scourge of rapidly-expanding cities, is a prime contributor. Dried-up lands compress under their own weight. In this sense it costs too little to sink tubewells and pump up the precious stuff at will.
A) Unlike global warming, the problem of a sinking city is local, and eminently solvable.
B) This puts pressure on certain pockets of the planet to keep up with the intensified demands for basic human sustenance.
C) For more than a decade it has been mulling a plan to build inflatable gates on the seabed, which could be raised to close off the lagoon during high tides.
D) As when facing the challenges posed by climate change, politicians tend to balk at most proposals to stop land subsidence.
E) Underground pipelines can tilt marginally or crack under pressure, causing water and sludge to vomit onto the earth’s surface.
Option B
2. A few forces could be pushing the two formats apart. The first is that the different skills needed to excel at Twenty20 has players starting to specialise. Twenty20 demands batsmen who can blast plenty of runs in a matter of minutes, rather than gradually build an innings over several days. Bowlers must bamboozle these aggressive hitters instead of probing at a batsman’s defence for hours at a time. (…………………………………………………) Repeatedly hurling the ball can fatigue and eventually injure fast bowlers.
A) The third factor is the money.
B) There are other signs of divergence between the player pools for international Test fixtures and Twenty20 cricket
C) Those contracts represent significant investments, since each franchise has a salary cap of just $10m.
D) The second is the fatigue that comes with playing both forms of the game.
E) The competition pits eight star-studded teams against each other in the Twenty20 format, an abridged and heavy-hitting version of the sport launched in 2003.
Option D
3. The worst part about liking or loving someone is not knowing whether they feel the same about you, or worse still, knowing that they don’t. From the moment you meet someone, it takes so much courage to express how you feel about them knowing that in all probability you might face rejection. The oldest and simplest route is to “find out through mutual friends”, or perhaps to be friends with the person and drop hints and gauge body language. (…………………………………………………………………………………..)
A) Not just because it’s a lonely one-way street, or because your fantasies and dreams are dashed, but also because it becomes about self-worth.
B) But wouldn’t you rather wait for the right person rather than moping over someone not in sync in with you?
C) Whatever the methods used, rejection is never easy to handle.
D) A long distance relationship only brings out the best sides that you put forward over social media and other forms of communication.
E) All the fleeting time you do manage to spend together is full of the good stuff.
Option C
4. (………………………………………………………………..) Cricket is no exception. Technological advancement is an indispensable part of the game now. Analytics and numbers determine the value of a player and every move is closely monitored to determine his strengths and weaknesses. A small fault can be easily exploited with the assistance of sophisticated software.
A) In this scenario, when a batsman or a bowler shows rapid progress at the international level, it indicates the humongous amount of work he’s put in to strengthen his technicalities.
B) The young players entertained the crowd with their talent.
C) The modern generation of players are juggling with three formats of the game.
D) There is no room for imperfection here.
E) Development in any field is closely associated with the growth of technology.
Option E
5. The government and the RBI have been initiating a series of measures to encourage MSMEs, but these are generally supply-side efforts. (…………………………………………………………………)Therefore, there is need to nurture entrepreneurs from a young age. There is a difference between risk and opportunities, and this needs to be emphasised for the young in India.
A) In India, commercial banks are mandated to lend to MSMEs.
B) This can take the shape of higher fee or interest rate, failure to explain exit costs, and sometimes threatening them with refusal to extend regular credit.
C) The need is to generate demand-side requirement from the general public to set up MSMEs.
D) Not all banks can do MSME financing as this is a specialised area and requires specialised skills to assess the institutions that can benefit from bank finance and yield higher production
E) In addition to financing, there is a need for focussed coordination of activities of different government authorities to encourage MSMEs.
Option C
Directions (6-10): Which of the words/phrases (A), (B), (C) and (D) given below should replace the words/phrases given in bold in the following sentences to make it meaningful and grammatically correct. Mark (E) as the answer if the sentence is correct and ‘No correction is required’.
1. Giving their location in the mainstream of their adopted countries or their sheen numbers, the overseas Indian could played an important role in presentation a more realistic assessment of India.
A) Gives their , the sheer , could have been playing, in the presentation
B) Giving there , the sheen , can played , in the presentation of
C) Given their , their sheer , can play , in presenting
D) Giving of their, the sheer , can played , in presents
E) No Correction Required
Option C
2. The Ministry of Environment and Forests and Climate Change have urge the Ministry of External Affairs for revoke the visas of BBC’s South Asia correspondents Justin Rowlatt and his crews.
A) has been urging , for the revocation of , corresponding , his crews
B) has urged, to revoke, correspondent , his crew.
C) have urged, for revoking , correspondant, their crew
D) have been urged , to revoke , correspondent , his crews
E) No Correction Required
Option B
Explanation: The Ministry of Environment and Forests and Climate Change has urged the Ministry of External Affairs to revoke the visas of BBC’s South Asia correspondent Justin Rowlatt and his crew.
3. OECD on Tuesday threw its weight around India saying the country is worth of a credit rating upgrade, but cautions Indian policy makers against taking measures only by the aim to get a better rating.
A) weight along , is worthful , and cautions , alonf the aim
B) weight behind , is worthy , but cautioned , with an aim
C) weight on , is worthy , and cautiously , with an aim
D) weight less , is worthy , and cautioned , by a aim
E) No Correction Required
Option B
Explanation: throw weight behind – to use your influence to support someone or something
throw your weight around – to act as if you have a lot of power or authority
4. Buddha shows the paths of knowledge and enlightenment to his disciple, but its implementation is something that the disciple ought accomplish on his own.
A) shows the path , to his disciples , its implementation , must accomplish
B) showed the path , among his disciple , his implementation , can be accomplished
C) showing the path , to the disciples , their implementation , can accomplish
D) showed the path , to his disciples , their implementation , ought accomplish
E) No Correction Required
Option A
5. It is welcome that the government has managed to persuade fuel retailers to continue to accept credit and debit cards, without any burden being placed on consumers.
A) is welcomed , is managing , in acceptance of been placed
B) is a welcome , has manage , accepting , placing
C) is welcome , is managing , in acceptance , been placed
D) can be a welcome , would manage , in acceptance of , along
E) No Correction Required
Option E
Are cryptocurrencies taxed? How do people pay their taxes when the government is not involved?
As you get more involved in the crypto space, these might be some questions you have. While the system used on the blockchain is decentralized, it leaves many people to wonder how these digital assets are regulated and taxed, considering that there are lots of investors who are taking home massive profits.
Well, the short answer to your question is–yes, cryptocurrencies are taxed. But how so?
Crypto Taxation
All about crypto taxation
Each country has its own rules when it comes to taxation, and cryptocurrencies are no different: each jurisdiction views them differently. But one thing is for sure: there is not yet a precise classification of where cryptocurrencies belong among the asset classes recognized by governments.
Some experts believe that cryptocurrencies should be treated as securities, like stocks, so that profits on them are recognized as capital gains. Some jurisdictions already tax cryptocurrencies this way, usually at a rate equal to the country's capital gains tax.
The income you get from cryptocurrencies is considered part of your income in the US. If you’re a retail investor and make money from trading crypto, your profit should be declared in your tax return and part of your taxable income. The tax rate would depend on how much your total income is. The tax rate could be as low as 10% or as high as 37%.
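The arithmetic behind this is straightforward: tax is owed on the gain, i.e. the proceeds of a sale minus what you originally paid. A small illustrative sketch; the rate used here is a made-up example, since real rates depend on your bracket, holding period, and jurisdiction:

```python
def crypto_capital_gain_tax(cost_basis: float, proceeds: float, rate: float) -> float:
    """Tax owed on a crypto disposal at an illustrative flat rate.

    A loss (proceeds below cost basis) simply yields zero tax here;
    real loss treatment varies by jurisdiction.
    """
    gain = max(proceeds - cost_basis, 0.0)
    return round(gain * rate, 2)


# Bought for $5,000, sold for $8,000, hypothetical 15% rate:
print(crypto_capital_gain_tax(cost_basis=5_000, proceeds=8_000, rate=0.15))  # 450.0
```

The same function with `rate=0.30` would model a flat regime like the one India has proposed.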
Meanwhile, some countries have already defined how they will tax cryptocurrencies. India, for example, has just recently accepted a proposal describing the crypto tax rate at 30%. This is regardless of how much personal income an investor gains. This 30% tax rate is considered capital gains tax. There are discussions that it will soon turn into a law.
In other countries, there are no crypto tax rules to follow yet. This is one reason why some governments, as well as banks, are so strict in tracking and monitoring transactions that allegedly come from trading cryptocurrencies. The crypto market can be rewarding enough that a person can buy a house and lot with their crypto earnings, so some countries assign teams to monitor crypto transactions and track down those who are not paying their taxes correctly. South Korea, for example, limits crypto transactions to investors who use their real names in their trading and bank accounts.
What about other digital assets?
NFT taxation
Besides cryptocurrencies, you might also wonder how other digital assets, such as NFTs, are taxed. The truth is, there is no regulation yet defining how they should be recognized. Pure digital assets? Securities? Collectibles? Financial instruments? There are still debates on how NFTs should be recognized, regulated, and taxed. Some experts say they should be classified as collectibles, similar to gems and art in the real world. Others argue that NFTs should be taxed like cryptocurrencies, since they are powered by blockchain.
While there is no rule or law defining how NFTs and other digital assets should be classified, especially in taxation, all profits an investor makes from them should be taxed as a personal income or capital gains.
Why are there no specific tax rules for crypto?
The whole blockchain ecosystem is decentralized, which means that no one controls everything. No person, government, or institution can control how investors move within the market. So, if you’re a crypto holder, you have complete control over your digital assets.
Decentralization is a barrier to governments taking complete control of traded assets. After all, that is what decentralization is about: eliminating intermediaries. This is partly why there are no crypto-specific tax rules yet. Adding to that, the data is spread across computer networks worldwide, so there are multiple stores of information instead of one, and it would be challenging for a government to gather everything it needs from such scattered storage.
What about your winnings from crypto casinos?
Outside the blockchain, gambling winnings are usually taxable income. If you file your taxes individually, winnings from crypto casinos such as BC Game are no different: you can still treat them as personal income subject to personal taxation. There is no specific law yet covering earnings from crypto casinos; they are treated the same way as cryptocurrencies from exchanges.
Do you have to pay your crypto taxes?
Although there are no tax regulations specific to crypto yet, you still have to pay your taxes accordingly. This doesn't just apply to crypto earnings: you also have to work out your overall taxable income so you can file your tax return and pay what you owe. Remember that decentralization and centralization coexist; transacting on the blockchain does not exempt you from your obligations in your country. After all, what makes traditional finance work is the order imposed by its rules and regulations.
How can you pay your crypto taxes?
The first thing you have to do is to know the tax laws used in your country. If your place has been active in crypto trading for the past years, there should be a discussion of how your country is taxing digital assets like Bitcoin. If none, you can always ask the right people from your government to help you determine the appropriate tax laws to use.
Once you know what to do with your crypto earnings, the next step is to file your tax return. Depending on the total income you have, you might need to add all your earning sources to determine the tax rate you need to use. Again, it’s better to seek help from tax professionals regarding this matter.
In summary, cryptocurrencies are taxed. But taxation rules differ per country.
Why do we sleep? The evolution of rest, according to science
By Cradlewise Staff
Some two million years ago, a few of our hairy, tree-living ancestors descended from sleeping in the trees to sleeping on the ground. Fast-forward to the present day: their descendants have split the atom, walked on the moon, and sent probes on interstellar voyages. Not to mention colonized the entire planet.
So what gave way to this evolution?
REM sleep is the answer. It is the state of sleep in which we become as good as dead and dream of marvelous visions each night, and it conferred some unique benefits on our species. These benefits made significant changes to our jumbo brains, and those changes are the very reason we are at the top.
Let’s go back to the beginning to find out what happened.
Descent from the trees
REM sleep has changed our brains and how we think and solve problems.
We often like to think of our species as unique on this planet. We're not, though. We belong to the primate order, with over 200 species. Our first cousins (like the Neanderthals) might have gone extinct, but our second cousins (monkeys, lemurs, and apes) are pretty much still around. Today, our closest living relatives are chimpanzees and bonobos, with whom we share 99% of our DNA.
Did you know?
Sleep, hunger, blood pressure, alertness, and pretty much everything else in your body happen in sync with the circadian rhythm.
Life of our closest cousins
So, like our closest cousins, our ancestors also built nests in trees to sleep each night (a tedious task that took up hours of their time). But as our ancestors evolved erect bodies, they became ill-suited to living in trees. That was when they moved to sleeping on the ground. At this point, our shared history with our cousins began to diverge.
The transition to sleeping on the ground, first and foremost, eliminated the most significant danger of sleeping in the trees: lethal falls. One bad dream, one wrong turn in your sleep, and you would have been out of the gene pool.
Sentineled groups were an evolutionary advantage.
Second, it brought several physiological and cultural changes like beds, shelters, variation in chronotypes (how early or late someone sleeps), and controlled use of fire.
Controlled use of fire is probably one of the most important factors as it did two things. First, it saved us from flesh-eating and blood-sucking predators while we slept on the ground. Second, we began sleeping in large, sentineled groups on stable ground beds, protected by fire, which would have fostered a safe sense of community.
For the first time in history, there was no danger of falling off a tree. There was fire against predators and the safety brought by sleeping in groups where someone was always guarding.
For the first time in history, our ancestors experienced the advantage of deep, efficient sleep.
This advantage put them in a unique position.
Our ancestors could now capitalize on the benefits of deeper, more intense, REM-dominant sleep. And that’s where our journey of a different type of dreaming began.
Why do we dream?
Human sleep is unique. Compared to other primates, human sleep duration is the shortest, averaging seven hours. Our primate relatives clock in between 13 and 17 hours of sleep each night. Good for them, but our sleep is way more efficient.
Compared to other species, humans sleep the least number of total hours.
Higher sleep efficiency means that human sleep is shorter and deeper, with a higher proportion of REM. We spend a staggering 25% of our sleep in REM, which means we dream more. The human REM-to-NREM ratio (22:78) is the highest of all primates.
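As a quick sanity check on these proportions, here is a small Python sketch. The 7-hour night and the 22% REM share come from the text above; the 14-hour, 10% REM primate is purely illustrative:

```python
def rem_minutes(total_sleep_hours, rem_fraction):
    """Return minutes of REM sleep for a night of given length."""
    return total_sleep_hours * 60 * rem_fraction

# Humans: ~7 hours of sleep, ~22% of it REM (a 22:78 REM:NREM ratio).
human_rem = rem_minutes(7, 0.22)      # about 92 minutes of REM

# A hypothetical 14-hour-sleeping primate with only 10% REM would get
# less REM overall despite sleeping twice as long.
primate_rem = rem_minutes(14, 0.10)   # about 84 minutes of REM
```

The point of the arithmetic: a shorter but REM-dense night can deliver more dreaming than a much longer, shallower one.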
Why do we dream so much? Why do we dream when the deep REM state leaves us vulnerable to predators and dangers?
But first, how did human sleep evolve to be more efficient?
Two evolutionary anthropologists, Charles Nunn and David Samson, have done some groundbreaking research in this area. They have also put forward the Sleep Intensity Hypothesis to answer this question.
According to the Sleep Intensity Hypothesis, “early humans experienced selective pressure to fulfill sleep needs in the shortest time possible.”
The risk of predators was one factor that pressured early humans to pack more efficient sleep in less time.
This means that early humans were pressured to sleep more efficiently in less time because of several factors. For one, the risk of predators was higher in terrestrial environments. So the sooner they completed their quota of REM sleep, the better the chances of protection from incoming threats. For another, less sleep meant more time to pursue skills and knowledge. And deep sleep meant effective consolidation of these skills, which eventually led to higher cognitive intelligence, a point we’ll discuss later in more detail.
What’s going on inside your brain when you’re sleeping?
Research has revealed that we sleep in roughly four 90-minute cycles each night, and each cycle moves through three distinct stages:
• Stage one: Light sleep, comprising NREM stages N1 and N2
• Stage two: Deep N3 sleep, characterized by slow-wave activity (SWA) in the brain
• Stage three: REM
The proportion of REM and NREM sleep stages varies throughout our sleep cycle.
What happens in NREM sleep?
NREM sleep has three stages.
The N1 stage is the shortest (usually lasting less than 10 minutes) and occurs right after you drift into sleep.
During the N2 stage, which lasts around 30 to 60 minutes, slow-wave brain activity begins and your body becomes more relaxed.
The last NREM stage is N3, which lasts between 20 and 40 minutes and is the stage it's hardest to wake someone from.
Each of these stages takes us deeper into sleep. During this phase the body repairs and regrows tissue, builds muscle and bone, and boosts the immune system.
What happens in REM sleep?
On the other hand, REM sleep is associated with our bodies becoming paralyzed and our minds becoming more awake as we dream.
Recent MRI studies show that specific parts of the brain are up to 30% more active during REM sleep; neural activity is nearly as high as when we're awake. Many things happen during REM sleep, like learning, consolidating the information learned, and filing essential details into long-term memory.
REM sleep conferred multiple benefits to the early human brain, which helped in its rapid evolution.
With terrestrial sleeping, however, early humans evolved to dedicate more time to slow-wave (N3) and REM sleep. Sleeping on the ground meant less time in light N2 sleep, which increased the total proportion of deep sleep relative to light sleep.
From an evolutionary perspective, even though the deep sleep stages left us vulnerable, they shortened the total time early humans needed to sleep and be inactive.
If we consider the evolutionary advantages REM sleep bestowed upon humans, it would be safe to say that this shift worked out just fine. So if you’re wondering why we dream, it’s because dreaming gave our primitive brains some evolutionary gifts.
Three unique gifts of REM sleep
As we cycle through sleep each night, we enter the REM stage. This stage conferred three unique benefits on us.
The first of which is called threat priming.
Through the ability to dream, REM sleep lets sleepers rehearse events or social scenarios that might occur in their waking environments. Our brain has the unique ability to imagine, in exceptional detail, scenarios that haven't yet taken place. In a way, dreaming has evolutionary advantages.
So today, we might prime our brains by rehearsing that presentation or practicing that Olympic-worthy gymnastics routine. But ages ago, this ability made our ancestors exceptionally good at warding off threats and fighting them.
Even today, performers, musicians, and sportspeople from all walks of life practice this technique of mentally rehearsing a particular action before performing it.
No other species, as far as we know, can imagine a future event and rehearse it mentally.
Threat priming, increased innovation to build tools and shelter, along with more free time to interact and form social bonds, are some of the key benefits provided to early humans by REM sleep.
The second benefit conferred by REM sleep on us is increased innovation.
REM sleep and its contents allowed the human brain to form a more comprehensive network of ideas, resulting in greater creativity, insight, and innovation. Many tasks demanded this kind of creative association: building tools, whittling the ends of spears into decent hunting gear, and refining them over time to make survival easier.
Plus, increased sleep intensity also enhanced memory consolidation. A genius creative idea wasn't of much use if our ancestors got it and forgot it the next day. Without the ability to consolidate creative ideas and innovations in the brain, we'd probably still be tending bonfires outside caves, knapping stones.
Slow-wave sleep and REM sleep both play an essential role in processing daily information into long-term memory stores. Research suggests that SWS is primarily associated with consolidating procedural and episodic memories and with processing emotions.
So let's say Steve from a clan had a realization: knapping a certain kind of stone with a particular technique makes excellent blades for chopping that hunted dinner (procedural memory). He would need to remember that information and consolidate it in his brain before passing it on to other clan members.
The third benefit of REM sleep was an increase in the available time.
Now that we slept more efficiently and deeply, the number of hours required to sleep decreased.
With more waking time, we could engage in more social activities, which eventually had significant consequences for how we interacted. Our hominin ancestors now had more time to bond with their communities. They could talk, gossip, and tell stories, transmitting cultural information and increasing cognitive abilities.
REM sleep made us smart and social
Our unique ability to collaborate with people of different nations, backgrounds, religions, and languages is one of our distinctive traits as humans.
Our cognitive intelligence and sociocultural complexity separate us from the rest of the animal kingdom. Both these markers result from REM sleep and dreaming.
REM sleep set in motion a different course of history for us by bestowing us with these two defining features. This course would diverge from anything any species had done in the animal kingdom. In a way, REM sleep laid the foundation for what became today’s modern society.
Our cognitive intelligence allowed us to reason, question, solve complex problems, apply logic and use language to communicate complex ideas.
For example, a monkey knows that if he drops a fruit, it will fall to the ground. But a human knew this and went on to discover gravity and a mathematical formula for it, and how gravity on Earth differs from, say, that of a black hole 12 light-years away.
Our sociocultural complexity is also unparalleled.
Humans of one nation can easily cooperate with humans of other nations based on shared interests and agendas. The same can't be said for, let's say, forest-to-forest cooperation between American and African chimpanzees.
Our cognitive intelligence also allowed us to create elaborate myths, which led to complex sociocultural coordination and cooperation. This myth-making made our society and culture far more complex than any other in the animal kingdom. Our ability to imagine, use language, and convey complex ideas created shared myths and beliefs: socially constructed ideas like religion, money, heaven, and hell.
For example, a monkey might have communicated danger to its group. Like, there’s a lion near the pond, and it’s dangerous to go there. But humans evolved to express something more complex. Like, “Oh hey, saw a lion near the pond last night so maybe not go there if you don’t wanna end up dead? And also, since this beast is so fierce, it should be the guardian spirit of our tribe. The people who worship it are our friends. And the people who worship an Eagle should not have help from us if they are ever in trouble.”
No other animal in the animal kingdom has done anything even remotely close to this. The entire foundation of what we have achieved today begins with the question of why we dream.
This myth-making was the beginning of how our nation-states and modern society would come into being by sharing certain beliefs.
The bottom line
Humans have started their journey of colonizing Mars, while our nearest relatives, the chimps, are still living the same way they did millions of years ago.
So today, our nearest cousins are still living in jungles, building their nests the same way our hairy ancestors did. And us? Thanks to REM sleep and dreaming, some members of our species are thinking about building a colony on another planet and decoding the secrets of the universe.
1. From sleeping on trees to the ground. NewScientist. 2012. "Chimp beds hint how early humans ditched tree-sleeping."
2. Humans belong to a group of primates. American Museum of Natural History. 2020. "Living Primates."
3. Variation in chronotype. NCBI. 2019. "Chronotype Variability and Patterns of Light Exposure of a Large Cohort of United States Residents."
4. Proportion of REM sleep in humans. NCBI. 2016. "Shining evolutionary light on human sleep and sleep disorders."
5. 90-minute-long sleep cycles in humans. Psychology Today. 2013. "Your Sleep Cycle Revealed."
6. REM sleep boosts creativity. National Geographic. 2009. "Sleeping on it – how REM sleep boosts creative problem-solving."
7. SWS and memory. NCBI. 2009. "The Role of Slow Wave Sleep in Memory Processing."
8. Human myth-making. Dvir Publishing House Ltd. 2020. "Sapiens: A Brief History of Humankind."
9. Sleep Intensity Hypothesis. PubMed. 2015. "Sleep intensity and the evolution of human cognition."
|
How LED Panel Lights Work
• Ashby Maxim
LED panel lights are impressive pieces of technology that output a large, steady amount of light while remaining efficient and quiet. Panel lights, most often used in larger office spaces, have a few notable flaws that the LED versions seek to rectify. This article will cover the following:
• The advantages of LED bulbs
• How LED panel lights function
• The best uses for LED panel lights
We hope you enjoy this article on how LED panel lights work!
Advantages of LEDs
Over the past few years, the usage rate of LED bulbs has skyrocketed—and there is no secret why. Below are some of the main advantages they provide.
• Safe Construction: LEDs are constructed with non-toxic materials and no glass. This means that if a bulb breaks, it will not be a huge safety hazard.
• Constant Light Source: LEDs don’t flicker or produce a buzzing noise like halogen bulbs. They provide high-intensity light perfect for any activity.
• Fully Controllable: LED bulb circuits are made to be dimmed by the user. In some cases, they’re also made so the user can change their color and hue to match their surroundings or influence the mood of a room.
• Long Lifespan: These lights have more than 10 times the lifespan of halogen bulbs. This is a major factor in their popularity among consumers.
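On the "fully controllable" point above: LED dimming is typically done with pulse-width modulation (PWM), where the driver switches the LED on and off very rapidly and the fraction of each period it spends on (the duty cycle) sets the perceived brightness. A minimal sketch of that relationship; the 4000-lumen figure is illustrative, not from any specific product:

```python
def perceived_brightness(max_lumens, duty_cycle):
    """Approximate light output of a PWM-dimmed LED.

    duty_cycle is the fraction of each PWM period the LED is on (0.0-1.0).
    Real drivers switch at kHz rates, far too fast for the eye to notice.
    """
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return max_lumens * duty_cycle

# A hypothetical 4000-lumen panel dimmed to a 25% duty cycle:
print(perceived_brightness(4000, 0.25))  # 1000.0
```

Because the LED is always either fully on or fully off, PWM dimming avoids the color shift that can occur when you simply reduce the drive current.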
How LED Panel Lights Work
LED panel ceiling lights are constructed in two layers. The first layer houses the LED array, which directs light onto a diffusing plane; the diffusing plane reflects the light within its own structure so that it is distributed evenly across the whole panel. The second layer, made of clear plastic, transmits the light we actually see and keeps the light-diffusing layer protected from dirt and dust.
Best Uses for LED Panel Lights
Because of their construction, these light panels are best suited for an office environment. They are neat and out of the way, yet functional and bright. Furthermore, they provide a safe alternative to other older lighting sources while bringing many advantages to the table, such as energy efficiency and quiet lighting.
We hope this article has helped you better understand how LED panel lights work and why they are great. If you’re considering installing these types of lights in your home or office, stop by the Eco LED Mart! We have an amazing variety to select from for all your lighting needs.
|
Cherry shrimps, or more particularly red cherry shrimps, have become one of the most popular aquarium shrimps since their first introduction in 2003. Their vivid color and rich appearance have made them so popular that many people now take up cherry shrimp breeding as a profession.
So, if you are one of them, you need to know: at what age do cherry shrimps breed? This is vital, since knowing the breeding age will help you prepare the breeding ground and gather essential items for the newborn babies.
Cherry shrimp (Neocaridina Davidi) attains maturity at about 120-150 days, and after that, it is ready to breed.
However, for proper breeding, you also should know how to take care of the cherry shrimp during the mating session and if there are ways to control the reproduction of the beautiful creature.
For your convenience, we have covered cherry shrimp breeding time, care, tips, and other essential breeding-related factors in the following sections.
Cherry Shrimp Breeding Age
What Age Do Cherry Shrimp Breed?
To know the cherry shrimp breeding age, we need first to understand its maturity. And the maturity of cherry shrimp is linked with its lifecycle.
So, here's a short overview of the cherry shrimp lifecycle.
When a cherry shrimp is born, the first stage is the post-larva stage. It lasts for 1-2 days, and then the shrimp transforms into a larva. That stage lasts for the next 3-10 days, and finally the cherry shrimp turns into a juvenile.
For the next 100-120 days, the cherry shrimp gains weight before transforming into a fully grown adult. However, to attain sexual maturity and be ready for breeding, a cherry shrimp roughly needs 120-150 days, or (rarely) even more.
Growth stage         Timeline       Weight
Post-larva           0-2 days       3-5 grams
Larva and juvenile   3-120 days     5-25 grams
Adult shrimp         120-150 days   25+ grams
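The timeline above can be expressed as a small lookup function. This is just a sketch; the day boundaries are taken from the table, and the exact cutoffs vary from shrimp to shrimp:

```python
def growth_stage(age_days):
    """Map a cherry shrimp's age in days to its growth stage.

    Boundaries follow the table above and are approximate.
    """
    if age_days < 0:
        raise ValueError("age cannot be negative")
    if age_days <= 2:
        return "post-larva"
    if age_days <= 120:
        return "larva/juvenile"
    if age_days < 150:
        return "maturing adult"
    return "breeding-ready adult"

print(growth_stage(1))    # post-larva
print(growth_stage(60))   # larva/juvenile
print(growth_stage(150))  # breeding-ready adult
```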
Nonetheless, breeding readiness doesn't depend on age alone. For a fully grown cherry shrimp to be able to breed, weight also plays a crucial role.
We found that for proper breeding, a cherry shrimp needs to weigh around 22-27 grams.
Since a cherry shrimp's weight is directly linked to water parameters, filtration, tank heating, and feeding, you also need to know about these. They are just as important to cherry shrimp breeding as age itself.
Cherry Shrimp Breeding Factors:
When we talk about factors affecting cherry shrimp breeding and lifecycle, we actually mean the elements we can control to slow down or speed up the shrimp's aging and growth.
Yes, you heard it right.
You can control the water parameters, filtration, feeding, and tank mates to influence cherry shrimp growth, both in age and size, and thereby gain control over the breeding process as well.
Proper Filtration
Over the years, I have seen many cherry shrimp owners neglecting the use of a proper filtration system in their shrimp tank.
It is a suicidal attempt for shrimps.
I repeat: it is a suicidal attempt for cherry shrimp.
Although many people consider a well-planted cherry shrimp tank an alternative to a filter, things could not be more wrong.
A filter is a specialized device for filtering the aquarium water, and it can't be replaced with plants or anything else.
Cherry shrimps are extremely sensitive to external changes, and the slightest shift in water parameters affects them profoundly. So it goes without saying that such sudden changes will also have an impact on cherry shrimp breeding.
Thus, to maintain the water parameters at an optimal level, we recommend you to use a proper filtration system in the aquarium or tank.
But which type of filter should you choose for a cherry shrimp tank?
Here’s what we have found-
Sponge filters:
When it comes to shrimp tank filters, you don't need to spend a fortune. A simple yet effective sponge filter works fine for cherry shrimps. First off, it doesn't create a strong current in the aquarium, so baby shrimps won't get stuck to it.
Secondly, its wider surface helps biofilm growth. Biofilms are the favorite food for shrimps. Moreover, it helps the growth of helpful bacteria that keeps the tank water healthy for shrimps.
Finally, what I love about sponge filters is that they come at a relatively low price and perform like pro filters. I prefer the Powkoo Sponge Filter, and it hasn't disappointed me yet.
HOB filters:
Usually known as Hang-On-Back filters, these are capable of more filtering than regular sponge filters. The tradeoff, however, is a higher price.
They are excellent for boosting bio-filtration and mechanical filtering together. Nonetheless, if you don't keep fish or other shrimp species besides cherry shrimps in the tank, a sponge filter is still the perfect choice for maintaining it.
Water Parameters
Next, you need to focus on maintaining the water parameters. But before we move to see the water parameters, why not see the water itself?
Well, you must use chlorine-free water for the cherry shrimp tank.
This is crucial if you don't want to see your shrimp dying prematurely. The best water for cherry shrimp breeding is the following:
A mixture of RO water and a cherry shrimp mineral supplement is preferable. Once you get this right, you will see more shrimp babies and more attractive coloring.
You should use this combo since it really helps bring out the vibrancy of the shrimp's color, and color representation matters most for cherry shrimp.
Now, let’s see the essential water parameters for shrimp tank maintenance.
Parameters                      Optimum level
Temperature                     72°F to 78°F
pH                              6.5-7.5
General hardness (GH)           5-8 ppm
Carbonate hardness (KH)         1-4 ppm
Total dissolved solids (TDS)    150-250 ppm
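If you log your test-kit readings, a tiny script can flag anything that drifts outside these ranges. This is just a sketch; the ranges are copied from the table above:

```python
# Optimal ranges from the table above.
OPTIMAL_RANGES = {
    "temperature_f": (72, 78),
    "ph": (6.5, 7.5),
    "gh_ppm": (5, 8),
    "kh_ppm": (1, 4),
    "tds_ppm": (150, 250),
}

def out_of_range(readings):
    """Return the names of parameters whose readings fall outside range."""
    problems = []
    for name, value in readings.items():
        low, high = OPTIMAL_RANGES[name]
        if not low <= value <= high:
            problems.append(name)
    return problems

print(out_of_range({"temperature_f": 80, "ph": 7.0, "tds_ppm": 200}))
# ['temperature_f']
```

Run it after each weekly water test and you will immediately see which parameter needs a (slow, gradual) correction.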
Water temperature:
For cherry shrimp breeding, water temperature probably plays the most crucial part. While the ideal temperature for cherry shrimp to thrive is 70°F to 75°F, for breeding it needs to be slightly higher. During my long acquaintance with cherry shrimps, I have found that a somewhat higher temperature actually boosts shrimp growth.
On the contrary, a slightly lower temperature will reduce the growth rate. But remember that any temperature change needs to be slow and consistent, since a sudden change can be detrimental to your shrimp.
When you raise the water temperature to 76°F-78°F, the following things will happen:
• Increased molting
• Quicker breeding
• More frequent feeding
• Lower-quality offspring
On the contrary, at a lower temperature, although molting and breeding slow down, you will end up with higher-quality cherry shrimp offspring.
So you need to decide first: do you want quick breeding and more offspring, or fewer offspring with higher quality and more vivid color?
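The tradeoff described above can be summarized in a small helper. A sketch only; the temperature bands come from the text, and the exact boundaries are approximate:

```python
def breeding_effect(temp_f):
    """Rough effect of tank temperature (°F) on cherry shrimp breeding,
    using the approximate bands described in the text above."""
    if 76 <= temp_f <= 78:
        return "quick breeding, more offspring, lower quality"
    if 70 <= temp_f < 76:
        return "steady breeding at a normal pace"
    if temp_f < 70:
        return "slower breeding, higher-quality offspring"
    return "too warm - risky for the shrimp"

print(breeding_effect(77))  # quick breeding, more offspring, lower quality
print(breeding_effect(72))  # steady breeding at a normal pace
```

Whichever band you choose, move the temperature there gradually; the speed of the change matters as much as the target.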
pH level:
The pH level indicates how acidic or alkaline the water is, on a scale of 0 to 14. Cherry shrimp water should be neither too acidic nor too alkaline; the preferred pH range for cherry shrimps is 6.5 to 7.5.
GH, KH, and TDS:
GH indicates the amount of calcium and magnesium in the tank water. KH, on the other hand, reflects pH stability, and TDS shows how much dissolved material is in the water.
These three key components combine to improve or degrade the water conditions quite significantly. You can buy a TDS meter or a KH and GH test kit to measure them at least once a month.
Feeding Quality
Apart from water temperature, the quality of the feed probably has the greatest effect on the breeding process. Cherry shrimp feeding generally falls into three types:
1. Biofilm and algae (grown naturally in the tank)
2. Boiled and blanched vegetables
3. Commercial shrimp food
While shrimps will graze on the algae and biofilm that grow naturally in the tank, you can offer blanched vegetables for variety. I use the following vegetables to feed my red cherry shrimp:
• Spinach
• Carrots
• Lettuce
• Cucumber
However, even with algae and blanched vegetables, the shrimps will lack proper nutrition. So you need to supplement their diet with commercial shrimp food to boost health and growth.
I have been feeding my cherry shrimps with Bacter AE, and the result is entirely satisfactory. The best fact about the food item is that it is suitable for adult and baby shrimps alike. You can find several good-quality commercial cherry shrimp food online.
However, when feeding the shrimps, remember that:
Underfeeding cherry shrimp is preferable to overfeeding.
Other Factors
Sometimes you might want to consider a few other factors as well. However, remember that these are less crucial, so you may skip them.
• Light: Lighting has nothing to do with cherry shrimp breeding. It is only useful for the plants you use in the tank. So, even if you use a light in the cherry shrimp tank, don’t use it more than 7-8 hours.
• Mineral stones: Many expert shrimp breeders say using mineral stones in the tank will produce a higher quality of shrimp. It also increases the number of shrimp offspring.
• Plant supplements: Plant supplements have nothing to do with cherry shrimp breeding. Yet if you want to use them, make sure they don't contain any copper or iron; even the slightest trace of these two elements is enough to kill the shrimps.
Cherry Shrimp Breeding Care and Tips
1. If you want to take cherry shrimp breeding seriously as a profession, don't mix different grades. Mixing results in inferior offspring with less vibrant colors, so the shrimp babies will fetch a lower market value.
2. Don’t mix shrimps and fishes. Always use a specific tank or aquarium for the breeding shrimps.
3. Breeding cherry shrimp will not place you among the wealthiest people in the world. In fact, it hardly even covers the breeding expenses. So don't take it too seriously.
4. Since cherry shrimp breeding isn't a profitable business (at least in the initial stage), don't go for expensive grades. Instead, begin with cheap species or grades and see how it goes.
5. When it comes to maintaining the water parameters, remember that consistency matters more than hitting the exact optimal level. So focus on the consistency of KH, GH, pH, and TDS.
The Bottom Line
Cherry shrimp, thanks to their vibrant, rich color, are an exclusive aquarium breed. If, as a cherry shrimp owner, you are interested in breeding them, you need to know at what age cherry shrimps breed.
Well, the answer is that cherry shrimp get ready to breed about 120-150 days after they are born. However, some shrimp may reach sexual maturity at around 80-100 days and thus be ready to mate earlier.
Also, you can control the tank temperature and water parameters to speed up or slow down breeding, as well as the number and quality of offspring.
However, I think cherry shrimp breeding is more for fun than for commercial purposes, since the profit margin is negligible. So don't get too serious about it.
Good luck.
|
Although the Lego Mindstorms EV3 kit comes with a variety of cool sensors, wouldn’t it be awesome to build your own custom sensing device? This project will show you how to create your own unique object sensors for your Lego EV3 controller using basic electronic components found in your junk box or purchased from distributors like RadioShack, Adafruit, Jameco, or SparkFun Electronics.
Think of this DIY object sensor as an electronic substitute for the touch sensor that comes with the EV3 kit. Instead of making physical contact, the object sensor uses light and shadow to detect objects, prompting your EV3 to turn on a motor or sound an alarm. Running some simple code that you’ll upload, the EV3 also displays a pair of expressive eyes on-screen (like Rethink Robotics’ Baxter) based on the presence of an object in its detection space.
What’s Inside a Lego EV3?
The Lego EV3 is a remarkable programmable controller with tons of cool features built in, such as Bluetooth communications, WiFi, USB hosting, and a multi-button user interface. The brain behind the Lego EV3 is a powerful Texas Instruments 456MHz ARM microprocessor. Connected to the TI microprocessor are the EV3’s power supply, USB, I/O (input/output) ports, flash memory, display buttons, and a GUI (graphical user interface) shown on an LCD screen. The I/O ports consist of 4 inputs labeled 1, 2, 3, and 4, and 4 outputs labeled A, B, C, and D. The input ports use RJ-12 type connectors for attaching sensors and motors with standard RJ-12 cables. For this project, we’ll be using input port 1 to connect our DIY object sensor to the TI ARM microprocessor.
Modifying the LEGO Cable
The key element to building your own sensors is having access to the Lego EV3’s input circuit. In order to connect your DIY object sensor to the EV3 input port 1, you simply need to modify a standard RJ-12 cable. By cutting one of the RJ-12 connectors off the cable and stripping the outer insulation, the 6 individual wires are accessible for attaching the object sensor circuit to it. The DIY object sensor’s relay contacts will connect to 2 of the 6 wires.
Lego EV3 Code
The programming code for the object sensor is based on the interlocking of function blocks in the EV3 software. Each block provides a certain function related to object detection, response time, and displaying expressive eyes. The cool thing about programming the EV3 is the many sounds and visual effects that can be heard through the mini speaker and displayed on the LCD, respectively. The object sensor’s Robot Eyes code and multimedia effects allow your robot to be more engaging and provide a sense of awareness of its surroundings based on the expressive response. Let’s get started.
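The block program described above boils down to a simple poll-and-respond loop: read the sensor, pick a pair of expressive eyes, and trigger an output. As a rough illustration of that decision logic in Python (this is pseudologic only, not EV3 block code; `object_detected` stands in for reading the relay contacts wired to input port 1):

```python
def respond_to_sensor(object_detected):
    """Decide what the EV3 should do for one polling cycle.

    Returns (eyes, action): which expressive eyes to draw on the LCD
    and what output action to take, mirroring the block program's flow.
    """
    if object_detected:
        return ("alert eyes", "sound alarm")
    return ("neutral eyes", "idle")

# One simulated detection cycle:
print(respond_to_sensor(True))   # ('alert eyes', 'sound alarm')
print(respond_to_sensor(False))  # ('neutral eyes', 'idle')
```

In the actual EV3 software, each branch of this function corresponds to a chain of interlocked function blocks: a sensor-read block feeding a switch block, which selects between display and sound blocks.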
|
Mayank Jain
IoT has Disrupted the way we Feed our Planet
IoT 5 min read October 16, 2019
Hundreds of years ago, the typical human being was a farmer. Most people's primary concern was working the farm so they could produce enough food to feed their families and sell whatever was left over for some money. 200 years ago, 90% of the population was producing food.
But a lot has changed since then. In the modern world, 2% of the population produces all of the world's food. Farmers can provide food for 10x the number of people they could in the 1900s.
While there are many different factors that helped farmers get better at what they do, IoT is definitely one of the main causes of increased efficiency.
IoT provides farmers with one of the most valuable assets people can have these days - data.
Here are some of the biggest ways that IoT is revolutionizing the agriculture industry.
Greenhouse Automation
In the past, greenhouses, or really any type of garden, had to be completely operated through manual labor.
If there was a group of crops that needed watering, someone had to go in and water them however much they thought was right. If it got too hot for the plants to survive, someone had to cool down the greenhouse. If no one is constantly monitoring the plants, they'll die.
Now thanks to IoT, you can just stick a moisture sensor in the dirt to adjust the watering levels. You can put a temperature sensor around the plants, and have automated systems that adjust the temperature to optimize crop yield.
There's no limit to the number of things farmers can track. They track everything from light exposure to ventilation to create perfect incubators to grow food.
The sensors collect tons of data that farmers can use to learn more about their farm. They can learn about new trends and constantly make their farms more and more efficient.
Farmers don't have to do much to maintain their greenhouses anymore. They just plant the seeds, turn on the system, and walk away until they get an alert that the crops are ready for harvest.
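At its core, this kind of greenhouse automation is a thresholded sensor check running in a loop. A minimal sketch of one cycle of that loop; the setpoint values here are illustrative, and a real system would tune them per crop:

```python
def greenhouse_actions(moisture_pct, temp_c, min_moisture=30, max_temp=28):
    """Decide which automated systems to trigger for one sensor reading.

    min_moisture and max_temp are illustrative setpoints, not values
    from any specific product.
    """
    actions = []
    if moisture_pct < min_moisture:
        actions.append("start watering")
    if temp_c > max_temp:
        actions.append("activate cooling")
    return actions

# A dry, overheated reading triggers both systems:
print(greenhouse_actions(moisture_pct=20, temp_c=31))
# ['start watering', 'activate cooling']
```

The same readings are also logged, which is where the trend data that helps farmers refine their setups comes from.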
Drone Irrigation
A lot of us have seen hobbyists flying their drones around in public, taking pictures and competing in drone races.
Companies like DJI have taken drones beyond just photography and built industrial drones that can irrigate acres of field in a fraction of the time that it takes a regular farmer.
They took this technology to the Philippines and it had an unbelievable impact on their farms. Here's a video from DJI on the impact of their irrigation drones:
I totally recommend you watch it, but in case you didn't, I'll quickly summarize the main takeaways from the video:
• Farmers could create flight plans by walking the path around the field.
• The drones were able to accomplish a day's worth of human irrigation work in 2 minutes.
• They provided aerial footage that could show anything wrong with the crops like discoloration.
• IoT sensors on the drone keep it at an even height above the ground so that the spraying is effective (more effective than manual irrigation).
• The drones save farmers when they are understaffed in emergencies.
I honestly think it's crazy how a drone does a day's work in 2 minutes 🤯.
Basically, at the end of the day, these drones are saving farmers time and money while making them more efficient.
Livestock Monitoring
I already mentioned how IoT helps farmers monitor their crops, so why not monitor their livestock too?
IoT is a game-changing implementation for farmers and ranchers.
These days, farmers can buy bio-capsules that they can feed to cows to track their animals' temperature, drinking cycles, stomach pH levels, nutrition, and location.
This helps farmers instantly know when something is wrong with their livestock and where they are if they go missing.
Before, farmers would wait to check on their animals during roundup, by which time an animal could have run miles away in any direction, especially if it's a horse. IoT lets them set virtual boundaries around their pasture. If an animal wanders out, the farmer gets an alert and the location of the animal so they can bring it back quickly.
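A virtual boundary like this can be as simple as a bounding-box test on the tracker's last GPS fix. The sketch below uses made-up coordinates and treats the pasture as a rectangle; a real system would likely use a polygon and proper geodesic math.

```python
# Illustrative geofence check for a livestock tracker. Coordinates are made up.
PASTURE = {"lat_min": 34.10, "lat_max": 34.15,
           "lon_min": -97.52, "lon_max": -97.45}

def outside_fence(lat, lon, fence=PASTURE):
    """Return True if a reported position is outside the virtual boundary."""
    return not (fence["lat_min"] <= lat <= fence["lat_max"]
                and fence["lon_min"] <= lon <= fence["lon_max"])

# A reading inside the pasture, then one that should trigger an alert
print(outside_fence(34.12, -97.50))  # -> False
print(outside_fence(34.20, -97.50))  # -> True
```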
These IoT devices even give data to farmers that help them increase the chances of successful livestock breeding.
Overall, farmers used to have to drive around the farm and individually check on the health of each animal. Even then, they might not realize an animal was sick in time to save it. Now they can sit back on the couch and only go out to an animal if they get an alert on their phone, confident that they can help it in time.
IoT makes farming life more efficient, cheaper, and relatively relaxing.
Key Takeaways / TLDR:
• IoT is making farming quicker, cheaper, more efficient, and easier.
• Sensors and automatic systems can allow greenhouses to automatically optimize growing conditions.
• Drones are making irrigation dramatically faster without the need for manual labor, while providing monitoring data on fields of crops.
• Biotrackers let farmers keep track of their animals' health and know where they are at all times.
|
When the stars came out
The James Webb telescope is an exemplar of collaborative science and human ingenuity
The dominant narrative these days across much of the world is, as Ayn Rand said about her novel The Fountainhead, the story of ‘individualism versus collectivism, not in politics, but in man’s soul’. In India, we too celebrate such individualism, where heroic individuals, through their will power, strategic vision, perseverance and unique personal qualities, lift society up by its bootstraps and, like Nietzsche’s superman, create a new moral order. This new social order will, ostensibly, enjoy a higher level of human creativity and human freedom. In this narrative, individualism has built the modern world.
This is, however, only half the story. While Elon Musk, Jeff Bezos, Stephen Schwarzman, N.R. Narayana Murthy and Mukesh Ambani have made a significant difference as individuals, as also countless others who have passed away, there is another perspective that is equally significant but has rarely been celebrated. Obscured by the dominant narrative, this other account applauds the contribution of groups. Working together in collaborations, such groups, through sharing and cooperation, produce outcomes that are no less beneficial for society. In this story, there are no supermen, just worker bees.
The making of the $9.7 billion James Webb telescope is one such story. One of the most significant technological achievements of recent years, involving construction, transportation, launching, alignment, and deployment in deep space, the James Webb Space Telescope (JWST) is a project that marked twenty-plus years of continuous collaboration between many teams. Its successful placement in deep space is a defining moment in humankind’s history of reaching for the stars. Another journey into ‘man’s soul’ has just begun.
Making the telescope
There are four aspects to this other narrative, which are complementary to, and not competitive with, that of the superman. These are: the ambitions of the project; how it was put together; the technologies involved; and its implications for human society. Taken together, they constitute an illustrative case of the collective production of a common good.
The James Webb telescope was imagined by its initiators as the coming together of many cutting-edge technologies. It was planned to enable humanity to peer deeper into space and to look further back in time. The telescope will give us new knowledge about the origins of the universe. Because it is essentially an infrared telescope, whereas the Hubble worked largely in the UV and visible light range, it will allow us to stare into the beginnings of the ‘cosmic dawn’, a period 250 million years after the big bang when light began to break through the cloud of mist and the first stars and galaxies began to form. The JWST will take us back about 150 million years further than Hubble, closer to when it all began.
The project seeks to understand how galaxies form and evolve. It will look for evidence of dark matter, study exoplanets, capture images of planets in our solar system, and other such cosmic curiosities. This knowledge will impact not just the physical sciences but also the humanities and social sciences as we attempt to understand our own place in the universe and ask those perennial questions such as: Is there other life in the universe? Will it look like us and, more worrying, will it look for us? What is the relation between ‘chance’ and ‘necessity’, to use Jacques Monod’s thesis, in the emergence of life? In this ambition, the JWST belongs to the classical tradition of scientific inquiry: the pursuit of fundamental curiosity untouched by special interests.
The CEO of Northrop Grumman, an aerospace and defence company and the primary contractor of the project, has gone on record to announce that because of the delays and production lapses, the company would only book profits after the successful deployment of the telescope.
Collaborative science
If the ambition of the project was to understand the origin of the universe, and our place in it, the execution of the project was a stellar product of collective endeavour. Although there were many remarkable individuals who led the various groups in the project, the emphasis throughout was on its accomplishment by teams who worked together to fabricate the instruments, make the telescope parts, design the cooling systems, etc. This new collective, comprising free scientists and engineers, collaborated with the single purpose of producing, launching, and placing, at the chosen Lagrange point (a point where the Earth’s and Sun’s gravitational forces are balanced), a telescope that was lighter than Hubble but had a mirror six times larger. Compared to Hubble’s location 550 km from the Earth, the JWST is located 1.5 million km away. All its parts had, therefore, to work the first time around. There were no second chances. Recent reports from NASA inform us that the deployment and aligning of the mammoth telescope is proceeding well and may even ‘exceed expectations’. The launch, on 25 December 2021, was a joint project of NASA, the European Space Agency and the Canadian Space Agency, and involved many universities, organisations, and companies across 14 countries.
Further, the science and technology that was deployed should be toasted as a tribute to human ingenuity. Eighteen hexagonal beryllium mirrors first had to be folded to fit the available space in the Ariane rocket and then unfolded, in deep space, to make a single mirror with nanometric precision. One of the instruments, for example, has 250,000 individually controlled shutters to ensure that only the narrow portion of the sky being observed is illuminated. The JWST teams built and installed a near-infrared spectrograph, a near-infrared camera, a slitless spectrograph and, after technical difficulties, a mid-infrared instrument, which, unlike the other instruments that need to be cooled to 40 K, needs to be cooled to 7 K. At great cost, the successful cryocooler was finally engineered.
These collaborative achievements have produced a sophisticated scientific infrastructure for exploring space and for opening the door to new scientific knowledge. They have created a new ‘knowledge commons’. Administered by the Space Telescope Science Institute (STScI), which has a charter and a website that places all relevant information in the public domain and invites scientists from across the world to submit projects, the JWST, through the process of commoning, is on the threshold of producing a huge knowledge commons. The ‘heroic collective’ thereby shares space with the ‘heroic individual’. Hubble gave us mind-blowing pictures of the infinite sky, such as the Lagoon nebula. The JWST will give us pictures of the heavens that Isaac Asimov only imagined in his brilliant science fiction short story Nightfall, when the stars came out.
(Peter Ronald deSouza is the DD Kosambi Visiting Professor at Goa University)
|
It’s not surprising that animals far less complex than we are would display a trait that’s as generous of spirit as empathy, particularly if you decide there’s no spirit involved in it at all. Behaviorists often reduce what we call empathy to a mercantile business known as reciprocal altruism. A favor done today—food offered, shelter given—brings a return favor tomorrow. If a colony of animals practices that give-and-take well, the colony thrives.
But even among animals, there’s something richer going on. One of the first and most poignant observations of empathy in nonhumans was made by Russian primatologist Nadia Kohts, who studied nonhuman cognition in the first half of the 20th century and raised a young chimpanzee in her home. When the chimp would make his way to the roof of the house, ordinary strategies for bringing him down—calling, scolding, offers of food—would rarely work. But if Kohts sat down and pretended to cry, the chimp would go to her immediately. “He runs around me as if looking for the offender,” she wrote. “He tenderly takes my chin in his palm … as if trying to understand what is happening.” More recent, and less gentle, stories of chimp savagery have muddied the animal’s reputation for interspecies amity, but a capacity for violence does not preclude a capacity for gentleness too.
O.J. Simpson’s 1995 acquittal of double-murder charges may have outraged millions of people, but it did make the morality tale surrounding him far richer, as the culture as a whole turned its back on him.
(From TIME’s Your Brain: A User’s Guide, “What Makes Us Moral?”)
Kohts’ reports are not the only ones of their kind. Even cynics went soft at the story of Binti Jua, the gorilla who in 1996 rescued a 3-year-old boy who had tumbled into her zoo enclosure, rocking him gently in her arms and carrying him to a door where trainers could enter and collect him. While it’s impossible to directly measure empathy in animals, in humans it’s another matter. Marc Hauser, professor of psychology at Harvard University and author of Moral Minds, cites a study in which spouses or unmarried couples underwent functional magnetic resonance imaging (fMRI) as they were subjected to mild pain. They were always warned before the painful stimulus was administered, and their brains lit up in a characteristic way signaling mild dread. They were then told that they were not going to feel the discomfort but that their partner was. Even when they couldn’t see their partner, the subjects’ brains lit up precisely as if they were about to experience the pain themselves. “This is very much an ‘I feel your pain’ experience,” says Hauser.
The brain works harder when the threat gets more complicated. A favorite scenario that morality researchers study is the trolley dilemma. You’re standing near a track as an out-of-control train hurtles toward five unsuspecting people—all of whom are related to you but not members of your immediate family. There’s a switch nearby that would let you divert the train onto a siding. Would you do it? Of course. You save five lives at no cost. Suppose your true love is on the siding. Now the mortality score is 5 to 1—but that one is very precious.
Pose these dilemmas to people while they’re undergoing fMRI, and the brain scans get messy. Using a switch to divert the train toward an empty track increases activity in the dorsolateral prefrontal cortex—the place where cool, utilitarian choices are made. Complicate things with the idea of diverting the train toward another person, and the medial frontal cortex—an area associated with emotion—lights up. As these two regions do battle, we may make irrational decisions. In a recent survey, 85% of subjects who were asked about the trolley scenarios said they would not kill an innocent person to save five others, even though they knew they had just sent five people to their hypothetical death. In this case, morality trumps the arithmetic of mortality.
|
DEV Community
Using environment variables in a Flask + Heroku project
You can also read this article on my blog or on Medium!
Using environment variables is fundamental in a project. This is how you tell your app whether you are running in production or locally, for example. It is also where you store more sensitive information like SECRET_KEY or API credentials. Storing those directly in your web app code can be easy at first, but it is not safe once you push your code to production, so taking up good habits from the beginning is a good idea.
Before jumping into the topic of configuring environment variables, I will first walk you through setting up and deploying a basic app.
In this project I use Flask. Create a folder for your app and a new virtual environment (check out poetry if you don't know what it is). Once your virtual environment is running, we need to install a few dependencies first:
# if you use poetry as virtual environment manager
# use the command 'poetry add' instead of 'pip install'
pip install flask gunicorn
Flask is the web server running with Python, and Gunicorn is what will run our web server on Heroku when we deploy our app.
Let's first set up a basic app. Create an app.py file in your folder (the name matters, since the Procfile below runs gunicorn app:app) and paste this code inside:
import os
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def index():
    greeting = "Hello"
    excited = os.environ['EXCITED']
    if excited == 'True':
        greeting = greeting + "!!!!!"
    return render_template("index.html", greeting=greeting)
As you can see, we will render an index.html, so create that file too inside a templates folder and paste this code inside:
<!doctype html>
<title>My App</title>
{{ greeting }}
Before running the web app locally, we need to set two variables in the terminal so Flask knows what to do:
export FLASK_ENV=development
export EXCITED=True
Let's run our app locally with flask run. As you can see, we are greeted as expected, with all the '!!!!', but I had to export the variables in my terminal. Every time you close your terminal you need to do that again. Not very efficient.
Now if we want to deploy our app on Heroku we need to do a few things.
1. Create a Procfile at the root of your folder. Paste this code inside:
web: gunicorn app:app
2. We need to export all the Python dependencies into a requirements.txt file:
pip freeze > requirements.txt
# if you use poetry you can do as following
poetry export -f requirements.txt > requirements.txt
3. Install the Heroku CLI on your machine and then log in to your Heroku account (here to register):
# if you are on mac and use homebrew
brew tap heroku/brew && brew install heroku
# you can also use npm
npm install -g heroku
4. Once Heroku is available in your terminal, we can create your Heroku app:
heroku create <name_of_your_app>
Now if you check your Heroku Dashboard in the browser, you'll see an application with that name. But it doesn't have our code or anything yet - it's completely empty. Let's get our code up there.
5. The next step is initializing git in your repository and adding the Heroku remote:
git init
heroku git:remote -a <name_of_your_app>
6. Now before pushing our app to Heroku, we need to set that EXCITED variable to True. You can do that on the web interface of Heroku, or using the CLI with:
heroku config:set EXCITED=True
7. To push our web app to Heroku we can simply do:
git add .
git commit -m"deploying app on heroku"
git push heroku master
Now if you open your app's URL, you should see a nice excited greeting message: Hello!!!!!
Environment Variable
Alright, thank you Mathieu, but how is that supposed to teach us how to manage environment variables?
Well, now that you are set up properly, let me introduce the .env and .flaskenv files. By convention, .flaskenv is where you store the variables related to your Flask configuration, such as the type of environment or the file that contains your Flask app. For this example we will write these lines inside the file:
#.flaskenv file
FLASK_ENV=development
Now the .env file contains the sensitive variable information that your app needs to run. This file stays local, and you must include it in your .gitignore to avoid sharing sensitive information. For the purpose of this tutorial we will just include one variable:
#.env file
EXCITED=True
Your folder structure should look something like this:
├── app.py
├── templates
│   └── index.html
├── .env
├── .flaskenv
├── .gitignore
├── Procfile
└── requirements.txt
Now we have two different possible use cases. If you want to use flask run to run your app locally, you need to install one more dependency:
pip install python-dotenv
# with poetry use
poetry add python-dotenv
You don't need to do anything else; thanks to this new package, Flask will automatically detect your two new files and load their variables into the app context.
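However the variables get loaded, it's good practice inside your app code to read them defensively with os.environ.get and a default, so a missing variable doesn't crash the app with a KeyError. The env_flag helper below is just an illustrative pattern of mine, not part of Flask or python-dotenv:

```python
import os

def env_flag(name, default="False"):
    """Read a boolean-ish environment variable without risking a KeyError."""
    return os.environ.get(name, default) == "True"

os.environ["EXCITED"] = "True"    # simulates `heroku config:set EXCITED=True`
print(env_flag("EXCITED"))        # -> True
print(env_flag("SOME_UNSET_VAR")) # -> False, instead of raising KeyError
```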
If instead of using flask run you prefer to run your app locally with heroku local web, then you don't need to install python-dotenv; Heroku will detect the .env file by itself.
We saw how to protect our sensitive information by putting it in external files that won't be pushed to a public git repository. At the same time, our app will work seamlessly when deployed on Heroku, as long as you set the same variables from your .env file in the config of your Heroku app (with the web interface or the CLI).
You can find all the code of this tutorial here.
|
A Very Brief History of Cajun and Zydeco Music
Posted: May 7, 2014 in Livin' in the USA, Music
(Excerpted from The Bassist’s Bible: How to Play Every Bass Style from Afro-Cuban to Zydeco, by Tim Boomer. Every chapter begins with an introductory section on the history and characteristics of the style; this is the introductory section from the Cajun/Zydeco chapter.)
The terms “Cajun” and “Zydeco” refer to two distinct styles, both stemming from French cultures in southern Louisiana. Traditionalists say “lache pas la patate” (“don’t drop the potato”), meaning “don’t let go of the old culture,” more specifically, don’t allow Cajun music to become a hybrid musical form. However, as current Cajun and Zydeco musicians often perform both genres, it’s appropriate to include both in this chapter. An appreciation and understanding of the differences between the two will aid in accurate and authentic performance of both.
Cajuns (“Acadians,” French-Canadians exiled from Nova Scotia) came to central and southwest Louisiana in the 18th century. Originally, Cajun music revolved around the fiddle and stomping the floor on beats 1 and 3 (in 4/4 time) while playing a homemade triangle called the “tit fer” (little iron). Following World War II, Cajun musicians increasingly used the accordion, after American servicemen brought back the German-style (diatonic, single-row button) accordion from Europe. Similar to a harmonica with bellows, the German-style accordion has a fixed, limited tonal range, so it lends itself harmonically and structurally to simple music. Its volume soon made it one of the primary instruments within the genre.
In addition to 4/4 tunes, Cajun music features many songs in waltz time (3/4), often subdivided into a 9/8 feel (3/4 with a triplet pulse).
Zydeco has its roots in African and Caribbean music and the Creole culture (Creoles being the racially mixed offspring of Europeans, American Indians, and Africans), and is still sometimes referred to by its early names, Swamp Pop and Swamp Rock. The term Zydeco is attributed to accordion legend Clifton Chenier, who popularized the song “Les Haricots Sont Pas Sales” (“The Beans Are Unsalted”). “Les Haricots” (pronounced “layzarico”) evolved into the term Zydeco.
The two leading instruments in Zydeco are the accordion (multi-row button or keyboard varieties, both of which can play sharp and flat accidentals) and a percussion instrument called a “frottoir,” a rub board often worn on the chest. Invented in the 1940s by Willie Landry and Clifton Chenier (with his brother Cleveland Chenier), the rub board has become Zydeco’s signature rhythmic voice. With it up front and dictating the rhythm, the style is up tempo, with the drummer and bass player powerfully driving the band. Zydeco music is predominantly played in 4/4 (shuffled or straight), with fewer 3/4 (and far fewer 9/8) tunes than in Cajun music.
Although the two musical styles maintained marked differences well into the 20th century, Cajun and Zydeco cultures began to blend as far back as the early 1900s, when rural African-American laborers invented “Juré,” a style which mixed singing, praying, hand clapping, and dancing. Shortly after its invention, Juré began to fuse with Cajun music to form “La La” (a Creole French slang term for “House Dance”). These early styles featured percussion instruments such as spoons, washboard played with a notched stick, the fiddle, and the accordion.
Country music began to influence Cajun music in the late 1930s and early 1940s. This Country influence, along with that of Rhythm & Blues in the early 1950s, brought the electric guitar, electric bass, and drum set into both Cajun and Zydeco ensembles. Although Cajun and Zydeco music developed separately, by the mid-1980s both styles were often played by the same bands, some of which brought Cajun/Zydeco to worldwide attention. Both styles’ popularity continues, as evidenced by Cajun/Zydeco festivals and the success of bands and musicians such as Beausoleil, Queen Ida, Buckwheat Zydeco, Zachary Richard, and C.J. Chenier (son of Clifton Chenier).
Other notable bands within the Cajun genre include Steve Riley and The Mamou Playboys, Jackie Caillier and the Cajun Cousins, Jay Cormier and the Cajun Country Band, and Rodney Thibodeaux and Tout Les Soir. Other prominent Zydeco bands include Beau Joque, Boozoo Chavis, Rockin’ Doopsie, Geno Delafosse, and Grammy award winner Terrance Simien.
The examples and variations below provide a thorough representation of Cajun/Zydeco bass grooves, most notably the Two-Step groove found in both styles. The most important characteristic in distinguishing between Cajun and Zydeco is that the Zydeco rhythm section is more active than the Cajun rhythm section. Cajun/Zydeco bands often mix both genres freely when performing, but almost never within the same song.
|
In a not-too-distant past life, I worked as an Early Years teacher. For six years I taught children aged 3-7, and I completed a master's in Learning, Education and Technology at the start of last year. This series of blog posts will focus on some 'dos and don'ts' of how you can use technology at home to help enhance children's learning.
I left teaching just under two years ago. The more I learned about children's cognition in these early years, the less I could justify the way that we taught. In these key years for development, learning and play should be indistinguishable and, as I see friends suddenly having to take up the mantle of teacher in this unprecedented landscape, this message is more important than ever.
I want to make one thing incredibly clear before I carry on. Children's mental well-being and attitude towards their work are the absolute most important things here. We cannot underestimate the impact the current situation is having on children. A quick look at Maslow's hierarchy of needs shows us that learning cannot even begin to happen when we don't feel physically safe, you know, like, if there were a literal global pandemic happening. If your child manages to learn anything in this time, you have done an incredible job. If your child is not in the frame of mind for learning then that's absolutely fine, and I absolve you of any guilt you may feel over this. If your child comes out of this having forgotten how to spell, that is much easier to teach than if they come out of this as little balls of stress.
Maslow's hierarchy of needs
That said, technology can be a great way to make children excited to learn, and can bring aspects of gamification into learning - which can help to engage otherwise reluctant learners. It's very easy to see that word, gamification, and think that those elements of game play are equal to the play that children need. This is unfortunately not the case. Play for learning involves exploration, imagination and following children's natural lines of enquiry. The aspects of play brought in by gamification are specific tools that relate to playing games, such as point scoring and competition.
Many schools seem to be asking children to play apps such as "Sumdog", which refreshingly has questions that relate to the maths mastery curriculum, presenting different representations of number and asking questions that promote mathematical thinking. However, I am not a fan of the elements of gamification that apps like this use. The element of competition pits children against each other, instead of valuing their learning journey, and awards points only for getting the correct answer, an experience that can be stressful for developing brains.
Praise in learning should come from the thinking that takes place, and not from finding the right answer. The thinking is where the juicy part is happening, that's where children are having to form new cognitive pathways and really get their gears turning. Finding the wrong answer is a gem in mathematical learning. It gives us a chance to talk about the tactics we implemented to get there - if we can work out where we went wrong, we can better understand the skills we need to answer this question. Knowing that 2 * 8 = 16 does not make a child a great mathematician. Knowing how they multiplied the two numbers so that they can use this tactic in other places is the important skill. (There is a place for learning "short cuts" such as number bonds and times tables, because it helps you utilise those things to solve more complicated problems, but that's a conversation for another time!). It's the doing not the answering which is the important part of the learning.
This type of app should be seen as an add-on to learning, instead of a replacement. The app is a way to get your child interested in talking about numbers, a fun reward to let them demonstrate what they know, but this, in itself, is not authentic learning.
"Good" learning using technology happens when you use it as a tool in the same way that we use it in the real world. I am not sitting at the computer right now because I want to do some computing. I'm writing something that I want to share, and this is the best tool for me to use. I could write it down and post it up on the town message board, but this tool is better fit for the job.
Right at the start I said that learning and play should be indistinguishable. Technology should be used as a tool to enhance your child's learning through play. We'll be giving some examples of what this looks like in our next posts.
|
How to Select the Right DC Power Supply, Part 2: Key Considerations
Posted on: June 23rd, 2021 by James
Welcome back to Part 2 of this series! Part 1 was all about the functions and characteristics of DC power supplies. Here in Part 2, we will take a look at the considerations involved in selecting the right power supply.
1. Key Considerations
There are many considerations when selecting a DC power supply, which makes it hard to choose the right one without some understanding of them. That is the topic we are going to explore here.
In Part 2, we will look at two groups of considerations: basic considerations and specific requirements for a DC power supply. We start with the basic considerations, which are the most important factors in power supply selection. Then come the specific requirements, which mainly relate to:
1) performance requirements
2) function requirements
3) system expansion requirements
4) requirements for flexibility in your system
5) safety requirements
6) maintenance requirements
In choosing a power supply, it is very important to be clear about what you want to achieve, so Part 2 guides you in determining exactly what you are looking for in a DC power supply. Each of the sections below is considered in more detail with particular examples or technical advice. We will not cover it in detail here, but you may also need to think about your priorities, your points of compromise, and how the supply will be used or combined with other components in your system to find the best DC power supply.
1-1. Basic Considerations
The following are very important considerations that you should take into account.
1) Voltage and Current
Determine how much voltage and current you need.
2) Wattage
Calculate the maximum power wattage required from the voltage and current. Consider using a multiple-range power supply depending on your application.
3) Load Type and Current
Check your load type, load current (e.g. pulse current) and load current waveforms.
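As a quick numeric illustration of points 1) and 2), the rating you shop for is simply P = V × I, usually with some headroom on top. The 25% margin in this sketch is an arbitrary example for illustration, not a standard figure:

```python
def required_wattage(volts, amps, headroom=0.25):
    """Minimum supply power rating: P = V * I, plus a safety margin."""
    return volts * amps * (1.0 + headroom)

# e.g. a 24 V, 5 A load -> look for at least a 150 W rating
print(required_wattage(24.0, 5.0))  # -> 150.0
```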
One last recommendation: you can use the five Ws and one H approach (who, what, when, where, why and how) to find out what you need from a DC power supply.
1-2. Specific Requirements
This section uses the table below again:
1-2-1. Performance Requirements
1) Low Ripple and Noise
To achieve low ripple and noise, select the B-type or C-type power supplies. The B-type power supplies (series regulator DC power supplies) have low ripple and noise. The C-type power supplies (linear DC power supplies) can offer high speed and low noise. Read the data sheets or specifications for details.
2) Pulse-waveform
If your project requires sharp pulses, such as rise and fall times of 5 µs, choose a linear power supply (C-type). If 30 µs is sufficient, you can also use inverter DC or AC power supplies (D- and F-type) or linear AC power supplies (E-type). The E- and F-type supplies are high-voltage power supplies, but if you need both higher voltage and high-speed pulses, add a pulse generator to your system. With a pulse generator, a switching DC power supply (A-type) or series regulator DC power supply (B-type) can also be used.
3) Absorbing Reverse Current from the Load
Bipolar linear DC or AC power supplies (C- or E-type) can absorb current from a source such as a motor's reverse current. Some bipolar inverter AC power supplies can return the absorbed current to the power line. Otherwise, place a resistor or electronic load in parallel with your power supply to absorb it.
4) Fine Voltage Adjustment
The voltage setting resolution is stated in data sheets or specifications, e.g. 0.012 % of full scale (maximum voltage). The higher the supply's maximum voltage, the coarser the adjustment step in constant voltage mode. The setting resolution may also differ depending on the setting method: front panel or communication command.
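To see how a full-scale-relative resolution translates into an actual voltage step, here is a small sketch. The 0.012 % figure is the example from the text; the 100 V and 650 V models are made-up illustrations:

```python
# Voltage setting step implied by a data-sheet resolution spec
# given as a percentage of full scale.
def setting_step_v(full_scale_v: float, resolution_pct: float = 0.012) -> float:
    return full_scale_v * resolution_pct / 100

print(round(setting_step_v(100.0), 3))  # → 0.012  (12 mV steps on a 100 V model)
print(round(setting_step_v(650.0), 3))  # → 0.078  (78 mV steps on a 650 V model)
```

This is the sense in which a higher-voltage model gives coarser adjustment at the same percentage resolution.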
5) Inrush Current
You need to consider the voltage rise time before selecting a power supply.
A pulsed inrush current is drawn by the DUT when it is first turned on. The voltage rise time of a typical DC power supply is 50 ms or more, so the true inrush current cannot easily be reproduced even if your DUT is capacitive. To measure inrush current correctly, the input voltage needs a rapid rise time; placing a switch at the power supply output is an effective way to bring the voltage rise time down to 1 µs or less.
Capacitive current (Ic) = C x dV/dt; Large capacitors may cause a large current spike.
A switching DC power supply (A-type) has a large capacitor on its output side, so it can deliver larger inrush currents. There are also power supplies with a specified inrush current capability designed for driving large motors.
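The Ic = C × dV/dt relation above can be illustrated numerically. The 100 µF capacitance and 24 V step below are assumed values for illustration, comparing a typical slow supply rise with a fast switched rise:

```python
# Capacitive inrush current I = C * dV/dt for a voltage step
# of delta_v applied over rise_time.
def inrush_current_a(capacitance_f: float, delta_v: float, rise_time_s: float) -> float:
    return capacitance_f * delta_v / rise_time_s

# 100 uF DUT charged to 24 V: typical 50 ms supply rise vs. 1 us switched rise.
print(round(inrush_current_a(100e-6, 24.0, 50e-3), 3))  # → 0.048 (A)
print(round(inrush_current_a(100e-6, 24.0, 1e-6), 1))   # → 2400.0 (A peak)
```

The second case shows why a fast output switch is needed to reveal the real inrush spike.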
6) Power Efficiency
Power loss is the difference between input and output power resulting from the AC-to-DC energy conversion. Power conversion efficiency is usually 70 – 90 %, and the most efficient supplies are the switching types, such as the A-, D- or F-type in Table 1.
The most efficient approach is to operate the supply near its rated power, so do not select a power supply with much higher power ratings than you actually need.
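To make the 70 – 90 % figure concrete, this sketch compares input power and dissipated loss for a 200 W output; the values are illustrative, not from any data sheet:

```python
# Input power and dissipated loss at a given conversion efficiency.
def input_power_w(output_w: float, efficiency: float) -> float:
    return output_w / efficiency

out_w = 200.0
for eff in (0.70, 0.90):
    loss_w = input_power_w(out_w, eff) - out_w
    print(f"efficiency {eff:.0%}: input {input_power_w(out_w, eff):.1f} W, "
          f"loss {loss_w:.1f} W")
```

At 70 % efficiency the supply wastes roughly four times as much power as at 90 %, which is the practical argument for choosing a switching type for high-power loads.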
7) Low Current Consumption
To reduce current consumption, use a high-efficiency power supply (see 6) Power Efficiency) or use as high an input voltage as possible.
Power supplies with a power factor correction circuit draw less input current. A high power factor (close to 1) indicates good use of the incoming supply.
8) Takt Time
Takt time is the rate at which a product needs to be completed and is used to describe how fast or slow production takes place based on customer demand. Manufacturers are expected to reduce the takt time while increasing the productivity per unit time.
DC power supplies are typically used in production test systems to apply various test voltages to a DUT. Their high-speed response is important for reducing takt time, and the lag between transmitting an output-on command and the actual voltage rise also needs to be minimized.
C-, D-, E- or F-type power supplies, the high-speed types, can switch the output voltage quickly, but the signal lag does not differ much between types. You can find multi-output power supplies with high setting speed, but consult the vendor or maker to confirm the exact figures before making any decisions.
For analogue-controlled power supplies, the takt time can be reduced depending on how they are used.
9) Pulse Current
With pulse currents, the duty ratio is the ratio of on-time to the total time of the current waveform, where the total time is one on-and-off cycle (the pulse cycle). If your load draws a pulse current with a 50 % duty ratio and a pulse cycle of 5 ms or more, any DC power supply can be used. If the pulse cycle is 1 ms or less, use a B-, C-, E- or F-type power supply.
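The selection rule in this paragraph can be written down directly (type letters refer to Table 1). The text does not cover pulse cycles between 1 ms and 5 ms, so the sketch flags that case rather than guessing:

```python
# Supply-type rule of thumb for pulsed loads, per the text above.
# Type letters refer to Table 1 of Part 1.
def pulse_supply_types(pulse_cycle_ms: float) -> str:
    if pulse_cycle_ms >= 5:
        return "any DC power supply"
    if pulse_cycle_ms <= 1:
        return "B, C, E or F type"
    return "not covered here - consult the vendor"  # 1-5 ms gap in the text

print(pulse_supply_types(5.0))  # → any DC power supply
print(pulse_supply_types(0.5))  # → B, C, E or F type
```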
1-2-2. Function Requirements
1) Various Applications
When testing your DUT at different voltages or currents within the same operating area, multiple-range DC power supplies are recommended. If you use a high-power supply for small-power applications, its efficiency decreases.
2) Bipolar Output
Using two DC power supplies can produce a bipolar output. Place a switch to control output on/off synchronization if needed.
With a master-slave parallel operation function, the entire system can be controlled from the master unit. This is similar to the dual-tracking function (see Part 1, Figure 9) of multi-output power supplies, which often also feature an output on/off synchronization function.
3) Remotely control power supplies
Modern DC power supplies often feature a LAN port, so multiple supplies can be controlled through a LAN hub. Output on/off synchronization may also be available via the LAN interface, which is likely to keep growing in popularity for DC power supplies.
4) Use as DC and AC power supply
C-, D-, E- and F-type power supplies (bipolar power supplies) can provide both DC and AC outputs. The E- and F-type supplies (bipolar AC power supplies) are more appropriate for high-voltage output.
5) Voltage Waveform Generation
C-, D-, E- and F-type power supplies, the high-speed types, tend to have a voltage waveform generation function; with voltage rise and fall times of 3 µs to 30 µs they can generate the desired waveforms.
Some power supplies also allow users to customize waveforms with an internal function generator. This sequence feature lets you program the timing and levels of the waveform and save the sequences in the power supply itself.
1-2-3. System Expansion Requirements
1) Increase power supply capacity
The master-slave parallel function is performed by designating one master unit and connecting it to one or more identical models as slave units; the entire system is then controlled from the master. Output current and power can be greatly increased under this operation.
To double the output voltage, connect two units in series. The maximum number of units that can be connected in series differs by power supply model.
2) Increase control units
The number of control units can be adjusted via communication network.
1-2-4. Requirements for Flexibility in Your System
1) Low Noise
DC power supplies were originally designed for use in laboratories and factories, so their cooling fans tend to be large and loud. In some power supplies, the fan runs at lower speed, and therefore more quietly, during low-power output.
2) AC Input
Depending on the model, and mostly in E- and F-type power supplies, the input voltage range is specified from 85 VAC to allow for input voltage drop. See the data sheets or specifications to check the input voltage range.
3) Under 15 A Circuit Breaker
Keep the power supply operation within the 15 A circuit breaker rating. Check the output power rating of a power supply before use.
4) Harmonic Current Reduction
A power factor correction circuit is necessary in high-power supplies to improve the power factor. Note that some power-saving supplies do not have one.
5) Inrush Current Protection
An inrush current protection circuit is installed in almost all power supplies. However, small supplies built around a commercial transformer may not have one.
6) Backup or Redundant Power Supply
The best way to keep a critical application running through a power failure or breakdown is to prepare a backup power supply, or to use redundant power supplies.
In a redundant configuration, the DUT is powered by two supplies connected in parallel through diodes. Each supply alone has the capacity to run the DUT, so operation continues even if one goes down. During normal operation, each supply provides half of the required power.
7) High and Low Temperature
The typical safe operating temperature range for DC power supplies is 0 – 40 °C, but the range can extend to 50 °C depending on the model.
8) Mount in 19-inch rack
Check the availability of 19-inch rack mount accessories.
1-2-5. Safety Requirements
1) Safety
Most DC power supplies comply with IEC 61010.
1-2-6. Maintenance Requirements
1) Warranty Period
Warranty periods have recently been extended on some models; ask your vendor or maker.
2) Lifetime and Mean Time Between Failure (MTBF)
DC power supplies can last a very long time when repaired. There is usually no specified lifetime, but a mean time between failures (MTBF) is defined.
|
Learn about the causes, diagnosis, and treatment options available for an obstructed airway
If you start experiencing symptoms like snoring at night or daytime sleepiness, a dental sleep specialist may carry out assessments to determine whether there is an obstruction in your airway. Obstructed airways usually prevent you from getting enough sleep and significantly decrease your quality of life. Therefore, Dr. Barry Chase, together with other specialists, identifies the cause of an obstructed airway and manages it appropriately.
What is airway obstruction?
Airway obstruction refers to a blockage that prevents one from breathing correctly and comfortably. Some airway obstructions develop over time, while others are sudden and severe, such as when a foreign object is swallowed; either way, they can affect your quality of life. Chronic airway obstruction is one of the conditions that can alter your sleep health, resulting in various sleep disorders. Some of the most common sleep-disordered breathing conditions are as follows:
• Obstructive sleep apnea (OSA)
Obstructive sleep apnea is a condition that occurs when the muscles of your throat relax too much, making them narrow and causing irregularities in your breathing patterns. Some of the sleep apnea symptoms may include waking up abruptly, morning headaches, waking up gasping, excessive daytime sleepiness, and waking up out of breath. Snoring is also one of the common signs of an obstructed airway. Obstructive sleep apnea is also associated with a greater risk of heart disease and high blood pressure.
Therefore if you begin to experience some symptoms or alterations in your sleeping patterns, you are encouraged to seek medical care to receive an evaluation of your condition. The health care provider performs a dental airway assessment to identify abnormalities and blockages in the airway.
• Upper airway resistance syndrome (UARS)
UARS is similar to OSA in that it is also caused by airway obstruction. Patients with UARS usually experience breathing resistance while asleep, whereas those with OSA experience apnea and stop breathing for short periods. However, patients with UARS also present with snoring and excessive daytime sleepiness.
What causes airway obstruction?
Several factors contribute to airway obstruction, including the narrow arch, allergies, obesity or excess body weight, enlarged tonsils, naturally narrow airways, deviated septum, scalloped or enlarged tongue, the position of the tongue, and position of the teeth. An obstructed airway can also cause serious complications like increased stress on the cardiovascular system and high blood pressure.
How is airway obstruction diagnosed?
Your specialist usually performs a dental airway assessment to enable them to diagnose your condition. During the evaluation, the health care provider performs a physical examination of the structures found at the back of your throat. It is used to determine your Mallampati score, which is a preliminary indicator of an obstructed airway. If your Mallampati score is raising the alarm, the dentist may refer you to in-lab polysomnography or a home sleep test.
How are obstructed airways treated?
The treatment options available are epigenetics, continuous positive airway pressure, positional therapy, and custom oral appliance. Your dentist also works closely with you to come up with a comprehensive treatment plan depending on the underlying cause of your condition.
If you are having difficulty getting enough sleep and you are interested in knowing the treatment option that is right for you, call or visit Chase Dental Sleepcare today.
|
Artist in Focus : Wassily Kandinsky
In this section for Art From Us, we pick one artist to showcase their work and creative journey so far. Today, we look at Wassily Kandinsky.
Art From Us & Divvya Nirula introduce you to artists and their art. Underlining significant works, discovering creative practices. And giving you a glimpse into their studio.
Kandinsky was born towards the end of the 19th century to parents of mixed ethnicities: his father was a Muscovite and his mother was of Mongolian descent, so his upbringing combined European and Asian cultures. Born into a wealthy family, he travelled with them all over Europe, soaking up different historical and cultural elements. Art did not initially interest him; rather, by the time he completed his schooling he was a proficient musician, having mastered the piano and the cello. It was during this period that he first displayed an active interest in art.
He made a connection between colours and their inherent individuality: their ability to express emotion and feeling. Kandinsky would develop this theory much later, and it would grow into a movement.
During the political unrest in Russia in 1917, Kandinsky married, and he later settled in Berlin.
The artist had always wanted to teach art as well as create it, and his time at the Bauhaus was perhaps among his most memorable periods. Kandinsky passed away in 1944 in Paris, where he had lived after the Nazis forced the closure of the Bauhaus. He left an indelible mark and a solid foundation for the modern artists to come.
Around 1909, Kandinsky was inspired by the Expressionist movement. It was his Der Blaue Reiter, painted in 1903, that would lend its name to the movement he later led. He very much wanted to create a space for radical artists who were condemned for their nouveau techniques and abstract styles, and he founded the Neue Künstlervereinigung München (NKVM, or New Artists' Association of Munich). One important change, along with art that was radically different from his previous styles, was that his titles were deliberately objective.
In 1911, Kandinsky and Franz Marc founded 'Der Blaue Reiter', a movement stemming from German Expressionism.
Kandinsky published his theory of abstraction, which developed his ideas about the affinity between the artist and spirituality. He expanded his non-objective work while continuing to create abstract and figurative pieces. The movement came to an end with the unfortunate death of Franz Marc during WWI.
Kandinsky was deeply affected, and after returning to Moscow in 1916 he familiarized himself once again with other great artists, absorbing the ideas of the Constructivist and Suprematist styles. But the October Revolution of 1917 was a blow to his plans of starting an art school, and he and his wife fled to Berlin.
Kandinsky’s contribution to the world of art is formidable. His art theory formed the basis of modern art movements, especially Abstract Expressionism. He influenced many others with his painterly style, backed with his ideas about the core concept of the creative potential of colours themselves. The Color Field painters and Neo Expressionists owed much to Kandinsky and his manifesto.
57 of his canvases were confiscated by the Nazis for their 'degenerate' content, but his works were famously collected by Solomon R. Guggenheim despite the Fascist repressions. These works were thus imperative in shaping the modernist mission of the Guggenheim Museum, which supported avant-garde art and artists.
To learn more about iconic artworks and their socio-political context, visit the Artist in Focus archive.
|
Lesson 3: Determining the Version of UNIX
Objective: Determine which version of UNIX is installed.
Determining the Version of UNIX installed on your System as UNIX Administrator
Here are the three different ways you can determine which version of UNIX you are using.
The examples below use Linux, but the concepts are the same for all UNIX systems.
1. Pre-login banner: Most UNIX systems announce themselves with a banner before the login process.
2. Post-login banner: Many systems identify themselves after login with a header message describing the system.
3. uname command: Finally, most versions of UNIX offer the uname command, which identifies the operating system. The -a option of uname displays all the information about the machine type and operating system version. The output from uname -a has the following components: 1) the name of the operating system, 2) the name of the machine, 3) the version of the operating system, and 4) the machine processor type. The -a option means "all", but the various parts of the uname output can also be extracted individually: the -m option gives you just the machine type, the -n option gives you just the machine name, and the -v option gives you just the operating system version.
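The uname options described above can be tried directly at a shell prompt. The comments note what each option reports; exact output varies between UNIX variants:

```shell
uname -a   # all information: OS name, host name, release, machine type, ...
uname -s   # operating system name (e.g. Linux, SunOS, AIX)
uname -n   # machine (host) name
uname -v   # operating system version
uname -m   # machine/processor type (e.g. x86_64 on a 64-bit system)
```

The -m output also answers the "is this a 64-bit system?" question raised in the Q&A below.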
Unix Operating System
How can I tell what version of linux I am using?
Oftentimes I will ssh into a new client's box to make changes to their website configuration without knowing much about the server configuration.
I have seen a few ways to get information about the system you are using, but are there some standard commands to tell me what version of unix/linux I'm on and basic system information (like if it is a 64 bit system or not). Basically if you just logged into a box and didn't know anything about it, what things would you check out and what commands would you use to do it?
The motd in /etc/motd stands for "message of the day."
As system administrator, you can modify this file to communicate information to all system users.
Just edit the file using a text editor; you will need to be superuser[1] to have enough permissions to modify it. We will discuss becoming a superuser later in this module. You can modify /etc/issue (the pre-login banner), too. The uname command can be extremely useful when writing shell programs that have to run on different UNIX versions, since it provides a method for automatically detecting the operating system type.
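As a sketch of that automatic detection, a portable shell script can branch on the output of uname -s. The echoed actions here are placeholders, not real setup commands:

```shell
#!/bin/sh
# Branch on the operating system type reported by uname.
case "$(uname -s)" in
  Linux)  echo "running Linux-specific setup" ;;
  SunOS)  echo "running Solaris-specific setup" ;;
  AIX)    echo "running AIX-specific setup" ;;
  *)      echo "unrecognized UNIX variant: $(uname -s)" ;;
esac
```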
[1]superuser: A special user account that has root privileges. You can become a superuser by typing the su command without arguments and giving the system's root password.
|
Past Perfect Continuous Quiz
Interrogative - Yes / No Questions
Level: Upper-intermediate
This is a past perfect continuous interrogative quiz testing you on yes / no type questions. For example:
• ________________ (Sue / study) before we arrived?
• Had Sue been studying before we arrived?
We use the past perfect to indicate that one event happened before another in the past. 'Arriving' is the second event, and the 'studying' happened before that.
The continuous is used to show something happened over a period of time i.e. the studying.
It's an online quiz and interactive, so you just have to write your answers in the gap then click to check your score after each quiz.
You can also download these quizzes to PDF if you want to practice them or use them in the classroom.
Past Perfect Continuous Tense Quiz
Interrogative - Yes / No Questions
Fill in the blanks with the past perfect continuous tense forms of the verbs in brackets to make interrogative sentences.
Note: Don't leave a space after the last word otherwise it will be marked as incorrect. Use capitals where needed.
1. (you / search) for cheaper flights before you bought your plane ticket?
2. (Ron / visit) his sister often before she got married?
3. (they / exercise) before the New York marathon?
4. (Tina / send) emails for hours before we met at the restaurant at 3?
5. (you / try) to schedule a meeting with the fashion company before you arrived in Paris?
6. (the students / work) diligently on their project before they made a presentation?
7. (Sara / drive) long before she stopped to get some rest?
8. (you / install) new programs on your brother’s laptop when he returned from work?
9. (John / paint) a portrait before Miss Jones called him from the gallery?
10. (Dan / buy) a lot of stocks before he went bankrupt?
|
What Is Dmz Korea?
What is the purpose of the DMZ in Korea?
The North Korean side of the DMZ primarily serves to stop an invasion of North Korea from the south. It also serves a similar function as the Berlin Wall and the inner German border did against its own citizens in the former East Germany in that it stops North Korean citizens from defecting to South Korea.
Is it safe to visit DMZ Korea?
Although the DMZ is considered an active war zone, and patrolled by significant armed military forces on both sides, visiting the DMZ is actually very safe as long as you follow all the rules. The rules will vary depending on your tour and how close to the border zone you actually get.
Is DMZ dangerous?
In summary, it is safe to put games consoles into the DMZ, but it is not considered safe to put other devices like PCs and laptops into the DMZ. Doing so could compromise the security of these devices and leave them open to viruses and hack attacks.
What happens in the demilitarized zone?
A demilitarized zone (DMZ or DZ) is an area in which treaties or agreements between nations, military powers or contending groups forbid military installations, activities or personnel. A DMZ often lies along an established frontier or boundary between two or more military powers or alliances.
Why is the DMZ so important?
The main benefit of a DMZ is to provide an internal network with an additional security layer by restricting access to sensitive data and servers. A DMZ enables website visitors to obtain certain services while providing a buffer between them and the organization’s private network.
Is the DMZ in Korea considered a combat zone?
By all accounts, Korea’s DMZ is about as close to a combat zone as there is in the world today for American ground units.
Can South Korean go to North Korea?
In principle, any person is allowed to travel to North Korea; only South Koreans and journalists are routinely denied, although there have been some exceptions for journalists. Visitors are not allowed to travel outside designated tour areas without their Korean guides.
Has anyone escaped North Korea?
A defector from North Korea was apprehended in Goseong last week after evading South Korean guards for hours. A man escaped North Korea last week by swimming several kilometers before coming ashore in the South, where he managed to evade border guards for more than six hours, according to a report released on Tuesday.
Is it worth going to the DMZ in Korea?
The DMZ tour is an absolute must see for visitors to Korea. It is a chilling reminder of the conflict that still exists and to how close the hostile forces are to us.
Does DMZ improve gaming?
Using DMZ settings then is an excellent way of freeing up your console’s connectivity to the internet at large and therefore other gamers, which is after all the crucial factor in being able to game online without lag.
Should you enable DMZ?
A true DMZ is basically a section of your network that is exposed to the internet but does not connect to the rest of your internal network. However, most home routers offer a DMZ setting or DMZ host setting. In fact, you generally should not use the home router's DMZ function at all if you can avoid it.
Why do South Korean soldiers wear sunglasses?
South Korean guards in this area were armed with pistols and they stood in a modified taekwondo stance with stolid facial expressions, clenched fists and sunglasses, which was meant to intimidate the North Korean guards.
How far is the DMZ from Seoul?
The DMZ is a no-man’s land about 30 miles north of Seoul that was established in the 1953 Korean War Armistice Agreement.
Are there tigers in the DMZ?
But tracking wildlife populations in the DMZ can be challenging at best. Two animals in particular, the Amur leopard and the Siberian tiger (two of the most endangered cats in the world), have been reported by observers but never definitively recorded as having a habitat in the DMZ.
Which city in South Korea is famous for DMZ tours?
Panmunjeom Tour Panmunjeom and the surrounding Joint Security Area are famous as the only place in the world where North and South Korean leaders meet.
|
Personal Protective Equipment (PPE) Testing
Materials Testing Guidance for Personal Protective Equipment
The global demand for medical equipment has skyrocketed in response to the COVID-19 pandemic, with many companies refocusing their efforts to produce personal protective equipment (PPE). With this vast uptick in demand, medical device manufacturers have scrambled to increase their production capacities while non-medical manufacturing companies have transitioned their own production facilities to create items such as masks, gloves, and nasal swabs. With increased manufacturing comes increased quality control testing, and Instron has received numerous inquiries from companies seeking to expand their testing capacity or reconfigure their existing equipment to test PPE. This guide was created to help familiarize manufacturers with key testing requirements and provide an overview of current FDA regulations. We hope that it will be a useful resource for anyone seeking to aid the fight against COVID-19.
Medical Masks
Medical masks come in two primary types: single-use surgical masks and N95 respirator masks. Surgical masks are intended to prevent viral spread by containing droplets produced by the wearer, while respirator masks are designed to protect the wearer from virus particles that have been aerosolized. Both of these masks are relatively easy to manufacture and test, and many textile manufacturers have shifted their operations to produce them in an attempt to meet the current demand. Many of these companies already own materials testing equipment and are able to make small modifications to their existing systems in order to perform the required FDA testing.
Fabric Test
Mask fabric is generally tested in accordance with general textile standards such as ASTM D5034. Because fabric samples are prone to jaw breaks, we recommend pneumatic grips with smooth jaw faces to minimize this risk. In order to capture peaks and troughs generated by individual fiber breaks, we recommend a test system with a high data capture rate, such as Instron's 68SC-5.
Elastic Test
It is important to test the strength of the connection between a mask's fabric and the elastic band that holds it into place. This test is performed by loading the band to a minimum of 10 N and visually evaluating it to ensure there has been no separation. With masks now being worn for longer periods of time than ever, it may also be valuable to perform a relaxation test to determine its durability.
Filter Test
Respirator masks must be tested to ensure the strength of connection between the mask fabric and the respirator valve. This test can be accomplished using a side-acting grip on the base of the system and a custom-made hook fixture attached to the load cell.
Medical Gloves
ASTM classifies medical gloves according to their material (latex, nitrile, natural rubber, PVC, or polychloroprene), while ISO classifies them based on their application (patient examination or surgical). Regardless of the testing standard, material, or clinical application, the equipment and general procedure for testing are consistent across all medical glove types. ASTM D6319, ISO 11193, and EN 455-2 are standards used by the biomedical industry to regulate the tensile properties of medical gloves. The key results for all glove testing standards are the tensile strength and ultimate elongation of the material. Rather than testing the entire glove, a dogbone specimen is cut from the finished glove and tested in accordance with the relevant elastomeric standard (ASTM D412 or ISO 37).
Glove Test Setup
1) Load Cell
A 500N load cell is an appropriate capacity for all glove materials.
2) Pneumatic Grips
Air-pressurized grips ensure consistent clamping forces. Jaw faces are easily interchangeable to ensure the correct surface texture is used for the material. Elastomeric materials like rubber gloves typically require rubber-coated faces because the specimen is so thin; the rubber coating prevents slippage without damaging the specimen.
3) Bluehill Software
The biomedical method suite includes preconfigured methods for EN 455-2.
4) Elastomeric Roller Grips
Roller grips provide a cost-effective gripping solution for thin elastomers. The roller grip uses a proportional clamping pressure that increases as more force is applied to the specimen.
5) AVE 2.0
An optical non-contacting strain device can be used to ensure more accurate strain measurement
6) Specimen Preparation
All the major ASTM/ISO/EN standards require a dumbbell-shaped specimen to be stamped from the palm of the glove. EN 455-2 takes into consideration potential discrepancies in thickness between the palm and the fingertips: it compares their thicknesses and applies a correction factor to the tensile strength of the specimen.
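As an illustration of the key result, tensile strength is the breaking force divided by the specimen's cross-sectional area. The width, thickness, and breaking force below are assumed values for illustration, not figures from ASTM D6319, ISO 11193, or EN 455-2:

```python
# Tensile strength (MPa) of a dumbbell specimen cut from a glove palm.
def tensile_strength_mpa(break_force_n: float, width_mm: float, thickness_mm: float) -> float:
    area_mm2 = width_mm * thickness_mm  # cross-section of the narrow section
    return break_force_n / area_mm2     # N/mm^2 is numerically equal to MPa

# Assumed example: 6 mm wide, 0.10 mm thick specimen breaking at 9 N.
print(round(tensile_strength_mpa(9.0, 6.0, 0.10), 1))  # → 15.0
```

The EN 455-2 thickness correction factor itself is defined in the standard and is not reproduced here.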
Nasal Swabs
Nasopharyngeal (NP) swabs are crucial tools in the diagnosis of influenza and respiratory diseases. Despite their similar appearance, these swabs are considerably more specialized than the standard cotton swabs used for personal hygiene, using synthetic fibers for the swab staff and tiny bristles for the swab tip. In an effort to bolster the global supply, significant collaborations have occurred between 3D printer manufacturers and medical research teams, which have resulted in a massive increase in production capacity of test quality NP swabs. It is critical to perform mechanical testing to determine if the performance of the 3D printed swabs is comparable to the performance of those produced by standard methods.
3 Point Bend Test
Nasal swabs are subjected to flexural forces as they travel through the nasal passage, and 3-point bend tests help characterize these stresses. It is also important to evaluate the weak point at the tip of the swab, which allows the swab to be broken to the correct size for transport. Instron's standard 2810-400 3-point bend fixture with 10 mm diameter anvils is ideal for these applications.
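For a swab shaft with a circular cross-section, the peak stress in a 3-point bend follows from beam theory: sigma = M·c/I with M = F·L/4, c = d/2 and I = pi·d^4/64, which simplifies to sigma = 8FL/(pi·d^3). The force, span, and diameter below are assumed illustration values, not test figures:

```python
import math

# Peak flexural stress (MPa) for a 3-point bend on a circular shaft.
# sigma = 8 * F * L / (pi * d^3), with F in N and lengths in mm.
def flexural_stress_mpa(force_n: float, span_mm: float, diameter_mm: float) -> float:
    return 8.0 * force_n * span_mm / (math.pi * diameter_mm ** 3)

# Assumed example: 2 N at mid-span, 10 mm support span, 2 mm shaft diameter.
print(round(flexural_stress_mpa(2.0, 10.0, 2.0), 1))  # → 6.4
```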
Cantilever Bend
The cantilever bend test best represents the stresses seen when the swab is held during the procedure; the material needs to be flexible enough to ensure it will not fail during the test. This setup uses a component test plate and an advanced screw-action grip to hold the specimen in place, and any probe can be used to deflect the tip of the swab.
Tip Shear Strength
The tip of an NP swab is made of tiny bristles that allow for maximum sample collection. These bristles need to withstand shear forces as they move across the walls of the nasal passage. In order to test this property, the tip and base of the swab are clamped with advanced screw action grips to evaluate the maximum force required to break the bristles or the tip itself.
FDA Requirements in the Age of COVID-19
The Food and Drug Administration is the main regulatory body overseeing the production and distribution of PPE in the United States. The level of FDA involvement depends on the class of the device, which ranges from Class 1 to Class 3 based on the potential risk the device poses. Most types of PPE are labeled as Class 1 devices, which face the fewest barriers to approval.
Because it can take months or even years to gain FDA approval, in times of health crisis the FDA issues something called an Emergency Use Authorization (EUA). An EUA essentially loosens the requirements for production and distribution of certain medical products to allow production to ramp up quickly. EUAs are currently being granted to manufacturers of COVID test kits, virus therapies, ventilators, respirators, and PPE. These emergency authorizations are generally granted to specific companies that apply to expedite the approval process, but they are also being released as blanket statements covering certain types of PPE so that smaller companies can participate with minimal red tape. The EUAs include additional documentation that outlines the enforcement policy for PPE manufacturing during the current public health emergency and provides criteria for quality control standards as well as the required labeling of products released under the authorization.
|
IgA Vasculitis (Henoch-Schönlein)
by Stephen Holt, MD, MS
00:00 I think based on the process of elimination, I'm thinking we've got a winner here, but let's take a look back at the case, get some blood work, and see if IgA vasculitis, which we haven't even talked about yet, is going to be our diagnosis. Alright, so a 14-year-old girl. Just to refresh your memory, we already learned that IgA vasculitis, previously known as Henoch-Schönlein purpura, is an immune-mediated small vessel vasculitis that, unlike the ANCA-associated vasculitides, is going to have significant immune complex deposition in tissues. So, let's talk about the specific features of this case that go along with this diagnosis. Her age of 14 is the classic age for Henoch-Schönlein purpura, IgA vasculitis; as you can see, the most common ages are between 3 and 15 years. Interestingly enough, it's often preceded by a strep throat infection, and this may be due to some sort of molecular mimicry, so the time course of a strep infection days to weeks prior would be consistent with this diagnosis as well. When looking at the systems that are involved, we have skin involvement, we have GI involvement, we have arthralgias; in fact, about 65% of patients with IgA vasculitis will have an arthritis or arthralgia, more often arthralgias without much evidence of a synovitis on exam. And importantly, some things that are absent may also help us out here. She doesn't have any ocular symptoms, she doesn't have any cardiopulmonary symptoms, she doesn't have any constitutional symptoms, and that turns out to be significant. Without cardiopulmonary symptoms, we're certainly leaning away from a lot of those ANCA-associated vasculitides as well.
Back to the physical exam: the absence of any oropharyngeal lesions like red lips or strawberry tongue leads us away from something like Kawasaki's, and thinking back to our large vessel vasculitides, the absence of any systemic blood pressure discrepancies and the lack of a carotid bruit steer us away from Takayasu's, for example. She is diffusely tender to moderate palpation on exam and is guaiac positive. We're certainly concerned about either submucosal edema of the intestines or perhaps hemorrhage or ischemia, and, as I mentioned, oligoarticular arthralgias would be consistent with this diagnosis as well. Lastly, on the skin exam, if you do a biopsy you're probably going to find a leukocytoclastic vasculitis, particularly in the dependent areas down the lower legs and feet. Alright, so let's review the labs here. We have mild anemia, very nonspecific; a leukocytosis, same thing. Creatinine looks okay, but she does have 1+ protein and 1+ RBCs, so it does look like there is at least mild renal involvement. There are no casts that we're seeing, and the creatinine is okay, so there wouldn't be any really compelling reason to get a kidney biopsy at this point. But if you did, as in about a third of patients with IgA vasculitis, you might find mild to severe crescentic glomerulonephritis with the pathognomonic IgA deposition in the mesangium on immunofluorescence. Take note that this particular glomerulus is not especially hypercellular, but it is clearly staining very strongly for IgA throughout the mesangium. Lastly, her serum IgA is elevated. That can be a marker of IgA vasculitis, no surprise there. And again, the skin biopsy would show what we've talked about in the past: since this is an immune complex mediated small vessel vasculitis, you ought to find some IgA actually deposited in your specimen. Alright, there you have it.
With a 14-year-old girl presenting with fairly acute onset of symmetric palpable purpura, abdominal pain, and arthralgias in the absence of any constitutional symptoms or cardiopulmonary symptoms and the skin biopsy showing pathognomonic presence of IgA deposits, this is your classic illness script for IgA vasculitis.
04:21 Fortunately, treatment is just supportive. It is a self-limiting illness and will get better on its own. You want to provide supportive care: hospital care if needed, hydration, and NSAIDs for any discomfort the patient is experiencing. Rarely, you would use glucocorticoids, and only if you have to.
About the Lecture
The lecture IgA Vasculitis (Henoch-Schönlein) by Stephen Holt, MD, MS is from the course Vasculitides.
Author of lecture IgA Vasculitis (Henoch-Schönlein)
Stephen Holt, MD, MS
|
Straight from the Horse's Mouth: Project Aims to Interpret a Whinny
Jockey Jose Santos reacts after riding Funny Cide to victory in the 129th running of the Kentucky Derby on Saturday, May 3, 2003, at Churchill Downs in Louisville, Ky. At left is second-place finisher Empire Maker and at right is third-place finisher Peace Rules. AP Photo/Timothy D. Easley
You always knew exactly what Mr. Ed was thinking. Television's most famous talking horse spoke his mind in plain English.
But what about poor Barbaro, this year's Kentucky Derby winner that shattered his right hind leg in the Preakness Stakes. Vets would love to be able to understand what Barbaro and other injured horses have to say.
Someday humans may get a glimpse into these equine emotions.
Scientists with the Equine Vocalization Project are working to analyze what comes out of the typical horse's mouth to interpret how a whinny communicates stress. More so than many other animal calls, a horse's whinny spans a wide range of frequencies.
"The quest now is to determine if horses can utilize this varying frequency to produce specific vocal expressions," explains David Browning of the University of Rhode Island. "If so, you might be able to get a sense of their physical condition by their vocalizations."
Browning and his colleagues have not yet created a Berlitz guide for horses, but preliminary results suggest there are at least some leads to follow. The findings were presented this week at a meeting of the Acoustical Society of America in Providence.
Acoustic analysis suggests a whinny has two elements: a constant tone with varied harmonics that increase as the animal becomes agitated; and a variation in frequency that may be associated with communication or expression.
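The article does not describe the Equine Vocalization Project's actual analysis pipeline, but a basic building block of this kind of acoustic work is estimating a call's dominant frequency with a discrete Fourier transform. A minimal, stdlib-only sketch (the signal, sample rate, and function names are all illustrative):

```python
import cmath, math

# Naive O(N^2) DFT scan: return the frequency of the strongest spectral bin.
def dominant_frequency(signal: list, sample_rate: float) -> float:
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):  # skip DC, stop at Nyquist
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate / n

# Synthetic stand-in for a recorded call: a 1000 Hz tone sampled at 8 kHz
rate = 8000.0
tone = [math.sin(2 * math.pi * 1000.0 * t / rate) for t in range(512)]
print(dominant_frequency(tone, rate))  # 1000.0
```

A real analysis would track how this dominant frequency and its harmonics vary over the course of a whinny, which is the kind of variation the researchers associate with agitation.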
When stallions fight, for example, their whinnies degenerate to an uncontrolled high-pitched scream, Browning said. But when calm, their whinnies seem rich and variable. "The whinny is not a threatening sound, but the question is, 'What is it for?’"
Browning's team has also studied the brays of donkeys, which seem to have little control over what they say. "When they bray, they just let it rip," Browning said.
Up next: three species of zebras, which should prove interesting. Browning said one brays like a donkey, another whinnies like a horse, and one barks like a dog.
|
With the unprecedented rise of technology in the Information Age and the exponential increase in digital payments (which are expected to hit $10.5trn by 2025), one thing that has also increased in parallel is the number of cyberattacks and data breaches.
According to Intuit, online payment fraud made up a total of 459,297 reported instances of fraud and identity theft combined over the last year.
For this reason, data security has become one of the biggest concerns and priorities among millions of organizations globally. In this article, we will talk about tokenization as a smart and effective way to protect cardholders’ data, and we will cover some of the biggest benefits of payment tokenization for both businesses and consumers.
What is payment tokenization?
Payment tokenization is the process of converting sensitive data, such as credit card numbers, into randomly generated, undecipherable values called tokens. These tokens are created algorithmically, and they help prevent credit card fraud by hiding sensitive information behind random values with no intrinsic meaning.
In other words, every time companies want to store the credit card numbers of their clients into their database, the tokenization system will transform this data into tokens that have no exploitable meaning.
These tokens serve as a reference within the tokenization system in a way that it’s mapped back to the sensitive data when needed, for example when a customer wants to store their credit card for faster purchases in the future.
However, the mapping back to the original data is not possible without the reference of the tokenization system, meaning that cybercriminals that steal these tokens can’t decipher their value or exploit their meaning, which is what makes this method so good against fraud.
Of course, it's important to keep in mind that all tokenization systems should be secured and validated using best practices applicable to audit, storage, data protection, authentication and authorization, such as the ones covered in the PCI-DSS standard.
How does payment tokenization work?
In a traditional, non-tokenized transaction, the credit card number is sent to the payment processor, and then stored in the merchant’s POS terminal or other internal systems for later reuse. Now let’s see what happens during a tokenized transaction.
In this case, after the customer has entered their credit card number, instead of going directly to the payment processor, the data is first sent to a tokenization system.
This system assigns a random combination of characters to the credit card number, or the so-called token. After the token has been generated, it is returned to the POS terminal and the payment processor in a safe form in order to complete the transaction successfully.
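The flow above can be sketched in a few lines. This is a toy illustration of the token-vault idea only, not a PCI-grade implementation; the class, token format, and card number are all invented for the example:

```python
import secrets

# Toy token vault: maps a random token to the real card number (PAN).
# The merchant stores only the token; the mapping lives inside the vault.
class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> PAN

    def tokenize(self, pan: str) -> str:
        # Random token with no mathematical relationship to the PAN;
        # the last four digits stay visible, as the article describes.
        token = f"tok_{secrets.token_hex(8)}_{pan[-4:]}"
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the vault can map the token back for authorization.
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token.endswith("1111"))        # True
print(vault.detokenize(token)[-4:])  # 1111
```

The key property is visible here: a stolen token reveals nothing, because the mapping back to the card number exists only inside the tokenization system.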
The process of payment tokenization.
What are the differences between payment tokenization and encryption?
Before we talk about the benefits of payment tokenization, there are a couple more questions we need to clear up. One of them is the difference between tokenization and encryption, two concepts that many people often confuse.
Encryption is a way of rearranging or altering data in a way that appears random. It requires the use of a cryptographic key, or a set of mathematical values that both the sender and the recipient agree on.
While encrypted data typically appears random, the process of encryption works in a logical and predictable way, which allows the receiver of the encrypted data to decrypt it back to its original value. To be fully secure, encryption should use keys that are complex enough to be difficult to decipher by guessing, for example.
As opposed to encryption, a security method that allows information to be deciphered with the adequate key, tokens cannot be decrypted outside the tokenization system as there is no mathematical relationship with the original account number.
Because the token usually contains only the last four digits of the actual credit card for a specific transaction, hackers will not be able to access the whole account number of the cardholder.
5 benefits of Payment Tokenization for businesses
Payment tokenization is not only a great method to reduce online payment fraud and protect cardholders’ data from cyberattacks, but it also has a lot of advantages for businesses. Here are some of the biggest benefits of payment tokenization:
1. Helps to build trust with customers
As we already mentioned, one of the most important benefits of payment tokenization is security, which helps companies establish trust with their customers.
Although online payments are constantly growing, especially after the global pandemic kept millions of people at home shopping online, many people still don’t feel safe making online payments.
Building trust and loyalty with customers is essential for the healthy growth of every business, and assuring them that their payment data is secure is one way to do that. In fact, according to multiple studies:
• 65% of data breach victims lost their trust in a company after a breach;
• 80% of consumers will avoid purchasing from an organisation if their data has been compromised in a security breach;
• 85% of victims of a data breach will share their negative experience with others, causing an even further impact on the reputation of the compromised company;
• 52% of customers would consider moving to a competitor if they provide better security.
Tokenization ensures the correct formatting and transmission of data, making it significantly less vulnerable to cyberattacks and payment fraud. This helps to keep online transactions secure for both customers and businesses, fostering trust and good reputation in the long run.
2. Prevents costly penalties and revenue loss
Second on our list of benefits of payment tokenization is the prevention of revenue loss and costly penalties imposed by different institutions.
The previous statistics that we mentioned showed that compromised security has a negative impact on the reputation of a business that’s been involved in a data breach. This often translates to direct revenue loss for companies as customers move to competitors who are taking better care of their payment data.
Unfortunately, this isn’t the only way in which companies may suffer losses after a data breach. They can also get involved in expensive lawsuits, especially if the breach has compromised the sensitive information of thousands or millions of people.
One famous example is the video-conferencing platform Zoom, whose revenue spiked 376% after the global pandemic moved all meetings online.
However, after a series of cybersecurity issues, including an end-to-end encryption claim that turned out not to be true, the company had to set up an $85 million fund to pay cash claims to U.S. users.
This amounted to anywhere from $15 for non-paying users to $25 for those with paid subscriptions. On top of the fund, Zoom also had to pay about $21 million in legal fees, according to the ruling.
On top of lawsuits and direct revenue loss, companies that suffer data breaches or don’t comply with the PCI-DSS standard for payment security are also facing possible penalties by financial entities. In fact, non-compliance with PCI can result in monthly fines ranging from $5,000 to $100,000, imposed by credit card companies.
If a company has suffered a breach in which credit card information has been endangered, they can expect different penalties, including fines of $50-$90 per cardholder whose data has been compromised, or even termination of the relationship with the bank or payment processor.
3. Improved internal security
Another item on our list of benefits of payment tokenization is improved internal security.
Because the token is practically unreadable by anyone except the payment processor, companies gain protection both externally and internally, including from employees and other people connected to the business.
4. Allows for recurring payments
One of the biggest benefits of payment tokenization for businesses is that it allows them to accept recurring payments and other payment options in a safe environment, simplifying the subscription-based processes.
In fact, tokenization is a game-changer when it comes to secure recurring payments. On one hand, subscription-based services are on the rise – according to SAP Insights, 53% of all software revenue will come from subscription models in 2022.
On the other hand, more and more people are embracing online shopping for everyday items, and they have a growing preference for storing their payment details in order to make these purchases faster and more convenient.
Tokens are a great way to achieve this convenience and simplify the buying process. Because they convert sensitive information into random values that are undecipherable outside the tokenization system, they allow companies to store credit card data in a way that doesn't compromise its security.
5. It makes compliance with PCI-DSS easier
Undoubtedly, one of the most important benefits of payment tokenization is easier PCI-DSS compliance. With the rise of digital payments adoption and the concerns regarding potential fraud and cyberattacks, it is no wonder that data security has received so much attention during the last couple of decades.
For this reason, the biggest credit card companies gathered in 2006 to establish strict and efficient security standards that regulate the management of credit card data, or the so-called PCI-DSS – Payment Card Industry Data Security Standard.
Whereas older POS and database systems allowed credit card numbers to be stored and freely exchanged over networks, the arrival of PCI-DSS put an end to that.
Nowadays, PCI Compliant businesses must store credit card data precisely through the tokenization method, making transactions and sensitive information safer and less vulnerable to hackers.
It is important to note that tokenization applies not only to credit card numbers, but also to any kind of personally identifiable information, such as passwords, files and customer accounts.
Other PCI-DSS requirements to maintain payment security include, but are not limited to:
• Building and maintaining a secure network
• Maintaining a Vulnerability Management program
• Restricting physical access to cardholder data
• Assigning a unique ID to each person with computer access
• Regular monitoring of all accesses to network resources
• Maintaining an Information Security Policy
How can you become PCI-DSS compliant and tokenize your data?
Keeping data safe is not only one of the benefits of payment tokenization; the concept of tokenization is also one of the main pillars of the PCI-DSS (Payment Card Industry Data Security Standard). To become PCI-compliant, you can either obtain a certification for your organisation, or hire a third-party provider.
However, getting certified in PCI can be extremely costly for many companies. To give you an idea, a Level 4 certification can cost up to $90,000 with an annual maintenance of $35,000, while Level 1 may even reach $1,000,000 with an annual maintenance of $250,000.
For this reason, using the services of an already compliant digital payments provider is the widely preferred alternative for many businesses, especially considering all the added features and benefits that it may offer.
For example, on top of PCI-DSS compliance, MYMOID offers:
• Multi-acquisition for full independence thanks to our strategic alliances;
• Universal Tokenization to make your tokens independent of other processors;
• Advanced Dashboard to manage all of your payment orders with advanced filters;
• Collection Suite for improving conversion ratios and debt recovery operations;
• Multiple payment options such as recurring payments, instant payments, IVR payments and Pay by Link.
Additionally, we offer tools and features to help you reduce chargebacks, pre-authorize your payments, or simply sell online in more than 45 countries and in 130 currencies.
If you are looking for a fully compliant third-party provider, MYMOID is a payment platform that allows you to process and store cardholder data in a completely safe environment. You can contact us for more information.
|
What can I expect to happen once a bias incident or hate speech has been reported?
Once a complaint has been made, a trained administrator will investigate the incident following the district protocol. To summarize: students directly involved in the incident will be informed of the investigation, as will their parents/guardians. Interviews of the victim, perpetrator, and any witnesses will be conducted. Evidence, if available, will be gathered and reviewed. Once the investigation has concluded, the designated official will prepare a written report. Those involved will be informed of the outcome and the steps being taken to respond. If the complaint is substantiated, responses will include consequences for the person responsible, support for those impacted, and actions to prevent recurrence.
|
Are you getting enough vitamin K2?
We’re mid-way through a discussion of the work of Dr Weston A. Price, who studied the diets of traditional people and found them to be almost entirely responsible for their near-perfect health.
Activator X: a missing nutrient
In his research, Dr Price discovered a fat-soluble vitamin he called ‘Activator X’, which we now know to be vitamin K2. He referred to it as an activator because, as we discussed last week, like vitamins A and D, it’s an important catalyst which helps the body absorb and utilise minerals.
Price observed that "people of the past obtained a substance that modern generations do not have" and that its absence from the diet could explain many of our modern diseases. He was able to reverse dental decay and cure degenerative conditions in his patients by supplementing their diets with foods rich in this nutrient – the foods that all traditional cultures revered as sacred: animal fats, eggs, concentrated forms of dairy like butter and cheese, and organ meats.
It’s worth noting that when it comes to vitamin K2 and indeed all fat-soluble vitamins, the levels found in various animal foods are entirely dependent on the animal’s diet and the farming method employed by the producer. Grass-fed land animals have much higher concentrations of fat-soluble vitamins, across the board.
Vitamin K and K2 are different!
Vitamin K was originally identified for its role in blood clotting, but we now know it has far more diverse and important functions. There's a growing body of research suggesting that K1 and K2 should be treated as two different vitamins, just like the family of B vitamins.
Whilst there’s no direct test for vitamin K2 deficiency, we can measure the markers of vitamin K status in bone and tissues (uncarboxylated osteocalcin and dephospho-uncarboxylated matrix GLA protein for those who were wondering)!
The recommended daily requirement of vitamin K is based only on our need for K1. To this day, there's still no recommended daily intake for vitamin K2!
Consequently, and with a tip of the hat to the anti-fat and anti-cholesterol campaigns once again, there's now evidence that we're looking down the barrel of a near-universal epidemic of vitamin K2 deficiency. Children and anyone over the age of 40 are particularly at risk – especially those avoiding dairy or on medications, as many drugs inhibit our absorption of dietary vitamin K.
We touched on this last week, but it's worth reiterating that the ratio of vitamins K, A and D is almost as important as the amounts. This is why supplementing with any of these nutrients should never be done without addressing the others.
Why is vitamin K2 important?
Whilst vitamin D helps with our absorption of calcium (which is why in the last decade, people have supplemented the two together), K2 is the nutrient responsible for shuttling that calcium into the bones and teeth and keeping it out of the blood vessels, organs and other soft tissues to prevent calcification. You can see why it is such an incredibly important piece of the puzzle, especially for growing children!
It is one of the most critical nutrients for healthy teeth and gums and we’ll be hearing from local Dentist, Dr Steven Lin on this topic next week.
K2 also improves insulin sensitivity and stabilises blood sugar, helping to protect against diabetes and also the metabolic issues that often emerge as a consequence of obesity.
It promotes sexual, reproductive and mental health by helping optimise hormones and protects against cancer by suppressing the genes that make cells cancerous and expressing the genes that make cells healthy.
How much K2 do we need?
One of the leading authorities on this vitamin, Chris Masterjohn PhD, believes we should aim for 100 mcg daily at an absolute minimum and up to 200 mcg daily for optimal health. To give an example of what reaching the minimum of 100 mcg daily looks like:
A small 50g serving of natto (fermented soybeans) provides a 5 day supply
100g serving of goose liver provides a 3 day supply
100g pork or beef ribs provide a 1 day supply
100g pork or dark chicken meat (thigh) provides around 50-75mcg
100g of good quality hard cheese provides around 75mcg
4 pastured egg yolks provide 20mcg
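Using the approximate per-serving figures above (treating a "5 day supply" as 5 x 100 mcg, and so on – these are the article's rough estimates, not lab measurements), a quick arithmetic sketch shows how servings add up against the 100 mcg minimum:

```python
# Approximate vitamin K2 per serving, derived from the figures quoted above.
K2_MCG = {
    "natto_50g": 500,         # "5 day supply" of a 100 mcg/day minimum
    "goose_liver_100g": 300,  # "3 day supply"
    "hard_cheese_100g": 75,
    "egg_yolk": 5,            # 4 yolks ~ 20 mcg
}

def daily_total(foods):
    """Total mcg of K2 for a day's servings, given {food: number_of_servings}."""
    return sum(K2_MCG[f] * n for f, n in foods.items())

day = {"hard_cheese_100g": 1, "egg_yolk": 4}
total = daily_total(day)
print(total, total >= 100)  # 95 False -> just short of the 100 mcg minimum
```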
It’s important that anyone taking anticoagulants such as Warfarin avoid making any dietary changes that affect vitamin K status, without supervision of a medical practitioner.
Traditional Russian Custard
Whipped egg yolks, traditionally known around the world as sabayon in France, zabaglione in Italy or 'Russian custard' is a delicious, centuries-old nourishing treat that provides a brilliant vehicle for incorporating egg yolks into the diets of children and adults alike. Traditionally it tends to have a small amount of alcohol or cream added, however this isn't necessary - it's delicious and more suitable for children when the recipe is kept simple.
Pastured egg yolks are a beautifully rich wholefood source of vitamin A, D and K2 as well as zinc, iodine, choline and omega 3. All of these nutrients tend to be lacking in our modern diets and especially in diets of children.
It doesn’t require cooking, so is an exceptionally quick and easy thing to make and is a great cream substitute for families who are dairy free. This recipe serves 2.
6 egg yolks
3 tsp maple syrup (or 1-2 tsp honey)
1 tsp vanilla paste or essence
Tiny pinch of salt
You’ll need either a blender, handheld beater or mixer for this recipe. Beating the mixture with a beater or mixer yields a much lighter, fluffier custard but the result is delicious either way.
1. Separate the egg whites from the yolks and save the whites in a jar in the fridge for later use (they’ll last for several weeks and are great for macaroon slice or cookies).
2. Place all ingredients into a small mixing bowl, blender or mixer and beat until the mixture thickens and the colour lightens to a pale yellow.
Yes, that’s it, folks! So simple that there’s no excuse not to give it a try.
It’s delicious served with fruit or eaten on its own. The custard will last a couple of days in the fridge or you can make a bigger batch and freeze into a very impressive dairy-free ice-cream. Add a touch more sweetener if you’re planning to do this.
|
Technology can help manage the supply chain for COVID-19 vaccines or treatments.
Photo: MIL-Miguel-Pena
As the best minds in global medicine continue to make hopeful progress toward an effective treatment or vaccine for COVID-19, those of us in supply chain services and technology need to ensure we have the most effective set of solutions in place to manage and monitor those shipments. It will be critical that the movement of this life-saving material is done expeditiously, and with the least amount of waste, while keeping these drugs safe along their journey. The lack of visibility into a pharmaceutical product's temperature relative to its required range poses a huge risk to the efficacy of these treatments for the populace. I wrote about this on LinkedIn back in 2015.
Over $15 billion in product losses occur every year in the pharmaceutical industry due to temperature excursions alone. When the cost of replacement and other impacts are added, an estimated $35 billion is lost each year.
When Cold Chain IQ surveyed pharmaceutical executives, it found that at least 10% of respondents recorded temperature deviations in more than 15% of their temperature-sensitive shipments, and 20% didn't know whether excursions had occurred. According to the World Health Organization and the Parenteral Drug Association:
• 25% of vaccines reach their destination degraded because of incorrect shipping
• 30% of scrapped pharmaceuticals can be attributed to logistics issues alone
• 20% of temp-sensitive products are damaged during transport
Craig Montgomery - Photo: PowerFleet
Note the word "degraded" used above. That means the vaccine is not as effective when it is administered. Imagine getting a vaccine that you believed would be 80% to 100% effective but that is really only 50% effective because the product went outside the prescribed temperature ranges for several hours during shipment.
Recently, Bloomberg's Brendan and Riley Griffen penned an article titled "The World's Supply Chain Isn't Ready For a COVID-19 Vaccine."
In it, they say, “The industries that shepherd goods around the world on ships, planes and trucks acknowledge they aren’t ready to handle the challenges of shipping an eventual Covid-19 vaccine from drug makers to billions of people.”
What is critical to note is that the world's supply chain can tap into technologies that already exist to get ready. This can be done by deploying a crawl, walk, run strategy to ramp up the sophistication and fidelity of data flows, enhancing our visibility into the supply chain and hardening it against common issues like degradation and spoilage.
What’s most critical now is creating a simple yet broad technology offering that can illuminate the black holes in our supply chain created by changes of custody. The emphasis is on simple: adoption and deployment must be "easy" (though nothing is easy in a global supply chain).
Tracking and monitoring COVID-related drugs in the future is no different from the existing use case of tracking vaccines or other critical medicines in the supply chain today. One needs to know the status (temperature and other environmental factors that may impact the medicine, like shock or humidity), the thresholds those drugs must maintain, and the location corresponding to those readings. All of this should be done from the manufacturing floor to the end customer's door, whether that customer is a person or a business that will distribute these materials.
Our best chance for achieving this vision is three-fold:
1. Global asset tracking / monitoring devices: The optimal choice is a Bluetooth Low Energy (BLE) tracking and recording "tag" that records things like temperature. These exist today, are as small as a few coins stacked together and are disposable. BLE is ideal because it can interact with a phone or tablet that is also BLE enabled, which is almost every device today. These devices can be dropped into a box or pallet of COVID material and be associated with that material (ex: this tag is monitoring this product, which has these requirements). BLE cargo or package tracking devices are unique in that they are essentially a database on a chip, meaning they can be encoded with data like the current temperature and the temperature range the product has to stay within, and can transmit that data throughout the supply chain at each change of custody.
2. App for smartphones / tablets: This app can, in theory, be downloaded by anyone in the COVID distribution supply chain. It passively runs in the background and listens for these BLE devices. When one is found, the app records the data, for example the temperature of the product, and associates a GPS location, a time stamp and the entity that had custody of the product. The phone app could also write this same information back to the BLE device. This allows the BLE device to be interrogated at any time along the way and report its status and stability ("I am expired" or "I may be unstable" because I went out of my temperature range for Y minutes or X hours).
3. Blockchain: Setting up a proprietary, closed blockchain will be key. It is where all the logistics parties can append this critical data, and it enables a central repository of the status (current and historical) of every shipment, shipping lane, logistics provider, etc. It also allows access for anyone in logistics who is part of the supply chain with their own BLE device and app, as long as a standard reporting format is used across the chain. This data can then be harvested and analyzed by private and public entities to do root cause analysis and further optimize the supply chain.
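To make the three pieces above concrete, here is a minimal Python sketch of the tag side of the system. Every name, the 30-minute sampling interval and the thresholds are illustrative assumptions, not any vendor's API:

```python
from dataclasses import dataclass, field

@dataclass
class ColdChainTag:
    """Illustrative model of a BLE monitoring tag: a 'database on a chip'
    holding the product's temperature limits and its reading history."""
    min_temp_c: float
    max_temp_c: float
    max_excursion_minutes: float  # tolerated total time outside range
    readings: list = field(default_factory=list)  # (minutes_elapsed, temp_c, custodian)

    def record(self, minutes_elapsed, temp_c, custodian):
        """What the phone app would write back on each custody scan."""
        self.readings.append((minutes_elapsed, temp_c, custodian))

    def status(self):
        """Interrogate the tag: 'stable', 'unstable' or 'expired'."""
        out_of_range = sum(
            1 for _, t, _ in self.readings
            if not (self.min_temp_c <= t <= self.max_temp_c)
        )
        # Assume readings are taken at 30-minute intervals (illustrative).
        excursion_minutes = out_of_range * 30
        if excursion_minutes == 0:
            return "stable"
        return "expired" if excursion_minutes > self.max_excursion_minutes else "unstable"

tag = ColdChainTag(min_temp_c=2.0, max_temp_c=8.0, max_excursion_minutes=60)
tag.record(0, 5.0, "manufacturer")
tag.record(30, 9.5, "air freight")  # one 30-minute excursion
print(tag.status())                 # unstable, but not yet expired
```

In a real deployment the status logic would live partly on the tag and partly in the app, and each recorded custody event would also be appended to the shared ledger described in item 3.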
What must be remembered is that the more you instrument the supply chain, the more data points you create. The more data points you create, the more fidelity you derive in the supply chain ecosystem for COVID. This in turn enables suppliers, shippers and governments to inspect, analyze, optimize and protect the distribution of COVID materials through the disparate supply chain that exists today.
Craig Montgomery is the SVP of Global Marketing for PowerFleet Inc. and a 15+ year industry veteran of industrial IoT and supply chain logistics technology.
|
Python Pillow - Resizing an Image
A digital image is a two-dimensional plane of pixels, with a width and a height. An Image object from the Pillow library has a size attribute: a tuple containing the width and height of the image as its elements. To resize an image, you call the resize() method of Pillow's Image class, passing the new width and height.
Resize and save the resized image
The program for resizing and saving the resized image is given below −
# Import the required Image class
from PIL import Image
# Create an Image object from an image file
im = Image.open("images/cat.jpg")
# Display the actual image
im.show()
# Make the new image half the width and half the height of the original image
resized_im = im.resize((round(im.size[0] * 0.5), round(im.size[1] * 0.5)))
# Display the resized image
resized_im.show()
# Save the resized image
resized_im.save('resizedBeach1.jpg')
If you save and execute the above program, it displays the original and resized images using the standard PNG display utility, as follows −
Original Image
Resized Image
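The program above halves both dimensions, which happens to preserve the aspect ratio. For an arbitrary bounding box you need to compute the scale factor yourself. The helper below (the function name is my own, for illustration) does roughly the arithmetic that Pillow's thumbnail() method performs, without needing an image file at all:

```python
def fit_within(size, box):
    """Return (width, height) scaled to fit inside box, preserving aspect ratio."""
    w, h = size
    bw, bh = box
    scale = min(bw / w, bh / h)  # the tighter constraint wins
    return (round(w * scale), round(h * scale))

# A 1600x1200 image fitted into a 400x400 box keeps its 4:3 shape:
print(fit_within((1600, 1200), (400, 400)))  # (400, 300)
```

The resulting tuple can be passed directly to im.resize().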
|
Why Do Cats Walk Sideways?
Cats typically walk sideways because they are exhibiting fear or they are being threatened. It could also be a sign of defensiveness, intimidation, or a health condition such as vestibular disease. It may also just be part of a cat’s or kitten’s favorite postures during playtime.
Why do cats walk sideways?
A cat walking in a sideways pose may be accompanied by an arched back, akin to the so-called Halloween cat pose. The difference is that, when a cat does the latter pose, it is usually accompanied by hissing, growling, ears folded back, and hair standing on end.
Here are the probable reasons why cats walk sideways:
It could be that your cat feels threatened by another cat or person.
If you notice your cat walking sideways accompanied by an arched back she may feel threatened by another cat or even a person. Cats will usually do this to appear as if they are bigger or larger. It may also be accompanied by a fluffed-out tail to create the illusion of size.
She may be trying to intimidate or scare another cat or dog.
Your feline may be trying to scare off a perceived threat such as another cat or dog. Just like when they feel threatened, cats do this when trying to frighten off a perceived enemy to look larger. It may be accompanied by an arched back and they will also puff out their fur. With this kind of posturing, it appears that a cat is stronger, bigger, and faster.
It could mean that a cat is in a defensive mode.
A cat walking sideways could mean that she is in a defensive mode and ready to fight, especially in the presence of an unfamiliar cat or dog. Again, this will most likely be accompanied by an arched back and fluffed-out fur to appear larger and more menacing. Some grown cats have been observed in this posture when approaching a dog; when approaching an unfamiliar cat, it may be a display of dominance.
It may be a part of playtime among cats and kittens.
Cats and especially kittens love to spend most of their time playing with each other. Playtime for them is also a sort of practice or preparation for their adult life. Kittens usually arch their backs and walk sideways as part of their playtime mode and make-believe way of stalking and hunting each other.
It may mean that your cat has health issues.
A cat walking sideways may also be a warning sign of an underlying health condition.
If your cat is walking sideways and is also manifesting the following signs, then she may be suffering from feline vestibular disease:
• loss of body coordination
• falling or circling to one side
• the eyes are darting back and forth, also called nystagmus
• the head is tilting
• nausea
• vomiting
Cats lose their balance because of a damaged or diseased vestibular apparatus, located in the inner ear, the part that is responsible for maintaining balance and a cat's sense of direction and orientation. Feline vestibular disease affects cats regardless of age and is usually caused by middle or inner ear infections. The growth of tumors and exposure to toxins and drugs may also contribute to symptoms of the disease, although the majority of cases are attributed to unknown causes.
Some common cat postures
Cats are known to be complex animals and they have unique body language and behavior that can be confusing. However, if you will learn to understand it you will appreciate your cat more.
Here are other common cat postures along with their meanings:
1. Your cat approaches you with tail straight up and whiskers forward.
If your cat approaches you in this posture and she rubs her chin or head against your leg in a friendly manner, it means that she is greeting you or confidently checking out something new to her such as a particular scent.
2. Your cat rolls over and exposes her belly.
This posture means that your cat wants to play but female cats may also display this behavior during mating season.
3. Your cat is lying on the belly, sitting, or standing with the back of the body at a lower angle than the front.
If your cat is exhibiting this posture, it could mean that she is anxious. It is usually accompanied by bent legs, with her tail curled close to the body and the tip moving up and down or side to side.
Cats are sometimes hard to decipher, but as you become familiar with their body language, postures, and behavior you will realize just how unique and smart they are. Cats may walk sideways because they feel threatened, as a sign of defensiveness, or as part of various other postures that they display during playtime. However, it may also mean that they are not feeling well and are suffering from a health issue like feline vestibular disease. If this is the case, a consultation with the vet should be arranged as soon as possible.
|
Boost your immune system this winter with these simple tips
Get your zzzzz’s
Sleep is a time for the body to rest and recover. It is a time for cell and muscle rejuvenation and allows the body to focus on killing off any nasties you may have come across throughout the day. Sleep has also been shown to improve the memory of the immune system to fight off previously met viruses or bacterial invaders. This means that our body has the power it needs to respond promptly to attacks from foreign materials that may be out to make you sick.
If you are struggling to get a peaceful night’s sleep, talk to your nutrition professional about the benefits of magnesium. You may also like to try sipping on a comforting, warming cup of chamomile tea before bed to relax.
Stay active
Physical activity enhances circulation and increases the immune system's ability to find and fight off harmful invaders such as bacteria and viruses. Being active also produces sweat through our skin. The skin is one of the major detoxing organs, forcing out unwelcome invaders.
Even low intensity exercise plays a role in stimulating healthy circulation and immune responses. Just move as much as you can safely.
Eat more antioxidants
Consuming foods high in antioxidants enhances the activity of the immune system and makes it harder for harmful invaders such as bacteria and viruses to take hold in the body. Foods with bright, rich colours possess the greatest amounts of antioxidants, for example capsicum, berries, tomatoes, beetroot and leafy greens.
Get your daily dose of D
Safe sun exposure can improve immune function and reduce the duration and frequency of colds and flu. Even in winter the sun can still burn, so short periods of time in the sun are advised.
Take a bath with double the benefits
Epsom salts have been shown to reduce toxins in the body. There is nothing better than taking a warm relaxing bath in Winter. The relaxation that Epsom salts offers is a bonus to the immune system, allowing it to rejuvenate, because we all know the impact stress has on our health!
Bend it like Beckham
Yoga is an effective circulatory stimulant, digestive activator and endocrine system booster. All of these systems work together to form part of a high-quality immune system. Yoga’s ability to stimulate circulation, reduce stress and support relaxation makes it a powerful immune fighting tool to add to your daily routine.
Step away from the desk
A sedentary lifestyle, common in today's white-collar, technology-dominated world, can reduce the rate of circulation around our body. This means toxins cannot be readily identified by our immune system. Getting up and moving, even better moving outdoors, for even a brief 15 minutes a few times a day can greatly improve your circulation and get you away from the recirculating bugs that float around in the office air-conditioning.
Be sure to breathe deeply
The simple action of breathing increases the free flowing of oxygen and carbon dioxide in and out of the lungs and blood stream. With these breaths nasty foreign materials are also exhaled. The oxygen that is breathed in supports immune cell health, improving the force of the all-important warrior we call our immune system. Further, taking deep breaths has been shown to reduce stress and enhance relaxation.
Indulge in a Massage
Massages that focus on the lymphatic system, help to circulate and drain the body of toxic blockages that may be backing up and contributing to frequent illnesses. Relax in style and allow your body and mind to sink away while feeling the undeniable benefits of a massage.
Dance, jump and shout and let it all out!
Dancing and jumping around forcefully pushes toxins out through sweat and all other eliminatory organs such as the kidneys, liver and digestive system. Why not have some fun while working on your immune system.
Don’t sweat the small stuff
Being angry and frustrated certainly takes its toll on your stress levels and your immune system. The simple act of being angry greatly impacts hormones and the gut, both of which play a large role in immunity.
Be at one with your body. Listen to its cues. Give it the nutrients and TLC it needs and desires!
Xx Danielle Catherine
Anti-inflammatory Eggplant Curry
Anti-inflammatory foods are important to our health and wellbeing as a whole, as well as for specific health complaints such as headaches, muscle and joint aches and pains, and immune support, to name a few.
Curries are a great source of anti-inflammatory herbs and spices. Ayurvedic (Indian) medicine has used the spices in curries for many years to address various health conditions.
This Anti-inflammatory Eggplant Curry will provide you with a fabulous dose of anti-inflammatory support.
Anti-inflammatory Eggplant Curry Recipe
1 large eggplant (multi-vitamin and mineral powerhouse!)
• 2 tablespoons good quality extra virgin olive oil (anti-inflammatory)
• 1 teaspoon cumin seeds (immune booster and digestion enhancer)
• 1 medium to large onion, sliced finely (immune enhancing, anti-inflammatory, allergy fighting, cholesterol lowering)
• 2 crushed garlic cloves (immune enhancing, anti-inflammatory and cholesterol lowering)
• 2 – 3 cm piece ginger (depending how much you love ginger, me, I go for THREE), peeled and finely chopped (anti-inflammatory goodness!)
• 1 tablespoon curry powder (anti-inflammatory goodness!)
• 1 large diced tomato (Lovely lycopene and vitamin C antioxidants)
• 1 finely chopped green chilli (anti-inflammatory goodness! metabolism boosting)
• 1 teaspoon Celtic or Himalayan salt (these salts contain wonderful minerals that regular table salts do not)
• 1/4 bunch finely chopped coriander (all round awesome herb for almost everything)
Preheat your oven to 190C.
Place the eggplant on a medium sized baking sheet. Use a fork to spike the eggplant all over to allow heat to penetrate through. Place in the oven to bake for 20 minutes or until it feels soft/tender. Remove from the oven, allow to cool enough to be able to peel and chop the eggplant.
Heat the olive oil in a medium saucepan over a medium heat. Add the cumin seeds and onion to the oil. Stir until the onion softens and slightly browns, roughly 5 minutes.
Add the pre-prepared tomato, garlic, ginger and curry powder to the saucepan with the onion and cook for a further 1 minute.
Stir in the chopped eggplant and green chilli, and season with salt to taste. Place a lid or appropriate cover over the mix, turn to a higher heat and cook for 10 minutes to allow the flavours to soak in.
Lift the lid or cover, turn the heat right down to low and cook for a further 5 minutes with the lid off. Garnish with the coriander.
This curry can be served as a side dish or as a dish on its own, possibly with brown basmati rice or quinoa
I served mine with fish, asparagus and roasted capsicum. Random I know, but it was worth it 🙂
The Winter Warm Up: Boost Your Immune System This Winter
immune boosting foods
Winter is hot on our tail and, if history is any guide, it brings with it a nasty cold and flu season that requires an immune boost!
Prepare yourself for a fit and healthy winter with the following hot tips from Nutritionist Danielle Catherine.
Get some C in your diet
I know it seems like a no-brainer, as much of the marketing around for many years has pushed vitamin C for reducing the duration of colds. But the humble C really is an all-round immune support, largely owing to its antioxidant capability.
Food should always be medicine, so although a vitamin C supplement may give you an additional boost, nothing beats the real thing. Choose two or more of your favourites of the following and include them at each meal or as part of your 5 and 2 (five vege, 2 fruit).
Fruit rich in vitamin C and in season for winter:
Avocado (yes it is a fruit), kiwi fruit, orange, lemon, grapefruit, pineapple, lime
Vegetables rich in vitamin C and in season for winter:
Broccoli, broccolini, beetroot, carrot, brussels sprouts, cabbage, celery, kale, capsicum, chilli, cauliflower, pumpkin, sweet potato, spinach, leek.
Herbs high in vitamin C:
Ginger, garlic, coriander, dill, mint, oregano, parsley, rosemary, turmeric.
All of these delightful seasonals offer vitamin C but also, much much more in the way of vitamins, minerals and antioxidants.
Warm your insides Soups & Bone Broths
We all reach for something warm and comforting during the colder months. Soups and bone broths are fantastically nutritive when made at home and satisfy that warm hug from food which we crave. They are not only nourishing but are a low-calorie/kilojoule option to help keep off that additional winter coat we find ourselves hiding under our baggy jumpers and trackies.
The other benefit of soups and bone broths is that they are easy, convenient and you can have a large variety to tantalise your tastebuds. With a large crock pot, slow cook a large bone broth ready for the week and store it in the freezer in portioned containers.
You can do the same with soups: throw in the above-mentioned vegetables. You could even roast some first and add them to the blender with the rest of the vegetables for additional flavour.
Roasted pumpkin, sweet potato, garlic and capsicum work a treat. Add a metabolism booster with a touch of chilli and you are set to clear the sinuses, kick the immune system into gear and enjoy a satisfying, easy meal.
Cook with garlic, onion and chilli
Apart from great vitamin C content, garlic, onion and chilli or cayenne pepper are all active immune stimulants. They all contain anti-inflammatory and circulatory properties to ensure immune cells are where they need to be.
Garlic and onion are anti-viral and anti-bacterial, helping to destroy cold and flu processes.
Onion contains an anti-inflammatory and anti-histamine flavonoid called quercetin. Quercetin is great at breaking down mucus and reducing sinusitis.
Chilli or cayenne is rich in antioxidants and is efficient at stimulating immune cells to function as they should. It helps to remove toxins from the body by enhancing circulation and promoting sweating.
Ever heard the saying "Sweat it out"?
Tea Time!
Herbal teas are not just delish, but they can boast some ailment appropriate benefits. There is a tea for almost any condition you can think of!
Turmeric lattes are satisfying winter warmers that will reduce inflammation.
Pure hot chocolates (using cacao and not refined sugars) will offer a lovely magnesium hit.
Herbal teas that are immune supporting and cold relieving include: echinacea, lemon, honey, ginger, liquorice, cinnamon, elderberry, lemongrass, mullein, rosehip and yarrow.
Rug Up- Stay Cozy
Staying warm and cozy is a must in winter. Wear a scarf, temperature appropriate jumper and some nice warm long pants.
Power your immune system and quash your hunger with PROTEIN
Soups are wonderful and often filling enough on their own. However, protein will add an additional element to the soup or broth that will satisfy you for hours after you have finished your meal. Not only that, but proteins, with their amino acids are an essential building block to almost every function in our body, including our immune function.
Only clean lean proteins should be included in the diet for optimal benefits. Some clean lean proteins you might like to include:
Eggs, chicken breast, kangaroo, fish, turkey (not processed deli meats), extra lean beef, extra lean lamb.
Other great benefits of protein sources are that they often come with substantial iron, B12 and B6 levels. All of which play an important role in building up our blood and immune cells.
Rest and Recover – Get adequate ZZZZZZZs
Getting enough shut-eye can go a long way in ensuring our body is performing at its best. Protein also assists us in our rest and recovery, meaning we can sleep sounder while allowing the body to undertake its repair duties. Our immune system relies heavily on this rest and the immune-supporting repair processes, so, depending on your age, it is important to get an average of 6 to 8 hours of quality sleep each night.
Avoid alcohol
Alcohol is an incredible immune depressant. Research has shown that the immune deficiency caused by alcohol can make people susceptible to illnesses such as pneumonia, as well as systemic inflammation, aggravation of allergies and sinuses, inefficient detoxification (such as through the liver) and reduced immune responses to viral infections. It is also well recognised that alcohol impacts a person's quality of sleep. While it might be easier to fall asleep after having a few, your body will not enjoy a long quality sleep as it struggles to achieve an efficient REM cycle.
Reduce your winter drop to one standard drink two to three times per week, aiming for weekends, or cut it out entirely.
Restore your healthy gut friends
Probiotics and a healthy gut flora are currently hot topics for supplement and food companies. While it may seem like a big hype or fad, there is very real evidence supporting the benefits of optimal gut and systemic healthy bacteria. Your immune system is one of the most important areas of function for these little friends.
Healthy bacteria provide us with:
– A defence against nasty bacteria, viruses and pathogens
– They assist us in reducing inflammation
– Digesting and absorbing food
– They promote healthy skin and prevent conditions such as eczema, dermatitis and psoriasis
– They support toxic waste elimination
All of which are pivotal in healthy immune regulation.
Along with a quality probiotic recommended by your nutrition health professional, include gut loving foods into your diet, including: raw, natural yoghurt, kombucha, sauerkraut, kim chi, kefir.
If you feel as though your immune system needs some additional TLC, it may be time to book with a nutrition professional for comprehensive support and advice. Call now.
|
What is GPC?
Global Privacy Control (GPC) is a new initiative by researchers, news companies from the United States, some browser makers, the EFF, some search engines and other organizations to improve the privacy and rights of Internet users.
In one proposal, GPC lets the sites a user visits know that the user denies the site the right to sell or share personal information with third parties.
Although it sounds like a Do Not Track header 2.0, it is designed to work with existing (and future) legal frameworks such as the California Consumer Privacy Act (CCPA) or the European General Data Protection Regulation (GDPR).
How does it work?
It all starts with a browser, and an extension or application that supports GPC. For now, this means a dev version of Brave, the DuckDuckGo app for Android or iOS, or browser extensions from DuckDuckGo, Disconnect, EFF, or Abine.
Brave has enabled GPC with no options to disable it, while other browsers, applications, or extensions may require users to enable it. In DuckDuckGo Privacy Browser, for example, it is necessary to enable Global Privacy Control from the settings to use it.
For users, the above are currently the only options. The browser, application, or extension adds the GPC signal to the requests it sends so that sites are aware of it.
The next step depends entirely on the site the user is visiting. Non-participating sites will simply ignore the signal.
When a site participates, it will ensure that user data is not shared or sold to third parties.
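Under the hood, the signal is simply an HTTP request header, Sec-GPC: 1 (plus a navigator.globalPrivacyControl property readable from JavaScript). A participating site's server-side check could look roughly like this minimal Python sketch; the function name is my own:

```python
def gpc_opt_out(headers):
    """Return True if the request carries a Global Privacy Control signal.
    Per the GPC proposal, participating browsers send the header 'Sec-GPC: 1'."""
    # HTTP header names are case-insensitive, so normalise before looking up.
    normalised = {k.lower(): v.strip() for k, v in headers.items()}
    return normalised.get("sec-gpc") == "1"

print(gpc_opt_out({"Sec-GPC": "1"}))      # True: do not sell/share this user's data
print(gpc_opt_out({"User-Agent": "x"}))   # False: no signal present
```

A site honouring the signal would branch on this check before passing any personal data to third parties.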
Will GPC become important?
Do Not Track started with the hope that internet privacy would change for the better, but it turned out that it did not. In fact, it could even be used in fingerprinting efforts.
The fate of the GPC may be similar. At this time, support is limited to a few extensions, applications, a browser with a limited market share, and some participating sites. Some of the participating sites may be important, such as The New York Times, but overall use is currently very limited.
Mozilla and Automattic (WordPress) are leading the effort but have not implemented any applications so far.
But even if these two companies, or perhaps others, add support for GPC, the big internet companies like Google, Microsoft, Apple, etc. would still need to join.
So we wait.
|
Is Predictive Prevention The Future Of Medicine?
Image by Elias Sch. from Pixabay
Betty’s joints have been acting up recently. She was enjoying life as usual when first her right toe and then the left knee started troubling her. The joints would become red, stiff, swollen, and unbearably painful. Her physician explained that crystals of uric acid have started collecting inside the joints and prescribed a medicine— allopurinol — to keep her uric acid levels in check. The condition is called gout and is fairly common.
A week later, large blisters appeared on her abdomen which, within days, spread to involve her legs, face, mouth, and arms. More than half of her body surface area blistered and started to peel off, leaving fleshy red underskin behind. A respiratory infection followed soon and her kidneys took a hit too, requiring 6 days on dialysis. She survived but took more than two months to recover fully.
Betty can consider herself lucky, as nearly half of patients that develop toxic epidermal necrolysis — TEN for short — succumb to the devastating condition. TEN is a rare but potentially fatal allergic skin condition caused by a number of prescription medicines. Allopurinol is one of them. Not everyone who takes the drug develops the complication though. The estimated frequency is less than 1 case per 1000 persons taking the medicine. Can we predict who is more prone? The answer is an encouraging YES, thanks to pharmacogenomics — an emerging field of medical genetics and the cornerstone of personalized medicine.
Predicting Adverse Drug Outcomes
How you react to a particular medication is, in part, determined by your genetic makeup. This is expected since genes are the ultimate drivers of our molecular destiny. Continuing with our example of allopurinol, genetic studies have shown that the presence of a variant of the HLA-B gene (known as HLA-B*5801) increases the risk of adverse skin reactions. HLA-B*5801 is found in Han Chinese at a higher frequency than other ethnic groups and, not unexpectedly, the adverse events are more common in Han Chinese. Testing for this gene before starting allopurinol is found to be a cost-effective way of preventing TEN and other skin reactions in this ethnic group. The Clinical Pharmacogenetics Implementation Consortium (CPIC) recommends such testing and it is being incorporated into clinical practice in Taiwan and Singapore, among other countries.
Allopurinol is not the only drug whose adverse effects are known to be, at least in part, genetically determined. CPIC, a project of the US Department of Health, currently provides guidelines on 25 gene-medicine pairs for which there is evidence that prior testing is useful. Pharmacogenomics, the science of tailoring patients' medications based on their genetic makeup, is increasingly making its way from research laboratories to clinical practice. The ability to prescribe 'the right medicine for the right patient' is an essential step towards personalized medicine.
The CPIC drug list includes such commonly used medicines as aspirin, the blood thinner warfarin, and anti-depressants called SSRIs. Routine susceptibility testing for them is not common practice currently, but with mounting evidence and refinements in testing, pharmacogenomics is expected to play an increasing role in clinical practice in the near future. Moreover, the application is not limited to susceptibility testing for adverse effects only. Genetic testing can be used to predict responsiveness, or the lack of it, to specific medications…
Predicting Response To Treatment
Hypertension or raised blood pressure is ubiquitous in humans, irrespective of race or ethnicity. The preferred treatment option for a newly diagnosed 47-year-old black patient, however, may be different than a 52-year-old white European, as the former is likely to respond better to one group of BP-lowering drugs than another. This difference is undoubtedly due to underlying genetic differences though we don’t exactly know them — yet. For hypertension and other common medical conditions, the role of genetics is limited mainly to theory currently. On the other hand, for cancer, genetic risk stratification and response prediction is a reality and an established clinical practice. For example, a patient with acute myeloid leukemia — the nastiest of blood cancers — with a mutated FLT3 gene is expected to have a worse outcome when treated with standard chemotherapy compared to a patient who doesn’t have the mutation. Such predictive genetic testing is routinely used in oncology to guide treatment plans.
As our understanding of the genetic basis of commonplace disorders like hypertension, diabetes, and heart diseases improves, it is plausible that such genetics-based prescription will make its way into clinical practice for these disorders as well.
Genetics and Susceptibility to Diseases
Nancy Wexler was 21 years old in 1978 when her mother, Leonore Wexler, in her fifties, was diagnosed with Huntington’s Disease (HD) — a devastating, hereditary disorder affecting the nervous system that renders the patient incapable of performing day to day activities. Children of a patient with Huntington's have a 50% chance of developing the disease. In other words, their fate is essentially decided by a coin toss. What makes the matter worse is the fact that most of the individuals carrying the culprit gene do not show symptoms of the disease until the third or fourth decade of their lives. You will only know if the odds are not in your favor when you develop the disease. Thus for children of patients with Huntington’s, life turns ‘into a grim roulette’. Or as one patient puts it ‘the terrible waiting game, wondering about the onset and the impact.’
After their mother's diagnosis and subsequent death, the waiting game began for Nancy and her sister. With no way to find out whether she was carrying the culprit gene, life was in limbo. She, however, decided to embark on a life-defining project: find the gene behind Huntington's disease and devise a way to test for it. Years of tiring yet inspiring research followed, detailed in the family memoir by Nancy's sister, Alice Wexler. Finally, in 1983, Nancy Wexler and James Gusella announced that they had mapped the HD gene to chromosome 4, making it possible to test a person's likelihood of developing the disease. This was one of the earliest genetic tests available for clinical use. With no way to prevent the development of HD if she were found to carry the gene, Nancy decided not to take the test herself. For a condition like coronary artery disease, however, such pre-knowledge can be potentially life-saving. Currently, she is 75 years old and a professor of neuropsychology at Columbia University.
Huntington's Disease is rare. Other genetic disorders like thalassemia, cystic fibrosis, or sickle cell disease are more common. Today, it is common practice to do genetic testing for these conditions before a baby is born. These genetic disorders may be more common than HD, but considered within the whole spectrum of disease burden they make up a minute proportion. The diseases responsible for the most deaths, for instance, are cardiovascular conditions and cancer. Moreover, in contrast to HD, if a person's susceptibility to heart disease is known, lifestyle modifications and perhaps medications can help reduce the chances of a fatal outcome. Our ability to predict a person's genetic risk of developing cardiac disease or cancer, however, has been limited to a handful of rare hereditary syndromes — until now, that is.
GWAS: Guilty by association
Our idea of genetics and inheritance is shaped by high-school Mendelian genetics, where a single or a few known genes act in predictable ways to determine inheritance patterns of physical characteristics and diseases. This concept holds true for a few purely genetic diseases like sickle cell disease or cystic fibrosis. The vast majority of our diseases, however, are complex and multifactorial. Their increased incidence among family members nevertheless points towards a genetic component in these so-called non-genetic conditions.
Since genes are inherited at birth, we can — theoretically — predict the risk of these diseases years or even decades before they develop. The problem, however, is that such diseases don't follow simple Mendelian inheritance: their genetic component is a product of variable contributions from a number of genes, many of which are unknown, rather than from a single known gene. In other words, these diseases are polygenic rather than monogenic.
Over the past 30 years, with genome-wide association studies (GWAS) a number of susceptibility gene variants have been discovered for common diseases like hypertension, inflammatory bowel disease, cardiac disease, and breast and colon cancer. GWAS scan a person’s whole genome (the whole set of DNA a person inherits) for an association between genetic variants and the development of a disease. Based on these studies, polygenic risk scores have been developed for various diseases including coronary artery disease, atrial fibrillation, type 2 diabetes, inflammatory bowel disease, and breast cancer. Such predictive scores can, in the near future, be used in clinical practice.
Take cardiovascular diseases, for instance. Lifestyle factors like smoking, lack of physical activity, and high lipid intake are known to increase their risk. Not all individuals who smoke and eat steaks go on to have a heart attack, however. The difference in individuals’ responses to these environmental risks — as such factors are often called — comes down to the presence or absence of susceptibility gene variants. Researchers have identified more than 200 such gene loci for hypertension and more than 50 for coronary artery disease. A polygenic risk score built from these genetic variants for the European population showed that 8% of the population had inherited a genetic predisposition conferring a threefold or higher risk of coronary artery disease compared to the general population. The same study identified that 1.5% of the population had a genetically determined threefold or higher risk of developing breast cancer.
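As a toy illustration of how such a score is computed, the sketch below sums a person's risk-allele counts weighted by per-variant effect sizes (log odds ratios), which is the basic arithmetic behind a polygenic risk score. The variant names, effect sizes, and genotype here are invented for illustration and are not taken from any published score.

```python
# Minimal polygenic risk score (PRS) sketch.
# Effect sizes are per-allele log odds ratios of the kind a GWAS estimates;
# all variant names and numbers below are hypothetical.
effect_sizes = {"rs_A": 0.12, "rs_B": 0.25, "rs_C": -0.05}

# One person's risk-allele counts (0, 1, or 2 copies per variant).
genotype = {"rs_A": 2, "rs_B": 1, "rs_C": 0}

def polygenic_risk_score(effects, allele_counts):
    """Weighted sum: effect size times risk-allele count over all variants."""
    return sum(effects[v] * allele_counts.get(v, 0) for v in effects)

score = polygenic_risk_score(effect_sizes, genotype)
print(round(score, 2))  # 0.12*2 + 0.25*1 + (-0.05)*0 = 0.49
```

In practice such scores are computed over thousands to millions of variants and then compared against a population distribution to flag, say, the small fraction of people at threefold or higher risk.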
Such genetics-based predictive risk models, if incorporated into clinical practice, can help in devising individualized, risk-stratified preventive strategies. A person known to carry genetic polymorphisms that put them at an increased risk of developing coronary artery disease will likely benefit from more extensive dietary modifications and will be more motivated to stay away from smoking.
The future of medicine is in prediction
Despite the fact that for many diseases prevention is far more effective than treatment, our current health model is geared towards treating a disease once it develops. This is in large part due to our inability to reliably predict the occurrence of diseases beforehand. With the uncovering of the genetic basis of commonplace diseases like hypertension, diabetes, and coronary artery disease, predictive testing and pre-emptive strategies are gradually becoming a reality. On the therapeutics front, similar predictive testing is paving the way for personalized medicine.
Gene therapy and CRISPR are stealing the limelight as the cool kids on the block, while the nerds — GWAS and pharmacogenomics — are quietly making their way into mainstream medical practice. Either way, the future of medicine is exciting.
Physician by profession and nerd by choice. I read & write about science, medicine, technology & programming.
Fayyaz H Zafar
|
Vegan Diets For Kids With ADHD?
It is no secret that veganism is on the rise. More and more people are making the switch to a plant-based diet for various reasons, including health, environmental, and ethical concerns. However, what many people don’t know is that a vegan diet can also be helpful for those who have ADHD! This article will explore some of the ways a vegan diet can help those who have ADHD.
Improves Focus
One of the most significant issues faced by those with ADHD is a lack of focus. ADHD makes it difficult to sit still and pay attention, making school a stressful and frustrating time for a child who has a lot of information thrown at them every day. A plant-based diet gives children more energy because they get the proper nutrients from healthy food instead of fatty fast food like hamburgers or pizza (which often give children lousy energy). Adequate nutrition also helps improve concentration and supports the brain’s proper development! So as you can see, getting proper nutrition on a vegan diet is beneficial in multiple ways!
Plant-Based Diets Equals Fewer Disruptions
If a child with ADHD is given a plant-based diet, they gain more energy and focus and cause fewer disruptions in the classroom. In addition, many children on a vegan diet lose some excess weight, which can help them feel better about themselves and less frustrated overall. Children with ADHD are often seen as disruptive to the rest of their classmates because of how hyperactive they tend to be, so fewer interruptions benefit everyone. Teachers will notice this immediately because there will be less fighting between the children and more peace in the classroom!
Lower Stress Levels
Another great benefit of giving your child a vegan diet is that they will experience lower stress levels overall. ADHD often leads to high cortisol levels in the body, which, when sustained, can lead to a host of issues, including anxiety and depression. Healthy eating on a vegan diet means that children can concentrate better at school and focus on their well-being, which lessens the stress they feel and helps them relax and enjoy other activities such as exercise or reading at home. Teachers, parents, and other adults who work with these students should encourage good habits, because this makes everyone involved much happier!
Focus Better
A child who has ADHD often has a hard time focusing on schoolwork. This can lead to low grades, which hurt their self-esteem and make them feel terrible about themselves, which in turn can cause the child to act out more, making it even harder to focus on schoolwork. A vegan diet helps by giving children the energy and concentration to do better in school, leading to higher grades! Eating vegan benefits the body and provides peace of mind, because parents know their children are eating right and getting proper nutrients. Children with ADHD require extra care with their diets because of how picky they tend to be; however, this doesn’t change the fact that a vegan diet can be incredibly beneficial for those who have ADHD.
Less Likely To Become Overweight
Many children with ADHD are overweight or obese. This is often because they eat a lot of fatty foods, which give them excessive energy at first but later have the opposite effect and make them feel horrible. Besides being bad for overall health, obesity can affect self-esteem and lead to anxiety and depression down the road. A child given proper nutrition (and fewer sugary chocolate donuts) will be more likely to lose some weight, which helps improve their mental state significantly! Plus, eating healthy is something most kids need whether or not they have ADHD.
As you can see, a plant-based diet can be beneficial for children who have ADHD. It helps them focus better and lose any excess weight they might be dealing with. Since children with ADHD often become adults who also struggle with attention issues, this is useful information to keep in mind! It’s also essential for teachers and parents of these children to encourage healthy eating habits, because it not only benefits their overall health but boosts their mood as well. This gives them more energy throughout the day, making them less likely to act out or get into trouble. Plus, you know what will happen if they don’t eat enough healthy food every day…the dreaded hanger strikes again!
|
Orphans and Vulnerable Children in Uganda
Question: What is the definition of an “orphan” according to the Ugandan Government?
Answer: The basic definition of an orphan is a child whose parents are deceased. However, for purposes of interpreting the law, an orphan is a child who has nobody to take care of him or her.
Question: What are common causes of a child being orphaned?
Answer: Children are orphaned for a variety of reasons: death of parents due to disease, political instability, starvation, lack of medical care, etc.
Question: Why are there non-orphans living in orphanages?
Answer: This is an unfortunate situation. However, non-orphans live in orphanages for several reasons. On the part of the child’s family, the reasons are mainly due to any of the following:
– Poverty: Parents voluntarily relinquish their children to orphanages because they cannot afford to take care of them.
– Ignorance: Some parents are illiterate and sign off their parental rights to orphanages without knowing the implication of this action.
– Diseases: Parents may have illnesses like HIV/AIDS and are unable to provide proper care to their children.
Question: What are Kazira Projects?
1. Clean Water For All
2. Sponsorship Program
3. Women Micro-Finance
4. Handcrafts
5. Feed Kazira Kids Program
6. Computer Studies
7. Modern Agriculture
8. Cooperatives
9. School Trips
10. Swimming Program
11. Adult Skills Training
12. Renewable Source of Energy
13. Prevention of HIV/AIDS From Mother-to-Child Transmission
14. Computer Studies
15. Building our own home
|
July 3, 2020
Korechi Innovation Inc seeks to make agricultural robots and automation as simple and accessible as possible. RoamIO is their first robot and the focus of their current development and marketing efforts.
An autonomous agricultural robot with a smaller footprint
We previously profiled DOT, a large autonomous robot designed to replace a tractor. Let’s follow that up with a look at a similar autonomous robot, also developed in Canada, but instead of replacing a tractor, RoamIO is much smaller and more agile.
It’s designed to take over some of the monotonous or repetitive tasks on a farm, without compacting the soil or requiring significant capital.
Initially developed at McMaster University’s Innovation Park, RoamIO was also made possible by a $94,000 grant from the Canadian federal government via NSERC, the Natural Sciences and Engineering Research Council of Canada. Developed by Korechi Innovations Inc., RoamIO is a product of collaboration between the public and private sector, in concert with post-secondary institutions, including Niagara College and Durham College in addition to McMaster.
Founded in 2016 by Sougata Pahari, Korechi is now based in Oshawa, and affiliated with the Spark Centre and the AI Hub at Durham College.
While the robot itself is relatively lightweight at 120 kg, it can carry as much as 450 kg and tow as much as 2,200 kg, making it a fairly powerful machine for its size.
Like most autonomous vehicles, it combines GPS with LIDAR (laser-based distance sensing) so that it can navigate a field, golf course, or vineyard.
An obvious application of RoamIO is to do seeding, as it can have a route pre-programmed, and an attachment that spreads the seed across the field. This would be far smaller than a tractor (resulting in less soil compaction) and easier than doing it manually.
Another application for RoamIO is turf or lawn maintenance, especially on a golf course, where the area of land is large but the tasks are not necessarily complex. The robot could pull a lawn mower or a golf ball retriever.
RoamIO has a growing list of supported attachments that range from those used for turf care, to sprayers and seeders for agriculture.
It features sensors that enable collision avoidance and is relatively weather proof, although that may require further testing and evidence.
The device itself is controlled via software that allows an operator to upload their field information, select the area to cover, including the desired path and exclusion zones, and then let the robot go and do its work.
A single charge of the default battery lasts 8 hours, and run time can be extended further with different modifications.
As a robot, it could operate around the clock, provided it can recharge, and could certainly operate at night in the dark, with comparatively low noise output.
With a current price tag of roughly $40,000 per unit, RoamIO is not cheap, unless you compare it to agricultural equipment and technology in general, in which case it is ironically not expensive. Similarly if there are enough tasks for this robot to perform, it could be cheaper than hiring someone, especially in the context of a golf course.
Although I do wonder about winter applications, and whether there will be ways it can be used year round. It doesn’t have a PTO attachment to operate a snow blower, but maybe it could be outfitted with a snow plow?!
At Niagara College, RoamIO was transformed into a kind of robot sentry, monitoring the vineyard, gathering data about the soil and crops, and using all of that to help manage, maintain, and inform the larger operations. This kind of open field surveillance may be more effective than having sensors installed, as the one robot can traverse the field fairly easily, and potentially cover more ground at a lower overall cost.
Similarly ground based surveillance offers advantages and capabilities that differ from drones and flying robots.
Artificial intelligence can measure much more than what the human eye can detect and robots’ abilities to work around the clock brings features to the agriculture industry that humans can’t physically complete.
Drones are difficult to use for this sort of work because they fly above the canopy, says Duncan. He wants to create a robot that works from the ground to identify factors within and under the canopy.
Although working on the ground brings better identification strategies and capabilities, it creates more challenges with manoeuvering the device around the surroundings and topography.
When working on the ground there is more of a chance for the robot to run into a tree or a pond and it’s a hard problem to solve, says Duncan.
The team, working with the original robot created and sold by Korechi, started off with attaching a camera and some temperature probes to a radio controlled car but had issues with having to continuously follow the car to control it and they wanted the device to be completely automatic. They needed something with weight and the ability to plow through thick grass, navigate itself and have enough charge to run 24 hours a day.
This is a great example of why collaboration with Universities and Colleges helps a company like Korechi further develop their technology and capabilities.
The agricultural industry also currently faces the dual problems of labour scarcity and aging demographics. Automated tools have an important role to play, both in performing menial tasks that humans don’t want to do, while also making it possible for older farmers to maintain their operations.
Kind of raises the question as to whether we’ll see the use of exo-skeletons in agriculture as a hybrid form of automation, humans, and machines.
RoamIO is still in its relative infancy, but it does provide an interesting example of how automation in agriculture has considerable potential. We’ll keep an eye on this device and the company that produces it, and will provide further updates as it advances.
Ottawa Valley Smart Farms
|
7 Self Care Tips for the Exhausted Empath
Empathy is putting yourself in other people’s shoes. It is an important factor in responding appropriately to a situation. Empaths, however, take this further: they understand others at a deep level, feeling what other people are going through in their own bodies. Hence, empaths can be highly compassionate towards others.
But, like anyone, too many stimuli may overwhelm them mentally, emotionally, and psychologically. Stimuli such as crowds, media, and negativity can burn them out, an effect attributed to their mirror neurons.
Want to learn if you’re an Empath? Check this article out: 5 Signs You Are An Empath.
If you’re an exhausted empath, this is the right place for you. Most of the time you only have 3 things to work on: GROUNDING, SHIELDING, AND SETTING HEALTHY BOUNDARIES
But how can you exactly do these? And what are these? Here are thorough 7 self-care tips categorized for exhausted empaths.
Grounding
Grounding is the act of focusing on the present: what is happening to you physically, mentally, and emotionally right now. The brain is made up of multiple networks of functionally correlated brain areas, including the default-mode network (DMN), which is involved in self-referential processing. Grounding techniques reduce activity in the DMN, quieting that self-referential chatter. Below are two simple ways to ground an exhausted empath:
1. Spend time with Nature
Relaxing or taking a walk in nature can bring peace to an empath. Research suggests it can reduce negative feelings such as anger, fear, and stress, the very feelings that may overwhelm an empath, partly because green surroundings and beautiful scenery can reduce production of stress hormones. In one study using functional Magnetic Resonance Imaging (fMRI) to measure brain activity, when participants viewed nature scenes, the parts of the brain associated with empathy and love lit up. Empaths who live in urban areas may decide to add a few plants to their home to feel more at ease.
2. Meditate
Meditation involves maintaining attention on what is currently happening and away from distractions. Empaths may need to practice this consistently, as it enables them to maintain a peaceful state of mind. The part of the brain which handles human emotion and behavior is the amygdala. Desbordes’ fMRI research showed changes in amygdala activity in those who learned to meditate: activity stayed steadier both while they were meditating and while they were performing everyday tasks. Since the amygdala is often associated with the body’s fear and stress responses, the results suggest that meditation training can produce lasting changes in how the brain responds to stress.
Functional MRI (left) showing activation in the amygdala when participants were watching images with emotional content before learning meditation. After eight weeks of training in mindful attention meditation (right), note the amygdala is less activated after the meditation training. Courtesy of Gaelle Desbordes (https://news.harvard.edu/gazette/story/2018/04/harvard-researchers-study-how-mindfulness-may-change-the-brain-in-depressed-patients/)
Healthy Boundaries
Boundaries aren’t just a sign of a healthy relationship; they’re a sign of self-respect. For many of us, setting healthy boundaries is a new concept and a struggle. Although boundaries vary from one person to another, we must begin somewhere by identifying them, right? Here are 2 simple steps to begin your journey of setting healthy boundaries as an exhausted empath:
3. Self-awareness: Does this make me feel uncomfortable?
Knowing where you stand helps you make good boundaries. To do this, identify your physical, emotional, mental, and spiritual limits. Most of the time you can identify your limits with your feelings. Feelings such as discomfort and resentment mostly resemble what you can tolerate and not accept.
4. Assert your limits
After identifying your limits, communicate them to others and to yourself as well. In a respectful way, let the other person know what in particular is bothersome to you. For this to work, you must follow through. This doesn’t happen overnight, but know that every time you assert your limits, you are respecting yourself.
“Setting boundaries takes courage, practice and support.” – Psychologist and Life Coach Dana Gionta Ph.D.
Shielding
Shielding means protecting yourself from danger, risk, or unpleasant experiences. Aside from physical harm, too much negativity and too many uncomfortable feelings may lead to a burned-out empath. Shielding is putting your healthy boundaries into action. Here are 3 ways exhausted empaths can protect themselves:
5. Unplug from social media
Media brings excitement as well as sadness, fear, and uncertainty. Unplugging from platforms such as Facebook, Instagram, and Twitter may bring peace to an empath. Given that empaths absorb others’ negative energy, an empath’s mind may become biased towards the negative, which can impair one’s ability to sleep and to cope with traumatic events. A social media detox may make you antsy at first, but with discipline and perseverance, you will reap what you sow.
6. Practice saying No
Most of the time empaths are people pleasers. Do not let other people use you as a doormat. As Dr. Susan Cain said, “To survive and thrive, you need to set limits with people.” Even a simple “No” is perfectly fine! Always remember to acknowledge your feelings and others as well. “I’ll respond to you in a while when I am in the right mindset.” or be straightforward and say you understand them but you need to set limitations to respect yourself. Stand up for yourself and take care of yourself! You are not responsible for anybody’s pain. You are your own responsibility.
7. Power of your voice: Can you control it or not?
As empaths absorb the energy around them, one must think “Can I control this or not?” If not, let it go. Dr. Susan Cain firmly suggests to repeat the mantra while slowly breathing in and out: Return to sender, return to sender, return to sender. She believes the power of your voice can release the negativity out of your body. If you can control it, breathe in and out slowly. You need to give yourself time to think. Believe me, thinking of a solution while you’re overwhelmed is the worst.
In a world filled with negativity and chaos, it is important to look after yourself, whether you are an empath or not. Especially when it comes to your physical and mental well-being. When we’re in a better place, mentally and physically, we can be better for other people.
Some of the techniques presented may not give you ease overnight or immediately, but know that these things take time and will help you feel less exhausted as an individual. We all have different ways of coping with feeling burned out. No one is the same. However, self-care isn’t limited to what we do after or during the burn out. But what we do to prevent feeling exhausted matters the most.
By applying these techniques to your life, you will feel more at ease and less exhausted when the time comes. We at Psych2Go aim to spread awareness and support, and to educate you about the importance of your own and others’ mental well-being, especially during this pandemic.
Spread positivity and love everyone! Most importantly, take care of yourselves.
Garrison, K. A., Zeffiro, T. A., Scheinost, D., Constable, R. T., & Brewer, J. A. (2015). Meditation leads to reduced default mode network activity beyond an active task. Cognitive, affective & behavioral neuroscience, 15(3), 712–720. https://doi.org/10.3758/s13415-015-0358-3
Margarita Tartakovsky, M. (2016, May 17). 10 Ways to Build and Preserve Better Boundaries. Retrieved January 30, 2021, from https://psychcentral.com/lib/10-way-to-build-and-preserve-better-boundaries#1
Cain, S. (2017, June 13). 9 Self-Protection Strategies for Empaths. Retrieved January 30, 2021, from https://www.quietrev.com/9-self-protection-strategies-for-empaths/
Refuge, E. (Director). (2019, November 13). 9 Self-Care Tips for Exhausted Empaths [Video file]. Retrieved from https://www.youtube.com/watch?v=hTVno82PD7I&feature=youtu.be
|
Catching Kites in Flight
By Anne Cissel; Photos by Mac Stone
These birds prefer life in the air. But to help them survive, scientists must bring them down to Earth.
Swallow-tailed kites do nearly everything while flying: hunting, eating, drinking, even bathing! These light and graceful raptors, or birds of prey, barely flap their wings to stay up in the air. A swallow-tailed kite is the size of an average hawk but weighs half as much—less than a pound. That’s why kites look as if they’re floating, not flying! Soaring high above the trees, they hardly ever stop to land.
Many swallow-tailed kites spend their summers in the southeastern part of the United States (see map). They’re also found in parts of Central and South America. Look for them flying over land such as the marsh seen here in the background. They also like open forests and swamps. But there are far fewer of these acrobatic birds than there once were. In order to help the kites, scientists need to learn more about the threats the birds face. And that means getting up close and personal with these high-flyers!
Swallow-tailed kites are built to chase and catch their favorite meals: flying insects. Their forked tails help them make tight turns. Sometimes the birds will even roll backward to catch an insect that is flying behind them.
Surprisingly, they often munch on insects that bite or sting, especially wasps! It gives new meaning to the phrase “grab a bite,” doesn’t it? Luckily, their feathers and the tough skin on their legs and feet protect them from stings. Their stomachs have thick linings, too. But most of the time, kites just pull out any stingers before gulping the insects down!
One thing kites can’t do while flying is lay eggs. But they do nest as far from the ground as possible—sometimes way up in trees more than 100 feet tall. That’s about as tall as a 10-story building!
The kites start nesting in March. A mother and father build the nest together and take turns keeping the eggs warm. When the chicks hatch, the male hunts while the female stays in the nest to protect her young. The chicks usually get bigger meals than the adults do, such as treefrogs, lizards, baby birds, and snakes. Mom uses her sharp beak to tear it all up into smaller pieces for her chicks.
Scientists sometimes climb all the way up to a nest to check on the chicks’ health. They also place cameras near the nest so they can observe from down below.
Most birds of prey like to live alone, but kites nest in “neighborhoods” with other kite families. In late summer, swallow-tailed kites leave the United States to fly to South America. Before heading south, thousands of them gather in areas to roost, or rest, together. They have to take a breather before the big trip ahead!
Swallow-tailed kites used to live in 21 U.S. states. Now they are found in only seven. That’s because farms, buildings, and roads have taken over so much of the kites’ natural habitat. Scientists at the Avian Research and Conservation Institute study kites to learn how best to help them. To track their movements, scientists carefully attach electronic transmitter “backpacks” onto the birds. But putting a backpack on a bird that hardly ever touches the ground is not easy! It’s time to meet the surprise helper in this story: a great horned owl.
How does a completely different bird of prey help capture a kite? The owl is used as a lure. This type of owl preys on the kites’ eggs and chicks—sometimes even a sleeping adult! When a kite sees this enemy near its nest, it flies toward it to scare it away.
The owls that are part of this project were injured and nursed back to health by humans. The owls are used to being handled by people. So they stay calm when they are leashed to a perch near a kite’s nest.
When a kite flies toward the owl, it gets caught in a special net the scientists have set up. Right away, the kite is carefully taken out of the net. The scientists give the bird a health check, put an ID band on, and attach the electronic transmitter. Then the kite is released to fly free once again.
These transmitters have already shown that some kites migrate all the way to southern Brazil—a trip of nearly 5,000 miles! Learning more about these fantastic flyers will help scientists make sure more of them glide and soar in American skies for a long time to come.
|
Intellectual Conviction Scale
Rokeach & Eglash, 1956
1. The reason we should show consideration for others is that they will reciprocate and show consideration for us.
2. Radio and TV programs should employ only loyal Americans, so as not to lose their audiences.
3. What is wrong with socialism, as seen in England, is that it results in severe rationing.
4. The reason you should not criticize others is that they will turn around and criticize you.
5. The American economic and political system is preferable to the Russian, because the Soviet system means long hours at poor wages.
6. The fallacy in Hitler's theories is shown by the fact that, after all, he lost the war.
7. The reason that criticism is a poor policy is that it prevents you from making and keeping friends.
8. Do unto others as they do unto you.
9. It's better not to talk about people behind their backs, because sooner or later it gets back to them, and you get a reputation as a gossip.
10. Negroes deserve equal treatment, because there is as yet no scientific evidence showing that there is any real difference in body odors.
11. The fact that God exists is proven by the fact that so many millions of people believe in Him.
12. The trouble with Communism is that, in all of human history, it has never worked.
13. Taxation without representation is wrong because sooner or later people rebel.
14. If a man fails to practice what he preaches, there's something wrong with what he preaches.
15. You should only criticize others when you are above reproach yourself.
16. The reason it's better to let people make up their own minds is because they won't follow your advice anyway.
17. Whether it's all right to manipulate people or not, it is certainly all right when it's for their own good.
18. Appreciation of others is a healthy attitude, since it is the only way to have them appreciate you.
19. Generosity is a healthy way of life, because he who casts his bread upon the waters shall have it returned ten-fold.
20. Whether one approves of filibustering or not, it is all right if it's for a good cause.
Response options: Strongly agree, Agree, Neither agree nor disagree, Disagree, Strongly disagree
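Scoring an instrument like this is usually just a matter of mapping each response option to a number and aggregating across items. The sketch below assumes the conventional 1–5 Likert mapping and simple summation; Rokeach and Eglash's actual scoring key is not given here, so treat the mapping as an assumption for illustration only.

```python
# Sketch of Likert-style scoring. The 1-5 mapping and the summing
# convention are assumptions, not Rokeach & Eglash's published key.
LIKERT = {
    "Strongly agree": 5,
    "Agree": 4,
    "Neither agree nor disagree": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

def total_score(responses):
    """Sum the numeric value of each item response."""
    return sum(LIKERT[r] for r in responses)

answers = ["Agree", "Strongly disagree", "Neither agree nor disagree"]
print(total_score(answers))  # 4 + 1 + 3 = 8
```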
This instrument can be found at:
Rokeach, M., & Eglash, A. (1956). A scale for measuring intellectual conviction. The Journal of Social Psychology, 44, 135-141.
Zagona, Salvatore V. (1959). Dogmatism and a theory of interdependence between libertarian and equalitarian processes: A study in reciprocal evaluation. University of Arizona, Doctoral Dissertation.
Robinson, John P., & Shaver, Phillip R. (1969). Measures of Social Psychological Attitudes. Institute for Social Research, The University of Michigan.
|
How do I ssh from PuTTY to Linux?
How do I connect to a Linux server using PuTTY?
When you start PuTTY, the main session setup screen will appear.
1. Begin by entering the hostname (or IP address) of the server you are trying to connect to. …
2. By default, the port will be set to 22, as this is the standard port for SSH on most servers. …
3. Make sure the connection type is set to SSH.
How do I SSH using PuTTY?
How to connect PuTTY
1. Start the PuTTY SSH client, then enter the SSH IP and SSH port of your server. Click the Open button to continue.
2. A login prompt will appear and ask you to enter your SSH username. For VPS users, this is usually root. …
3. Type your SSH password and press Enter again.
How do I SSH on a Linux machine?
How to connect via SSH
1. Open the SSH terminal on your machine and run the following command: ssh username@host_ip_address. If the username of your local machine matches the one of the server you are trying to connect to, you can type: ssh host_ip_address. …
2. Type your password and press Enter.
How do I connect to Unix using PuTTY?
Accessing the UNIX server using PuTTY (SSH)
1. In the “Hostname (or IP address)” field, type the server name and select Open:
2. Type your ONID username and hit enter:
3. Type your ONID password and hit enter. …
4. PuTTY will ask you to select the type of terminal.
Can I connect to Linux server from Windows without PuTTY?
Yes. Recent versions of Windows include a built-in OpenSSH client that you can run from PowerShell. The first time you connect to a Linux computer, you will be prompted to accept the host key. Then enter your password to log in. After logging in, you can run Linux commands to perform administrative tasks. Note that if you want to paste a password into the PowerShell window, you have to right-click and then hit Enter.
How do I connect to PuTTY?
The “putty.exe” download is good for basic SSH.
1. Save the download in your C:\WINDOWS folder.
2. If you want to link to PuTTY on your desktop:…
3. Double-click the putty.exe program or desktop shortcut to start the application. …
4. Enter your connection settings:…
5. Click Open to start the SSH session.
What is the SSH command in PuTTY?
Putty is an open source SSH client used to connect to a remote server. … To connect to your server from your PC, you can use Putty and write simple SSH commands to perform different basic actions such as creating folders, copying them, etc.
What are PuTTY commands?
List of basic PuTTY commands
• “ls -a” will show you all files in a directory, including hidden ones.
• “ls -lh” will show the files along with their sizes in human-readable form.
• “ls -R” will recursively display the subdirectories of the directory.
• “ls -alh” will show you more details about the files contained in a folder.
How do I start PuTTY on Linux?
1. Log in to Ubuntu Desktop. Press Ctrl + Alt + T to open the GNOME terminal. …
2. Run the following command in terminal. >> sudo apt-get update. …
3. Install PuTTY using the following command. >> sudo apt-get install -y putty. …
4. PuTTY must be installed. Run it from the terminal using “putty” as the command, or from the dash.
How do I SSH on a device?
How to configure SSH keys
1. Step 1: generate SSH keys. Open the terminal on your local machine. …
2. Step 2: Name your SSH keys. …
3. Step 3: Enter a passphrase (optional) …
4. Step 4: Move the public key to the remote machine. …
5. Step 5: test your connection.
What is the ssh command in Linux?
SSH command in Linux
The ssh command provides a secure encrypted connection between two hosts over an insecure network. This connection can also be used for terminal access, file transfers, and for tunneling other applications. X11 graphics applications can also be run securely over SSH from a remote location.
How do I enable SSH?
Enabling SSH on Ubuntu
1. Open your terminal using the keyboard shortcut Ctrl + Alt + T or by clicking the terminal icon and install the openssh-server package by typing: sudo apt update sudo apt install openssh-server. …
Can’t write to the PuTTY terminal?
PuTTY configuration
If PuTTY does not seem to recognize the numeric keypad input, disabling the application keyboard mode will sometimes solve the problem: Click the PuTTY icon in the upper left corner of the window. In the drop-down menu, click Change settings. Click Terminal and then Functions.
Is PuTTY a server?
PuTTY (/ˈpʌti/) is a free and open source terminal emulator, serial console, and network file transfer application. It supports various network protocols, including SCP, SSH, Telnet, rlogin, and raw socket connection. … PuTTY was written and is maintained primarily by Simon Tatham, a British programmer.
How do I run a command in PuTTY?
How to start an SSH session from the command line
1. Enter the path to Putty.exe here.
2. Then write the type of connection you want to use (i.e. -ssh, -telnet, -rlogin, -raw).
3. Enter the username …
4. Then type ‘@’ followed by the IP address of the server.
5. Lastly, type the port number to connect to and then press Enter.
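The five steps above amount to concatenating a few fields into one command line. As a sketch, here is how they fit together in Python; `build_putty_command` is a hypothetical helper written for illustration, not part of PuTTY itself:

```python
# Assemble the PuTTY command line described in steps 1-5.
# build_putty_command is a hypothetical illustration helper.
def build_putty_command(putty_path, user, host, port=22, conn_type="-ssh"):
    """Return the argument list for: putty.exe -ssh user@host port"""
    return [putty_path, conn_type, f"{user}@{host}", str(port)]

cmd = build_putty_command("putty.exe", "root", "203.0.113.10", 22)
print(" ".join(cmd))  # putty.exe -ssh root@203.0.113.10 22
```

On a real system, a list like this could be handed to `subprocess.Popen` to launch the session.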
Let me know in the comments what you think about this blog post on how to SSH from PuTTY to Linux. Did you find it helpful? What questions do you still have? I’d love to hear your thoughts!
#ssh #PuTTY #Linux
|
featured image
What Is Bitcoin Technology & How It Works?
Bitcoin is a form of digital money that is created and held electronically. No single entity controls the currency, and it is not printed like dollars. New coins are generated when people run computers with software that solves mathematical problems. Bitcoin is therefore the best-known example of a growing category of digital money called cryptocurrency.
Bitcoin technology makes e-money, so we can buy things electronically. In that sense the coin is similar to dollars or other currencies that are traded digitally.
What sets this digital currency apart from conventional money is decentralization. It came from a software developer known as Satoshi Nakamoto, whose idea was an independent electronic payment system, secured by mathematical proof, with low transaction fees. The currency exists only in digital form, not in print. It is produced by a distributed network of participants whose machines process transactions (a process known as mining). The same network also processes payments made with the currency, making Bitcoin its own payment network.
Common currencies have traditionally been backed by gold or silver: in theory, if you gave money to the bank, you could get gold in return. Bitcoin is not backed by gold; it stands on mathematics. Coins are produced by following the mathematical formulas of an open source software program.
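The "mathematical formulas" that mining relies on can be illustrated with a toy proof-of-work loop. This is only a sketch — real Bitcoin mining hashes block headers with double SHA-256 at an enormously higher difficulty — but the principle is the same: search for a nonce that makes the hash meet a target.

```python
import hashlib

def mine(block_data: str, difficulty: int = 2) -> int:
    """Toy proof-of-work: find a nonce whose SHA-256 digest of
    block_data + nonce starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("Alice pays Bob 1 coin", difficulty=2)
# Verification is cheap: anyone can check the winning nonce with one hash.
digest = hashlib.sha256(f"Alice pays Bob 1 coin{nonce}".encode()).hexdigest()
print(digest.startswith("00"))  # True
```

Finding the nonce takes many attempts, while checking it takes one; that asymmetry is what lets a decentralized network agree on who produced a valid block.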
Features that set Bitcoin apart from government-backed currencies:
1. Decentralized System: Bitcoin is not controlled by any one central authority. The machines that mine the currency and process transactions together make up the network, so no central authority can bring the system down.
2. Simple Setup Process: Regular banks make you go through a whole lot of processes to open an account. Setting up a cryptocurrency wallet, however, is straightforward and free.
3. Anonymous and Transparent Usage: A user can hold many Bitcoin addresses, and none of them are linked to any personally identifying information. At the same time, every transaction is recorded in a large public ledger called the blockchain.
4. Meager Transaction Fees: Bitcoin charges only a minimal fee for international transfers.
5. Fast Network Process: Payments on the Bitcoin network are processed quickly.
6. Non-Refundable: Once sent, Bitcoins can never be refunded; transactions are irreversible.
This virtual currency is thus changing our global digital economy as we know it, and the change can greatly benefit those who become part of it.
|
What are the alternatives to standardized testing?
Michigan recently increased the time spent on mandatory testing for eleventh graders, in some cases requiring eight partial days of testing. Educators across the country are concerned about the growing number of tests kids must take and how the time spent on them detracts from actual learning. But if you cut back on standardized tests, what can we do to gauge student learning and, in turn, teacher effectiveness?
Anya Kamenetz, author of Our Schools are Obsessed With Standardized Testing – But You Don't Have to Be and NPR's lead digital education reporter, joins us to discuss the state of standardized testing today as outlined in her recent article for NPR.
In 2001 No Child Left Behind created the federal requirement for standardized testing in grades three through eight, and is a large factor in the amount of tests students are required to take today.
Kamenetz has come up with alternatives to accompany or replace standardized testing's stronghold on how we evaluate students and schools.
1) Sampling. Instead of requiring all students to participate in disruptive testing every year a smaller number of students that are statistically representative of the school or district would be used.
2) Stealth assessment. Learning software, some already being used by schools, gives us the ability to keep track of students' every answer throughout the year. Incorporating large amounts of data can lead to a richer and more detailed picture of student performance that can also aid in improving instruction.
3) Multiple measures. Many schools are already using longitudinal data systems to track students from pre-K through high school. Incorporating information taken from different school environments over time can help to provide feedback on how schools are working and changing.
3a) Social and emotional skill surveys. Collecting information on how students feel about being in the building, their hopes, engagement and well-being can lead to a greater understanding of how to improve learning environments.
3b) Game based assessments. Using video game-like simulations can help to gauge how kids make decisions and understand complex systems by compiling information as they play.
3c) Performance or portfolio-based assessments. Projects and presentations created by students individually or with a group can help demonstrate kids’ skills and hands-on experience in various areas of study.
4) Inspections. School inspections are still used widely in the UK. Not so different from a health inspector, a team of experts usually made up of experienced educators, visit schools and pose questions that can lead to school's self-reflection and goal-setting.
No Child Left Behind and the standards it has set for testing are overdue for re-authorization. Kamenetz says, "There is a surprising consensus on the right and the left: teachers unions, the Democratic Party, and the GOP that there may be too much testing going on."
*Listen to our conversation with Anya Kamenetz on Stateside at 3 p.m.
|
How Changes in the Housing Market Affect Home Equity
Updated: Mar 4, 2021
Most people understand that having equity in a home is a good thing and that increases in home values contribute to increased equity. Beyond that though, it can get a little confusing. One way to understand equity is in terms of the accounting equation. (Stay with me here, even if you dislike accounting or math.)
Assets = Liabilities + Equity
In words, this equation says that to buy an asset (like a home), you can borrow money to buy it (liability), you can pay for it outright (equity), or use a combination of the two. A combination is most common for home buying.
Let’s say you bought a house for $200,000 and you put down 20% ($40,000). Here’s how the equation would look:
Assets = Liabilities + Equity
$200,000 = $160,000 + $40,000
As you can see, initially when you buy a home the amount of your down payment is equal to your equity. Let’s look at some scenarios that affect homeowner’s equity.
What if home values immediately increased 10%? The home’s value would increase to $220,000:
Assets = Liabilities + Equity
$220,000 = $160,000 + $60,000
In this example, a 10% increase in the home’s value leads to a homeowner’s equity increase of 50% ($40,000 to $60,000). Over time, the home’s value should increase and at the same time the loan is getting paid down. After some time, for example, the home might be worth $260,000, and you might owe $130,000:
Assets = Liabilities + Equity
$260,000 = $130,000 + $130,000
In this example, the homeowner would own 50% of the home.
What happens, though, if home values decrease 10% immediately after purchase?
Assets = Liabilities + Equity
$180,000 = $160,000 + $20,000
A 10% drop in the home’s value leads to a 50% decrease in homeowner’s equity ($40,000 to $20,000)! This is why lenders require a down payment of 20% if the buyer wants to avoid paying mortgage insurance. If this homeowner were to stop paying their mortgage, the bank would foreclose on the home and become the owner of a $180,000 house, even though it only loaned $160,000.
Let’s look at the same scenario with a 3.5% down payment, which is acceptable for FHA loans:
At purchase:
Assets = Liabilities + Equity
$200,000 = $193,000 + $7,000
With a 10% housing market crash:
Assets = Liabilities + Equity
$180,000 = $193,000 + (-$13,000)
In this scenario, if the lender foreclosed, they would own a home worth $180,000 that they loaned $193,000 on. The buyer, however, was paying mortgage insurance premiums, so the insurer pays off the mortgage and the lender is protected. The insurer collects enough mortgage insurance premiums from homeowners that do not default to cover the mortgages of those that do.
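All of the scenarios above are the same identity rearranged: Equity = Assets − Liabilities. A short Python sketch using the article's own numbers shows how a small move in value produces a large move in equity:

```python
def equity(home_value: float, loan_balance: float) -> float:
    """Rearranged accounting equation: Equity = Assets - Liabilities."""
    return home_value - loan_balance

# 20% down on a $200,000 home:
start = equity(200_000, 160_000)        # $40,000
after_rise = equity(220_000, 160_000)   # $60,000: a 10% rise, +50% equity
after_fall = equity(180_000, 160_000)   # $20,000: a 10% fall, -50% equity

# 3.5% down (FHA) with the same 10% fall leaves negative equity:
fha_after_fall = equity(180_000, 193_000)  # -$13,000

print(start, after_rise, after_fall, fha_after_fall)
# 40000 60000 20000 -13000
```

The leverage works both ways: the smaller the down payment, the larger the swing in equity for a given change in the home's value.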
Historically, housing prices have risen consistently. There have been dips in the housing market from time to time, with the largest being in 2008. Some homeowners had so much negative equity in their homes that they stopped making their payments and simply walked away. Others continued to make their payments until the housing market rebounded. Some homeowners bought after the crash, like me, and benefitted from the rising housing market.
Buying a home means taking on risk. The market might go up, or the market might go down. These examples illustrate the risk you take when you buy a home and explain the ways a homeowner’s equity can increase or decrease.
What do you think? Is home equity easier or more difficult to understand than you thought?
For ways homeowners can borrow their own equity, check out this blog, Home Equity Line of Credit vs Home Equity Loan.
|
Site Overlay
Interactive Know-how
Science and Technology
Some basic premises – typically formed by leaders and supported by the led – exercise the collective conscience of the led in so far as they stimulate a willed development. The evolution of the role of technology is tied to scientific and technological advances in the field of information technology, the pressures of an increasingly competitive environment, and changes in the design of systems for managing the business.
They are instead based on differences between students with a background in STEM, problem-solving, and hands-on skills learned from childhood play and life experience, and those who haven’t had the same kind of exposure. The USA has done a great deal in this field; it has even introduced nanotechnology as a subject in various schools.
It is a mind-numbing task when you consider the billions of communications going on around the globe at any given time, but the science of computer forensics is constantly advancing every bit as quickly as – or sometimes even faster than – the technology it is responsible for investigating.
Many scientists and researchers use appropriate science journals to publish their own work in the form of a short scientific paper of 5 to 30 pages. Science and technology challenge intellectual property systems, particularly patent laws.
Dive Computer Algorithms
Science and Technology
Contact lenses are one of the most important innovations of science and technology. Some thought that the ability to buy and operate modern technological products qualifies as advancement in science and technology. Students need solid knowledge and understanding of physical, life, and earth and space science if they are to apply science.
Future technology these days is no longer concentrated only on the benefit of humans. Researching an area related to nanotechnology can lead to new fields of technology and discovery. Apart from having a degree in a science-related technology, candidates also need some on-the-job training, an essential criterion for getting most jobs in the field of food science and technology.
The history of hobbies is very old. This is the area of science where different scientific approaches and methodologies are combined in order to study information technology. There have been advances in medical care through the development of science and technology. According to a report released on May 16, 2013, in a major medical breakthrough, scientists have for the first time converted human skin cells into embryonic stem cells.
Science and Technology
The definition of knowledge has expanded with the advance of technology. Right after the research was carried out, an alarm was raised among many washing machine companies; one of them was Samsung Electronics, which has been very active in promoting the Silver Nano Health System in its washers and air conditioners.
Scientists believe that modern science is very effective at making the world green. As a result of the evolution of nanotechnology, it has become possible to develop various methods that may help people in the future. There are a number of developments taking place in the fields of medicine, physics, chemistry and other basic sciences on account of nanotechnology.
The LIS is a software system designed to handle all the information and data involved in supporting the various equipment and instruments in a modern laboratory.
|
Data Safety and Safety
Data protection concerns the intimate relationship among the acquisition and distribution of data, technology, the public’s expectation of privacy, and the legal and social challenges surrounding them. It is also sometimes referred to as data security or data privacy. This article discusses the range of threats to which data is vulnerable and the various strategies followed to ensure the safety of personal data. It briefly outlines the issues and problems surrounding data security. Finally, a few methods to secure your data in the modern age are given.
Personal information privacy, also called personal space, refers to the privacy of data maintained by an individual, typically a person who works for a company. This kind of information privacy is mostly related to internet usage. The internet is a major source of data exposure, and many new tools designed to make it easier to guard against the theft of information from the computer are being created. Several of these tools work in the cloud.
Info security is related to the privacy of information placed by a individual on a laptop. The idea in back of data proper protection is to produce it difficult with regards to an employee of stealing company or perhaps user data without a pass word or equivalent security essential. In theory, this will make that more difficult for the purpose of the business treatments to squander any offences of prospect against the organization or against it is customers. For instance, if the company store’s customer credit card details in a secure server nonetheless gives pretty much all employees access to this info (even for the point of viewing the balance), it is easy to imagine how the workers might improper use the data or use it for his or her own needs without any authorization by the customer. Therefore , all employees must be trained in data protection and given strong passwords for his or her computers and other devices that they can use to gain access to company information.
There are two broad means of ensuring data privacy, and one approach is to make the data secure through encryption using the most secure technology available. Encryption transforms data out of its readable, unencrypted state before transmission. A company may have to pay a reasonable fee for an encryption service. However, it ensures privacy better than any other means and has a higher rate of success. This is why it is considered the best practice for data protection and encryption.
Businesses and individuals must consider whether they require special IT support and the cost of maintaining a good, up-to-date security program for protecting their data privacy. Remember that no security system will be completely effective in protecting your information privacy, so it is wise to use other measures as well. A great deal of importance is placed on data privacy and the need to protect private information at work and at home. Every business should have a legal agreement in place regarding information privacy and ensure that it is honored.
|
Stored Procedures: Returning Data Using a Result Set
For this article, we are going to examine the benefit of using stored procedures that simply return a result set (i.e. a stored procedure that has a SELECT statement).
Before we do a shallow dive into this functionality, it is worth noting that SQL Server has 3 methods of returning data from a stored procedure: 1) result sets, 2) output parameters, and 3) return codes.
To get a quick overview of these methods, refer to the following documentation provided by Microsoft. We are going to focus on the 1st method, result sets.
Why would we want to return a result set from a stored procedure when we could easily use a view or a table valued function?
For one, these types of stored procedures work very well with Azure Data Factory, as described in this document here.
Looking at the source options of an ADF “Copy data” activity, we can see the option to use a table, query or a stored procedure.
The stored procedure gives us the benefit to do all sorts of data manipulation activities before performing the SELECT statement to be used by the copy activity. We can create a complicated extract using temporary tables, insert into log tables, or delete and update data. We can then send a result set over to ADF all within one single database object, saving us the need to create any views or permanent tables and thus reducing our database object footprint.
If you are unfamiliar with the functionality of returning a data set in a stored procedure, here is an example in its most simplest form (and remember, you can parameterize the stored procedure to add reusability for different reporting needs).
CREATE PROCEDURE spGetEmployees
AS
BEGIN
    SELECT EmpID, EmpName
    FROM Employees;
END;
The simplest execution is the following:
EXECUTE spGetEmployees;
If you want to store the results of the stored procedure in a table or a table variable we use the following syntax.
CREATE TABLE #TempResults
(
    EmpID INTEGER,
    EmpName VARCHAR(100)
);
INSERT INTO #TempResults EXEC spGetEmployees;
SELECT * FROM #TempResults;
And here is the syntax for inserting into a table variable.
DECLARE @vTableVariable TABLE(EmpID INTEGER,EmpName VARCHAR(100));
INSERT @vTableVariable EXEC spGetEmployees;
SELECT * FROM @vTableVariable;
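As mentioned earlier, parameters make these procedures reusable for different reporting needs. As a sketch only — the procedure name and the `DeptID` column are assumptions for illustration, not objects from the examples above — a parameterized variant might look like:

```sql
-- Sketch: a parameterized variant of spGetEmployees.
-- The procedure name and the DeptID column are hypothetical.
CREATE PROCEDURE spGetEmployeesByDept
    @DeptID INTEGER
AS
BEGIN
    SET NOCOUNT ON;  -- keeps row-count messages out of the result stream
    SELECT EmpID, EmpName
    FROM Employees
    WHERE DeptID = @DeptID;
END;
GO

EXECUTE spGetEmployeesByDept @DeptID = 10;
```

The parameter value can be supplied from the calling pipeline, so one procedure can serve several different extracts.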
Hope this helps in understanding the benefit of stored procedures that simply return a result set and where they can best be used.
Also, if you haven’t done so, review table-valued functions as well and take some time to understand where to best use tables, views, table valued functions, and stored procedures that return a result set.
Happy coding!!
|
Skip to Main Content
Style Guides - Citation Resources: Oral Citations
When Are They Used
Within speeches
Within oral presentations
Why Are They Used
To convince your audience that you are a credible speaker.
To prove that your information comes from reliable sources.
To give credit to others for their ideas, words, and data to avoid plagiarism.
To leave a path for your audience to allow them to locate your same sources.
How Are They Used
Ineffective: “Margaret Brownwell writes in her book Dieting Sensibly that fad diets telling you ‘eat all you want’ are dangerous and misguided.” (Although the speaker cites an author and book title, who is Margaret Brownwell? No information is presented to establish her authority on the topic.)
Better: “Margaret Brownwell, professor of nutrition at the University of New Mexico, writes in her book, Dieting Sensibly, that …” (The author’s credentials are clearly described.)
Ineffective: “An article titled ‘Biofuels Boom’ from the ProQuest database notes that midwestern energy companies are building new factories to convert corn to ethanol.” (Although ProQuest is the database tool used to retrieve the information, the name of the newspaper or journal and publication date should be cited as the source.)
Better: “An article titled ‘Biofuels Boom’ in a September 2010 issue of Journal of Environment and Development” notes that midwestern energy companies…” (Name and date of the source provides credibility and currency of the information as well as giving the audience better information to track down the source.)
Ineffective: “According to, possible recovery from autism includes dietary interventions.” (No indication of the credibility or sponsoring organization or author of the website is given)
Better: “According to pediatrician Jerry Kartzinel, consultant for, an organization that provides information about autism treatment options, possible recovery from autism includes dietary interventions.” (Author and purpose of the website are clearly stated.)
Note: some of the above examples are quoted from:
Metcalfe, Sheldon. Building a Speech. 7th ed. Boston: Wadsworth, 2010. Google Books. Web. 17 Mar. 2012.
|
Arsenical bronze
Since copper ore is often naturally contaminated with arsenic, the term "arsenical bronze" when used in archaeology is typically only applied to alloys with an arsenic content higher than 1% by weight, in order to distinguish it from potentially accidental additions of arsenic.
Origins in pre-history
Although arsenical bronze occurs in the archaeological record across the globe, the earliest artifacts so far known have been found on the Iranian plateau in the 5th millennium BCE. Arsenic is present in a number of copper containing ores (see table at right, adapted from Lechtman & Klein, 1999) and therefore some contamination of the copper with arsenic would be unavoidable. However it is still not entirely clear to what extent arsenic was deliberately added to copper and how much its use arose simply from its presence in copper ores that were then treated by smelting to produce the metal.
Using these various ores, there are four possible methods that may have been used to produce arsenical bronze alloys. These are:
• This method, although possible, lacks evidence.
• The reduction of antimony-bearing copper arsenates or fahlore to produce an alloy high in arsenic and antimony.
• This is entirely practicable.
• The co-smelting of oxidic and sulphidic ores such as malachite and arsenopyrite together.
• This method has been demonstrated to work well, with little in the way of dangerous fumes given off during it, because of the reactions together among the different minerals.
Furthermore, Thornton et al. suggest a greater sophistication among metalworkers: iron arsenide may have been deliberately produced as part of the copper smelting process, to be traded and used to make arsenical bronze elsewhere by addition to molten copper.
Artefacts made of arsenical bronze cover the complete spectrum of metal objects, from axes to ornaments. The method of manufacture involved heating the metal in crucibles and casting it into moulds made of stone or clay. After solidifying it would be polished or in the case of axes and other tools work-hardened by beating the working edge with a hammer, thinning out the metal and increasing its strength. Finished objects could also be engraved or decorated as appropriate.
Advantages of arsenical bronze
Secondly, it is capable of greater work-hardening than is the case with pure copper, so that it performs better when used for cutting or chopping. There is an increase in work-hardening capability with increasing percentage of arsenic, and it can be work-hardened over a wide range of temperatures without fear of embrittlement. Its improved properties over pure copper can be seen with as little as 0.5 to 2 wt% As, giving a 10 to 30% improvement in hardness and tensile strength.
Thirdly, in the correct percentages, it can contribute a silvery sheen to the article being manufactured. There is evidence of arsenical bronze daggers from the Caucasus and other artefacts from different locations having an arsenic rich surface layer which may well have been produced deliberately by ancient craftsmen, and Mexican bells were made of copper with sufficient arsenic to colour them silver.
Arsenical bronze, sites and civilisations
Arsenical bronze was used by many societies and cultures across the globe. Firstly, the Iranian plateau, followed by the adjacent Mesopotamian area, together covering modern Iran, Iraq and Syria, has the earliest arsenical bronze metallurgy in the world, as previously mentioned. It was in use from the 4th millennium BC through to mid 2nd millennium, a period of nearly 2,000 years. There was a great deal of variation in arsenic content of artefacts throughout this period, making it impossible to say exactly how much was added deliberately and how much came about by accident. Societies using arsenical bronze include the Akkadians, those of Ur, and the Amorites, all based around the Tigris and Euphrates rivers and centres of the trade networks which spread arsenical bronze across the Middle East during the Bronze Age.
The Chalcolithic-period hoard from Nahal Mishmar in the Judean desert west of the Dead Sea contains a number of arsenical bronze (4–12% arsenic) and perhaps arsenical copper artifacts made using the lost-wax process, the earliest known use of this complex technique. "Carbon-14 dating of the reed mat in which the objects were wrapped suggests that it dates to at least 3500 B.C. It was in this period that the use of copper became widespread throughout the Levant, attesting to considerable technological developments that parallel major social advances in the region."
The use of arsenical bronze spread along trade routes into northwestern China, to the Gansu–Qinghai region, with the Siba, Qijia and Tianshanbeilu cultures. However, it is still unclear whether these arsenical bronze artefacts were imported or made locally, although the latter is suspected to be more likely owing to possible local exploitation of mineral resources. On the other hand, the artefacts show typological connections to the Eurasian steppe.
The Eneolithic period in Northern Italy, with the Remedello and Rinaldone cultures in 2800 to 2200 BCE, saw the use of arsenical bronze. Indeed, it seems that arsenical bronze was the most common alloy in use in the Mediterranean basin at this time.
The Sican culture of northwestern coastal Peru is famous for its use of arsenical bronze during the period 900 to 1350 AD. Arsenical bronze co-existed with tin bronze in the Andes, probably because its greater ductility meant it could easily be hammered into the thin sheets valued in local society.
Arsenical bronze after the Bronze Age
The archaeological record in Egypt, Peru and the Caucasus suggests that arsenical bronze was produced for a time alongside tin bronze. At Tepe Yahya its use continued into the Iron Age for the manufacture of trinkets and decorative objects, demonstrating that there was not a simple succession of alloys over time, with superior new alloys replacing older ones. Metallurgically, tin bronze has few real advantages over arsenical bronze, and early authors suggested that arsenical bronze was phased out due to its health effects. It is more likely that it fell out of general use because alloying with tin gave castings of similar strength to arsenical bronze without requiring further work-hardening to achieve useful strength. It is also probable that more consistent results could be achieved with tin, because it could be added directly to the copper in specific amounts, whereas the precise amount of arsenic being added was much harder to gauge due to the manufacturing process.
Health effects of arsenical bronze use
A well-preserved mummy of a man who lived around 3,200 BCE found in the Ötztal Alps, popularly known as Ötzi, showed high levels of both copper particles and arsenic in its hair. This, along with Ötzi's copper axe blade, which is 99.7% pure copper, has led scientists to speculate that he was involved in copper smelting.
Modern uses of arsenical bronze
Arsenical bronze has seen little use in the modern period. Its closest equivalent goes by the name of arsenical copper, defined as copper with under 0.5 wt% As, below the accepted percentage in archaeological artefacts. The presence of 0.5 wt% arsenic in copper lowers the electrical conductivity to 34% of that of pure copper, and even as little as 0.05 wt% decreases it by 15%. There is therefore no demand for arsenic-containing copper in electrical wiring, one of the major modern uses of copper, and steam engine boilers, another former application, are no longer made from it, leaving it with essentially no modern use.
Arsenical bronze Wikipedia
|
Plotting several variables simultaneously
Hello. I have several numerical vectors: 'a' , 'b' and 'c'.
I would like to plot them simultaneously on one plot.
Could you, please, tell me how to do it?
Thank you for your help.
Since you haven't given us much to go on about your data, here is the quickest approach I can think of.
If you need a better answer, I suggest you provide some more context, and preferably a reproducible example - see how to create a reprex
Anyway, this will do something like what you are looking for:
d <- replicate(3, sample(1:10))  # 10 x 3 matrix: three random shufflings of 1:10
matplot(d, type = 'l')           # plot each column as a separate line
Please include a data.frame and indicate the columns on which you need the plot. I would suggest you learn the ggplot2 package if you need publisher-quality charts.
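To follow up on the ggplot2 suggestion, here is a minimal sketch. It assumes `a`, `b` and `c` are numeric vectors of the same length (the example vectors below are placeholders, adapt them to your data). ggplot2 wants the data in "long" format, with one column of values and one column identifying the series:

```r
library(ggplot2)

# Placeholder vectors; replace with your own 'a', 'b', 'c'
a <- rnorm(10); b <- rnorm(10); c <- rnorm(10)

# Reshape into long format: one row per point, plus a column naming the series
df <- data.frame(
  x      = rep(seq_along(a), times = 3),
  value  = c(a, b, c),
  series = rep(c("a", "b", "c"), each = length(a))
)

# One line per series, coloured and automatically given a legend
ggplot(df, aes(x = x, y = value, colour = series)) +
  geom_line()
```

The reshaping step is the part that usually trips people up; once the data are long, the legend and colours come for free.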
@Maxim I suggest you use google and find tutorials and help pages on how to use tools like plot/matplot/ggplot2 and try learning from them. Come back here when you have a specific problem, error, or question.
|
Growing Dendrobium Orchids
Dendrobium Orchids
Information on Dendrobium Orchids
Dendrobium plants are members of the orchid family, and it’s a bit of unnecessary repetition to refer to them as dendrobium orchids, but it’s usually done this way.
They are commonly known as ‘greenhouse orchids’ but are probably more often found as house plants nowadays, and they are widely available in the house plant sections of garden centers.
In general terms, dendrobium orchids are best grown in a commercially-prepared orchid mix or decayed oak leaves, Spanish moss, or sphagnum moss; and the prepared orchid mix is probably the easiest to find.
During the daylight hours, they like a temperature of at least 16°C (about 60°F) but of no more than 24°C (75°F). The one rub is that they really don’t like the temperature to fall below 13°C (about 55°F) during the night hours, and many people allow even the warmest part of the house to dip below this on cold winter nights.
Still, provided the plants are never left trapped behind the curtains on a winter evening and are kept well away from any draughts, they’ll have a good chance of surviving in a warm living room.
These orchids are very good at removing chemical vapors from the air … and yes, almost everyone has chemical vapors in their living rooms, from the carpets and the upholstery to the cleaning agents and polishes, the chemicals including things like alcohol, acetone, and formaldehyde.
They are also good at resisting the infections that plague many house plants, provided the atmosphere is not too dry, and they are not given too much water.
Dendrobium orchids have one habit that is of no real concern to householders but is, nonetheless, rather unusual: they take in carbon dioxide and release oxygen during the night, the reverse of what most plants do.
Read all about it below, and keep the care instructions handy!
Dendrobium Orchid Care
Dendrobium orchids make excellent houseplants and are relatively easy to grow, though they are somewhat fussy about their environment.
As with most orchids, dendrobiums prefer to be grown in a medium that drains quickly.
The best is a medium-sized fir bark mix that has been developed specifically for use with orchids. Sometimes additives such as charcoal, perlite, and sphagnum moss are also used.
Many growers have their own particular recipes, but a mixture of about ten parts bark to one part perlite and one part charcoal works well.
Pre-made mixes can be purchased where orchids are sold.
Water and feed your orchid frequently during the growing season but less regularly during winter.
These plants typically send up at least one new upright cane each year. Avoid cutting off old canes, as they contain nutrients and water necessary for the orchid’s health.
Additionally, old canes occasionally flower or produce tiny plants known as keikis (pronounced “kay-keys”) that can be potted on their own once they develop roots.
The flower sprays bloom for approximately six to eight weeks and make excellent cut flowers. Your plant may bloom multiple times per year under ideal conditions.
If blooming appears to be less than optimal, try increasing the light available to your orchid.
Dendrobium orchids in closeup
In nature, these plants thrive in partial sunlight. However, they’ll likely need to be near the brightest window to bloom well indoors—preferably a south-facing window.
Keikis can indicate that the plant is not receiving enough light. On the other hand, if you notice yellow leaves, you may have provided the plant with excessive direct sunlight.
Orchids do not grow in standard potting soil but in a special mixture that closely mimics their natural habitat. Use a commercial orchid potting medium of peat moss, perlite, or fir bark.
Alternatively, you can make your own using those ingredients. Ensure that the medium is aerated and well-draining so that the roots are not left in excessive moisture for an extended time.
These plants like moisture during the growing season but should not be allowed to sit in a saturated medium. Excessive watering can result in root rot, which eventually causes the plant to yellow or wilt.
Stick your finger in the medium to determine when to water. If it feels damp, wait until it has slightly dried before watering.
You can extend the time between regular waterings during the winter months but do not allow the medium to dry out completely.
Temperature and Humidity
Dendrobium orchids thrive in warm climates and require a constant temperature of at least 60 degrees Fahrenheit.
While they may be able to tolerate slightly cooler nighttime temperatures down to 50 degrees Fahrenheit, prolonged exposure to cold is not recommended.
Additionally, they prefer a humidity level between 50% and 70% (with a minimum of 45%). Brown leaf tips could indicate that the air around your orchid is too dry.
Feed orchids regularly throughout the growing season using a balanced orchid fertilizer according to the label directions. Reduce fertilizer by about half at the end of the growing season.
Varieties of Dendrobium Orchid
Several popular orchid species include the following:
Dendrobium aphyllum: Flowers in shades of yellow, lavender, pink, or white
Dendrobium anosmum: Flowers are pink or purple.
Dendrobium crumenatum: White flowers on spindle-like stems
Dendrobium cucumerinum: White or yellow-green flowers with red-purple stripes
Dendrobium taurinum: Flowers range from lavender to purple and have horn-shaped petals.
Dendrobium reflexitepalum, also known as the Reflexed Petal Dendrobium
Flower size: 0.2″ [5 mm]
Found in Java and Sumatra at altitudes of 200 to 1,000 meters, it is a medium-sized, hot- to warm-growing, spreading epiphyte with stems that are erect in youth and become pendulous with age, leafy below and leafless above. It bears ovate, acute, imbricate petals and flowers in the fall, winter, and early spring on short, single-flowered inflorescences arising at the apex of new stems and, on established plants, from along the leafless apical segment.
Dendrobium Orchid Propagation
Dendrobiums, like many other plants, can be divided. The optimal time is when new growth has begun and new roots are visible.
When dividing, ensure that each division retains at least three to four stems (also called canes).
Another method is to pot up the small plantlets that develop at the ends of older canes. These are referred to as keikis, a Hawaiian term that translates as “small plants.”
A third technique is to cut older canes between the nodes and place them on moist sphagnum moss.
Most dendrobiums prefer to be underpotted (planted in small containers), as their roots are typically short, thin, and wiry.
|
The fog-horn of war
I suspect most people attribute the metaphoric “fog of war” to Clausewitz’s critical reflections on the Napoleonic wars and then fast-forward to Errol Morris‘s brilliant 2003 film about Robert Strange (sic) McNamara, The Fog of War, which centres on the Second World War and Vietnam.
Carl von Clausewitz (1780-1831)
Both turn out to be more problematic than they seem. First, although Clausewitz mentioned ‘fog’ several times in his manuscript On War, he never used the phrase “fog of war”; this is the sentence that probably comes closest to the contemporary meaning:
‘War is the realm of uncertainty; three quarters of the factors on which action is based are wrapped in a fog of greater or lesser uncertainty’ [Der Krieg ist das Gebiet der Ungewißheit; drei Vierteile derjenigen Dinge, worauf das Handeln im Kriege gebaut wird, liegen im Nebel einer mehr oder weniger großen Ungewißheit’].
In a brief commentary, Eugenia Kiesling suggests that Clausewitz made much more of ‘the friction of war’ than ‘fog’ (and was right to do so) and, more originally, that he gave it a peculiar moral force. In fact, the two are closely connected: Victor Rosello had already noted that following ‘the metaphorical path of Clausewitzian fog-shrouded battlefields which defy attempts at penetration owing to insurmountable uncertainty’ led directly to ‘the ascendancy of the moral domain’:
‘These moral influences are the role of chance; the imponderables of fog and friction and their effects on the reliability of information; the limitation inherent in observation; the inability to penetrate the mind of the adversary; the dominance of preconception over fact; and the limitations of intelligence analysis.
Errol Morris's film "The Fog of War"
And, second, here is McNamara, at the very end of Morris’s film (from the transcript):
We all make mistakes. We know we make mistakes. I don’t know any military commander, who is honest, who would say he has not made a mistake. There’s a wonderful phrase: “the fog of war.” What “the fog of war” means is: war is so complex it’s beyond the ability of the human mind to comprehend all the variables. Our judgment, our understanding, are not adequate. And we kill people unnecessarily.
This isn’t quite what Clausewitz meant, but in any case even at the age of 85 McNamara had lost none of his ability to manufacture his own fog: as Fred Kaplan asked at the time, ‘What’s true and what’s a lie in The Fog of War?’ In effect, McNamara turns Clausewitz on his head and uses ‘the fog of war’ as an ethical defence (or, more accurately perhaps, distraction).
I’ve been led down these pathways by my continuing preoccupation with the First World War. I’ve noted before the metricization of the battlefield on the Western Front: the meticulous planning of what Clausewitz would have called ‘paper war’ on the abstract spaces of maps. No matter how frequently these were updated, revised and annotated, their purchase on the course of combat was ineluctably limited once the infantry went over the top. On the British side, telephone and telegraph lines snaked back from the front lines to an ascending series of headquarters in the rear (an appropriate location in more ways than one), but as John Keegan wrote in The Face of Battle, these lines of communication, however imperfect, had one further, disabling limitation: they stopped at the end of no man’s land.
On the Somme, he wrote,
Over the top, Western Front
‘Once the troops left their trenches, as at 7.30 a.m. on July 1st, they passed beyond the carry of their signals system into the unknown. The army had provided them with some makeshifts to indicate their position: rockets, tin triangles sewn to the backs of their packs as air recognition symbols, lamps and flags, and some one-way signalling expedients, Morse shutters, semaphore flags and carrier pigeons; but none were to prove of real use on July 1st.
‘That a party could disappear so completely, not in the Antarctic wastes but at a point almost within visual range of their own lines, seems incomprehensible today, so attuned are we to thinking of wireless providing instant communication across the battlefield. But the cloud of unknowing which descended on a First World War battlefield at zero hour was accepted as one of its hazards by contemporary generals. Since the middle of the nineteenth century, the width of battlefields had been extending so rapidly that no general could hope to be present, as Wellington had made himself, at each successive point of crisis; since the end of the century the range and volume of small-arms fire had been increasing to such an extent that no general could hope to survey, as Wellington had done, the line of battle from the front rank. The main work of the general, it had been accepted, had now to be done in his office, before the battle began.’
As Keegan says, there were various attempts to allow GHQ to monitor the progress of the battle. Here is one of the most fanciful, not included in his list, which involved re-imposing cartographic vision – what Edmund Blunden called in Undertones of war its ‘innocuous arrows’ and ‘matter-of-fact symbols’ – on the obdurately physical, all-too-corporeal battlefield. In Somme success: the Royal Flying Corps and the Battle of the Somme, Peter Hart reproduces this report from 2nd Lieutenant Cecil Lewis, describing a so-called ‘contact patrol’:
Aerial observation, Western Front
‘We had all our contact patrol technique perfected and we went right down to 3,000 feet to see what was happening. We had a klaxon horn on the undercarriage of the Morane – a great big 12 volt klaxon, and I had a button which I used to press out a letter to tell the infantry we wanted to know where they were. When they heard us hawking them from above, they had little red Bengal flares, they carried them in their pockets, they would put a match to their flares. All along the line wherever there was a chap there would be a flare, and we would note those flares down on the map and Bob’s your uncle!’
Not difficult to see why A.M. Burrage gloomily wrote in War is war that the infantry were ‘the little flags which the General sticks on the war-map to show the position of the front line’… But, Lewis continued,
‘It was one thing to practice this but quite another for them to really do it when they were under fire, and particularly when things began to go a bit badly. Then they jolly well wouldn’t light anything and small blame to them because it drew the fire of the enemy on to them at once.’
|
‘The Whole World Is Watching’: The Political Crossfire of DAPL
The struggle over the Dakota Access Pipeline (DAPL) has many levels.
The people most immediately affected, the people being told they are in the way, are the Standing Rock Sioux in particular, and the Great Sioux Nation in general. They are backed by tribal nations coast to coast that see their interests rising or falling with Standing Rock’s.
If the Black Snake pipeline prevails over the indigenous people of the Dakotas, there are circles of concern that take in Bismarck, North Dakota, the U.S. and Canada. It’s no exaggeration to say the fight to kill the Black Snake lends a whole new meaning to a chant from the days of the mainstream Civil Rights Movement: “The whole world is watching!”
Treaties and tribal sovereignty for the Great Sioux Nation are at the core of this movement, but Standing Rock has attracted allies who know nothing of that history and are protecting their own interests.
The history is as complicated as any attempt to describe the Great Sioux Nation to persons who come to it cold. As simply as can be stated, that great nation normally refers in the U.S. to the constituents of the Seven Fires Council, principally the Lakota and Dakota. The Nakota or Assiniboine people are linguistic relatives but live mostly in Canada in political confederacy with the Cree.
Another way to understand the Great Sioux Nation is through geography rather than linguistics, encompassing the totality of the Indian reservations in the Northern Great Plains region of the U.S. and consisting of lands called by the colonists the Dakotas, Wyoming, Nebraska and Montana.
The shooting part of the Indian wars ended with two iconic engagements between the U.S. Army and the Great Sioux Nation. The Sioux, with Cheyenne and Arapaho allies, were attacked by the Seventh Cavalry under the colorful Indian fighter and Civil War hero George Armstrong Custer. The resulting engagement destroyed five of the Seventh Cavalry’s 12 companies and is remembered by Indians as the Greasy Grass Fight. The colonists call it the Battle of the Little Bighorn or Custer’s Last Stand. Regardless of the name, both sides have reason to remember the bloodshed of June 25-26, 1876.
Greasy Grass was the high point of post-Civil War Indian resistance on the Northern Plains. The low point was the massacre of non-combatants set in motion at the Standing Rock Agency when Hunkpapa holy man Sitting Bull was “killed while resisting arrest” on December 15, 1890. Sitting Bull’s vision had guided the Indian resistance at Greasy Grass and foretold the outcome.
Upon Sitting Bull’s death, about 200 Hunkpapa fled Standing Rock to join Miniconjou Chief Spotted Elk. The reconstituted Seventh Cavalry forced Spotted Elk’s band to camp at Wounded Knee Creek, where they were outnumbered and facing four Hotchkiss guns. On December 29, 1890, the shooting part of the Indian wars ended with the massacre of some 300 Sioux, mostly non-combatants. Photographs of frozen bodies of women and children and the elderly Spotted Elk turned public opinion against the killing that had become so one-sided.
The Indian wars—understood as the process of separating indigenous people from their property—continued by less violent means and are being fought to this day.
A part of the history that is more obscure to the colonists than Greasy Grass or Wounded Knee but is in the face of the Standing Rock Sioux every day is the Black Snake’s encroachment on a spiritual battleground. The first camp of the water protectors was called Sacred Stones, after the perfectly round formations that used to be produced by the currents at the confluence of the Missouri and Cannonball Rivers.
The river quit disgorging sacred stones after 1958, when the U.S. Army Corps of Engineers rearranged the currents of the Cannonball by dredging for the construction of the Oahe Dam to create the lake of the same name. The Standing Rock people, having had their spiritual values and practices trampled by the creation of Oahe Lake, now object to that same lake becoming the lair of the Black Snake.
Even colonists who find those spiritual values to be the stuff of superstition should be able to understand that the Oahe Dam destroyed more Indian land than any other single public works project in North America. The Corps of Engineers inundated almost 56,000 acres of Standing Rock land and 104,000 acres of the Cheyenne River Reservation.
Like the Eastern Band Cherokee after the Tennessee Valley Authority built the Tellico Dam, the Sioux were left with sacred sites only reachable with scuba gear. Like the Cherokee, they adopted the practices suitable for the surface water and the rest became another blood memory of the colonial onslaught.
The trail of the Black Snake requires no federal permit because most of the route is across private land. So it is state governments rather than the federal government that can cede eminent domain authority to a company formed to make a profit.
The federal government has only a few choke points where the Black Snake requires a permit, principally to cross federally regulated waters. The immediate permitting issue is allowing the pipeline to cross Lake Oahe. The Corps of Engineers is charged with protecting the values expressed in the Clean Water Act and the Rivers and Harbors Act.
The issue hidden within the process is how specific the permitting process needs to be to comply with the law. The Corps of Engineers has created a class of permit called “Nationwide 12” (NWP 12) that amounts to a fast-track approval to appease the political forces in the U.S. that are in perpetual grievance mode about the time and money environmental regulations consume.
The first use of NWP 12 for a major project was in 2012, for the Gulf Coast Pipeline, the new moniker for the southern part of the Keystone XL Pipeline, proposed to move bitumen produced in the Canadian tar sands to refining facilities in Texas and Louisiana. Using NWP 12 meant that the Gulf Coast Pipeline got permission to cross 1,950 federally regulated waterways in four states without having to make investigations specific to each crossing to assure enforcement of the Clean Water Act and the National Environmental Policy Act (NEPA).
NWP 12 was created to finesse a serious political problem in which the Standing Rock Sioux are being tossed by the crosscurrents. When Republican candidates line up for public inspection, they have a wish list of federal agencies they wish to abolish. The Environmental Protection Agency usually vies with the Education Department for number one on the hit list.
Every NEPA review is a cost center, requiring studies on the ground and repeated defense of the project at public hearings as well as responses to written comments.
Environmentalists, somewhat hypocritically, now are more interested in the climate emergency the U.S. is polluting itself into than in this or that watercourse or endangered species. The objective becomes, if you gave them truth serum, to make every project involving fossil fuels more expensive and slower, without regard to the specifics on the ground.
Tribal governments, holding federal promises of consultation, are caught in the middle between no regulation and regulation so stiff as to make many projects prohibitively expensive.
The tribal issue often involves cultural preservation. Petroglyphs, sacred sites, and burials that are important tribal values become ammunition in the larger political clash. The purpose of tribal consultation goes far beyond clean water and sets up the National Historic Preservation Act as a means to protect artifacts associated with tribal values.
The bureaucratic method to fast-track pipeline projects is to break them into segments and claim that each segment is too insignificant to require a full NEPA review. Dallas Goldtooth, representing the Indigenous Environmental Network in a press statement, came out strongly against abusing NWP 12 to silence tribal consultations.
Goldtooth is correct, but getting the NEPA toothpaste back in the tube after the Obama administration squeezed it out may present a political challenge. As Obama gets closer to the end of his second term, it becomes more feasible for the pipeline companies to run out the clock. The Washington Post reported this week that Obama’s wish to re-route the Black Snake away from Standing Rock is running into the same problem.
NWP 12 tends to shut off tribal input, and lack of tribal input is asserted as the legal basis for stopping DAPL. The problem with that objection to DAPL is that a faulty procedure can always be cured by a do-over. The Corps of Engineers has more time for a do-over than Obama has left in office.
As the NWP 12 regulations are used right now, the project is considered small enough to fast-track if it is “single and complete” and would result in loss of no more than one half acre of waters under federal jurisdiction. While that sounds reasonable, allowing pipeline companies to stack up hundreds of “small project” permits results in a separate category for pipelines and renders what should be transparent to affected communities opaque until the pipeline company has already spent so much money that stopping the project looks unfair.
Samantha L. Varaslona, writing in Georgetown Environmental Law Review, claims that the Gulf Coast Pipeline runs for 485 miles and crosses federal waters 2,227 times. This shows that the Corps of Engineers has succeeded in containing costs and shortening the process for its own convenience and that of the pipeline companies.
What the Corps overlooks is that the public policy dilemma has more than one horn. On the other side is clean water and cultural preservation—avoiding harm—and the Standing Rock Sioux are searching in vain for a fair hearing.
That’s only the first level of the circles of concern here that span the entire planet.
|
The Cardinal Edge
Climate change has been recognized as a severe threat to biodiversity. In the rapidly growing collection of literature on the consequences of global change, researchers have recently noticed a dramatic decrease in insect populations in a wide range of habitats. Insects are extremely susceptible to climatic change, especially with regard to fluctuations in moisture and temperature. However, insects often exhibit phenotypic plasticity, where organisms will express different phenotypes when presented with a specific environmental stimulus. In developmental plasticity, environmental stimuli at the larval stage can determine adult phenotypes. This review focuses on case studies of developmental plasticity in insects, with temperature and moisture as specific stimuli. This review also discusses the role of developmental plasticity on insect population survival and possible future adaptation in the context of global environmental change.
The initial version of this article had a mislabelled title. This version reflects the most accurate and up-to-date article.
|
01. Introduction Uncategorized
How does Bitcoin have value?
It is the common consensus, belief and perception of its participants that give Bitcoin its value. All the participants in this system have consensus on the following −
• immutability and integrity of the blockchain
• security and validity of the payments
• rules of the system
Bitcoin was the first practical implementation of blockchain technology and is currently the most significant triple-entry bookkeeping system globally. In the Bitcoin ecosystem, the entire source code is available to everyone at all times, and anyone can review or modify it. The authenticity of each transaction is secured by the digital signatures of the sending parties, ensuring that all users have complete control over sending bitcoins.
This leaves little room for fraud: there are no chargebacks and no identifying information that could be hacked and used for identity theft.
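The "immutability and integrity of the blockchain" that participants agree on comes down to hash chaining: each block commits to the hash of the previous block, so altering any past entry invalidates every hash after it. The sketch below is a deliberately simplified toy ledger, not Bitcoin's actual block format or signature scheme:

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash commits to the previous hash and its own payload
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(transactions):
    # Link each block to its predecessor via the hash chain
    chain = []
    prev = "0" * 64  # genesis placeholder
    for tx in transactions:
        h = block_hash(prev, tx)
        chain.append({"data": tx, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    # Recompute every hash; any tampering upstream breaks the chain
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["data"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob:1", "bob->carol:2"])
print(verify(chain))                 # True
chain[0]["data"] = "alice->bob:100"  # tamper with history
print(verify(chain))                 # False
```

In the real system the same principle is enforced by proof-of-work over block headers, which makes rewriting history computationally prohibitive rather than merely detectable.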
Here is a list of some of the entities who accept Bitcoins −
• WordPress
• Namecheap
• Microsoft
• Dell Computers
• Bitpay
Leave a Reply
|
Hokan stock
The following families of languages indigenous to Mexico belong to the Hokan stock:
Serian family [Seri]
Tequistlatecan family [Oaxaca Chontal]
Yuman family [Cocopa, Kiliwa, Kumiai (Diegueño) and Paipai]
The Hokan languages include families of languages in Mexico and in the western part of the United States, especially in California. This group is somewhat famous because its validity as a group has been a topic of considerable dispute. The name "Hokan-Coahuiltecan" was also used previously because some linguists were proposing the inclusion of the Coahuiltec language (now extinct) from the state of Coahuila. The predominant view currently is more conservative and does not include it.
The name "Hokan" comes from the word for 'two' that supposedly is one of the pieces of evidence for the genetic relationship of these languages: the root is [*xwak] in Proto-Yuman, [ookx] in Seri, and [ogé?] or [ukwe?] in Oaxaca Chontal (Highland and Lowland, respectively.)
Serian family
The Seri language (cmique iitom) is considered a language isolate within the Hokan stock. It is spoken in two villages (El Desemboque del Río San Ignacio and Punta Chueca) on the coast of the state of Sonora, Mexico. Tiburón Island in the Gulf of California is part of the traditional Seri homeland, and it is called Tahéjöc.
The Seri people call themselves the comcáac (singular: cmique). Until the middle of the twentieth century they were hunter-gatherers. Their livelihood today is based on commercial fishing and the sale of shell necklaces, ironwood carvings, and traditional baskets.
Tequistlatecan family
The Tequistlatecan family currently consists of two languages that have been called Oaxacan Chontal. Highland Oaxacan Chontal has some dialectical variation. Speakers of Lowland Oaxacan Chontal (also called Huamelulteca) are very bilingual (in Spanish) and the native language is in danger of extinction. The Summer Institute of Linguistics has concluded its work in this language family.
Yuman family
Cocopa, Kiliwa, Tipai, Diegueño (Kumiai), Paipai
|
1997: Asian Financial Crisis
The Asian financial crisis gripped much of Asia beginning in July 1997 and raised fears of a worldwide economic meltdown due to financial contagion.
The crisis started in Thailand with the financial collapse of the Thai baht caused by the decision of the Thai government to float the baht, cutting its peg to the U.S. dollar, after exhaustive efforts to support it in the face of a severe financial overextension that was in part real estate driven. At the time, Thailand had acquired a burden of foreign debt that made the country effectively bankrupt even before the collapse of its currency. As the crisis spread, most of Southeast Asia and Japan saw slumping currencies, devalued stock markets and other asset prices, and a precipitous rise in private debt.[1]
Though there has been general agreement on the existence of a crisis and its consequences, what is less clear are the causes of the crisis, as well as its scope and resolution. Indonesia, South Korea and Thailand were the countries most affected by the crisis. Hong Kong, Malaysia, Laos and the Philippines were also hurt by the slump. The People's Republic of China, Pakistan, India, Taiwan, Singapore, Brunei and Vietnam were less affected, although all suffered from a loss of demand and confidence throughout the region.
Foreign debt-to-GDP ratios rose from 100% to 167% in the four large Association of Southeast Asian Nations economies in 1993–96, then shot up beyond 180% during the worst of the crisis. In South Korea, the ratios rose from 13 to 21% and then as high as 40%, while the other northern newly industrialized countries fared much better. Only in Thailand and South Korea did debt service-to-exports ratios rise.[2]
|