Destroying Sacred Cows for the Cause of Christ
• Idol Killer
The Didache: The Teaching of the Twelve Apostles
Updated: Sep 1, 2018
Regardless of denominational affiliation, the majority of modern Christians believe that everything about Jesus and the early church can be found in the Bible. Granted, there is some disagreement over what constitutes the Biblical canon: Protestants affirm 66 books, Roman Catholics 73, and Eastern Orthodox 78. Given that the canon has never been identical across traditions, it should come as no surprise that the early Church included writings in its canon which are absent from today's Bibles.
If one stops and ponders the matter for a moment, one is sure to realize that our canon(s) could not possibly contain everything God has done; instead, they provide us a solid account of who He is and a summary of what He did. Thus it dawns on us that the Christian life, though affirmed by Scripture, must extend beyond mere study of the written word and into practice – that is to say, Christianity is a relationship with God Himself, in which one is rooted in Christ. So we see that while Scripture is breathed out by God, profitable for teaching, reproof, correction, and training in righteousness, it must be applied and lived out so that the man of God may be complete, equipped for every good work.
Despite these differences in canon, principle made practice is a constant theme throughout Scripture. The early Church also understood the necessity of principle made practice and compiled instructions on how to have a living faith.
Originally regarded by some of the early Church Fathers as part of the New Testament, the Didache was considered lost until its rediscovery in 1873. Dating back to between 50 AD and 110 AD, the Didache, or “The Teaching of the Twelve Apostles”, constitutes the oldest surviving written catechism. It deals with Christian ethics, practices such as baptism and Eucharist, as well as Church organization, and gives us a window into early Christian life. Essentially, it organizes many scriptures found throughout the New Testament into a concise summary – a mini-Bible, if you will.
The author(s) of the Didache does not give their name(s), and while the title suggests authorship by the twelve Apostles themselves, it is also possible that its name is simply a reference to the teachings they passed down to their Disciples. Some consider the work to be part of the second-generation of Christian writings known as the Apostolic Fathers – that group of men who personally knew the Apostles. So while the Didache was highly regarded by the early Church and considered by some as part of the New Testament, it was ultimately excluded from the collection(s) we have today.
Clement of Alexandria (150-215 AD), Origen (184-253 AD), Eusebius (260-341 AD), and Athanasius (296-373 AD) are all known to have quoted from the Didache, with the latter recommending it for instruction in the baptism of converts. Eusebius called it the "Institutions of the Apostles," listing it among writings he considered spurious, such as Revelation. The Epistle of Barnabas (c. 130-131 AD) also quotes from the Didache.
Lost for centuries and rediscovered in 1873 by Philotheos Bryennios, a Greek Metropolitan of Nicomedia, the Didache entered the modern age as part of the Jerusalem Codex. Since this discovery, fragments in Latin, Coptic, Ethiopic, and Syriac have also been found, as well as a complete translation in Georgian.
Should the Didache have remained part of the New Testament and be included in our canon? There is disagreement today, just as there was then. Our canon(s), those books largely agreed upon across denominational lines, help to serve as a litmus test when considering if something is of God or not. Despite being rooted in accepted Scripture, the Didache remains largely ignored by mainstream Christianity today.
Do you agree with men like Clement, Origen and Athanasius, or do you simply view the Didache as a source of good teaching, yet not inspired? If you've not yet considered the matter, we invite you to do so.
The Didache is recognized by both historians and theologians as a window into the primitive life of the early Church.
Following is a translation of the Didache for you to read and consider yourself. Bear in mind, some of these admonitions were designed to counter the religious teachings of the Pharisees and the Pagan culture of the time.
The Didache: The Teaching of the Twelve Apostles
1. There are two Ways, one of Life and one of Death, and there is a great difference between the two Ways. 2. The way of life is this: "First, thou shalt love the God who made thee, secondly, thy neighbor as thyself; and whatsoever thou wouldst not have done to thyself, do not thou to another." 3. Now, the teaching of these words is this: "Bless those that curse you, and pray for your enemies, and fast for those that persecute you. For what credit is it to you if you love those that love you? Do not even the heathen do the same?" But, for your part, "love those that hate you," and you will have no enemy. 4. "Abstain from carnal" and bodily "lusts." "If any man smite thee on the right cheek, turn to him the other cheek also," and thou wilt be perfect. "If any man impress thee to go with him one mile, go with him two. If any man take thy coat, give him thy shirt also. If any man will take from thee what is thine, refuse it not," not even if thou canst. 5. Give to everyone that asks thee, and do not refuse, for the Father's will is that we give to all from the gifts we have received. Blessed is he that gives according to the mandate; for he is innocent; but he who receives it without need shall be tried as to why he took and for what, and being in prison he shall be examined as to his deeds, and "he shall not come out thence until he pay the last farthing." 6. But concerning this it was also said, "Let thine alms sweat into thine hands until thou knowest to whom thou art giving."
1. But the second commandment of the teaching is this: 2. "Thou shalt do no murder; thou shalt not commit adultery"; thou shalt not commit sodomy; thou shalt not commit fornication; thou shalt not steal; thou shalt not use magic; thou shalt not use philtres; thou shalt not procure abortion, nor commit infanticide; "thou shalt not covet thy neighbor's goods"; 3. Thou shalt not commit perjury, "thou shalt not bear false witness"; thou shalt not speak evil; thou shalt not bear malice. 4. Thou shalt not be double-minded nor double-tongued, for to be double-tongued is the snare of death. 5. Thy speech shall not be false nor vain, but completed in action. 6. Thou shalt not be covetous nor extortionate, nor a hypocrite, nor malignant, nor proud, thou shalt make no evil plan against thy neighbor. 7. Thou shalt hate no man; but some thou shalt reprove, and for some shalt thou pray, and some thou shalt love more than thine own life.
1. My child, thou shalt remember, day and night, him who speaks the word of God to thee, and thou shalt honor him as the Lord, for where the Lord's nature is spoken of, there is he present. 2. And thou shalt seek daily the presence of the saints, that thou mayest find rest in their words. 3. Thou shalt not desire a schism, but shalt reconcile those that strive. Thou shalt give righteous judgement; thou shalt favor no man's person in reproving transgression. 4. Thou shalt not be of two minds whether it shall be or not. 5. Be not one who stretches out his hands to receive, but shuts them when it comes to giving. 6. Of whatsoever thou hast gained by thy hands thou shalt give a ransom for thy sins. 7. Thou shalt not hesitate to give, nor shalt thou grumble when thou givest, for thou shalt know who is the good Paymaster of the reward. 8. Thou shalt not turn away the needy, but shalt share everything with thy brother, and shalt not say it is thine own, for if you are sharers in the imperishable, how much more in the things which perish? 9. Thou shalt not withhold thine hand from thy son or from thy daughter, but thou shalt teach them the fear of God from their youth up. 10. Thou shalt not command in thy bitterness thy slave or thine handmaid, who hope in the same God, lest they cease to fear the God who is over you both; for he comes not to call men with respect of persons, but those whom the Spirit has prepared. 11. But do you who are slaves be subject to your master, as to God's representative, in reverence and fear. 12. Thou shalt hate all hypocrisy, and everything that is not pleasing to the Lord. 13. Thou shalt not forsake the commandments of the Lord, but thou shalt keep what thou didst receive, "Adding nothing to it and taking nothing away." 14. In the congregation thou shalt confess thy transgressions, and thou shalt not betake thyself to prayer with an evil conscience. This is the way of life.
1. Concerning baptism, baptise thus: Having first rehearsed all these things, "baptise, in the Name of the Father and of the Son and of the Holy Ghost," in running water; 2. But if thou hast no running water, baptise in other water, and if thou canst not in cold, then in warm. 3. But if thou hast neither, pour water three times on the head "in the Name of the Father, Son and Holy Ghost." 4. And before the baptism let the baptiser and him who is to be baptised fast, and any others who are able. And thou shalt bid him who is to be baptised to fast one or two days before.
1. And concerning the Eucharist, hold Eucharist thus: 2. First concerning the Cup, "We give thanks to thee, our Father, for the Holy Vine of David thy child, which, thou didst make known to us through Jesus thy Child; to thee be glory for ever." 3. And concerning the broken Bread: "We give thee thanks, our Father, for the life and knowledge which thou didst make known to us through Jesus thy Child. To thee be glory for ever. 4. As this broken bread was scattered upon the mountains, but was brought together and became one, so let thy Church be gathered together from the ends of the earth into thy kingdom, for thine is the glory and the power through Jesus Christ for ever." 5. But let none eat or drink of your Eucharist except those who have been baptised in the Lord's Name. For concerning this also did the Lord say, "Give not that which is holy to the dogs."
1. But after you are satisfied with food, thus give thanks: 2. "We give thanks to thee, O Holy Father, for thy Holy Name which thou didst make to tabernacle in our hearts, and for the knowledge and faith and immortality which thou didst make known to us through Jesus thy Child. To thee be glory for ever. 3. Thou, Lord Almighty, didst create all things for thy Name's sake, and didst give food and drink to men for their enjoyment, that they might give thanks to thee, but us hast thou blessed with spiritual food and drink and eternal light through thy Child. 4. Above all we give thanks to thee for that thou art mighty. To thee be glory for ever. 5. Remember, Lord, thy Church, to deliver it from all evil and to make it perfect in thy love, and gather it together in its holiness from the four winds to thy kingdom which thou hast prepared for it. For thine is the power and the glory for ever. 6. Let grace come and let this world pass away. Hosannah to the God of David. If any man be holy, let him come! if any man be not, let him repent: Maranatha ("Our Lord! Come!"), Amen." 7. But suffer the prophets to hold Eucharist as they will.
1. Whosoever then comes and teaches you all these things aforesaid, receive him. 2. But if the teacher himself be perverted and teach another doctrine to destroy these things, do not listen to him, but if his teaching be for the increase of righteousness and knowledge of the Lord, receive him as the Lord. 3. And concerning the Apostles and Prophets, act thus according to the ordinance of the Gospel. 4. Let every Apostle who comes to you be received as the Lord, 5. But let him not stay more than one day, or if need be a second as well; but if he stay three days, he is a false prophet. 6. And when an Apostle goes forth let him accept nothing but bread till he reach his night's lodging; but if he ask for money, he is a false prophet. 7. Do not test or examine any prophet who is speaking in a spirit, "for every sin shall be forgiven, but this sin shall not be forgiven." 8. But not everyone who speaks in a spirit is a prophet, except he have the behaviour of the Lord. From his behaviour, then, the false prophet and the true prophet shall be known. 9. And no prophet who orders a meal in a spirit shall eat of it: otherwise he is a false prophet. 10. And every prophet who teaches truth, if he do not what he teaches, is a false prophet. 11. But no prophet who has been tried and is genuine, though he enact a worldly mystery of the Church, if he teach not others to do what he does himself, shall be judged by you: for he has his judgment with God, for so also did the prophets of old. 12. But whosoever shall say in a spirit "Give me money, or something else," you shall not listen to him; but if he tell you to give on behalf of others in want, let none judge him.
1. "Watch" over your life "let your lamps" be not quenched "and your loins" be not ungirded, but be "ready," for ye know not "the hour in which our Lord cometh." 2. But be frequently gathered together seeking the things which are profitable for your souls, for the whole time of your faith shall not profit you except ye be found perfect at the last time; 3. For in the last days the false prophets and the corruptors shall be multiplied, and the sheep shall be turned into wolves, and love shall change to hate; 4. For as lawlessness increaseth they shall hate one another and persecute and betray, and then shall appear the deceiver of the world as a Son of God, and shall do signs and wonders and the earth shall be given over into his hands and he shall commit iniquities which have never been since the world began. 5. Then shall the creation of mankind come to the fiery trial and "many shall be offended" and be lost, but "they who endure" in their faith "shall be saved" by the curse itself. 6. And "then shall appear the signs" of the truth. First the sign spread out in Heaven, then the sign of the sound of the trumpet, and thirdly the resurrection of the dead: 7. But not of all the dead, but as it was said, "The Lord shall come and all his saints with him." 8. Then shall the world "see the Lord coming on the clouds of Heaven."
Most people know the story of how, after Galileo published his Dialogue Concerning the Two Chief World Systems, he was threatened with torture and forced to “abjure, curse and detest” his heretical view that the Earth goes round the Sun. His book was placed on the infamous Index librorum prohibitorum – the index of banned books – and he was placed under house arrest for the rest of his life.
What most people don’t know is that Galileo’s house arrest was not entirely onerous. He spent the first part of it as an honoured guest in the home of the Tuscan ambassador before moving to the residence of the archbishop of Siena, where he was given facilities to start writing his book, which he called Dialogues Concerning Two New Sciences, a book that he continued to write after finally being allowed to return to his own villa in Arcetri.
Even fewer people know that Dialogues Concerning Two New Sciences was smuggled out of Italy and printed by the Dutch publisher Louis Elsevier. Elsevier was taking quite a risk, since he had sought advice from the Inquisition and had specifically been told that all of Galileo’s writings were banned from publication, both in Italy and elsewhere. He was rewarded, however, by a best-seller that created a sensation in the European scientific community through its revelations of hitherto unknown laws and their practical applications. It contains the first description of the scaling laws that describe the strength of material structures, and which form the basis of modern architectural and engineering practice. Galileo also demonstrated in Dialogues Concerning Two New Sciences that objects moving at a constant velocity will keep on doing so even if nothing is pushing or pulling them, thus demolishing the old Aristotelian ideas at a stroke, and providing a basis for Newton’s First Law of Motion.
So the original Elsevier made a major contribution to science and civilization. It will be interesting to see what happens now to the firm that has inherited his mantle.
ADAPTED FROM CHAPTER 2 of Weighing the Soul
England, Wales and Spain recorded the highest death rates in Europe during the first wave of the coronavirus pandemic.
Scotland was close behind, according to an analysis of excess deaths per 100,000 of the population between February and May.
Researchers at Imperial College London looked at weekly death data from 19 European countries, Australia, and New Zealand.
They compared deaths with what would normally be expected based on previous years, so as to capture deaths from Covid-19 as well as the knock-on impacts of lockdown.
The excess was equivalent to a 37% increase in deaths in England and Wales, and a 38% increase in deaths in Spain.
England and Wales, together with Sweden - the only country that did not put in place a mandatory lockdown and only used voluntary social distancing measures - had the longest durations of excess mortality.
The results, published in the journal Nature Medicine, showed Scotland had a 28% increase in death rates.
Austria, which had very low numbers of deaths from all causes, has nearly three times as many hospital beds per head of population as the UK.
Senior author Professor Majid Ezzati said: “Long-term investment in the national health system is what allows a country to both respond to a pandemic, and to continue to provide the day to day routine care that people need.
“We cannot dismantle the health system through austerity and then expect it to serve people when the need is at its highest, especially in poor and marginalised communities.”
Across all 21 countries, 206,000 more people died from all causes than would have been expected had the pandemic not taken place – an 18% increase.
England and Wales alone accounted for 28% of excess deaths across all countries combined, while Italy accounted for 24%, and Spain 22%.
Author Dr Jonathan Pearson-Stuttard, from Imperial College London, said a number of factors may have influenced why England, Wales and Scotland had high death rates from all causes.
He said that a combination of the general population health, the resilience of the public health and social care system, and the policy response to the pandemic, may have contributed to “what looks like the highest excess deaths across the 21 countries”.
He added: “What [the] Covid-19 pandemic in that first wave has done is identify just how frail and vulnerable our society and our economy is to our public’s ill health.
“So everything that has been [an] issue - whether that’s obesity, whether that’s relative inequalities and so forth - each of those are risk factors for worst Covid outcomes - and that’s as individuals or communities or whole nations.
“On many of those aspects, our public health has lagged behind other countries for some years, and the Covid 19 pandemic has brought that to the fore.”
The 21 countries in the analysis were Australia, Austria, Belgium, Bulgaria, Czechia, Denmark, England and Wales, Finland, France, Hungary, Italy, Netherlands, New Zealand, Norway, Poland, Portugal, Scotland, Slovakia, Spain, Sweden and Switzerland.
The research team separated countries in the study into four categories, depending on each country’s overall death toll.
The first group were those that avoided a detectable rise in deaths, and included Bulgaria, New Zealand, Slovakia, Australia, Czechia, Hungary, Poland, Norway, Denmark and Finland.
The low impact group included Austria, Switzerland and Portugal, while the medium impact group included France, the Netherlands and Sweden.
The fourth group, which experienced the highest number of deaths, included Belgium, Italy, Scotland, Spain and England and Wales.
What Is Scrum?
Scrum is an important approach in agile projects, used for delivering software products through an iterative, incremental process. The name comes from the scrum in rugby, and Scrum teams likewise work in short sprints. Short sprints are useful for delivering steady progress under strong leadership. The leader of these sprints is referred to as the Scrum Master.
From Lightning on Jupiter to Apollo 13’s Call for Help, Hear Some of NASA’s Greatest Recordings
An audio archive captures some iconic moments of space history
Sputnik 1 was too small to see. But its persistent, unwavering beep...beep...beep...beep...beep...was picked up by ham radio operators and nervous governments across the planet: the beachball-sized Soviet satellite had sparked the beginning of the space age.
Like the beep of Sputnik 1, JFK's space race speech and the roar of an Atlas V rocket, some of the most iconic moments in space history are known not for their imagery but for their sounds.
“That's one small step for man. One giant leap for mankind.”
“The Eagle has landed.”
“Houston, we've had a problem.”
On its new Soundcloud page, NASA has collected the recordings of these and other notable moments, a sound archive of some of space history's greatest hits.
Aside from the human drama of the early space race, the page houses recordings notable not necessarily for how they sound, but for what they represent. Take, for instance, this recording, capturing the sound of electromagnetic waves known as “chorus waves”.
Or listen to these recordings of lightning on Jupiter and of the dust from Comet Tempel 1 hitting the Stardust spacecraft. Those are the sounds of humanity pushing to the edge of our capabilities.
Like most of NASA's creations, the sound recordings uploaded to their Soundcloud page are free to do with what you will. Just, maybe, think long and hard before you sample NASA's clips for your next album.
Identifying signs of infertility: Symptoms, causes and first steps
Reproductive endocrinologist consults with a couple
Infertility is more common than many may think. It affects one in eight heterosexual couples who are trying to get pregnant. Overall, 12 to 15% of people are infertile.
As a reproductive endocrinologist, my colleagues and I witness firsthand the impact that infertility has on people who desire to have a baby. It’s part of what drives our passion to help them find hope whenever possible. At the Center for Reproductive Medicine and Fertility, we start by identifying the causes of infertility in each person or couple, then exploring every possible option to help our patients build their families.
How do I know if my partner or I may be infertile?
Infertility is defined as the inability to conceive after one year of regular, unprotected intercourse. If you or your female partner are over the age of 35, you should see a fertility specialist after six months. After age 40, we recommend seeking help right away because we know fertility declines as age increases.
What are other signs and symptoms of infertility in females?
Aside from having trouble conceiving, symptoms can vary significantly from one person to another. Depending on the reason for infertility, sometimes women may experience pelvic pain, heavy periods, skipped periods or unpredictable vaginal bleeding. It’s important to discuss any unusual symptoms with your doctor. Some of these symptoms may represent underlying hormonal conditions that should be addressed even if you are not trying to conceive.
What are the most common causes of infertility?
Infertility can be a result of many different factors — even in one person or couple. The most common causes include problems with ovulation, structural issues in the uterus or fallopian tubes, or abnormalities in sperm.
In females, medical conditions such as uterine fibroids, endometriosis, polycystic ovary syndrome (PCOS), uterine polyps or a history of pelvic infections are often associated with infertility.
Can you still be infertile if you have a period?
Yes. Having regular predictable periods is a good indicator that you ovulate regularly. In other words, it means an egg is being released from your ovaries on a regular basis. But, ovulation alone does not guarantee that you can get pregnant. Sometimes there can be an issue with egg quality, how the egg is fertilized, its ability to be transported to the uterus, or how it becomes implanted in the uterus. There may also be a problem with the sperm. If you’re having a period and regular intercourse but have not conceived, it is important to talk to your health care provider about whether a referral to a fertility specialist is warranted.
How is infertility a unique health challenge for women of color?
Outcomes indicate that black, Asian and Hispanic or Latina women may have less success with fertility treatments compared to white women. Overall, we don’t know exactly why this happens. However, we do know certain medical conditions that can impact fertility occur more frequently in some races than in others. For example, women of color have a higher likelihood of having uterine fibroids than white women.
Additionally, studies show that women of color are more likely to be affected by socioeconomic factors that can make it more difficult to get treatment — or even to understand the need for treatment. At UChicago Medicine, we take a personalized approach to educating each patient about their reproductive health and family-building options. Our team is committed to providing excellent care for everyone.
What should I do if I think I’m infertile?
If you believe that you or your partner may be experiencing infertility, it’s important to seek an evaluation with a reproductive endocrinologist as soon as you can. Typically, the first steps will involve a visit to the doctor, blood tests, a pelvic ultrasound and a semen analysis.
Can infertility be cured?
Infertility isn’t cured but it can be treated. In many cases, factors that lead to infertility can be overcome with treatments like intrauterine insemination (IUI) or in vitro fertilization (IVF). Your doctor can help you understand which family-building options may work best for you.
Amanda Adeleye, MD
Amanda Adeleye, MD
Reproductive endocrinologist Amanda Adeleye, MD, specializes in reproductive medicine and infertility treatment, including fertility preservation, in vitro fertilization (IVF) and intrauterine insemination (IUI).
Learn more about Dr. Adeleye
How to Achieve Composition
Both composition and inheritance are object-oriented programming concepts; they are not tied to any specific programming language such as Java. Inheritance is when you design your types after what they are, while composition is when you design them after what they can do. I'll do my best to sum up both ideas with really simple example code.
The big selling points of inheritance are reusability (derived classes inherit code for reuse) and extensibility (derived classes can override or extend base-class methods). Its biggest problem is class coupling: imagine a base logging class that has gradually gained subclasses as developers needed new variations.
To achieve composition, one class is composed in another class as an instance field — an instance variable is used to refer to the other class. Composition is in the end much more flexible than inheritance, and dependency injection is a valid way to compose a class. So while inheritance is very useful within PHP and OOP, it is notably better to favor composition over inheritance — a commonly heard phrase in the world of OOD.
I've been learning about dealing with PHP's lack of multiple inheritance. I did come up with a solution for multiple inheritance in PHP, but I'll explain why I don't recommend using it; along the way I learned a lot about multiple inheritance vs. composition in PHP. In this video walkthrough, JREAMdesign demonstrates the composition concept with interrelated functions, and also shows a way of logically organizing the resultant files.
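As a concrete illustration of the contrast, here is a minimal sketch of the logging scenario. It's in Python rather than PHP purely for brevity — as noted above, the concepts aren't tied to any one language — and the class names (`Logger`, `TimestampLogger`, `App`) are hypothetical, not from any real library:

```python
# Inheritance: TimestampLogger IS-A Logger; its behaviour is fixed
# at class-definition time, coupling subclass to base class.
class Logger:
    def log(self, message: str) -> str:
        return message

class TimestampLogger(Logger):
    def log(self, message: str) -> str:
        # Override and extend the inherited behaviour.
        return f"[ts] {super().log(message)}"

# Composition: App HAS-A logger held in an instance field. Any object
# with a .log() method can be injected, so behaviour is chosen at
# runtime (dependency injection) rather than baked into a hierarchy.
class App:
    def __init__(self, logger: Logger) -> None:
        self.logger = logger  # composed collaborator, not a parent class

    def run(self) -> str:
        return self.logger.log("started")

print(App(Logger()).run())           # started
print(App(TimestampLogger()).run())  # [ts] started
```

Notice that `App` never needed its own subclass to change how it logs — swapping the injected collaborator was enough, which is the flexibility the paragraph above is describing.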
Learning About Photography – The History, Uses And Production of Photography
Learning More About Photography
Photography is the art and process of creating still-life pictures by recording radiation on a sensitive medium – photographic film, or an electronic imaging sensor that can capture the picture. Light is used rather than other forms of radiation in most cases of photography. When light is reflected from the objects being captured, the objects form a real image on a light-sensitive film or plate inside the camera using a timed exposure. This image can then be developed into a visual image for many purposes.
History of Photography
Photography was invented in the 19th century. It created a whole new way to capture images instead of using paintings and sculptures. The usable process of photography dates back to the 1820s, however, when chemical photography was conceived. The first photoetching was produced in 1822 by Nicephore Niepce. He and Louis Daguerre invented a new way to take pictures more quickly using silver and chalk. The first photo ever taken of a person was taken in 1839 with the new invention. Negative images were created in 1840 by a man named Talbot; his print is the oldest known negative in existence to this day. The blueprint was developed by John Herschel in 1819 through the use of silver halides. His discovery allowed pictures to be permanent, and he made the first glass negative in 1839.
The wet-plate collodion process of photography was used widely between 1852 and the late 1860s, before the dry plate was introduced. It involved a positive image on glass, a positive image on metal, and then the negative printed on salt paper. Advancements in photography continued to expand throughout the 19th century. In 1884, the plates were replaced with the film that is still used today. Color was introduced in 1908 by Gabriel Lippmann, who won the Nobel Prize in Physics for this creation.
Uses of Photography
A lot of people have taken an interest in photography for many reasons since it was introduced. One of the biggest uses was for scientists to record and study movements in space, animals, and humans. Artists also took an interest in photography, both to capture reality and to turn reality into fantasy by distorting the images they take, creating art from these images for display. The military also uses photography for surveillance and data storage. Everyday people use photography to capture special moments in life and to preserve those times in pictures, as well as a source of entertainment.
Production of Photography
Amateur photography is photography done not for profit, but as a hobby. An amateur might have the skills of a professional, but not want to turn their photographs into professional work. Commercial photography is when photographers are paid for their work, which is used for a number of different things, including advertising, fashion, crime-scene photography, still life, museums, food, editorial work, photojournalism, wedding photography and other professional portraits, landscape, paparazzi, and wildlife photography. The images are then published in outlets such as magazines and newspapers, and the photographers are usually paid for their work.
Photography has long been a hobby and a fun activity for people all over the world. There is a deep history to photography, many purposes for it, and a general love of it worldwide. Photography might not be for everyone, but it is a hobby or a job for some. Whether photographers want to use their images for themselves or for profit, photography is something that helps the world go around to this day.
|
Practical | Al-Hijra (Islamic New Year)
Practical | Al-Hijra (Islamic New Year)
The Prophet Muhammad’s mission lasted 23 years. It began when he was 40 in his birth-city of Makka, and ended 23 years later in a city called Yathrib, which was renamed Madinah-tun-Nabi, or simply, City of the Prophet. The first 10 of those years in Makka were tough. This was a major trading city bustling with commercial power players, that is, authoritative Arab families, who blocked, banished and persecuted the few loyal followers Muhammad had gained.
The city of Yathrib, about 200 miles away, was different. Locals had heard news of this “Messenger” person in Makka and felt their city needed a leader like him. Muhammad saw a vision of this city in a dream and understood it to mean a message from God. After some private meetings with a delegation from Yathrib, at a time when the powerful in Makka made a concrete plan to assassinate Muhammad in his sleep, the Prophet migrated (“al-hijra”) to Yathrib with his close friend Abu-Bakr, skilfully evading bounty hunters sent after him. On arriving in Yathrib, he was cheered by the locals, who burst into song. Other followers from Makka later joined him and the new community of believers had found a peaceful new home. People bonded, renamed the city in the Prophet’s honour, and built a mosque together with a small house next to it for Muhammad. There he lived until he died.
The Prophet Muhammad used the mosque to teach his believers (men and women together, it must be noted) about God and living morally righteous lives, as well as to lead the entire city in civic matters. Migration or “hijra” to this new city proved so pivotal to the Prophet’s mission that, some 5 years after his death, it was chosen as the starting point for a new Islamic calendar, one that is roughly 6 centuries newer than our conventional one.
There isn’t really a “happy new year” celebration in Muslim customs in the way that other new years are celebrated. This is probably explained by another historical event, within 50 years of Muhammad’s passing, that occurred early in the new year. The Prophet Muhammad’s grandson Hussain (or Husayn) was killed gruesomely in a battle for political power. The killing has special resonance with the “Shia” branch of Muslims, and more broadly, the sobriety of the event has coloured the time of the new year and taken away its “happy” element. Indeed, because the start of the year is associated with sadness, marriages rarely take place during Muharram, the first month of the Islamic calendar, in many Muslim societies.
|
Mpumalanga Province Freight Data Bank > Roads > Overview
Role of transport in society
It must be understood that transportation is a means to an end. It supports quality of life and the economy if it delivers the services that individuals and institutions need, in such a way that users are able to access the services and that the services are effective and reliable. Access, reliability and effectiveness imply several issues, including sustainability of the services. If the appropriate transport infrastructure has been provided and is functioning correctly, it meets community aspirations and reduces the cost to communities of accessing social services. Similarly, from an economic development point of view, transport lowers the cost of production and consumption. Transportation is thus essential to the operation of a market economy, and transport infrastructure is linked to economic growth.
Importance of road transport infrastructure
The road network in Mpumalanga is the province's major asset, offering the province and the country a relatively significant competitive advantage. However, large sections of the network have been showing signs of severe stress for some time now, and in other cases, complete failure. This has created a relatively unreliable trading system, leaving prospective businesses at the mercy of unsafe and cumbersome trade routes. On average, transport costs are therefore higher than in other similar regions of the world.
It is also pertinent to note that, like national and provincial roads, local circulation networks in municipalities (access roads, pedestrian bridges, paths, drainage systems, etc.) are assets that can be valued, like other types of infrastructure. And, like any commodity, deterioration in condition and riding quality causes the value of the asset to drop. The further the asset is allowed to deteriorate, the more difficult and costly it becomes to restore, and the more the local economy is adversely affected. Clearly, well-appointed infrastructure underpinned by beneficiary-oriented programs improves productivity, promotes employment creation, positively impacts income growth, promotes regional integration and eventually erodes poverty. Given that such investment in infrastructure is largely lumpy and costly, local authorities, particularly the small municipalities at the coalface of development, are often unable to raise the requisite investment funds and lack the capacity to manage such funds were they available. These authorities tend to depend on outside funding to meet their requirements.
The responsibility for the total road network in Mpumalanga is shared among the three spheres of Government, as follows:
• National roads, managed by SANRAL;
• Provincial roads, managed by the MDOPWRT through its Roads Infrastructure Branch; and
• Local municipal roads and streets, managed by the different district and local municipalities.
|
What is the first step in the weaving cycle?
The first step in the weaving process is the warping of threads. Warping is the process of winding and arranging threads from the cone into a desired sequence and length, and with equal tension, along a warp beam prior to weaving.
What is the first step in the weaving process?
Start with the first yarn of the warp. Weave the shed stick under the first yarn and over the next yarn. Continue this pattern until you get to the end of the warp. Keep the shed stick in between the warp yarns that you just wove through.
What is weaving cycle?
One complete cycle of shedding, filling insertion, beat-up, and warp let-off.
Which step in the weaving process comes after shedding?
The process of weaving can be simplified to a series of four steps: the shed is raised, the shuttle is passed through, the shed is closed, and the weft thread is beaten into place. These steps are then repeated, with a different set of threads being raised so as to interlace the warp and weft.
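The four-step cycle described above produces the alternating interlacement of a plain weave. As a rough sketch (the function name and the character notation are purely illustrative, not standard weaving notation), the over/under pattern can be generated like this:

```python
def plain_weave(n_warp: int, n_picks: int) -> list[str]:
    """Return a character grid of a plain weave: '|' where the warp
    thread passes over the weft, '-' where the weft passes over."""
    rows = []
    for pick in range(n_picks):        # each pick is one weft insertion
        row = ""
        for warp in range(n_warp):
            # the shed alternates on every pick and every warp thread
            row += "|" if (warp + pick) % 2 == 0 else "-"
        rows.append(row)
    return rows

for row in plain_weave(8, 4):
    print(row)
```

Each printed row represents one closed, beaten-up pick; the shifted pattern between consecutive rows corresponds to raising the other set of warp threads before the next insertion.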
What are the preparations required before weaving?
The processing steps are as follows: winding and clearing, weft winding, warping, sizing and other applications, entering and knotting, loom operation, and finally finishing, inspection and measuring. These requirements apply to varying extents to the warp and weft yarns, respectively.
What is the first stage of weaving saree?
A. Preparing the yarn is the first step for any kind of weaving industry. Properly prepared yarn is easier to work with as it does not snap or break while being woven.
What is the beat-up mechanism?
The third primary weaving motion is performed by the beat-up mechanism (sley mechanism). Its main function is the reciprocating motion of the reed, which pushes each newly inserted weft thread against the fabric, controlling pick density and fabric width precisely after the weft carrier has passed across the warp sheet.
How long does it take to weave fabric?
Very (very) generally speaking, I would suspect that a five-foot by 20-inch piece of fabric would take between three and five hours, depending on the experience of the weaver, the tools available to them (including the loom, shuttles, and measuring accessories), and the thread.
What is open shed in weaving?
Shedding. “Shedding” is the process of creating an open path across and through the warp yarns by raising some warp threads by their harnesses and leaving others down. While the shed is open, the filling yarn is inserted. The shed is then changed as dictated by the pattern.
|
• The skin participates directly in thermal, electrolyte, hormonal, metabolic, and immune regulation.
• Percutaneous absorption depends on the xenobiotic's hydrophobicity, which affects its ability to partition into epidermal lipid, and rate of diffusion through this barrier.
• The cells of the epidermis and pilosebaceous units express biotransformation enzymes.
• Irritant dermatitis is a nonimmune-related response caused by the direct action of an agent on the skin.
• Allergic contact dermatitis represents a delayed (type IV) hypersensitivity reaction, whereby minute quantities of material elicit overt reactions.
The skin protects the body against external insults in order to maintain internal homeostasis. It participates directly in thermal, electrolyte, hormonal, metabolic, and immune regulation. Rather than merely repelling noxious physical agents, the skin may react to them with various defensive mechanisms that serve to prevent internal or widespread cutaneous damage. If an insult is severe or intense enough to overwhelm the protective function of the skin, acute or chronic injury becomes readily manifest. The specific presentation depends on a variety of intrinsic and extrinsic factors including body site, duration of exposure, and other environmental conditions (Table 19–1).
Table 19–1 Factors influencing cutaneous responses.
Skin Histology
The skin consists of two major components: the outer epidermis and the underlying dermis, which are separated by a basement membrane (Figure 19–1). The junction ordinarily is not flat but has an undulating appearance (rete ridges). In addition, epidermal appendages (hair follicles, sebaceous glands, and eccrine glands) span the epidermis and are embedded in the dermis. In thickness, the dermis makes up approximately 90 percent of the skin and has largely a supportive function. Separating the dermis from underlying tissues is a layer of adipocytes, whose accumulation of fat has a cushioning action. The blood supply to the epidermis originates in the capillaries located in the rete ridges at the dermal–epidermal junction. Capillaries also supply the bulbs of the hair follicles and the secretory cells of the eccrine (sweat) glands. The ducts from these glands carry a dilute salt solution to the surface of the skin, where its evaporation provides cooling.
Figure 19–1
|
What does Carillion’s failure tell us about procurement skills?
ADR’s Development Needs Analysis skills assessment shows that the biggest skills gap in procurement professionals globally is supplier cost analysis.
This article explains how this skills insight links to recent events at Carillion.
What is required to ensure the suppliers we select are stable?
There has been much media commentary this week about the failure of Carillion, the second-largest construction firm in the UK. Much of it has focused on the government purchasing team who selected Carillion. Some have criticized the buyers’ lack of rigor in the supplier selection process, which allowed Carillion to be chosen as a vendor despite not having the financial strength to support the investment.
However, most major organizations have the appropriate processes and procedures to ensure prospective suppliers are evaluated in a manner that is appropriate to the scope and scale of the project that they are being considered for. Such processes include:
1. Evaluation against customer technical requirements
The extent to which the suppliers’ proposals meet the specification / statement of work.
2. Evaluation against customer business requirements
A review of the extent to which the suppliers’ proposals meet the business requirements of the buyer in terms of availability, service, quality, cost and legal / corporate social responsibility.
3. Evaluation against customer budget requirements
A review of the suppliers’ proposed prices and actual input costs.
4. Evaluation against customer financial sustainability requirements
A review of the suppliers’ financial performance is done for risk management purposes (does the supplier have adequate cash, liquidity and solvency to initiate and sustain the work?) It is also done to seek opportunities for improvement (does the supplier have profit in line with the industry norm, does it manage its operation efficiently, according to the financial data?)
5. Evaluation against customer business values
Many organizations also attempt to assess whether their suppliers meet their business values that could include areas such as social impact, ethical behavior or problem-solving practices.
Given these well-established processes that help ensure that the suppliers we select meet our needs now and in the longer term, why does supplier evaluation and selection often go so wrong? How can it result in a supplier being selected that subsequently cannot perform according to your needs and the contractual agreement?
With over 30,000 online evaluations ADR’s online skills assessment tool has shown that while tender procedures are often applied well, what is often lacking is the right cost analysis skills to support supplier evaluation and selection.
What cost analysis skills are required to support reliable supplier selection?
1. Collecting the right cost analysis data
Every supplier proposal should include a breakdown of the actual input costs that the supplier will invest in order to implement and manage their solution. This includes:
• Direct and Indirect Labor: Wages and social costs of the number of people with the varying qualifications and skills required to perform the work. Indirect labor refers to the people performing services that are required for the project to be effected, but are not directly working on it.
• Materials: Actual prices paid for the quantities of bought-in materials, consumables, tools, supporting equipment.
• Overheads: The premises required to perform the work; the people, equipment and items that support the work; software and other enablers such as licenses and maintenance; and finally general business costs like rent, tax and energy. Overhead costs are allocated proportionately across all the supplier’s customers.
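As an illustration of how these three categories combine into a total input cost (every figure and rate below is hypothetical, not drawn from any real proposal), a minimal calculation might look like this:

```python
# Illustrative supplier cost breakdown -- all figures are hypothetical.
direct_labor   = 40 * 160 * 25.0      # 40 workers * 160 h/month * 25/h wage
indirect_labor = direct_labor * 0.15  # supporting roles, e.g. supervision, QA
materials      = 120_000.0            # bought-in materials, consumables, tools

# Overheads (premises, software, rent, tax, energy) allocated as a
# proportional uplift across the supplier's customers.
overhead_rate = 0.20

subtotal   = direct_labor + indirect_labor + materials
total_cost = subtotal * (1 + overhead_rate)
print(f"Total input cost: {total_cost:,.2f}")
```

The point of asking for this breakdown is that each line can then be challenged and benchmarked individually, rather than negotiating only a single headline price.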
2. Analyzing cost analysis data
Analysis goes wrong when the correct information is:
• Uncollected – either because suppliers haven’t been asked, or asked for the right thing. Or suppliers refuse to provide the data, and this is tolerated.
• Untested – the information is given to buyers but there is no sensitivity analysis to determine what would happen if any of the conditions changed from the time of contracting to later in the agreement life. For example, market environment, currency, volume of work, political change could all impact the input costs of the supplier.
• Unchallenged – the information is given to buyers but it is not questioned. In particular, there is a lack of interrogation of the source of funding for supplier investment, inadequate review of the efficacy of the suppliers’ own procurement approach, and a failure to question the assumptions behind the costs.
• Unsustainable – the supplier has put forward a solution that will win them work but it is not possible to maintain the agreement without additional funding. They may attempt to get this money from buyers in the form of a price increase later on in the agreement life.
• Invalidated – the information is given to buyers, who fail to check its veracity. The data can be checked a variety of ways including using third party sources, site visits, and benchmarking.
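One way to picture the “untested” failure above is a simple sensitivity check: recompute the supplier’s input cost under changed market conditions and compare it with the baseline. The function, rates and figures below are invented purely for illustration:

```python
def total_cost(labor: float, materials: float, fx_rate: float = 1.0,
               volume_factor: float = 1.0, overhead_rate: float = 0.2) -> float:
    """Recompute a supplier's input cost under changed conditions:
    fx_rate scales imported materials, volume_factor scales labor."""
    return (labor * volume_factor + materials * fx_rate) * (1 + overhead_rate)

base = total_cost(200_000, 100_000)
# What if the currency weakens by 10% and contracted volumes rise by 20%?
stressed = total_cost(200_000, 100_000, fx_rate=1.10, volume_factor=1.20)
print(f"base={base:,.0f}  stressed={stressed:,.0f}  "
      f"delta={(stressed / base - 1):.1%}")
```

Running this kind of scenario at contracting time shows whether the supplier’s quoted price can absorb plausible changes, or whether a mid-contract price increase is effectively built in.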
3. Creating a business continuity plan
Business continuity planning involves creating options for the event that the supplier does not fulfil their contract, whether through interrupted or ceased operation, unavailable or late goods or services, or any other form of performance failure. To address this, the buyer should prepare the following:
a. Account cost plan
Buyers should expect that all prospective supplier bids include an account plan that details the financial investment at all stages of the proposed contract. This may include recruitment and training costs, ongoing maintenance costs, costs to replace equipment that is end-of-life, and plans to mitigate adverse cost conditions in the market. The implications to the buyer should be clear, for example the associated support and resourcing costs that the buyer will have to pay for as part of their internal business costs. This demonstrates that suppliers are considering the whole life cost of a proposal, not just the element that relates to their part of the supply chain.
b. Risk plan
Buyers should create business continuity plans with the help of their prospective suppliers. This is a plan that details what will happen in the event of supplier failure for any reason, to ensure that the operation is maintained. Typically this is included in a sourcing strategy and could include options such as additional qualified sources or temporary insourcing.
c. Forecast
Buyers should also concern themselves with the customer / service user side of the project, if the purchase is being used for external stakeholders rather than maintenance, repair and operations (MRO) items for internal use. Suppliers need good visibility of upcoming requirements, volumes and changing needs. This helps them to plan their business, resourcing and procurement plan. Buyers who are able to give suppliers stable forecast information help their suppliers to give cost and financial data that is reliable, with suitable caveats included.
d. Supplier Performance management plan
Buyers and suppliers work together throughout the bid and contracting stage to jointly develop performance measures that will do several things:
1. Provide an early warning system, to ensure rapid detection and mitigation of risk elements. Such a system would highlight incidences such as suppliers extending their payment terms to their own supply chain (which is often an indication of cash flow difficulties).
2. Give an indicator of whether current performance is on track to deliver the future performance required (hence, key performance indicators, or KPIs).
3. Ensure the buyer is getting earned value. In other words, the supplier is doing what they promised to do, at the cost levels that were agreed.
4. Be adaptable in the event of variation in requirements, market conditions or input costs.
5. Motivate both buyers and suppliers to feel committed to achieving continued good performance.
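The early-warning idea in point 1 can be sketched in code. The function below is a hypothetical illustration (the threshold and the observation data are invented), flagging a supplier whose payment terms to its own supply chain have lengthened noticeably since contract start:

```python
def payment_terms_alert(history_days: list[int], threshold: int = 10) -> bool:
    """Flag a supplier whose average days-to-pay to its own supply chain
    has lengthened by more than `threshold` days since contract start,
    which is often an early sign of cash-flow difficulty."""
    if len(history_days) < 2:
        return False  # not enough observations to judge a trend
    return history_days[-1] - history_days[0] > threshold

# Quarterly observations of a supplier's average days-to-pay:
assert payment_terms_alert([30, 32, 45, 60]) is True   # terms stretching
assert payment_terms_alert([30, 28, 31, 33]) is False  # stable terms
```

In practice such a check would be one KPI among several, fed by third-party credit data or supplier disclosures, and reviewed jointly by buyer and supplier.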
Effective supplier selection and management requires an expert understanding of price and cost analysis combined with a robust supplier evaluation process. Organizations that train and develop their procurement people and business stakeholders in understanding price and cost will be more effective in supplier management, and are more likely to deliver sustainable, reliable supply agreements.
|
trough belt conveyor
A trough belt conveyor is used for various purposes such as short-distance moving, heavy lifting, and assembly. The belt is usually made from rubber or PVC and undergoes constant elongation as the conveyor runs. It can convey many tons of material and handle gradual inclines even over long runs of 150 or so. This type of conveyor is used to transport different types of loads, such as bulk grains, raw materials, finished products, vehicles, and containers.
How to Choose a Trough Belt Conveyor for Different Purposes
There are several types of belt conveyors, including flatbed, wire mesh, wire belt, boom, and off-track systems. Flatbed and wire mesh conveyors are commonly used in assembly and manufacturing processes. Boom and off-track systems are used for short-distance moving and transportation of large objects. The quality of the product and service offered depends on the type of conveyor chosen.
For instance, a flatbed conveyor has a fixed surface, while an off-track conveyor has a hanging surface, usually constructed of poles, which rest against each other and move with the concave curve of the trough bed. In addition, the type and size of the product being transported affects the type of conveyor needed. In case of bulk materials, the conveyor should be capable of handling the weight of the load being moved. If the material being transported is light, then a wire mesh conveyor is the best option while if heavy materials are being moved, then a boom or a trough belt conveyor is more suitable.
|
What is the importance of changing management in your organization?
Change management is one of the most important IT infrastructure disciplines. Wikipedia defines the objective of change management in this context as ensuring “that standardized methods and procedures are used for efficient and prompt handling of all changes to controlled IT infrastructure, in order to minimize the number and impact of any related incidents.”
Change management has always been an integral part of business management, but it has gained momentum with the advent of information technology. IT infrastructure management is a broad term covering all the elements needed to keep business processes running smoothly when they are threatened by technology problems or other incidents. The attitude that “change is the rule” (as several experts have put it) forces entrepreneurs to change their own attitude towards change management. Good change management techniques help businesses adapt and adopt new ways of doing business. Change management is not merely the implementation of new techniques to address changes in the organization; rather, it is an IT infrastructure discipline in which changes are managed with a systematic, reliable, strict and disciplined approach. Changes are introduced into the system when the integrity of the business organization is challenged by incidents, customer requests or technology updates.
The change management process unfolds through the following steps:
1. Identify the need for change in the organization.
2. Design the specific changes needed to meet organizational needs.
3. Help others understand why the changes are needed for the organization to function.
4. Adjust organizational elements such as processes, technology, and performance metrics to incorporate the changes.
5. Manage production through the change to ensure that customers and stakeholders remain committed for the long term.
According to Wikipedia, change management involves managing processes related to hardware, communication equipment and software, system software, and all documentation and procedures associated with running, supporting and maintaining live systems.
Project management is another aspect of change management, and the two disciplines share several touch points. Project management is about handling change with discipline. It is defined as the discipline of planning, organizing and managing resources to bring a project to a successful conclusion. The goal of every project management effort is a successful outcome despite constraints such as scope, time, quality and budget. Every project is developed around some permutation and combination of methodologies, and changes are made to the existing methodology to avoid potential failure. Identifying, managing and controlling changes is essential to the smooth functioning of a project. As several experts put it, “the project is the change and the change is the project,” so it becomes difficult to draw a clear line between project management and change management.
This div height required for enabling the sticky sidebar
|
Trading and the Basics of the Business Cycle
Business cycles, or economic fluctuations, are the upswings and downswings in aggregate economic activity. A peak is normally identified after it happens, because that is the time when a country’s expansion is at its highest level. Another example of how modeling assumptions can influence the results can be found in Ariizumi and Schirle (2012), who estimate the relationship between business cycle conditions and age-specific mortality rates.
For instance, an investor could choose to invest in commodity and technology stocks at the end of the business cycle, because they may be cheap, and then sell them during the early part of an expansion. Third, for stabilization policy to be effective given lags, policymakers must have accurate economic forecasts.
He concludes that state-level unemployment rates are negatively correlated with total alcohol consumption. It is worth repeating that the Federal Reserve tries to moderate boom times in the market by raising and lowering the discount rate. The only well-established finding is that mental health deteriorates during economic slowdowns.
The partisan business cycle theory suggests that cycles result from the successive elections of administrations with different policy regimes. If expansionary fiscal policy leads to higher interest rates, it can attract foreign capital seeking a higher rate of return.
Equally, the influence of mild recessions on mortality is likely to differ from the impact of a severe financial crisis (Ruhm, 2016). Insight into economic cycles can be very useful for companies and investors. Because prices adjust gradually, spending can temporarily grow faster or slower than the potential growth rate of the supply side of the economy.
|
Quick Answer: Is Nicotine Good For Pain?
Can nicotine cause body aches?
Smokers Have More Aches and Pains
Jan. 8, 2003 — As if lung cancer, heart disease, and emphysema weren’t enough, researchers now say smoking may be to blame for some common aches and pains, too. A new study shows smokers are more likely to complain about pain in their back, neck, arms, and legs than non-smokers.
What happens to your brain when you quit nicotine?
Another study found that quitting tobacco can create positive structural changes to the brain’s cortex — though it can be a long process. Mayo Clinic reports that once you stop entirely, the number of nicotine receptors in your brain will return to normal, and cravings should subside.
Does nicotine cause muscle pain?
You’re going to experience more muscle pain: when the body can’t repair itself as readily, muscle inflammation increases, and you’re more likely to be fatigued and sore. The study cited persistent shoulder pain and tendonitis as symptoms of smoking, which is a risk factor for rotator cuff tears.
Is nicotine an anti inflammatory?
Nicotine is being considered as an anti-inflammatory agent for the treatment of some diseases such as AD, PD, and Crohn’s disease. The effect of nicotine on immune cells, however, is incompletely characterized and controversial.
Does nicotine make your legs hurt?
We know that nicotine causes constriction in blood vessel walls, which in turn creates an environment for plaque buildup. These blockages in the vessels around the heart lead to a heart attack. In the legs, these blockages start by causing pain.
How much does nicotine raise your blood pressure?
Caffeine alone induced a significant increase in blood pressure associated with a decrease in heart rate, whereas nicotine alone increased both blood pressure and heart rate. The combination of caffeine and nicotine increased systolic and diastolic blood pressure by 10.8 +/- 2.0 and 12.4 +/- 1.9 mm Hg, respectively.
Can you get body aches from nicotine withdrawal?
Klein explains that smokers often fail multiple attempts to quit, in part, because of the unpleasant symptoms that accompany nicotine withdrawal, including depression, fatigue, muscle aches and appetite changes.
|
Designing Sustainable Supply Chains
If you own a MacBook (or even if you don’t), perhaps you recall Apple’s campaign claiming it was the “world’s greenest notebook.” Beyond the exciting tagline, there was no real way for consumers to know whether this was an accurate statement. Dr. Leo Bonanni noticed the disconnect – the huge gap between sustainability claims and consumer information was the beginning of Sourcemap.
Sourcemap, of which Bonanni is founder and CEO, visualizes end-to-end supply chains with input from all along the chain, creating a social network of suppliers. In his “Transforming Architectural Practice” session on 03.16.15, entitled “Designing Sustainable Supply Chains,” Bonanni traced the evolution of his own practice, and the growing transparency of corporate sustainability claims. He also presented examples that demonstrate why understanding supply chains is not only a competitive advantage for businesses looking to be more transparent with their customers, but also crucial to improving overall function and productivity.
Bonanni had us consider an item that many carry all day, but likely don’t know much about – a smartphone. Through his own supply chain mapping and analysis, he learned that a phone contains every element on the periodic table (save the radioactive ones), travels to more than 50 countries, and passes through hundreds of thousands of hands before reaching the consumer. Through the growing networking of things, we can fill in the missing links to the raw materials that make up products we use every day. The advent of communication technologies and social media all over the world, including in formerly hard-to-reach regions, allows participants in the supply chain to communicate more directly and efficiently than ever before and opens up more channels for information sharing.
The opacity of supply chains is generally not due to any competitive advantage, but rather to a lack of knowledge at all levels about what goes into products and business processes; in fact, Bonanni has found that there is a significant competitive advantage to sharing supply chain information. It can help inform potential savings, or prevent or reduce the effects of catastrophic events – like at a GM plant in Shreveport, LA, which stopped production in 2011 when a crucial part from Japan was unavailable after the Fukushima disaster, or the response to the fuel shortage across the Northeast after Hurricane Sandy. With more information, we can anticipate and prepare for potential problems before they happen, rather than simply react once they do.
Lessons in understanding supply chains can be applied to all sorts of fields, including architecture and design. While many think the demand for sustainability and transparency in materials and processes is consumer-driven, Bonanni maintains that the bigger driver is recruiting and retaining top talent. Ultimately, pursuing this information is about understanding the impact of what you buy and use, and holding yourself and your suppliers accountable for sustainable practices. Over the last few years, more and more companies have uncovered some of the ways in which they do business, from Stonyfield to Apple (now offering much more information about their green claims). Overall, supply chains have actually gotten shorter and simpler, and today’s consumer expects to know more about what goes into the products they use – including buildings. With greater knowledge of, and accountability for, the sourcing and responsible use of materials, architects can strengthen their ties to clients, talent, and suppliers, and may discover new opportunities for operating more efficiently.
Event: Transforming Architectural Practice 2015: Designing Sustainable Supply Chains
Location: Center for Architecture, 03.16.15
Speaker: Leonardo Bonanni, Ph.D., Founder and CEO, Sourcemap
Organizer: AIANY Professional Practice Committee
|
Frequent question: Is Scrum a project management methodology?
Is scrum a methodology?
Does scrum fall under project management?
What is scrum in term of project management?
Scrum is one of the agile methodologies designed to guide teams in the iterative and incremental delivery of a product. Often referred to as “an agile project management framework,” its focus is on the use of an empirical process that allows teams to respond rapidly, efficiently, and effectively to change.
What are the 5 values of scrum?
Scrum Values. A team’s success with Scrum depends on five values: commitment, courage, focus, openness and respect.
What are the 4 core principles of Agile methodology?
Four values of Agile
What are the 3 key elements of agile methodology?
Is Scrum master better than PMP?
What are the 3 roles of a scrum team?
What are the three scrum roles? Scrum has three roles: product owner, scrum master and the development team members.
What are the steps in Scrum?
The Scrum model has five steps, also called phases.
1. Product Backlog Creation. …
2. Sprint planning and creating backlog. …
3. Working on sprint. …
4. Testing and Product Demonstration. …
5. Retrospective and the next sprint planning.
Which is better Scrum or agile?
What are Scrum methodologies?
Scrum is an agile development methodology used in software development, based on iterative and incremental processes. … The primary objective of Scrum is to satisfy the customer’s needs through an environment of transparency in communication, collective responsibility and continuous progress.
|
Password Hashing Techniques in PHP
Josh Sherman
If you’re not familiar with salts and such, let me go into a bit more detail for you. Please note that, for the sake of the examples, I’ll be using the SHA-1 function. I don’t recommend these as copy-and-paste examples; they’re here to demonstrate the logic. More information about why you shouldn’t use sha1() for securing passwords is available here.
Salted Hashes
A salt is a string (usually on the short side) that is appended (or prepended) to the string you are about to hash. The salt is stored either as part of the hash itself or in a separate column on the users table in your database. The salt should be randomly generated for each user, and regenerated whenever the password changes.
$salt = 'salt';
$password = 'abc123';
$hash = sha1($salt . $password);
Peppered Hashes
Peppered hashes are similar to salted hashes as they contain a string that is appended or prepended to the plaintext password before hashing. Unlike a salt, the pepper is stored at the application level (in your code) and not the database or in the hash. This creates another touch point that a potential attacker would have to compromise to be able to accurately crack the password hashes.
$pepper = 'cayenne';
$password = 'abc123';
$hash = sha1($pepper . $password);
Generally speaking, a pepper is a static value. You could, however, select among multiple pepper values based on the user’s unique ID in the database, or on some other value like the first character of their email address.
Key Stretching
Key stretching adds complexity by hashing the password multiple times, usually 1,000 times or more. The simplest implementation applies the same function repeatedly in a loop:
$password = 'abc123';
for ($i = 0; $i < 1000; $i++) {
    $password = sha1($password);
}
By doing this you’re increasing the time it takes to generate the password hash and thus slowing down any brute force attacks against the hash. You can expand upon this by adding in the salt and/or pepper to each iteration. I’ve seen this referred to as “spicy hashing” because of all of the pepper that’s being added ;)
$password = 'abc123';
for ($i = 0; $i < 1000; $i++) {
    $password = sha1($pepper . $password);
}
How you hash your stored passwords is just as important as which hashing function you choose. Salting, peppering or key stretching on its own can add enough complexity to keep your password hashes from being susceptible to rainbow table attacks. Combining all of the techniques (and making sure to use a unique salt per user) will make your hashes nearly impossible to crack if you’re using a stronger hashing function like Blowfish or SHA-512.
Keep in mind that these techniques may stand the test of time but the hashing functions themselves can be compromised in the future. The faster computers become, the easier it will become to crack the algorithms that we consider secure today.
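In practice, the three techniques above are usually combined behind a single primitive rather than hand-rolled. As an illustration only (in Python rather than PHP, and not taken from this article), the standard library’s `hashlib.pbkdf2_hmac` bundles a per-user random salt with configurable key stretching; modern PHP offers the same convenience via `password_hash()`:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # key stretching factor; raise it as hardware gets faster

def hash_password(password):
    """Return (salt, digest); the random salt is unique per call/user."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected):
    """Recompute the digest with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, expected)
```

Because every user gets a fresh salt, hashing the same password twice yields different digests, which is exactly what defeats precomputed rainbow tables.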
|
Deep learning facts for kids
Kids Encyclopedia Facts
A multi-layer neural network.
Deep learning (also called deep structured learning or hierarchical learning) is a kind of machine learning, mostly used with certain kinds of neural networks. As with other kinds of machine learning, learning sessions can be unsupervised, semi-supervised, or supervised. In many cases, structures are organised so that there is at least one intermediate (hidden) layer between the input layer and the output layer.
Certain tasks, such as recognizing and understanding speech, images or handwriting, are easy for humans. For a computer, however, these tasks are very difficult. In a multi-layer neural network (one with more than two layers), the information processed becomes more abstract with each added layer.
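To make the layering concrete, here is a minimal sketch (illustrative only, with made-up weights) of a forward pass through a network with one hidden layer; each layer recombines the previous layer's outputs, which is the mechanism behind the growing abstraction described above:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum plus bias, squashed by a sigmoid."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# A toy 2-3-1 network: 2 inputs, 3 hidden units, 1 output.
x = [0.5, -1.0]
hidden = layer(x, [[0.1, 0.8], [-0.3, 0.2], [0.7, -0.5]], [0.0, 0.1, -0.1])
output = layer(hidden, [[0.4, -0.6, 0.9]], [0.0])
```

Real deep learning systems train these weights from data instead of fixing them by hand, but the flow of information through stacked layers is the same.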
Deep learning models are inspired by information processing and communication patterns in biological nervous systems; however, they differ from the structural and functional properties of biological brains (especially the human brain) in many ways, which makes them inconsistent with neuroscientific evidence.
Deep learning Facts for Kids. Kiddle Encyclopedia.
|
Digital health passports, or vaccine passports, were included in the recent Report of the Global Travel Taskforce. For some, they are considered to be one of the main routes back to ‘normal’ international travel.
A recent report from The Royal Society highlighted 12 criteria which need to be met in order to provide an effective vaccine passport. Inevitably, this included legal, ethical and scientific considerations. From a legal perspective, the stand-out concern is around privacy and data protection. How can organisations ensure compliance with data protection laws such as the GDPR and the Data Protection Act 2018? This is particularly important given that the data being collected relates to a person’s health and medical status – considered ‘special category’ data and held to a higher standard of compliance. The transfer of this data between multiple organisations and countries, some of which have lower standards of personal data protection than the UK and EU, is also a cause for concern. The security of the personal data will need to be prioritised, as well as ensuring purpose limitation: the data should not be used for anything other than the limited purpose for which it has been collected, and it should not be held for any longer than needed.
Challenges have also been raised around potential discrimination against those who have not been, or cannot be, vaccinated. This may be due to age (since the vaccine roll-out is mostly being done by age group), or to religious or spiritual beliefs which may not allow for vaccination. It could also include those deemed unsuitable for vaccination due to a medical condition, pregnancy, or disability. All of these are potentially protected characteristics, and organisations should be careful not to unduly discriminate against such persons. The inclusion of negative test results and antibody tests may help offset this.
The Report of the Global Travel Taskforce suggests travel certification as a possible strategy for re-opening international travel. It also recommends close coordination with industry to ensure third party apps can be integrated with a national digital certification system that is interoperable, safe and secure.
|
tree goats in morocco
Why are goats in trees in Morocco?
Goats in Morocco do climb trees naturally, and help to create argan oil in the process — they eat the trees’ fruit, and then release nuts through their waste.
Where are the goats in trees in Morocco?
Morocco’s Argania trees are infested with nut-hungry goats. Grown almost exclusively in the Sous Valley in southwestern Morocco, the Argania is a rare and protected species after years of over-farming and clear-cutting.
Where can you find goats that climb trees?
The gnarled, thorny plants grown exclusively in southwestern Morocco and western Algeria may not be pretty, but they attract plenty of fans. Herds of hungry goats pose in their crooked branches, sometimes more than one dozen in a single tree. There’s an explanation for the strange phenomenon.
Can goats live in trees?
A lesser-known talent of some goats is the ability to climb trees, even fairly tall ones, and stand on small branches that look like they can barely hold their weight. This is particularly common in Morocco, where food can be scarce and argan trees produce a fruit that is particularly appealing to goats.
Can goats be out in the rain?
Goats are usually able to stay out in the rain without any shelter and not develop issues. In fact, they can even be left in the rain overnight and will likely be fine. Normally, goats are pretty good at coming back in when the rain gets too cold or the weather gets too severe.
What’s the difference between Moroccan oil and argan oil?
Native only to Morocco, Argan Oil is pressed from the kernels of the argan tree. This is the purest form of Argan Oil and is used natively for various purposes such as: skin treatment (acne and moisturization), hair care and even cooking. Moroccan Oil: Commercial Moroccan Oil is a modified version of Argan Oil.
Why are goats so good at climbing?
Why do goats like to be up high?
One reason is that goats are prey animals and it’s wired into them to get to the highest point to watch for predators. If you watch a herd of goats browsing, there will always be one that is on a higher point than all the others. This is the watcher who will alert the herd to a predator nearby.
Do goats have good balance?
Goats have remarkable balance: their hooves are split, which allows gripping and spreading into even the smallest of spaces. You rarely find goats toppling over… unless they’re fainting goats.
Will goats eat fruit trees?
Goats will damage and eventually kill trees by browsing on the leaves and shoots, stripping the bark, and rubbing their horns on the trees. Your goats cause worse damage when they don’t have access to any other plants to eat, but they enjoy tender bark and leaves even when grass and shrubs are available.
How high can goats climb?
13,000 feet
Can a goat eat anything?
Goats get their reputation for eating almost anything because they like to walk around and sample a wide variety of foods, as opposed to grazing a pasture like cows or sheep. Goats will eat hay, grasses, weeds, grain, and sometimes even tree bark!
Tom Smith
|
Melomys: The Little Brown Rat Has Left The Coal Mine*
Melomys Is the First Mammal Extinction Directly Attributed to Anthropogenic Global Warming.
One of the characteristics of a certain type of climate denier is a tendency to spout ignorant quips in the face of climate facts. For example, “Hey, I wouldn’t mind a little warmer weather” or “who cares what goes on at the North Pole, I don’t live there.”
So we may expect a similar reaction from this unique population segment when they hear of the ultimate extinction of a little brown Australian rat known as the Bramble Cay Melomys.
“Hey one less rat in the world. I hate rats.”
Australia’s “little brown rat” is not a particularly aesthetic creature, but it does have the distinction of being the first mammal to permanently exit the planet due to anthropogenic climate change. It was native to the Great Barrier Reef and lived on a small island in the Torres Straits.
But not a single melomys has been spotted since 2009 and the species was finally declared officially extinct this week. The cause was the rapid sea level rise driven by warming oceans, accompanied by devastating storm surges that wiped out habitat and food supply. Since these are burrowing mammals, many likely drowned in their homes.
While humans tend to focus on designer species such as whales, elephants and big cats, there is a very long list of other species that have recently disappeared, or are under extreme threat of extinction. As most people are aware, the loss of any species in the food chain changes everything.
Over the ages, species have always gone extinct and been replaced. But as more and more humans swarm the planet, things have changed. The current extinction rate – sometimes known as the Sixth Great Extinction – is approximately 100 extinctions per million species per year. This rate is 1,000 times higher than historic background rates, and it is expected to climb as global warming accelerates.
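Those rates can be hard to picture. A back-of-the-envelope calculation (the total species count is an assumption on my part, not a figure from this article) shows what they imply:

```python
current_rate = 100                       # extinctions per million species per year (E/MSY)
background_rate = current_rate / 1_000   # the historic rate implied by "1,000 times higher"
species_estimate = 8_000_000             # assumed rough number of species on Earth

extinctions_per_year = current_rate * species_estimate / 1_000_000
```

On those assumptions, the background rate works out to roughly one extinction per ten million species per year, against hundreds of extinctions per year today.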
In most geological ages of the past, species have adjusted to changing climate conditions by moving. Whether flora or fauna, species that live in alpine environments move up. Species that live in warming regions move toward the poles if they can. But in the Anthropocene, the rate of change is hundreds of times faster than in eras past. Species have barely started packing before they are gone.
Humans, the most mobile of species, have always adjusted through migration, voluntary or involuntary. The current global refugee crisis is due in large part to collapsing ecosystems on every continent.
Ultimately, Australia’s little brown rat met an untimely end because it was unable to migrate. It ran out of island.
Humans will also continue to relocate on a mass scale…unless they run out of planet.
Although the melomys is the first mammal to be declared extinct specifically due to global warming, it is only the latest in a growing list of mammals (and birds and plants) declared extinct in the past few decades. Generally speaking, the causes of extinction are complex and virtually always involve some combination of human activity and climate change.
• Vaquita Porpoise 2017
• Javan Rhino 2011
• Alaotra Grebe 2010
• Baiji Dolphin 2006
• West African Rhino 2001
• Dusky Seaside Sparrow 1989
• Golden Toad 1989
Departing soon
• Yangtze Giant Softshell Turtle (Down to 3)
• Most amphibians are threatened with extinction
You may also be aware on the margins of your consciousness that the global insect population is in steep decline. If your reaction is “Good, I hate insects,” well, you are probably having the wrong reaction.
* Understood to be a mixed metaphor.
The Third Pole and Global Glacier Collapse
Rapid Glacier Melt In the Himalayan Plateau Threatens Water Supply for Billions | Abridged Ecosystem Transformation
With perhaps hundreds of feet of sea level rise contained in the rapidly melting land ice of Greenland and Antarctica, the situation at the poles is enough of a planetary climate crisis for anyone. But the catastrophe at the Third Pole may be more destructive in its short-term effects.
The “Third Pole” is a term coined to describe the Himalaya-Hindu Kush mountain range and the Tibetan Plateau. These extensive ice fields hold the planet’s largest reserve of fresh water outside of Greenland and Antarctica. Up to 1.3 billion people depend on the ten river systems that originate here for drinking water, irrigation and power in eight countries in South Asia. Among the rivers with sources in the glaciers are the Ganges, Indus, Yellow and Yangtze. If the people downstream from the sources are factored in, the number climbs to 2 billion humans (about 25% of the global population).
Warming temperatures are liquefying glaciers across the vast Himalayan region. By one count, 509 glaciers have disappeared over the past 50 years.
Before global warming kicked in, winter snowfall replenished the glaciers. Now, as temperatures climb, snow falls when temperatures are fairly high, so much of the water flows directly into rivers. The decay of the glaciers is visible from year to year. What is happening across the Third Pole is the most prominent example of this slow-motion global climate phenomenon, as alpine glaciers in South America, Europe and Alaska also recede.
As quoted in The Big Thaw, a February 2019 National Geographic article by Daniel Glick, Daniel Fagre of the U.S. Geological Survey Global Change Research Program said, “Things that normally happen in geologic time are happening during the span of a human lifetime. It’s like watching the Statue of Liberty melt.” As the article goes on to point out, the iconic snows of Kilimanjaro have receded more than 80% since 1912. In the Andes, an artist is painting the bare rocks white in honor of the glaciers that used to provide water for the villages.
In places where glaciers have disappeared completely, the impact on the water supply is devastating. Where glaciers still exist, however, they are melting more rapidly; the local effect may actually be more fresh water collecting in new lakes. So more water in the short term, followed by no water.
This gives rise to a frightening event at the opposite end of the spectrum: a phenomenon called Glacial Lake Outburst Flood (GLOF), in which an ice dam holding back meltwater suddenly collapses, releasing a wall of water into the valley. This is not a new type of event, but as glaciers recede it is becoming more common.
In the Third Pole Region, smaller spring-fed forest rivers are also drying up, due to climate change, deforestation, migration and unenlightened hydro projects. Rainfall and snowfall have decreased significantly over the past three decades and groundwater has been depleted by indiscriminate drilling (see Ogallala Aquifer depletion).
Monsoons also feed the large rivers, so the receding glaciers are only part of the overall scenario. But the monsoons have also become unreliable.
The impending drinking water crisis is not the only change looming in the Hindu-Kush. Agriculture and herding lifestyles have already been severely compromised as yet another regional migration gathers steam. Less grass grows and it does not grow as high. Herds are depleted remnants of their former selves. Local herbs are disappearing.
As temperatures continue to warm, biodiversity is beginning to crash. The term “biodiversity” sounds scientific and vaguely liberal, but what it refers to is the death of species across the spectrum of life. Whether in the mountains, the oceans or prairies, the consequences of the sixth great extinction are just beginning to manifest themselves across the food chain.
Most species that inhabits an alpine ecosystem have nowhere to go but up. And out.
Humans are different. The pastoralists can leave the alpine valleys and find work in the towns and cities for now. But their herds can’t come with them.
We are inundated by statistics all day long. It can be difficult to assess what they really mean. For example, does the fact that 2018 was only the 4th hottest year on record mean global warming is slowing? Not hardly, in fact, quite the opposite: Please read on.
NOAA, NASA, the European Union’s Copernicus Climate Change Service and Berkeley Earth have all confirmed that 2018 was the 4th hottest in terms of global surface air temperatures. The average mean annual land surface air temperature was 14.7°C (58.4°F) in 2018, just 0.2°C (0.3°F) off the record year, 2016.
According to Berkeley Earth, in 2018 about 4.3% of the planet set new local records for the warmest annual average, including significant areas in Europe and the Middle East.
The five warmest years have been 2014, 2015, 2016, 2017 and 2018 (not in that order). The ten hottest years have occurred since 1998. While there are variations in the details, other agencies from around the planet report remarkably consistent overall results.
2018 was a La Niña year, a natural oceanic temperature cycle that alternates with El Niño. La Niña years are virtually always cooler than El Niño years. The fact that 2018 made it into the top five is all the more alarming for that reason. BTW, an El Niño appears to be forming, although prediction is not 100% accurate.
The next statistic is more troubling:
2018 was the warmest year on record for global ocean temperatures.
Polar Ice Primer
Over the past two decades, the ocean has been warming about 40% faster than previously understood. To a large degree, that is because the oceans have been acting as a buffer, storing the heat trapped by greenhouse gases and temporarily delaying the onslaught of global warming. As the planet has warmed, the oceans have provided a sort of climate change cushion.
For the past 20 years, the waters of the Earth have been absorbing and storing massive amounts of heat energy as polar sea ice disappears. See the page on polar ice classifications. (Incidentally, the oceans have also been sucking CO2 out of the atmosphere as well.)
Unlike the atmosphere, ocean temperatures fluctuate over decades. When the ocean stores heat, it is slowly released back into the atmosphere, another feedback that may well be irreversible.
Global atmospheric temperatures will continue to set records over the next five years, according to the British Weather Service (MET).
Antarctic sea ice extent is at a record low and in the Arctic, temperatures are climbing about twice as fast as the rest of the globe. Global wind patterns are being disrupted, causing extreme weather events around the planet. This is the origin of the polar vortex, but that is only one manifestation.
The ecosystems of both polar regions are changing so profoundly and so fast that scientists are hard pressed to keep up. And of course, the permafrost is not so perma any more. That is a separate topic.
The final statistic: Atmospheric CO₂ crossed 414 PPM for the first time at the Mauna Loa, HI recording station last month. Pledges and world conferences aside, the growth of CO₂ in the atmosphere is accelerating, not decreasing. Prior to the industrial revolution, the average CO₂ measurement would have been 280 PPM. The Earth broke the 400 PPM mark in 2016. Continued CO₂ growth is forecast for 2019 as emissions continue to rise even as ecosystems absorb CO₂. If the predicted El Niño takes hold, the results will be magnified.
The greenhouse effect of CO₂ peaks about ten years after it is emitted. Carbon dioxide levels today are higher than at any point in at least the past 800,000 years.
The chart curves up exponentially and yes, this looks just like Al Gore’s hockey stick chart. (Actually it’s Michael Mann’s hockey stick, and the original was for temperature, but they are most definitely related.) But whether you like Al Gore or not has nothing to do with whether or not he has his facts straight.
Albert Einstein could also be a bit of a jerk, they say, and yet, you know: pretty smart guy.
The Disappearing Ogallala Aquifer
The Ogallala Aquifer Crisis Is Uniquely American, With Global Consequences
The Ogallala Aquifer is a huge table of groundwater that covers portions of eight Western States. The system contains as much water as Lake Huron and is one of the planet’s largest sources of fresh water. Unlike “actual” lakes, the water lies just beneath the surface, visible in a few locations as wetlands or ponds. Most people have never heard of the Ogallala (also known as the High Plains Aquifer), to some degree because it is rarely visible as surface water.
Yet the Ogallala is the water supply that keeps a large component of western American industrial agriculture in business: the heartland’s fields of corn, sorghum, soybeans, wheat and cotton. This is where the irrigation circles (otherwise known as pivot irrigation) get their water. About $25 billion of annual agricultural output depends on this vast reservoir.
But the Ogallala is on the verge of getting tapped out.
What farmers thought was inexhaustible 25 years ago has been depleted many times faster than it can be replenished. If it runs dry, it will take about 6,000 years to fill back up.
As one scientist put it: there are too many straws in the resource. Wells are now 300+ feet deep and the aquifer simply can’t replenish itself as fast as the crops drink it up. Not even close.
At this point in time, water is being pumped that has been deep underground for hundreds of thousands of years. Water levels in Kansas have dropped up to 14 ft since 1996, about a foot a year. But in 2011, the rate of decline more than doubled, to 2.2 ft. per year. In some places in southern Kansas, the water level has declined 150 feet and wells have been abandoned.
In some parts of the region, it takes one year to recharge the aquifer 1 inch through natural percolation.
Do the math.
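Taking the figures above at face value (a rough sketch, not a hydrological model), the imbalance works out like this:

```python
decline_ft_per_year = 2.2      # the post-2011 Kansas drop rate cited above
recharge_ft_per_year = 1 / 12  # roughly 1 inch of natural recharge per year

ratio = decline_ft_per_year / recharge_ft_per_year
net_loss_ft_per_year = decline_ft_per_year - recharge_ft_per_year
```

At that pace the aquifer loses water more than 26 times faster than nature puts it back.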
There is more bad news: the region – already rated as semi-arid – has been in the throes of severe drought since 2011. Conditions have vacillated from severe to extreme to exceptional drought, the latter two being the worst categories.
Climate scientists expect this state of affairs to persist and worsen. This is a long term event that will increase demand on the aquifer while reducing the ability of the aquifer to recharge.
Unless major changes are made.
Western states are generally Red states, led by hands-off Republicans inclined to let the farmers handle it themselves. It’s not that they don’t know there is a crisis looming, it’s that they lack the political courage to do anything about it. Some farmers and institutions are taking steps, but the future is unclear. Humans sometimes do amazing things when threatened. Some of the amazing things are good. Sometimes they are the opposite of good, like electing strongmen they think will save them.
Sometimes they wait until it’s too late.
Since we seem to get stuck on economic arguments, consider the economic price of losing the Ogallala: a slow moving economic and cultural catastrophe that will change the face of America.
Kind of like global warming.
|
Great Dane – The Mighty Working Dog
The greatness of dogs reached a whole new level over the years, wherein some breeds were developed for the sole purpose of showcasing their mighty characteristics. In turn, many people are amazed at these dog’s fascinating features. In our modern world, we can see that there are dogs of diverse origins; these breeds possess distinct qualities, which makes them unique. And out of all these breeds, there are some considered dominant over the others.
One breed that is sure to showcase its superior class is the Great Dane. Apparent from its name, Dane is a solid definition of a mighty and tremendous dog. It is a breed known for its massive size, making it a pretty attractive breed.
Because of its largeness, many dog lovers admire the Great Dane. Furthermore, this dog proves to be a significant breed, not just in size but also in other features. Such qualities possessed by the Great Dane are enough for it to gain recognition; in fact, this dog is among the most popular breeds around the globe. With this said, we can say that the Great Dane is a truly magnificent breed that would definitely give any owner a smile on their face.
Origins of the Great Dane
Despite the breed’s greatness, there is not much information on its origins. Speculation associates the breed with Denmark. However, the Great Dane is native to Germany, where it is called the Deutsche Dogge, or German dog.
As expected from its enormous body and keen instincts, the Great Dane first dominated the fields as a ferocious hunting dog alongside its master. German nobles were the earliest people to utilize the greatness of this breed in the field; they used these dogs to hunt wild boars. Over the years, the Great Dane was employed in other jobs, and one of its most significant roles was that of guard dog. Its might was a perfect trait for guarding the house, and it possesses other incredible qualities which make it a truly dependable dog. Because of this, the Great Dane inevitably gained recognition as a trusted guard dog, and its popularity skyrocketed.
Years later, the Great Dane’s popularity continued to spread, and many people began to notice the breed’s greatness. To this day, the Great Dane remains one of the most famous breeds worldwide; it is still known for its outstanding work in the field, as well as its eagerness to finish the job.
Characteristics of the Great Dane
Height: 28 – 32 inches
Weight: 110 – 175 pounds
Life Expectancy: 7 – 10 years
Hypoallergenic: No
As mentioned earlier, the Great Dane is an enormous dog that could stand up to a whopping thirty-two inches and weighs up to 175 pounds. This breed’s size and weight alone are more than enough to emphasize its effectiveness as a guard dog; just by looking at it would surely keep intruders at bay.
Massive size is not the only beautiful feature of the Great Dane. It also holds various characteristics, which makes it more attractive. This breed is known for its muscular and sleek body; it is combined with a smooth coat that comes with different colors and patterns, such as black, brown, or white. It has a sturdy-looking skull, which defines the breed’s powerful physique; its ears convey a dependable nature, and its eyes expressing friendliness.
Moreover, the Great Dane is a concrete definition of a strong and affectionate dog. It is widely known for its highly loving nature, making it an ideal family dog. Furthermore, it is a breed that could quickly identify its family, as well as threats. Combined with its dependable nature and intelligence, any intruder would be hesitant to go into your home with a Great Dane on sight.
Overall, the Great Dane is a perfect combination of enormous size, elegant appearance, powerful body, and affectionate nature. All of these qualities are factors that make the Great Dane a fascinating dog. This breed is not just an excellent working dog; it also proves to be an ideal household companion. With proper care, the Great Dane is no doubt a breed that any dog enthusiasts would surely love.
|
Best Answer
It is generally accepted that of the total German military combat deaths in WWII, which numbered about 3.4 million, about 75% and possibly 80% occurred on the eastern front. Additionally, up to June 1944 (Normandy) the percentage was even higher, since there was little combat elsewhere that generated large casualties. German materiel losses on the eastern front were also huge, representing over 80% of artillery, tank and truck losses. Aircraft losses were about 50/50 between the eastern and western fronts. Of course, naval losses were almost exclusively in the west, even including defensive actions taken against Soviet attempts to massacre civilians on the open seas late in the war in the Baltic. It is said that 9 out of 10 German soldiers killed in World War II died on the Eastern Front.
Wiki User
2006-12-17 15:25:40
Q: How large were German losses on the eastern front?
Related questions
Which country killed the most Germans in world war 2?
The Soviet Union killed the most Germans of the three major Allied Powers, during the large-scale fighting on the Eastern Front. German military deaths were huge on the Eastern Front, and civilian losses (missing and killed) were extremely large. In return, the Soviet Union also suffered the largest number of deaths at German hands of any of the Allied nations.
What was the style of fighting on the eastern front during World War 1 as opposed to the western front?
The Eastern Front had trenches like the Western Front, but it was so large that the fighting was more mobile, especially in Ukraine. German Uhlans and Ukrainian/Russian cavalry were able to move around and fight each other.
Which large allied countries defended the eastern front of Europe in 1944?
The Soviet Union.
How long or big can a German Shepherds feet be?
I have a large German Shepherd. That being said, his front paws are 5 1/2" long and 3" wide.
Is a German shepherd a large breed or giant breed?
German shepherds are a large breed
What large group of mammals have large front teeth that never stop growing?
Rodents have large front teeth.
What part of Europe has been most dominated by large empires?
Eastern Europe
What do you call animals that have large front teeth?
Rodents are the ones with large front teeth.
What is in front of a crowd called?
"In front of a crowd" could mean presenting to a large audience, or performing in front of a large audience.
Why did independent nation-states develop later in eastern Europe?
Eastern Europe was ruled by large empires for a long time. The Russian Empire, German Empire, Austro-Hungarian Empire, and Ottoman Empire ruled over most of Eastern Europe until the end of World War I, after which new Eastern European countries started to emerge.
Large plateau in the north-eastern part of California?
The Modoc Plateau is the large plateau in the north-eastern part of California.
Who is Ludendorff?
Erich Ludendorff was the chief of staff for General Paul von Hindenburg. He was responsible for the northern sector of the Eastern Front during 1914-1916 and was known for making very risky maneuvers in an attempt to destroy the Russian Army. He also had a strong dislike for General Falkenhayn, who was in charge of all of Germany's armed forces from 1914-1916, on both the Western and Eastern fronts.
Animals that have large front teeth?
Chipmunks have large front teeth, as do beavers and walruses.
What large river in eastern europe passes through Vienna Austria and Budapest Hungary?
The Danube River. It is known as the "Donau" in German and the "Duna" in Hungarian.
Who settled in the prairie provinces in Canada about 100 years ago?
Mostly European and Eastern European immigrants. There is a very large Ukrainian/Polish/German/Dutch presence in the prairies
What is a large wash?
The Wash (in eastern England).
Name the large lake in eastern Africa that is the source of the Nile?
Lake Victoria is the large lake in eastern Africa that is the source of the Nile. It is the largest lake in Africa.
A large lake in Eastern Africa?
Lake Victoria is a large lake located in eastern Africa. It forms a portion of the borders of Uganda, Kenya, and Tanzania.
What large body of water is located in the central to eastern part of Canada?
Hudson Bay is the large body of water in the central to eastern part of Canada.
What is the large gulf that touches the south eastern United States and eastern Mexico?
It is the Gulf of Mexico
Which type of irrigation loses the most water to evaporation?
Large irrigation channels lose the most water to evaporation.
Was the allied invasion of Italy a success?
Yes and no. It pulled German forces away from both the Western and Eastern fronts, but it cost a large number of Allied casualties and did not give the Allies any strategic points from which to attack Germany.
How many Jews died from being shot?
A large proportion of those killed on the Eastern front were shot. Altogether, I would say about one and a half million, give or take.
What front produces large amounts of precipitation?
cold front
Name a large lake in eastern Africa?
A large lake in eastern Africa is Lake Victoria, also known as Victoria Nyanza. It is on the borders of Uganda, Kenya, and Tanzania.
|
By John Fernández and Paulo Ferrão. MIT Press, 2013, 264 pages, $35.
Helping Cities Go Green
In 2012, officials in Dubai asserted that their city would rank among the most sustainable metropolises in the world by 2020. About the same time, Washington, D.C., Mayor Vincent Gray trumpeted greenest-city status by 2032. A glimpse of the cities' sustainability plans shows two different approaches to the same goal. For Dubai, it means supplying five percent of electricity photovoltaically and outlawing energy-hog buildings. While Washington also aims for renewable-energy use and efficient structures, it prioritizes cleaning up the Anacostia River and increasing urban agriculture.
Sustainable Urban Metabolism
In concept, tailoring one city's sustainability initiatives to reflect its climate, culture, and stage of development should benefit all cities—or at least maximize the environmental benefits. But even the best intentions will not necessarily yield positive results. What if Dubai's photovoltaics are sourced irresponsibly upstream? What if urban farming in Washington causes a spike in the insecticides and fertilizers that wash into the Anacostia? Because the causes and effects of environmental management are complex and far from linear, urban-scale sustainability is littered with possible backfirings: electrical vehicles that draw their power from coal-fired plants, local manufacturing initiatives that lead to transit inequity, and so on.
In this book, the authors shed light on the inconsistent terms and blind spots that plague urban sustainability initiatives. Professors of mechanical engineering and building technology, respectively, Ferrão and Fernández want to arm municipal stewards with data that are currently unavailable to them. Without an accurate portrayal of environmental inputs and outputs, decisions may lack impact or do more harm than good.
The first step in measuring and analyzing the resources a city consumes and the waste it emits is to establish a methodology. Employing the metaphor of urban metabolism, the authors show how to measure the environmental systems that converge in a city. They set criteria, weight them according to different city typologies, and identify the data sources to quantify those terms. Watching this framework unfold is like witnessing the creation of an algorithm. One can imagine this book spawning the next round of Code for America fellowships.
For readers who are not smart-cities acolytes or app programmers, this narrative may seem more technical than compelling. Its potential usefulness, on the other hand, should thrill anyone: standardized data could help planners from Dubai and Washington form a mutual understanding of what it means to be green (or greenest). Perhaps more important, it could help planners from cities similar in resource flows or physical form compare policies for mutual improvement. Sustainable Urban Metabolism applies the management adage, "What gets measured gets done" to the 21st-century game of planetary survival.
|
voice Jan 06, 2020
The easiest way to improve your public speaking is not the speech writing or content itself.
As a vocal coach, I'm fascinated by the power of the human voice - and know the power it has in capturing an audience’s attention.
The voice is the most direct and intimate communication tool we have - and makes a massive difference when it comes to speaking in front of a crowd.
Voice can seduce.
Be boring or arouse interest.
Get people to stand to their feet - or fall asleep like it’s 7th grade math class post-lunch.
Thus, the voice can significantly influence one's own success. I’m going to reveal why and exactly how to use your voice to have you feelin Meryl-Streep-status confident in front of a crowd.
The subconscious approach to speaking in front of a crowd
The voice sneakily makes the first impression on others. The truth? We often rate people according to the sound of their voice.
With science as the foundation, our brains are typically wired to feel:
• Too high a voice feels unsafe (think of startled animals whose high barks/meows/whoops signify danger)
• Too loud a voice feels very dominant
• A monotonous voice feels tired and not vital to listen to
What studies have revealed
According to the American Scientist publication, “Using recorded voices…psychologists and linguists have demonstrated that a person’s voice pitch affects how others perceive her or him.”
For example, a study was done by Cara Tigue, along with her colleagues at McMaster University, to see how people corresponded voice pitch to voting for a president. In this study, a few things happened.
• Recordings of known male leaders were played to test subjects, and 67% voted for the deeper-sounding voice.
• Voices of unknown men were played to test subjects, and 69% chose the lower-pitched voices.
This was by VOICE - not content.
Now, voice does take on different connotations for men and women, but the more important point here is how we carry our voice.
*This is NOT about changing your natural pitch - Your natural voice is perfect!*
What is important? A full, carefully carried, steady voice sets accents and conveys credibility.
Our voices tend to shoot up in interviews and situations when we’re nervous, like talking on a stage.
However, lower pitch voices are assumed to be more dominant and authoritative.
All this to say: if you speak slower, you're more likely to feel relaxed and have your pitch drop to its natural, perfect level.
Not only that, when you hear your voice sound more natural and level, you will FEEL calmer and in control.
Communication comes from the voice and the _______.
The word 'communication' comes from Latin and means to share, to do together.
In essence, communication is about having your listener feel that they are in the conversation with you vs. only being talked at. That’s why having clear thoughts is not enough, especially when you have a short time frame to speak on stage and share a message.
A powerful (& easy) way to have your voice speak at a calm, strong level?
Make sure your body language is working for you and your audience. Here’s why.
The body reinforces the voice: Communication comes from the voice and the body.
In fact, the whole body is vital to the formation of sound. This is why how you carry yourself and your body posture is vital to the voice you present and vibes listeners get from you.
The human voice can only sound properly when the entire body is free of unnecessary physical and mental tension - so even though you may be nervous about public speaking, know that is okay.
The 3 ways to shift body language and tone immediately
It’s absolutely okay. Here are ways to prepare for strong body language that also leads to powerful voice delivery!
1. TRY THIS: Try slumping your shoulders, looking downward and saying a happy phrase: "I am excited to give this speech!" It feels so impossible, doesn't it? Now, try these next tips and repeat the phrase - and notice how much happier and more joyful your voice sounds!
2. Make sure shoulders are rolled back so chest and heart are open - This makes you look AND feel more confident + gives room for your voice, too!
3. Take up space. Whether arms come out a bit wide while talking or legs are hip-width distance apart... taking up space physically makes you feel confident to take up space with your voice and your message.
How to do audience centred speaking
As noted above, communication is about sharing and doing together, and clear thoughts alone are not enough. Especially as an entrepreneur who is excited and ready to grow her business, it is important to express these thoughts clearly and directly.
By taking up space with your arm gestures…
By having shoulders relaxed and down, taking away tension from your neck and letting your voice sound calm and strong…
You’re allowing your voice to have that natural, powerful sound that is also in communication with each listener.
Using your voice, as shared in this blog, is absolutely vital to making the audience feel like the center of the talk.
To get more specific strategies that’ll improve your public speaking skills and have the listeners feel absolutely understood, get the Stage Ready Workbook: 10 dynamic steps to become a “Wow, she’s so good!” speaker
|
Protect your teeth from the cold!
Many illnesses such as colds and coughs emerge during colder months, including oral issues too!
Your teeth are used to your normal body temperature, so changes in the temperature of the environment can cause discomfort in the mouth. However, if you experience pain consistently for more than a few days, be sure to contact your dentist to rule out any other underlying issues.
So how can I protect my teeth?
• Breathe in cold air through your nose - Breathing in cold air through your mouth can affect certain areas of your teeth and the gum-line
• Refrain from clenching your jaw - Clenching your jaw can cause several issues, including jaw pain, tooth pain and even tooth erosion!
• Take good care of your sinuses - Inflammation in your respiratory tract can often be mistaken for tooth pain.
• Try using sensitive toothpaste - A fluoride-based toothpaste for sensitive teeth can reduce tooth sensitivity during the cold!
Remember, sometimes tooth pain in cold conditions is inevitable, just be sure to follow your normal tooth care routine!
Featured Posts
|
Blog post
Why coronavirus action plan should include psychological wellbeing
The release of the Government’s Coronavirus Action Plan has led Dr William Van Gordon, Associate Professor of Contemplative Psychology at the University of Derby, to examine the need for psychological wellbeing strategies to be included in the plan, given the strong evidence linking psychological health and immune response.
By Dr William Van Gordon - 6 March 2020
In addition to monitoring and containment measures, examples of key strategies currently being employed to minimise the negative impact of the coronavirus disease (COVID-19) include promoting good hygiene (e.g., washing hands regularly and thoroughly with soap and water or alcohol-based hand rubs), social distancing (i.e., keeping at least one metre distance from a person who is coughing or sneezing), and seeking medical advice early via remote means if individuals feel they have contracted or been exposed to the virus (e.g., such as via dedicated telephone or online coronavirus public healthcare services).
Responses by countries globally to COVID-19 are largely based on what we currently know about the disease, what we know about respiratory diseases that appear to share some similarities with COVID-19, and wider knowledge and evidence relating to best-practice for managing disease outbreaks of this nature and scale. However, given that COVID-19 has only recently been identified, such responses are invariably provided with the caveat that there are still many unknown factors relating to coronavirus and that action plans and advice are subject to change as new insights arise.
Based on the same caveat, there is currently a strong argument for promoting psychological wellbeing strategies as part of the public health response to COVID-19. This is because research conducted over a period of decades demonstrates a link between psychological health and immune response, the latter of which reflects the body’s ability to fend off and respond to infection.
According to the American Psychological Association, if you are stressed or depressed, then “don’t be surprised if you come down with something”. Indeed, research indicates that when we feel stressed or depressed, particularly for periods lasting more than a few days, it can negatively impact the way our immune system responds to infection, making it easier for bacteria and viruses to weaken and harm the body. Such research has been conducted in respect of various different types of infection, including some affecting the respiratory tract such as influenza and some better-known types of coronavirus.
It is important to stress that there is not yet any published research specifically investigating how psychological stress impacts immune response to COVID-19. Nevertheless, based on what we know about how our state of mind influences our immune system and general health, promoting psychological health and psychological wellbeing techniques as part of the wider action plan and response to COVID-19 appears to be a prudent step.
Consequently, as part of minimising the negative impact of COVID-19, from a psychological perspective most people are likely to benefit by staying Calm, Active, Rested, and Mentally stimulated:
About the author
Dr William Van Gordon
Associate Professor in Contemplative Psychology
Associate Professor in Contemplative Psychology, Dr William Van Gordon is a Chartered Psychologist and international expert in the research and practice of meditation and mindfulness.
View full staff profile
|
Will Oil Spill in the Gulf Affect The Florida Keys?
All eyes are on Florida now as the oil spill that is still leaking from the Deepwater Horizon still floods into the waters of the Gulf of Mexico.
Part of the oil may already be entering the Loop Current, a warm ocean current that flows northward between Cuba and the Yucatán Peninsula into the Gulf of Mexico, loops west and south, and exits to the east through the Florida Straits.
Shoreline property owners and real estate professionals are concerned the disaster could literally hit home by decreasing house values. There is no way at this time to tell what the depreciation - if any - will be on the property values in the Florida Keys. Far too many variables play a role in predicting the coastline(s) that will be affected by the spill.
Most oil spill experts say any oil carried by the Loop Current would be more dispersed and highly weathered by the time it gets to the Keys, making it highly unlikely that large “rivers” of heavy oil would impact the Keys. The weathered and diluted oil would likely appear in the form of tar balls. While arrival of oil in any form is unacceptable, tar balls are “significantly less toxic,” according to Florida Department of Environmental Protection Secretary Michael Sole. It is also possible that some areas of the Keys could be affected and others not, or that the oil residues could remain in the Loop Current and Gulf Stream and miss the Keys altogether.
The National Oceanic and Atmospheric Administration is providing oil slick trajectory maps regarding the oil spill and its proximity to the Florida Keys. Forecast maps are updated daily to plot and project approximate positions of the oil slick, and provide the latest available forecast showing movement in and around Gulf waters. The map below was issued on Wednesday, June 2nd, 2010.
|
Wine country fires
The heart of America’s most important wine region went up in flames this fall. Except, that is not exactly what happened.
Napa and Sonoma have some 900 wineries, not counting wineries that are brands without buildings or estate vineyards. Less than a dozen wineries were destroyed, maybe 15 damaged.
More than 90 percent of the grapes had been harvested before the fires. It will be interesting to discover how smoke may—and it is a big may—affect taste. Many of the grapes were fermenting and carbon dioxide created during fermentation shields the juice from smoke. Juice already fermented was safely in tanks and barrels.
As for the American wine industry, Napa-Sonoma make the most famous wines, but they make only ten percent of California’s wines, so worst case is loss of one percent of California’s 2017 wine production.
Many wineries are configured so there is a “defensible space” around them—100 feet or more where there is no flammable material such as dry grass and trees. This space played a key role in saving many wineries.
Vineyards also act as a fire break. Vines are full of moisture at harvest and many growers have moved away from wooden posts for trellises, so there is just not that much flammable material on a vineyard to begin with. In addition to the defensible space around the winery, vineyards provided additional protection.
Wind-driven firebrands did affect outlying buildings and decorative entrances—the wineries knew they could sacrifice those non-essential structures while they defended the winery, many of which are made of stone, concrete, or metal anyway and harder to ignite.
If there is damage to wines, it most likely came from loss of power and access: temperature control in fermentation tanks, punchdowns, and human monitoring of the process were interrupted. If the wines of 2017 are affected, it will be from that more than from smoke.
Damage came not to wineries or vineyards, it came to homes and humans. Couples died in each other’s arms as the fires raced through city blocks. Wine workers lost everything but their lives. They will come back. They would not be in the wine business if they were made of anything less.
Last round: In wine there is wisdom. In beer, freedom. In water, bacteria. You make the call.
|
Martin Luther King Jr.'s The Ways Of Meeting Oppression
525 Words3 Pages
In Martin Luther King Jr.’s piece, “The Ways of Meeting Oppression”, he tries to inform and persuade the reader about the most effective way of dealing with oppression by listing and describing three characteristic ways people approach and deal with their oppression: acquiescence, physical violence, and nonviolent resistance. King states that by giving in and submitting to their oppressors, one is “cooperating with that system”, a system which is unjust; therefore, the oppressed become no better than their oppressor and prove their inferiority. By resorting to physical violence, they often create more complicated problems instead of solving them, leading to more destruction. Instead, King advocates nonviolent resistance.
Open Document
|
The newsletter of the Memory Disorders Project at Rutgers University
What is Epilepsy
Epilepsy is a brain disorder characterized by recurrent seizures: episodes of uncontrolled, excessive electrical discharge by the neurons in the brain.
The prevalence of seizures is very common; about 1 person in 20 will experience at least one seizure during a lifetime. However, the prevalence of epilepsy -- defined by multiple seizures -- is much smaller: about 1 person in 200. Epilepsy does run in families, although it is unlikely that a single gene accounts for the seizures.
One feature of epilepsy is the individual variation; for example, the interval between seizures may vary from minutes to weeks to even years. Many individuals with epilepsy experience an aura or warning of impending seizure (which may take the form of a sensation such as smell, or may simply be a "feeling" that a seizure is about to occur).
Epilepsy Symptoms
In many cases, epileptic seizures arise from a particular site or "focus" in the brain. When there is such a focus, it is often the medial temporal lobe. Repeated severe seizures can damage the underlying brain tissue. Thus, many individuals who suffer severe epilepsy show cognitive deficits, particularly memory deficits due to damage to the medial temporal lobe.
In many cases, epileptic seizures can be controlled or eliminated by the use of drugs, called anti-convulsant drugs or antiepileptic drugs. In cases where drugs are ineffective and seizures are so severe as to be life-threatening, surgery may be conducted to remove the part of the brain where the seizures arise. The surgery is only done on one side of the brain, leaving the other side intact.
The surgery is often very effective, and patients may experience little or no impairments resulting from the lost tissue. (In fact, in some cases, patients appear to show cognitive improvement following surgery - possibly because relief from near-continual seizures allows them to concentrate better.)
|
Dataset Information
Developing of Low-Cost Air Pollution Sensor-Measurements with the Unmanned Aerial Vehicles in Poland.
ABSTRACT: This article presents the capabilities and selected measurement results from the newly developed low-cost air pollution measurement system mounted on an unmanned aerial vehicle (UAV). The system is designed and manufactured by the authors and is intended to facilitate, accelerate, and ensure the safety of operators when measuring air pollutants. It allows the creation of three-dimensional models and measurement visualizations, thanks to which it is possible to observe the location of leakage of substances and the direction of air pollution spread by various types of substances. Based on these models, it is possible to create area audits and strategies for the elimination of pollution sources. Thanks to the usage of a multi-socket microprocessor system, the combination of nine different air quality sensors can be installed in a very small device. The possibility of simultaneously measuring several different substances has been achieved at a very low cost for building the sensor unit: 70 EUR. The very small size of this device makes it easy and safe to mount it on a small drone (UAV). Because of this device, many harmful chemical compounds such as ammonia, hexane, benzene, carbon monoxide, and carbon dioxide, as well as flammable substances such as hydrogen and methane, can be detected. Additionally, a very important function is the ability to perform measurements of PM2.5 and PM10 suspended particulates. Thanks to the use of UAV, the measurement is carried out remotely by the operator, which allows us to avoid the direct exposure of humans to harmful factors. A big advantage is the quick measurement of large spaces, at different heights above the ground, in different weather conditions. Because of the three-dimensional positioning from GPS receiver, users can plot points and use colors reflecting a concentration of measured features to better visualize the air pollution. 
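The authors' software is not published with the abstract; purely as a hypothetical sketch of the visualisation idea it describes (GPS-tagged samples coloured by concentration), one might bin each PM2.5 reading into a colour band. The function names and thresholds below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch (not the authors' code): tag each UAV sensor reading
# with its 3-D GPS position and a colour reflecting PM2.5 concentration.

def pm25_colour(ug_m3):
    """Map a PM2.5 concentration (µg/m³) to a display colour.
    Thresholds are illustrative, loosely following common AQI-style bands."""
    bands = [(12, "green"), (35, "yellow"), (55, "orange"), (150, "red")]
    for limit, colour in bands:
        if ug_m3 <= limit:
            return colour
    return "purple"

def tag_reading(lat, lon, alt_m, pm25):
    """Bundle one sample: latitude, longitude, altitude, value, and colour."""
    return {"lat": lat, "lon": lon, "alt_m": alt_m,
            "pm25": pm25, "colour": pm25_colour(pm25)}

# One sample taken at 30 m above ground
sample = tag_reading(50.67, 17.92, 30.0, 42.0)
print(sample["colour"])  # orange
```

A list of such tagged readings could then be handed to any 3-D plotting tool to reproduce the kind of coloured point cloud the abstract mentions.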
A human-friendly data output can be used to determine the mostly hazardous regions of the sampled area.
SUBMITTER: Pochwala S
PROVIDER: S-EPMC7348723 | BioStudies | 2020-01-01
REPOSITORIES: biostudies
Similar Datasets
1000-01-01 | S-EPMC5191180 | BioStudies
2018-01-01 | S-EPMC5923822 | BioStudies
2011-01-01 | S-EPMC3214390 | BioStudies
2020-01-01 | S-EPMC7044178 | BioStudies
1000-01-01 | S-EPMC6267103 | BioStudies
2019-01-01 | S-EPMC6311139 | BioStudies
2019-01-01 | S-EPMC6977412 | BioStudies
2018-01-01 | S-EPMC6068810 | BioStudies
2013-01-01 | S-EPMC3556395 | BioStudies
2020-01-01 | S-EPMC7182254 | BioStudies
|
Gunmen don't care about political parties
Once again, we are witnessing an outbreak of gun violence both mass and individual shootings. We have seen too many of these in recent months, and they are increasing.
Shootings used to be largely confined to large cities, but in recent times they have spread to suburbs, small towns, and rural areas. The types of shootings have changed also: we now see many individual or lone shootings taking place on streets, into cars, and into homes.
As a result, we must do something to get these under control, because they are increasing, and mass shootings are now taking place in unusual places such as malls, grocery stores, shopping centers, streets, highways, bars, and housing complexes.
For example, it is reported that during the five years up to February 28, 2021, 122 individuals were killed and 325 were injured in mass shootings, and recently there were seven mass shootings within a seven-day period. Further, there have been more shootings on the street, into cars, and into homes than mass shootings, and more individuals have been injured or killed in the first few months of 2021.
Although mass shootings receive the most attention and concern, it is reported that during 2020 approximately 20,000 gun violence cases occurred in homes, in cars, and on the streets.
It seems that some individuals are shot because they have offended the shooters in some minor way. For example, security guards and store employees may be shot for telling a potential customer that he must wear a mask to enter a store. Other individuals are shot while driving along a street or highway after offending a driver in some slight manner, perhaps by cutting in front of a car, driving too slowly, or slowing a driver down in some way. There are other shootings for other irrational reasons as well; for some, it seems, shooting into cars is done just for the sport of it.
Because of this, some of us feel uncomfortable going out shopping or driving along streets and highways.
It can also make us afraid to say anything to strangers, or even to look at or show disapproval of someone who has misbehaved.
Something needs to be done about this increasing and dangerous public safety issue. The question is what can we do? We cannot simply stay home and not go shopping or not take care of our other needs.
There have been attempts without much success to strengthen gun sale laws such as requiring national background checks for gun sales and banning the sales of semi-automatic rifles, machine guns, assault weapons, etc. However, these have made little difference. In fact, as we see gun violence has increased.
Even if these were more effective, there would still be a problem with illegal gun sales, sales at gun shows, and individuals' mental health issues. Those with mental health issues, criminal records, and other problems would still have access to guns illegally and are more likely to use them.
One thing we can do is to make a more serious and committed effort by both the Democratic and Republican parties to work together to deal with this issue. This needs to be done at the local as well as the national levels. When gunmen shoot individuals, they do not ask what party they belong to and as mentioned before this situation is getting worse.
Dr. Rance Thomas is a professor emeritus of Sociology/Criminal Justice, who taught Sociology for 30 years at Lewis & Clark Community College and Southern Illinois University Edwardsville.
|
Do you get the winter blues? 10 - 20% of people find themselves with some form of seasonal depression. Low moods aside, it can simply be difficult to maintain a healthy lifestyle in the winter due to cold and just plain lack of motivation. If you're at all worried about winter and the challenges it may bring, read on for some tips and tricks for optimal health during the winter months.
Focus on Immune-Boosting Foods
Give yourself a boost with foods that'll keep your immune system healthy! Citrus fruits, garlic, green tea, shellfish, ginger, turmeric, and spinach and other leafy greens are all great ingredients to incorporate into your meals. These foods can help support your immune system as you fight off colds, flus, and other illnesses, and they'll give you the vitamins and minerals you need to thrive.
Wash Your Hands More Often
It seems like common sense, but try to wash your hands more often during the winter months. Colds, flus, and other illnesses circulate and spread during the winter. Even if you don't feel like you "need" to wash your hands, dip into a washroom or anywhere there's a sink to give them a scrub. This will ensure you're not carrying germs.
Get a Great Sleep Every Night
Sleeping seven hours a night is always recommended, but it's especially critical during the winter months when moods can be low and immune systems can be compromised. Make sure you get lots of sleep to allow your system to reset your immunity and you'll be able to fight off illness and seasonal depression much easier.
|
06 February, 2008
What is a Stock Split?
Stock splits are akin to getting two Rs 50 notes for a Rs 100 note. They’re aimed at making the stock more affordable and liquid for retail investors.
What is a stock split?
A stock split is the process of splitting shares with a high face value into shares of a lower face value. It is like getting a Rs 100 note changed into two Rs 50 notes. Does it change the value of your money? Not really. But now you have two smaller-denomination notes which will be more easily accepted by small vendors. A stock split increases the number of shares in a public company. The price is adjusted so that the market capitalisation of the company remains almost the same.
Why split stocks?
Companies usually split their stock when they think the price of their stock exceeds the amount smaller investors would be willing to pay. “It is aimed at making the stock more affordable and liquid from retail investors’ point of view,” said Indiabulls CEO Gagan Banga. Generally, there are more buyers and sellers of shares trading at Rs 100 than, say, Rs 400, as retail shareholders may find low-price stocks to be better bargains. Stock splits are usually initiated after a huge run-up in the share. This run-up may be linked to the performance of the stock.
The company may declare such splits in different ratios like 2-for-1, 3-for-1, 3-for-2, or like in the case of Mr Gupta, 4-for-1. Some companies may go to the extent of declaring a 10-for-1 split, as power services company GVK Power did recently. To illustrate, say, XYZ Company is trading at Rs 250 and you hold 100 shares, which make the total value of your holding at Rs 25,000 (250 x 100). If this company declares a 2-for-1 stock split, your 100 shares become 200 and the share price is adjusted to Rs 125. The value of your investment still remains the same (this time, 125 x 200). And if the company had 10 lakh outstanding shares before the split, it will now have 20 lakh outstanding shares after this split, keeping the market cap unchanged.
Sometimes, companies may choose to club stock split issue with bonus shares. Bangalore-based jewellery manufacturer Rajesh Exports recently declared a 2-for-1 stock split along with a bonus offer of two shares for each share held. This means that each share becomes two, post-split. Now, for these two shares, shareholders will get four additional shares as bonus. Thus, one share translates into six after stock split and bonus issue. “To ensure increased liquidity for existing shareholders and easy entry point for new shareholders, the decision was made to split the share,” said Rajesh Exports chairman Rajesh Mehta.
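The arithmetic in both examples is mechanical enough to sketch in a few lines of code. This is purely illustrative: the function name is made up, and the figures are the XYZ and Rajesh Exports examples from the text.

```python
def apply_split(shares, price, new, old):
    """Adjust a holding for a new-for-old stock split.

    The share count is multiplied by new/old and the price divided by
    the same factor, so the value of the holding is unchanged.
    """
    return shares * new // old, price * old / new

# XYZ example: 100 shares at Rs 250, 2-for-1 split.
shares, price = apply_split(100, 250.0, 2, 1)
print(shares, price, shares * price)   # 200 shares at Rs 125, still Rs 25,000

# Rajesh Exports example: a 2-for-1 split, then two bonus shares for
# each post-split share, so one share becomes 2 * (1 + 2) = 6.
post_split, _ = apply_split(1, 250.0, 2, 1)
print(post_split * (1 + 2))            # 6
```

The market-capitalisation argument works the same way: multiplying outstanding shares by the adjusted price leaves the product unchanged.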
How do shareholders benefit?
In pure financial terms, a split in itself is a non-event, as fundamentally it changes nothing. However, a stock split helps make shares more affordable to retail investors and provides greater liquidity in the market. It may happen that after the split the stock price goes up as demand for the shares increases.
Should one buy stocks of ‘splitters’?
Not necessarily. This depends on the confidence one has in the company’s fundamentals. The usual logic of looking at a company’s fundamentals and stock performance holds true in this case too.
What are record date & no-delivery period?
The company announces a cut-off date for the split, called the record date. Anyone who wants to benefit must buy the stock before the record date to be entitled to the additional shares post-split. In the no-delivery period, trading is permitted in the scrip; however, these trades are settled only after the period is over. This is done to ensure that an investor’s entitlement to corporate actions like stock splits is clearly determined. So, Mr Gupta, just relax: the markets have indeed corrected, but your stocks have multiplied, in numbers at least.
Courtesy :: Economictimes
|
Question #5900050
Physics help please?
A 6.00 V storage battery is connected to three resistors, 5.25 Ω, 13.0 Ω, and 23.0 Ω, respectively. The resistors are joined in series. Calculate the equivalent resistance. Tried adding them, multiplying them, dividing them. Please help
2013-05-02 03:39:53
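For what it's worth, adding them is the right move: resistors joined in series simply sum, giving 5.25 + 13.0 + 23.0 = 41.25 Ω. A quick check in Python:

```python
resistors = [5.25, 13.0, 23.0]   # ohms, joined in series

# In series the same current flows through every resistor, so the
# voltage drops add and the equivalent resistance is just the sum.
r_eq = sum(resistors)
print(f"R_eq = {r_eq} ohms")     # R_eq = 41.25 ohms

# With the 6.00 V battery, Ohm's law gives the current in the loop.
current = 6.00 / r_eq
print(f"I = {current:.3f} A")    # I = 0.145 A
```

(Multiplying or dividing resistances only comes into play for parallel combinations, where the reciprocals add instead.)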
|
I was wondering if there is an established method to keep track of the orbit of an exoplanet assuming we know a - the semi-major axis of the orbit, e - the eccentricity of the orbit, and i - the inclination of the orbit. Can we track its position once a month?
• $\begingroup$ What do you mean by "track". Do you mean visually observe, or do you mean create a mathematical model that describes the orbit? $\endgroup$
– James K
Feb 14 at 12:59
• $\begingroup$ @James K I mean the second $\endgroup$
– Jokerp
Feb 14 at 14:12
To describe the position of an orbiting body you need 6 numbers. There are different ways to do this:
You can give the position $(x,y,z)$ and velocity $(\dot x, \dot y, \dot z)$ at a given time $t_0$, and then use Newton's laws to work out the position of the planet at any time in the future.
You can give the orbital parameters:
• Eccentricity (the shape of the ellipse)
• semi-major axis (the size of the ellipse)
• (inclination, longitude of ascending node, argument of periapsis): the 3D orientation of the ellipse. These depend on the particular choice of reference frame. For exoplanets it is common to take the reference plane to be the plane through the star perpendicular to the line of sight from Earth. This means that many exoplanets have an inclination of about 90 degrees.
• Mean anomaly at epoch (The position of the planet on the ellipse at time $t_0$)
And then use Kepler's laws to work out the position of the planet at any time in the future.
So we can track them theoretically.
However, determining these six values from observations is not easy, and in many cases we don't know all six at all well. If we can detect a transit we know that the inclination is close to 90 degrees, and we might be able to combine transit observations with radial-velocity observations to get some limits on the other values. Nevertheless, these six values can't simply be measured from the data.
Most exoplanets can't be directly imaged (they are too close to the glare of the star, and too dim to be seen). There are a few exoplanets that can be tracked; they are usually very large, warm planets that are far from their star.
cf. Transformation of Orbit Elements, State and Coordinates of Satellites in Two-Body Motion
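As a sketch of the "easy" and "hard" parts described above, the fragment below solves Kepler's equation for the eccentric anomaly by Newton iteration and then rotates the in-plane position into 3D using the inclination, longitude of ascending node, and argument of periapsis. The element values at the bottom are invented for illustration, not taken from any real exoplanet.

```python
import math

def position(a, e, i, Omega, omega, M):
    """Position of an orbiting body from its orbital elements.

    a: semi-major axis, e: eccentricity, i: inclination,
    Omega: longitude of ascending node, omega: argument of periapsis,
    M: mean anomaly at the time of interest (all angles in radians).
    """
    # The "hard" part: solve Kepler's equation M = E - e*sin(E)
    # for the eccentric anomaly E by Newton's method.
    E = M
    for _ in range(50):
        E -= (E - e * math.sin(E) - M) / (1 - e * math.cos(E))
    # Position in the orbital plane, periapsis along the x-axis.
    x = a * (math.cos(E) - e)
    y = a * math.sqrt(1 - e**2) * math.sin(E)
    # The "easy" part: a 3-1-3 rotation (omega, i, Omega) into 3D.
    cO, sO = math.cos(Omega), math.sin(Omega)
    co, so = math.cos(omega), math.sin(omega)
    ci, si = math.cos(i), math.sin(i)
    X = (cO * co - sO * so * ci) * x + (-cO * so - sO * co * ci) * y
    Y = (sO * co + cO * so * ci) * x + (-sO * so + cO * co * ci) * y
    Z = (so * si) * x + (co * si) * y
    return X, Y, Z

# Hypothetical planet: a = 1 AU, e = 0.1, i = 89 deg (near-transiting).
print(position(1.0, 0.1, math.radians(89), 0.3, 1.2, 2.0))
```

Evaluating this once a month with the mean anomaly advanced by $2\pi\,\Delta t/P$ gives exactly the kind of tracking the question asks about.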
• $\begingroup$ For most known exoplanets we do have one point of tracking per orbit... so maybe not directly as the OP asks, but still. The other parameters might follow from RV measurements $\endgroup$ Feb 14 at 13:59
• $\begingroup$ @James K Thanks! Any references on where to find a mathematical description that includes the inclination? All I found up to this point imply motion in a plane $\endgroup$
– Jokerp
Feb 14 at 14:15
$\begingroup$ onlinelibrary.wiley.com/doi/pdf/10.1002/9781118542200.app1 The "hard" part is finding the position of the planet on the ellipse in the plane relative to the periapsis. The "easy" part is then rotating that ellipse to your coordinate system, that is basically a rotation in three dimensions, so is "ordinary" trigonometry, rather than weird things like the Kepler equation. $\endgroup$
– James K
Feb 14 at 14:38
|
With Frederick The Great: A Story of the Seven Years' War cover
George Alfred Henty (1832-1902)
1. 00 - Preface
2. 01 - King and Marshal
3. 02 - Joining
4. 03 - The Outbreak of War
5. 04 - Promotion
6. 05 - Lobositz
7. 06 - A Prisoner
8. 07 - Flight
9. 08 - Prague
10. 09 - In Disguise
11. 10 - Rossbach
12. 11 - Leuthen
13. 12 - Another Step
14. 13 - Hochkirch
15. 14 - Breaking Prison
16. 15 - Escaped
17. 16 - At Minden
18. 17 - Unexpected News
19. 18 - Engaged
20. 19 - Liegnitz
21. 20 - Torgau
22. 21 - Home
Among the great wars of history there are few, if any, instances of so long and successfully sustained a struggle, against enormous odds, as that of the Seven Years' War, maintained by Prussia--then a small and comparatively insignificant kingdom--against Russia, Austria, and France simultaneously, who were aided also by the forces of most of the minor principalities of Germany. The population of Prussia was not more than five millions, while that of the Allies considerably exceeded a hundred millions. Prussia could put, with the greatest efforts, but a hundred and fifty thousand men into the field, and as these were exhausted she had but small reserves to draw upon; while the Allies could, with comparatively little difficulty, put five hundred thousand men into the field, and replenish them as there was occasion. That the struggle was successfully carried on, for seven years, was due chiefly to the military genius of the king; to his indomitable perseverance; and to a resolution that no disaster could shake, no situation, although apparently hopeless, appall. Something was due also, at the commencement of the war, to the splendid discipline of the Prussian army at that time; but as comparatively few of those who fought at Lobositz could have stood in the ranks at Torgau, the quickness of the Prussian people to acquire military discipline must have been great; and this was aided by the perfect confidence they felt in their king, and the enthusiasm with which he inspired them.
|
There are a number of different types of back and spine pain, but the most common ones are:
1. Muscle tension is probably the most common cause of back pain. It can occur at any time and for no reason whatsoever. It happens when your muscles are unable to relax and loosen as they should. It can be caused by a number of things, but stress is the main reason for this condition.
2. When you are stressed your nervous system will automatically react by sending signals to your muscles to be on “alert” or “protect mode”. The nervous system includes both the Central nervous system and Peripheral nervous system. The central nervous system is made up of the brain and spinal cord, and the peripheral nervous system is made up of the Somatic and the Autonomic nervous systems.
3. A low back (lumbar region) strain is caused by repeated use and/or overuse of muscles, which leads to the muscle fibres becoming stretched or, worse, torn. The same can be said of ligaments and tendons being stripped from where they are attached. It is an injury to the lower back, resulting in damaged tendons and muscles that can spasm and feel sore. The lumbar vertebrae make up the section of the spine in your lower back.
4. Sciatica, which most of my readers know I suffer from in both my sacroiliac joints, is actually a very common back complaint. When the sciatic nerve is compressed, the pain will almost continually radiate in the same area; likewise, it may spread down the leg. The nerve runs from the lower back all the way down to the hips and around the buttocks. Slipped discs are another cause of pressure; hence inflammation and pain will unavoidably arise, resulting in sciatica. I have had a number of lumbar surgeries to remove bulging discs, which they say has now resulted in my sciatica problems, as the nerves have had to work a lot harder in that area due to limited movement from the metalwork in my lumbar spine.
5. Prolapsed, herniated, bulging or ruptured discs (there have been many names for them) come about when the discs in your spine rupture. This is what happened to me in my late 20s. Back then you were treated by a chiropractor, then a physiotherapist, and when the discs remained bulging and the pain constant, they operated to remove the problem disc and fuse the others. A 'slipped' (prolapsed) disc often causes sudden, severe lower back pain. The disc often presses on a nerve root, which can cause pain and other symptoms in a leg. In most cases, the symptoms ease off gradually over several weeks. The usual advice is to carry on as normal as much as possible. Painkillers may help. Physical treatments such as spinal manipulation may also help. Surgery is only an option if the symptoms persist.
6. Poor posture, especially nowadays with many people working from home and/or in front of a screen for a long time, can cause big problems with your back. Many people are not aware that their poor posture is the source of the pain in the first place, or they are but choose not to rectify this back and spine problem. I only type sitting in a chair with a lumbar support, I wear a back support brace, and I type on an ergonomic keyboard. Little things like this will make a big difference in the way you sit and work on your laptop/computer. Check out my post on this subject here.
7. Finally, osteoarthritis of the spine can also cause back pain. It manifests as a tenderness felt along the vertebrae and the regions surrounding them, and often it is not limited to back pain. Osteoarthritis of the spine is a breakdown of the cartilage of the joints and discs in the neck and lower back. Wear and tear in our cartilage is part of ageing; it can deteriorate and cause pain, particularly in the elderly.
|
Varberg, Halland – Exploring Sweden
One of the pearls along the western coast of Sweden is the town of Varberg. It is a popular summer destination and home to the majestic Varberg Fortress. With a population of around 36,000 inhabitants, it is the 36th largest locality in the country. The town has a long history and has gradually grown with the fortress and the harbor at its center.
A Short History of Varberg
The history of Varberg goes back to the Middle Ages and a time when the province of Halland was still a part of Denmark. At the time the town was known as Getakärr, of which only the church ruins still exist.
The Danish count Jacob Nielsen built a fortress here in the 13th century on the spot of a phryctoria, a signal fire. The fortress was his refuge after the murder of the king in 1286, a murder of which he was a suspect. The fortress received its name from the earlier phryctoria and was known as “Vårdkasberget” or “vardhberg”, meaning watch mountain. The name eventually became Varberg.
The Danes established New Varberg a few kilometers north in the 15th century and over time it completely replaced the old town closer to the fortress. The Kalmar War between Denmark-Norway and Sweden at the beginning of the 17th century marked the end of New Varberg and it was burned by Swedish forces in 1612. The event meant that the focus was returned to the development of Old Varberg.
The fortress was extended and modernized in the 16th and 17th centuries and was completed in 1618, only after the war with Sweden. The Treaty of Brömsebro in 1645 saw the ceding of Halland to Sweden and Varberg becoming Swedish.
The first centuries under Swedish rule saw a major fire in the town each century: in 1666, 1767, and 1863. It was in the 19th century that Varberg became a spa town. At the beginning of the 19th century Varberg was the largest town in Halland, with a population of around 1,400 inhabitants. Because of slow population growth, Halland didn’t get the benefits of industrialization at the same time as the rest of Sweden. However, it turned out to be the nearby textile industry that increased the town’s importance as a harbour. It would also mean the arrival of the railway at the end of the 19th century.
Things to Do and See
For anyone visiting there are of course a few must-see attractions in Varberg. In addition to these the large town center with its square and pedestrian streets will offer both shopping and restaurants.
Varberg Fortress
Varberg Fortress, or Varbergs fästning in Swedish, is the main attraction of the town and one of the majestic sights on the Western coast of Sweden. It was built in the Middle Ages and has seen both Danish and Swedish rulers. Its defensive purpose was eventually changed to being a prison, a task that it kept up until 1931. Today it is open for visitors to explore the history, views, and structure.
The Cold Bathhouse
Varberg is a spa town and nothing shows that as much as the cold bathhouse. The fortress is spectacular, but it is the bathhouse that is the most eye-catching. Pools for cold baths were constructed in the harbor area as early as the 1820s. The current building was constructed in 1903. Right next to the bathhouse is also a sandy beach popular with families.
Beach Walk
Strandpromenaden, the beach walk, was established already in 1912. It begins in the harbour area and will take you around the fortress and along the coast for four kilometers.
Getterön Nature Reserve
Getterön is an island located right outside of the harbour in Varberg. It is accessible by road and houses the airfield, camping, and beaches. There is also a nature reserve covering an area of 350 hectares. Here is also Naturum, a center telling visitors about the surrounding nature.
Societetsparken
Societetsparken is a central park with a long history. It was established at the end of the 19th century and was earlier known as the bathhouse park. It includes one of the best-preserved society houses in western Sweden, dating back to 1883. Today the park hosts many summer activities for visitors.
How to Get to Varberg
Flights: The closest airport is Halmstad Airport (HAD) located 72 kilometers away, with mostly domestic flights. There is also the larger Göteborg Landvetter Airport (GOT) 92 kilometres away, with both domestic and international flights.
Car: Varberg is located along the E6 between Göteborg and Halmstad.
Train: Several train services have departures and arrivals in Varberg, including SJ, Västtåg and Öresundståg. Destinations include Göteborg, Halmstad, Helsingborg, and Malmö.
Bus: Local and regional buses connect Varberg with the surrounding region.
Stockholm – 493 kilometers (5h 44min)
Gothenburg – 75 kilometers (58min)
Malmö – 203 kilometers (2h 16min)
Linköping – 298 kilometers (3h 37min)
Kiruna – 1652 kilometers (19h 45min)
|
Shrier I. Stretching before exercise: an evidence based approach. Br J Sports Med 2000;34(5):324–325. doi:10.1136/bjsm.34.5.324

Abstract: Clinicians are under increasing pressure to base their treatment of patients on research findings—that is, to practice evidence based medicine.1 Although some authors argue that only research from human randomised clinical trials (RCTs) should be used to determine clinical management,2 an alternative is to consider the study design (RCT, cohort, basic science, etc) as one of many variables, and that no evidence should be discarded a priori. In other words, the careful interpretation of all evidence is, and has always been, the real art of medicine.3 This editorial explores these concepts using the sport medicine example of promoting stretching before exercise to prevent injury. In summary, a previous critical review of both clinical and basic science literature suggested that such stretching would not prevent injury.4 This conclusion was subsequently supported by a large RCT published five months later.5 Had the review relied only on previous RCT data, or even RCT and cohort data, the conclusions would likely have been the opposite, and incorrect. Was there ever any evidence to suggest that stretching before exercise prevents injury? In 1983 Ekstrand et al6 found that a …
|
Flooding in Libya
Libya has been a regular victim of severe flooding for many decades and the problem is only becoming more severe. Heavy rains have caused significant problems, with flooding and landslides in urban and rural areas making day-to-day life infeasible for thousands.
Flooding in Al-Bayda, Libya
On November 6, 2020, Al-Bayda, Libya, experienced torrential rains and extreme flooding, resulting in the displacement of thousands. High water levels on public roads have made daily commutes impossible for many. Additionally, the floods have left thousands without electricity and have greatly damaged properties.
The flooding of 2020 is reminiscent of the flooding in the Ghat district in 2019, which affected 20,000 people and displaced 4,500. In June of 2019, flooding devastated areas in south Libya and damaged roads and farmland. Central infrastructure suffered unrecoverable damages, setting the region back. Areas prone to disaster are significantly limited in their progression and development when devastation is so frequent.
Flooding and Poverty
The pattern of flooding in Libya has consistently contributed to problems of economic decline, poor infrastructure and poverty. As one of the most common natural disasters, flooding impacts impoverished areas more severely because their infrastructure is not built to withstand floods or landslides.
Poor countries take a long time to recover from the impact of flooding because they do not have the resources and money to repair property damage and help people to bounce back from the effects. War-affected countries are even more vulnerable and Libya is such a country affected by war and conflict.
Within the country, a two-day holiday was declared on November 9 and 10 of 2020 due to the extreme flooding and $7 million has been allocated to address damages in Al-Bayda municipality. Since the flooding, there has been little recognition and support from the international community.
Humanitarian Aid
A humanitarian aid team from the European Civil Protection and Humanitarian Aid Operation (ECHO) assembled to provide aid to support the city of Al-Bayda and other cities vulnerable to flooding in Libya. The team worked to gather information and identify what resources are most needed to help families get back on their feet and be better prepared for future severe flooding and weather. Cleanup efforts are ongoing and teams started using satellite imaging and other data-collecting resources to help assess and plan for resource distribution.
The Need for Foreign Aid in Libya
In response to Libya’s chronic vulnerability to severe flooding, in 2019, the U.S. Government provided nearly $31.3 million to address the humanitarian needs of conflict-affected populations throughout Libya. Since the floods are ongoing, ongoing assistance is needed. Proactive and preventative measures need to be implemented in response to the devastating pattern of flooding in Libya. These are expensive investments, however, and Libya cannot implement these preventative measures alone. Help from the international community is crucial in order to create a more resilient country.
– Allyson Reeder
Photo: Flickr
|
How a car maker uses drones to make parts for robots
A robot used to build parts for electric cars can now also make parts from drones.
The Chinese company Wuhan Precision Industry Co. has developed a drone-powered assembly tool that can manufacture parts from drone components without using any human hands.
The company is also planning to use drones to manufacture parts for other applications in the future.
The robot was developed at Wuhan's factory in the city of Wuhanyi in northwest China, according to the company's website.
The company says the tool is capable of making a range of industrial-scale components including aluminum parts, titanium parts, steel parts and carbon composite parts.
The tool, which can be operated from a smartphone, is designed to help industrial robots and engineers work on more complex parts.
Wuhan has also launched a drone service to help customers get the most out of their existing robots and automation technologies.
It offers its own drone-based robotic service that offers up to six hours of flight time.
|
Engineering LibreTexts
7.6: The memory hierarchy
At some point during this chapter, a question like the following might have occurred to you: “If caches are so much faster than main memory, why not make a really big cache and forget about memory?”
Without going too far into computer architecture, there are two reasons: electronics and economics. Caches are fast because they are small and close to the CPU, which minimizes delays due to capacitance and signal propagation. If you make a cache big, it will be slower.
Also, caches take up space on the processor chip, and bigger chips are more expensive. Main memory is usually dynamic random-access memory (DRAM), which uses only one transistor and one capacitor per bit, so it is possible to pack more memory into the same amount of space. But this way of implementing memory is slower than the way caches are implemented.
Also main memory is usually packaged in a dual in-line memory module (DIMM) that includes 16 or more chips. Several small chips are cheaper than one big one.
The trade-off between speed, size, and cost is the fundamental reason for caching. If there were one memory technology that was fast, big, and cheap, we wouldn’t need anything else.
The same principle applies to storage as well as memory. Solid state drives (SSD) are fast, but they are more expensive than hard drives (HDD), so they tend to be smaller. Tape drives are even slower than hard drives, but they can store large amounts of data relatively cheaply.
The following table shows typical access times, sizes, and costs for each of these technologies.
Table \(\PageIndex{1}\): Memory access times, sizes, and costs.
Device Access time Typical size Cost
Register 0.5 ns 256 B ?
Cache 1 ns 2 MiB ?
DRAM 100 ns 4 GiB $10 / GiB
SSD 10 µs 100 GiB $1 / GiB
HDD 5 ms 500 GiB $0.25 / GiB
Tape minutes 1-2 TiB $0.02 / GiB
The number and size of registers depends on details of the architecture. Current computers have about 32 general-purpose registers, each storing one “word”. On a 32-bit computer, a word is 32 bits or 4 B. On a 64-bit computer, a word is 64 bits or 8 B. So the total size of the register file is 100–300 B.
The cost of registers and caches is hard to quantify. They contribute to the cost of the chips they are on, but consumers don’t see that cost directly.
For the other numbers in the table, I looked at the specifications for typical hardware for sale from online computer hardware stores. By the time you read this, these numbers will be obsolete, but they give you an idea of what the performance and cost gaps looked like at one point in time.
These technologies make up the “memory hierarchy” (note that this use of “memory” also includes storage). Each level of the hierarchy is bigger and slower than the one above it. And in some sense, each level acts as a cache for the one below it. You can think of main memory as a cache for programs and data that are stored permanently on SSDs and HDDs. And if you are working with very large datasets stored on tape, you could use hard drives to cache one subset of the data at a time.
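The "each level acts as a cache for the one below" idea can be quantified with the usual average-access-time recurrence: every access pays the latency of each level it reaches, and a miss falls through to the next level. In the sketch below the latencies are the round numbers from the table; the hit rates are invented for illustration.

```python
# (level, latency in ns, hit rate), fastest first; the last level
# always hits. Hit rates here are invented for illustration.
levels = [
    ("cache", 1, 0.95),
    ("DRAM", 100, 0.99),
    ("SSD", 10_000, 1.00),   # 10 microseconds, expressed in ns
]

def average_access_time(levels):
    """Each access pays the latency of every level it reaches;
    a miss at one level falls through to the next."""
    total, reach = 0.0, 1.0   # reach = fraction of accesses getting this far
    for name, latency, hit_rate in levels:
        total += reach * latency
        reach *= 1.0 - hit_rate
    return total

print(f"{average_access_time(levels):.1f} ns")   # 11.0 ns
```

Even with a 99% DRAM hit rate, the rare falls to SSD contribute as much here as DRAM itself, which is why each level of the hierarchy has to catch the overwhelming majority of the accesses that reach it.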
|
By the Hebrew term “Shoah” we mean the genocide of some six million Jewish people carried out between 1933 and 1945, culminating during World War II in Europe under the Nazi regime. As well as Jews, other victims were Roma, Serbs, opponents of the regime, disabled people, homosexuals and Jehovah’s Witnesses. All these people were deported to concentration or extermination camps (“Lager” in German) and were completely deprived of their identity: their hair was shaved, a tattooed number became their new name, and they were dressed in a striped uniform. In addition, they were forced to labour until death or killed in gas chambers. The camps were located in Germany as well as in Poland; among them the most terrible was Auschwitz-Birkenau, and the few who succeeded in surviving barely escaped from there, sometimes helped by Germans who worked there. The journey they were forced to make from their homes to the camps lasted several days; in fact many people died of cold, hunger or suffocation before reaching that hell. On arrival, males and females were separated and a selection was made based on state of health in order to verify fitness for forced labour. Children, the old and the ill were immediately taken to the gas chambers and killed. An important testimony of Jewish life at that time is the memoir “If This is a Man” by Primo Levi. Unfortunately, the survivors of the Holocaust are fewer and fewer, but we must not forget what happened, and above all we must never repeat this dramatic event, hoping for a better world and helping each other.
Sofia Mangiavillano
4 A turistico
|
Survey 6- Dreams & Designers (1895-1905)
Lecture Summary
This week during the lecture we covered the period between 1895 and 1905, a time characterized by immense progress in the arts and design. For example, during this time we see art nouveau rise from the ashes which the arts and crafts movement left behind. Art nouveau took much of its aesthetic inspiration from the arts and crafts movement, which is seen through their shared use of organic lines and geometric shapes. However, they differed in their philosophy: the arts and crafts movement was a push against industrialization and mass production, whereas art nouveau embraced it. Important figures in this movement include Victor Horta and, most famously, Alphonse Mucha. Other notable movements we covered include the secessionist movement, which was inspired by the work of the Glasgow Four (a group of groundbreaking artists from the Glasgow School). This movement included artists such as Gustav Klimt.
Research – Edwardian Fashion
As always, fashion is much more than the garments which we use to clothe ourselves. It is a time capsule of its era, an expression of values, influences, and ideals. Fashion is a product of its time and place, and is affected by the context in which it was created.
Edwardian fashion, for example, is an example of this phenomenon. The major driving force at the time was the industrial revolution, which was in full swing, sending Europe headfirst into a totally foreign world. The revolution came with several effects, including the creation of the middle class and the facilitation of garment making. In particular, the sewing machine greatly sped up the process of making garments and consequently resulted in a boom of factory-made clothing.

As a result, the newly created middle class could enjoy a range of benefits which normally would have been reserved for the wealthy. Literacy rates, leisure time and money all increased due to the effects of the industrial revolution, contributing to a culture capable of interesting itself with other things. With the world that once was turned on its head, the role of its citizens, especially women, was thrown into question. The world began to stray from its strict Victorian ways, seeking out luxury, opulence and the lifestyle of the elite, such as that of the British monarch Edward VII.
The Gibson Girl – An Icon
Charles Dana Gibson’s 1898 “Gibson Girl”
To create a clearer picture of the aesthetic ideals of the time, we can take Charles Dana Gibson's illustrations of an unnamed woman as the icon and role model of this movement in fashion. “The Gibson Girl”, as she was commonly known, was portrayed as bold and fun-loving, all while retaining a cool sophistication. She was the icon of the century and embodied all the elements which the fashion savvy copied.
Charles Dana Gibson’s illustration of the Gibson Girl was the icon of the Edwardian period and exemplifies its traits, as well as the figure which was sought after. These elements include her long, slender neck, small waist with an ample bust and hips, and her iconic hairdo.
Another of Gibson’s illustrations of the so-called “Gibson Girl”; here we see her embodying the Edwardian ideal, with the tailored jacket on the left, the iconic hairdo in the middle, and the high-collared blouse on the right.
An Edwardian dress featuring the “monobosom”
One notable change in this style was the silhouette, mostly due to the change in corsets. Victorian fashion favoured the hourglass figure, whereas the Edwardian era leaned towards a soft S curve. In order to achieve this, “S-bend corsets” (also known as “straight-back corsets” and “health corsets”) were popularised. Unlike Victorian corsets, S-bend corsets pushed the chest forward and the hips backwards and “promoted a proud posture”. The corset did not divide the bust, creating a “monobosom” (think pigeon) which was considered fashionable.
The shape of the skirt also changed during the Edwardian era, as styles such as the bustles of the Victorian era fell out of fashion. Two-piece garments were the most common, with skirts that cinched in the waist and flared as they reached the ground, creating a lily or trumpet silhouette. In 1901, skirts embellished with ruffled lace or fabric were also popular. The silhouette of the skirts began to change in 1904: the shape was made fuller and clung less to the hips, and as 1905 approached skirts began to gently fold inwards. Along with this trend, the waistline of these dresses rose until 1907. Throughout the entire period, however, skirts retained the length of their Victorian counterparts (often brushing the floor), and as time passed trains became more and more common, even for everyday attire. Tailored jackets were commonly paired with skirts; they first began to appear in the 1880s and grew in popularity until their peak in the early 1900s.
|
write based on the prompt attached
Project #1: Comparative Analysis and Argument
Note: Some of these instructions are modified from our course text, Successful College Composition.
This assignment builds on what you learned in RWS 200 and other introductory composition courses. It has two parts to it: 1) An analysis and evaluation of two texts, and 2) A brief argument supporting your own position on the topic. Remember that these two parts of the assignment should form a cohesive essay.
Part 1:
Begin your assignment by creating an introduction and thesis statement that overviews your project and position on this topic.
Next, analyze and evaluate “The Case for Torture” by Levin and “The Case Against Torture” by Solomon, both linked to on page 133 of our text.
Think about the rhetorical situation, the purpose of the articles, and the various rhetorical strategies used in these two arguments. Next, explain elements of context embedded in the arguments—the clues that suggest what the arguments are responding to, both in the sense of what has been written before it and in the sense that it is written for an audience in a particular time and place—and to evaluate how effectively the arguments persuade the audience within this specific context.
Finally, decide which one you think is the most convincing to its target audience, based on your analysis of the authors’ use of ethos, pathos, and logos, along with other rhetorical devices and strategies. Writing a comparative analysis means more than simply summarizing the different arguments. Instead, you will be making an argument about the two texts, using as support specific examples from the articles you select. For instance, you may claim that one argument is more effective than another because of the reliability and quantity of its support (i.e., logos). You may also make claims about the credentials or biases of the authors and their testimony or their writing strategies, including their definitions of key terms, overall organization, and tone.
In your rhetorical comparative analysis essay, you are expected to:
Explain your project (not the projects of your texts’ authors) for the paper using metadiscourse.
Discuss the rhetorical situation of each article, paying particular attention to the authors’ intended audience and emphasizing the context to which the arguments are responding.
Identify the major claims, explaining how they relate to the overall arguments.
Compare each argument by evaluating each author’s use of ethos, pathos, logos, and other rhetorical strategies. You may also look at the authors’ use of evidence, tone, organization, and other writing strategies.
Explore the significance of your analysis (why does this matter?).
Use an effective structure that carefully guides your reader from one idea to the next, thoroughly editing your writing so it is comprehensible and appropriate for an academic audience.
Part 2: Next, use this analysis and evaluation to support and develop your own position on the topic.
Using your evaluation and analysis of the two texts from Part 1, develop your own position for or against this issue.
Make your appeals in support of your position by using sound, credible evidence. This evidence should be drawn from the articles evaluated above, but you may also use outside sources to strengthen your argument. If you choose to do this, use a balance of facts and opinions from a wide range of sources, such as scientific studies, expert testimony, statistics, and personal anecdotes. Each piece of evidence should be fully explained and clearly stated. See Chapters 1.3 and 4.7 for information on how to correctly incorporate outside sources into your writing.
Make sure that your style and tone are appropriate for your subject and audience. Tailor your language and word choice to these two factors, while still being true to your own voice.
Finally, write a conclusion that effectively summarizes your main argument and reinforces your thesis. Include a brief summary of your analysis from Part 1 and explain how this informed the position you took in Part 2.
Other Requirements:
Your essay must be 4-5 pages long (not including the title page or references page)
Please use Times New Roman, 12 point font, double spaced
You may use MLA or APA format—choose the format you are most familiar with.
Grading Criteria:
Does the essay incorporate all components from Parts 1 and 2 of the instructions?
Is the essay clear, coherent, and well organized?
Is the analysis of the two arguments well developed, objective, and rhetorically based? Does it address all elements listed in the assignment instructions?
Is the argument clear, well-supported, and persuasive? Does it avoid bias and logical fallacies?
Does the essay clearly show the relationship between the analysis and the development of the argument?
Is the essay revised, edited, and proofread? Is the writing clear?
Does the essay follow either APA or MLA documentation and formatting guidelines? Have the two essays used for analysis been listed on the references or works cited page? Have in-text citations been used correctly and appropriately?
|
Music Education: The Center of the Wheel
It is July. We have reached the final month of summer break and our attention is turning to the school year ahead. Decisions are to be made regarding what music will be performed and which events to attend. Every music education program is unique and faces its own challenges. Most have multiple seasons and ensembles to plan. It is easy to set goals for a marching band season or a choral festival, but how is that enhancing your entire program? What is the center of your music education wheel?
Photo Credit Sandy King
Concert Band. Marching Band. Jazz Ensemble. Chamber Groups. Indoor Percussion. All of these are part of a music education program. Some programs focus heavily on marching band over all others. Some put concert ensembles as their priority. In reality, each program should work towards a common goal.
What is the yearly goal for your music education program? Do you have one?
Every part of our program – every rehearsal, every piece of music, anything you do – must drive your group closer to the yearly goal. Each part of the program should have its own specific goal, but these ideas need to address your yearly goal.
What can your yearly goal be for your music education program? Anything. Maybe it is having students perform all 12 major scales in quarter notes at a tempo of 100 beats per minute. How can you use your marching band rehearsal to meet that goal? Or your goal can be to perform a large ensemble composition with choir and winds. Find ways to work on pitch and tone stability at softer dynamics. There are hundreds of options. These goals can be piece-related, focused on student success, or just fun. The idea is to build a program with a yearly focus. This will help bring continuity and enjoyment to all parts of music education.
Creating a more efficient marching band rehearsal Corey's Commentary
As band directors, we are constantly on the go and working behind the scenes. Sometimes we do not plan appropriately for the next rehearsal. This often leads to a more chaotic practice and frustration. Having a regular rehearsal routine helps set the standard and expectations every day, allowing students to feel comfortable and ready. In this episode, I present a few things that helped me manage rehearsal expectations and build to a successful day and season.
1. Creating a more efficient marching band rehearsal
2. Center of your music education wheel
3. Considerations when writing for color guard
4. Emotional Pacing of your Marching Band Show
5. Should I join marching band in college?
|
1. eastern hemisphere; the Orient. See also 西半球
Wikipedia definition
2. Eastern Hemisphere: The Eastern Hemisphere is a geographical term for the half of the Earth that is east of the Prime Meridian and west of 180° longitude. It is also used to refer to Europe, Asia, Africa, and Australasia, vis-à-vis the Western Hemisphere, which includes the Americas. In addition, it may be used in a cultural or geopolitical sense as a synonym for 'Old World'.
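The longitude bounds in this definition are easy to express in code. Here is a minimal sketch in Python (the function name and the east-positive sign convention are my own choices, not part of the dictionary entry):

```python
def in_eastern_hemisphere(longitude_deg: float) -> bool:
    """Return True if a longitude lies in the Eastern Hemisphere.

    Follows the definition above: east of the Prime Meridian (0°)
    and west of 180°, using the east-positive convention.
    """
    return 0 < longitude_deg < 180

# Tokyo (~139.7° E) is in the Eastern Hemisphere;
# New York (~74.0° W, written as -74.0) is not.
print(in_eastern_hemisphere(139.7))   # True
print(in_eastern_hemisphere(-74.0))   # False
```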
Read “Eastern Hemisphere” on English Wikipedia
Read “東半球” on Japanese Wikipedia
Read “Eastern Hemisphere” on DBpedia
|
Main Tips On How To Write Case Study Analysis
Unsure of how to do a case study analysis? Read on to help you get started and discover useful ways to make it great!
What is a Case Study Analysis?
A case study analysis is a form of academic writing which analyses a situation, event, place or person to form a conclusion. They are useful for phenomena that can’t be studied in a laboratory or via quantitative methods. Case studies are commonly used in several fields, such as the social sciences, medicine and business.
Difference Between Research Paper and Case Study
There are features which are common to both research papers and case studies, so to understand how to write a case study assignment you need to be aware of the differences. Case studies normally present a full introduction about a topic, but do not require citation of other similar works, or the writer’s own opinion. Conversely, research papers do not require a full introduction about the general topic but do require citing of other similar work as well as the writer’s own views.
Types of Case Studies
There are generally five types of case study and it is important to work out which type you have been tasked to write before you can begin to learn how to write a case study:
1. Historical case studies focus on historical events and contain various information that provides different perspectives on the time period and applies them to current-day parallels, e.g., ‘Racism Amongst the French Aristocracy in the 1800s’.
2. Problem-oriented case studies aim to solve a real life or theoretical problem e.g., ‘Homelessness in New York’.
3. Multiple/Collective/Cumulative case studies include the collection of information to provide comparisons, e.g., the value of a specific resource in different countries.
4. Critical/intrinsic case studies investigate causes and effects of a case e.g., Why Toys Remain Gender Stereotyped.
5. Illustrative/instrumental case studies describe particular events, the outcomes and what has been learned as a result. For example, ‘The Effects of Dance Therapy in Depressed Adolescents’.
Case Study Examples
Case study analysis titles are normally expected to include the words ‘case study’. Here are a few case study examples titles:
• Santander’s Expansion in Canada: Case Study Analysis
• Case Study on the Effects of Art Therapy in Children with ADHD
• The National Health Service’s Treatment of People with Learning Disabilities, Case Study Analysis
• Toxicological Case Study of The Mississippi River
• Reading Development in Remote Areas of Nigeria: A Case Study
• Case Study on the Growth of Veganism in Berlin
Writing a Case Study Draft
Many students find it useful to write up a rough draft before beginning. A rough draft can help you get creative and explore options before deciding on the most suitable focus. Sit down with a coffee, paper and pen, and read the case study brief thoroughly. Then begin jotting down various ways, ideas, and possible directions to go before you decide on the best one! Don’t worry about writing neatly, it may hinder your creativity! You can draw up a neat version for your educational instructor (if required) later.
How to Format a Case Study
To know how to write a case study paper, you need to get an idea of the case study format for students, which can consist of up to eight or more sections. A basic generalized formatting guide is as follows:
1. Introduction/The Executive Summary: This initial section gives the reader an overview of what your case study analyses and its findings. Remember to include a thesis statement.
2. Literary Review/ Background information: Here you can write the most relevant facts and pinpoint the topic issues.
3. Method/Findings/Discussion: This part allows you to focus on the specific case you have chosen and your findings. These sometimes may be required to be written in separate sections.
4. Solutions/Recommendations/Implementation: This is the place you can discuss your chosen solution, why it is appropriate, and how your proposed solutions can be put into practice. The solutions will incorporate realistic and achievable ways to improve a situation or solve a problem. Testable evidence may be included to back up the solutions you are proposing.
5. Conclusion: Provide the reader with a summary of key points from your case study evaluations and proposed solutions.
6. References or Bibliography: The reference list or bibliography will appear on a separate page and will list all the sources of information used and consulted within your case study. They will be listed according to your educational establishment’s required citation style, e.g., MLA, APA, Harvard, Chicago, etc.
7. Appendices (if applicable): There may also be material which is too ‘bulky’ (e.g., raw data, graphs, images, notes) to include elsewhere in your work, so the appendices are where this subsidiary material should go.
8. Note: Not all educational establishments require the above case study analysis format. They may only require some of these sections, or request them in a different order, so always check with your instructor what the required format is before beginning work.
How to Write a Case Study Outline
A case study outline is a useful way for an educational instructor to see that a student is on track to successfully complete writing a case study analysis and identify any potential problems before the student begins working on the study.
Before beginning your outline, retrieve relevant, credible sources of information on your topic from academic search engines, such as Google Scholar. Follow this by writing down the key points you have discovered from these sources. You may only need to read the abstract or summary of the sources to pick out the key points. Then write your thesis, as this will help guide you and keep you on track when writing the outline.
The case study outline may consist of the following information in preparation for writing the case study in full:
1. Title page
• Case study title
• Student’s name
• Educational instructor's name
• Course name
2. Introduction/Summary
• A paragraph giving an overview of the topic, your thesis, and your key findings.
3. Main Body Paragraphs (x3)
• Literature Review/Background Information
• Method/Findings
• Discussion/Solutions/Recommendations
4. Conclusion
• Paraphrase or answer your thesis
• Summarize your case study
• A statement relating to your future recommendations or ideas, and a broad closing sentence about the topic in general that may encourage the reader to ponder further.
5. Reference List or Bibliography
• List of all the sources of evidence used to create your case study, in your educational establishment's required citation style (APA, MLA, Chicago, Harvard, Turabian).
How to Write a Case Study
There are various extensive ways to structure a case study but they all generally boil down to five main areas; introduction, literature review, method, discussion, and conclusion. So, now you’ve got the basic information about how to write a case study, let’s explore the general sections in more depth.
1. Introduction/Summary: The introduction should aim to hook the reader's attention in the first few sentences by explaining, in an interesting way, the question you will be answering or the case you will be exploring. Then include some background information on the topic and details of your selected case (explaining how they relate). State why the topic is important and why the selected case enriches current available information on the topic. Summarize your literature review and include previous case studies that your findings will build on. End with the possible ways that your case study can be useful in the future and your thesis statement.
2. Background Information/Literature Review: Present relevant information from various reliable academic sources to help the reader understand the extent of research in your chosen topic and help them understand the importance of your case study (e.g., enhances current understanding, fills a gap in knowledge). Include descriptions of key theories about your topic. You can obviously use the internet and library to locate relevant literature but don’t forget to also check your lecture notes or class textbook to seek ideas/pre-existing research/theories that you may want to include.
3. Method/Findings: Explain why you selected your case, how it is related to the topic/issue, your particular research methods and why you chose them/why they are suitable. Bear in mind that data collection methods for case studies are often qualitative, not quantitative, for instance interviews, focus groups, primary and secondary sources of information are frequently used. Also, try to organize the data you have discovered in a way that makes sense e.g., thematically, chronologically.
4. Discussion/Solutions: Restate your thesis, then draw your own conclusions as a result of what you have discovered from your research and link to your thesis. Clearly inform the reader of your main findings, explaining why the findings are relevant. Think about the following questions:
• Were the findings unexpected? Why/Why not?
• How do your findings compare to previous similar case studies in your literature review?
• Do your findings correlate to previous findings or do they contradict them?
• Are your findings useful for deepening current understanding of the topic?
Next, explore possible alternative explanations or interpretations of your findings. Be objective and explain your case study’s limitations. End with some suggestions for further exploration based on the limitations of your case study.
5. Conclusion: Inform the reader precisely why your case study and your findings are relevant, restate your thesis and your main findings. Give a brief summary of previous case studies you reviewed and how you contributed to the expansion of current knowledge. End by explaining how your case study and its findings could form part of future research on the topic.
Your instructor should have a good example of a case study to show you, so don't be afraid to ask. They will surely want to help you learn how to write a case study!
How to Create a Title Page and Cite a Case Study
The title page needs to be formatted according to your educational establishment’s recommendations, but a general format normally consists of all or some of the following:
• An interesting title that reflects the content of the case study, includes the words ‘Case Study’, and is around 5-9 words long.
• Your full name
• Your course name
• Your educational instructor’s name
• The name of the educational establishment you are attending
• The submission date
Whenever you include another writer’s work or ideas in your case study paper, you need to accurately cite the original source in your educational establishment’s required citation style. You can do a quick internet search on ‘how to do a case study in APA’, ‘how to do a case study analysis in MLA’, ‘how to make a case study in Harvard’, ‘how to do a case study in Chicago’, etc., to get more accurate and specific guidelines. Generally, a short in-text citation, e.g., (Hruby & Hu, 2015), will be written straight after the work or ideas you have used, and a longer, full citation will appear in your reference list or bibliography at the end of your case study, e.g.,
Hruby, A., & Hu, F. B. (2015). The epidemiology of obesity: a big picture. Pharmacoeconomics, 33(7), 673-689.
To conclude, case studies are useful for providing an analysis of an event, a situation, a person, or a place. There is some overlap with research papers, so it is important to pay attention to what your educational instructor has requested. Case studies can be overwhelming as a result of all the various sections and information that may be required, but taking it a section at a time can help make a great case study, as can allocating sufficient time to research and planning. Hopefully, you now know how to write a case study!
|
4.1 discussion: devotional – is profit biblical?
Getting Started
Most theologians and students of the Bible would point out that it is the love of money, and not money itself, that is the problem, as reflected in our selected verses. Remember, Jesus said “Render unto Caesar,” so he was not going to get involved in petty stuff like taxes. Where Jesus had a problem was when someone put money over God, and having money or the acquisition of money became more important. I don’t see that it is wrong to be paid for your skills. Surely Jesus had no problem with his earthly father being paid to build a house, and there is no reason not to expect people to be paid today, whether they are filling racks in a retail store or calculating taxes. We could argue that throwing a football is not worth 1300 times the value of carrying a rifle into harm’s way, but that is another topic.
What is profit anyway? Profit is bringing in more than you spend and it has applications at home and at the office, both in for-profit entities and non-profit entities. If you are selling your product for less than it costs to make, you cannot support any charity. You also can’t buy new equipment, or develop new products, or provide raises or employee benefits.
By the way, how did Jesus eat and clothe himself? He was called a carpenter, so we can relatively safely assume that he must have practiced that trade at some point in his life to earn a living. However, it does not appear that he worked later in life because there is no mention of it, so he must have lived off of donations and gifts. Can anyone or any company make a donation or gift without first making a profit? No.
• Develop a biblical framework to resolve ethical dilemmas in marketing strategies and tactics.
• Bible (New International Version)
Background Information
2. You identified a company or industry of interest to use in your degree program. Until the world finds a way to not need charities, we need profitable organizations to pay good wages to generate the ability to personally donate. There is often public debate about “reasonable profit.” What is reasonable? Is it an after-tax rate of 3%? 6%? 13%? What about the years when profit is -4% or -11%, meaning a loss was incurred? Can a company then be allowed to make 18% the next year for an average of 7% if 7% is considered “reasonable”? Who and what determines “reasonable profit?”
3. Consider these questions and post your thoughts on them:
1. Is profit inherently evil or can it exist in the kingdom of God? Please explain.
2. Is there such a thing as unreasonable profits in your company or industry of interest? If so, how do we determine that level where profits become unreasonable? If not, what do you say to those who feel there is?
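The averaging arithmetic in the background note (a -4% year followed by an 18% year averaging out to 7%) can be sketched in a couple of lines of Python (the function name is my own):

```python
def average_margin(yearly_margins):
    """Simple arithmetic mean of after-tax profit rates, in percent."""
    return sum(yearly_margins) / len(yearly_margins)

# A loss year followed by a strong year, as in the prompt:
print(average_margin([-4, 18]))  # 7.0
```

Whether a two-year average is the right way to judge "reasonable profit" is, of course, exactly the question the discussion asks.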
|
Should E.T. finally give Earth a ring, it’s important to understand not only what the message says but why it is being sent, a speaker at a talk about extraterrestrials urged this week. This requires an understanding of alien social behavior, that is, sociology.
“We keep complaining about the fact that we know so little about extraterrestrials in general, and even though sociology is mentioned in the Drake Equation, it is generally agreed that is the most difficult aspect to address,” said Morris Jones, an Australian who describes himself as an independent space analyst.
The Drake Equation is a set of variables proposed by astronomer Frank Drake that estimates how many intelligent, communicating civilizations there are in the universe. While speaking at the International Astronautical Congress Wednesday (Oct. 1), Jones pointed out that most talk about alien communications focuses on the basics: how they transmit, where to search, and whether we can hear them. But to fully understand the message, we have to understand how their society works.
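The Drake Equation itself is a straight product of seven factors, N = R* · fp · ne · fl · fi · fc · L. A minimal sketch in Python; the sample inputs at the end are purely illustrative placeholders, not estimates from the article:

```python
def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime_years):
    """Estimated number of communicating civilizations in the galaxy.

    r_star: average rate of star formation
    f_p: fraction of stars with planets
    n_e: life-supporting planets per star with planets
    f_l: fraction of those planets that develop life
    f_i: fraction of those that develop intelligence
    f_c: fraction of those that release detectable signals
    lifetime_years: how long such civilizations keep transmitting
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime_years

# Placeholder inputs only -- every term is debated, and the
# sociological terms (f_i, f_c, lifetime) are the hardest to pin down.
n = drake_equation(1.0, 0.5, 2.0, 1.0, 0.1, 0.1, 1000)
```

The point Jones makes fits the formula: the last few factors are sociological, which is why understanding alien society matters as much as detecting the signal.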
|
‘Sneaky, sneaky’: How to type in the UK’s ‘Snoopy’ typeface
In the UK, a typeface is defined by a single character or word that stands for the word or word combination it is supposed to represent.
That’s the kind of thing that would make it difficult to type the UK in a way that isn’t misleading.
The UK has a long and storied history of using sans serifs.
So why did it go all the way back to the days of the Romans?
This is a tricky question, and it’s one that researchers are trying to answer through the work of a new typeface called Sans Mono.
The typeface has been designed to make it easier to type across the UK because it was created by a British designer, David Lipscomb.
But the answer isn’t that simple.
The British were the first to create sans seriff typefaces in the 19th century.
It’s a style of typeface that originated in France and developed in England, Italy, and the United States.
In the early 20th century, many people, including printers, typographers, and typographers’ assistants, also started using sans serifs to type on paper.
And for the next 50 years, British typographers used the sans sera as the default typeface on their papers.
But a new style of serif was introduced by Louis Cyrille and his colleagues in 1797, which was a direct descendant of sans serf.
The typeface was known as Cyrillic, after the French word for serf, and Cyrillos were used by printers in many countries across Europe.
But sans seriffs, sans serfbefs, and sans serflakes were all variations of the same typeface.
In the 18th century there was an explosion in printing and typemaking in Europe.
In 1798, Joseph Smith invented the first commercial printer in America, which became the first printing press to use mechanical ink and metal printing plates.
The first mass-produced typefaces were published in 1819 by the British printer James Watt and were a direct descendent of the Cyrilloid typeface created by James Joyce.
But it wasn’t until the 1930s that serif fonts began to appear on printers’ papers in Europe and America.
The Sans Mono typeface originated from a single British designer called Lipscombs.
In 1883, he designed a type face for the UK called Serif Sans.
He had a great idea: the typeface should be a little bit different from the serif typefaces used in England and France.
The result was a sans serfo, or serif sans serfa, or a sans-serif sans-fafafafafa.
And it was a good choice.
The sans serfgafafa is a very different typeface from the rest of the typefaces that came before it, including the British typeface for which it’s named.
The serif Sans Mono looks very similar to the serf typeface used in the US, which means that people would find it hard to tell the difference.
But this new type of serf was far from perfect.
Like the serff, it had a very narrow stroke width and the stroke length was a bit long.
And like the serfs, it didn’t have the same width as the serfa.
It was also much more difficult to read.
But in 1892, Lipscomb’s daughter, Helen Lipscomb, created the type for the first time, and soon the rest was history.
Lipscomb’s son, George Lipscomb, was able to create another new typeface with a stroke width of 1.4 inches, and by 1901, the type had appeared on all the leading typesetting houses’ papers.
And in 1905, Lipscomb wrote his own serif, which he called Sans Sans Mono, and then used it to create the UK typeface in 1906.
By then, the British had replaced the seriffs of the 1800s with serf serf sans serfs.
But the sans mono didn’t stop there.
There were a number of other versions of sans monos.
Some were based on Cyrillics, which are actually a variant of Cyrill, which in turn is a variant on the Cyril.
Some of the variants had a slightly wider stroke width than the serflaves of the original.
And some of the sans mono fonts had a wider stroke than the sans-monos.
The range of serfs that could be used for typeface design and the number of different variants meant that serf-type serif variants were extremely popular, and there were a variety of serfing methods for typefaces to choose from.
In fact, some of these variants were used to create some of today’s most iconic typefaces, including Courier, Times New Roman, Helvetica, and Times New Style.
But in the late 19th and early 20
|
Baker’s Math
At least I didn’t title this: Baker’s Chemistry! Yes, there is math in baking. However, rather than flat images on paper of pies or pizzas, bakers can practice math concepts like fractions with actual, delicious baked goods. For bread making, math is essential, and delicious math is the easiest to comprehend.
Baking recipes are called formulas and are ratios of the ingredients to each other. I’m not a mathematician, so I can’t with confidence say all, but much, if not most, of math is about ratios between things. Bakers need to understand the percentages of the ingredients in a formula. With Baker’s Math, the flour in a formula is always the 100%, with the remaining ingredient percentages based on the total weight of the flour. Feeling dizzy yet? Has dry mouth set in? Here’s an example using the Country Loaf I’ve included in past PCC Classes:
Ingredient | Weight in grams | Baker's percentage
High extraction Yecora Rojo flour | 800 grams | 80%
Whole grain Expresso wheat flour | 200 grams | 20%
Water (ºF?) | 750 grams | 75%
Levain | 200 grams | 20%
Sea salt | 20 grams | 2%
The total flour here is 1000 grams, making two nice-sized loaves. The amount of Yecora Rojo flour is 80% of the total flour being used. The amount of whole grain flour is 20% of the total flour being used. This applies to the amounts of water, levain and salt as well. If I wanted to make just one loaf, I would halve the ingredient weights, but those percentages would remain the same. These same percentages will apply to a bakery making hundreds of loaves. You can, by all means, bake bread with just one type of flour, which would eliminate the pesky adding together of the high extraction and whole grain flours. I almost always bake with combinations of flour for flavor and performance. I also use Baker's Math in the care and feeding of my sourdough starter. Baker's Math is essential when recipe testing, tweaking percentages of liquid in a formula. I always measure by weight using grams. Grams give me nice whole numbers which halve or increase quantities easily.
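Since it is all ratios, the arithmetic is easy to script. Here is a minimal Python sketch (my own illustration, using the weights from the table above) that computes baker's percentages and scales a formula:

```python
def bakers_percentages(flours, others):
    """Baker's Math: total flour weight counts as 100%;
    every other ingredient is a percentage of that flour weight."""
    total_flour = sum(flours.values())
    return {name: 100 * grams / total_flour
            for name, grams in {**flours, **others}.items()}

def scale(ingredients, factor):
    """Scale every weight by the same factor; percentages are unchanged."""
    return {name: grams * factor for name, grams in ingredients.items()}

flours = {"Yecora Rojo flour": 800, "Expresso wheat flour": 200}
others = {"Water": 750, "Levain": 200, "Sea salt": 20}

pct = bakers_percentages(flours, others)   # e.g. Water -> 75.0
half = scale({**flours, **others}, 0.5)    # one loaf instead of two
print(pct)
print(half)
```

Halving for one loaf (or multiplying for a bakery batch) changes only the weights; the percentages stay fixed, which is the whole point of Baker's Math.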
While there are several factors involved in successful bread baking, using weights and baker’s percentages will get you very close to making the same loaves of bread each time you bake. If you don’t have a scale, I encourage you to get one. Then armed with scale, paper and pencil, you can create your own delicious bread formulas!
|
Problem Solving Multiple Choice Topic Test
Test your understanding of problem solving with this ten question, self-marking multiple choice exercise.
Here are 10 Problem Solving multiple choice questions written by people from around the world while using the main Pentransum activity. You can earn a Transum Trophy for answering at least 9 of them correctly.
1. How many buses will be needed to hold 476 people when a bus can hold 52 people?
This question was suggested by Finn Meyer, Christian-von-Dohm-Gymnasium Goslar, Germany
3. A snail climbs up a 12m wall. It climbs 3m each day, but slips back 2m each night. On what day will it reach the top of the wall?
This question was suggested by Gillian, New Zealand
4. Mr and Mrs Thomson have six children and the sum of their ages is 63. What was the sum of the ages of the Thomson children 7 years ago?
This question was suggested by Caitlyn Dawbin,
5. A frog is in a well 12m deep. Every time the frog jumps 2m, it falls down 1m. How many leaps does the frog have to make to reach the surface?
This question was suggested by Yash, United Arab Emirates
6. I'm thinking of a number: I add 6, divide by 4 and then multiply by 5, and my answer is 35. What was my original number?
This question was suggested by Rebecca and Michelle, Leeds
8. If Goldilocks and the Three Little Pigs sat down at a table, how many legs would there be?
This question was suggested by Flossie Roberts, Portland, Dorset
9. If Bob has 44p and Bill has 22p how much does Bob have to give Bill so they have the same amount of money?
This question was suggested by Sophie Brown, Newcastle
10. The perimeter of a rectangle is 28.6cm. One side of the rectangle is 5.1cm. What is the size of the longer side of the rectangle?
This question was suggested by Hockey Puck, Birmingham
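Several of these puzzles reward a quick sanity check. The sketch below (my own Python, not part of the Transum site) handles the bus question with ceiling division, and the snail and frog questions (3 and 5) with a tiny climb-and-slip simulation:

```python
import math

def climb_steps(height, up, slip):
    """Steps needed for a climber that gains `up` per step and
    slides back `slip` afterwards, unless it has already reached the top."""
    pos, step = 0, 0
    while True:
        step += 1
        pos += up
        if pos >= height:   # reached the top during the climb: no slip back
            return step
        pos -= slip

buses = math.ceil(476 / 52)         # a partially filled bus still counts
snail_days = climb_steps(12, 3, 2)  # question 3
frog_leaps = climb_steps(12, 2, 1)  # question 5
print(buses, snail_days, frog_leaps)  # prints: 10 10 11
```

The simulation makes the classic trap explicit: on the final day the climber reaches the top before slipping back, so the naive "net gain per day" division undercounts the height covered on the last climb.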
Other Problem Solving Activities
Please contact us if you have any suggestions or questions.
|
Question: Where Did Arabic Come From?
Which country invented Arabic letters?
The origins of the Arabic alphabet can be traced to the writing of the semi-nomadic Nabataean tribes, who inhabited southern Syria and Jordan, Northern Arabia, and the Sinai Peninsula..
Which country speaks the purest Arabic?
In fact, “the best Arabic” is a relative notion. If you mean the Arabic that is clearest to foreigners, it is that of Egypt, Syria and Jordan.
What is the purest Arabic?
What was the first language on earth?
What is the origin of Arabic language?
“Some say Arabic script originated from Al Hirah (fourth-to-seventh-century Mesopotamia) in the north, while others say it originated from the south of Arabia, from Himyar (110 BC to AD 525),” said Al Naboodah. “The origin of Arabic is a highly debated topic, with new discoveries still happening.”
Is Arabic the origin of all languages?
Not really. Arabic is itself a descendant of a language called ‘Proto-Semitic’ and, while Arabic has lent words to many languages, it is not the actual basis of many. For instance, most European languages (plus Hindi, Urdu, Bengali and Marathi) are derived from the Proto-Indo-European language.
What is the hardest Arabic dialect?
The hardest one depends on your native dialect. If you live in Morocco or Tunisia, the Gulf dialect will be the hardest for you; if you live in the Gulf, the Levant or Egypt, Moroccan Darija will be the hardest. It is not especially difficult; it mostly just takes a lot of listening.
Is Arabic hard to learn?
For the reasons listed above, among others, Arabic is a challenging language to learn. If you’re an English speaker, you’ll need to spend more hours studying Arabic than you would studying Spanish to get up to a similar level. But a harder language is not an unlearnable language.
Who first spoke Arabic?
The Arabic Language has been around for well over 1000 years. It is believed to have originated in the Arabian Peninsula. It was first spoken by nomadic tribes in the northwestern frontier of the Peninsula.
What language do Muslims speak?
Is Arabic older than Hebrew?
Hebrew is far older than Arabic. There are written records of Hebrew going back to the 10th century BC. … The earliest writings in Arabic date from the 6th century AD, so about 1,600 years after Hebrew was first written. It’s hard to know what language was spoken at that time, but it most certainly was not Arabic.
Is Arabic the oldest language?
Arabic is one of the oldest spoken languages and carries a great history and civilization behind it. The earliest example of an Arabic inscription dates back to 512 CE. At present, around 300 million people speak Arabic around the globe. Arabic belongs to the Semitic language family, which includes Aramaic.
Is Arabic a dead language?
Which is the mother of all languages?
Is Arabic language older than English?
The earliest Proto-Arabic written texts are from the 8th century BC and Pre-Classical Arabic from the 2nd century BC. English as a West Germanic language and a variety of Proto-Germanic languages dates to the 1st century BC and Old English developing into modern English dates to the 5th century AD. So, Arabic is older.
Who is the father of Arabic?
Who created Arabic?
The earliest attestation of continuous Arabic text in an ancestor of the modern Arabic script are three lines of poetry by a man named Garm(‘)allāhe found in En Avdat, Israel, and dated to around 125 CE.
What is the hardest language to learn?
|
Unhealthy lifestyle effects: Are you unknowingly doing harm?
Sitting: Unhealthy lifestyle effects
You might be doing harm to your body through an unhealthy lifestyle, and the effects could be ones that you don’t see for some time. Unfortunately, activities like sitting for long periods and screen addiction can have disastrous effects on humans. Here are some of the modern issues that are leading to a whole new range of 21st-century health problems and ways to turn them around to be healthier.
Consider, for example, the massive consumption of cow’s milk, despite the fact that only about 25 percent of the population can fully assimilate its nutrients. As much as people try to keep themselves active and practice self-care, there are some ways in which the modern lifestyle is proving ruinous to your overall health.
3 unhealthy lifestyle effects:
1. Too much sitting
The average person gets out of bed to sit and eat breakfast, then sits in the car, gets to work, and sits at a desk.
When lunchtime comes, you sit and eat lunch. Then, come home, sit and watch TV for a while, then sit down to dinner before going to bed. That’s a whole lot of sitting!
The worst part of it is that sitting for long periods is proven to be damaging to your health. That’s all the more reason to stand more, including implementing standing meetings at work or getting a sit-stand desk to vary between the two positions.
While humans have sat on chairs throughout history, they have rarely spent quite so much time sitting in them. Sitting is, anthropologically speaking, a relatively new configuration for the body, one to which people are not yet completely accustomed.
Spending too long sitting can lead to severe unhealthy lifestyle effects. The potential health problems include increasing your risk of muscular and skeletal disorders, obesity, diabetes, and even cancer and heart disease. Walking or cycling to work and getting up for a quick walk every half hour or so can help to mitigate the damage.
2. Screen addiction
Our eyes have not developed to accommodate the glowing rectangles of light that people dedicate so much of their day to staring at obsessively. Yes, it’s the smartphone screen, but other screens too.
Exposure to televisions, computer monitors, tablets, and smartphones can take a toll if you don’t buy computer glasses to help mitigate the damage. Otherwise, you can expect tired, dry, and irritable eyes.
There are other unhealthy lifestyle effects too. For example, you might experience neck and upper back pain from slouching or bending forward to look at screens. So, try to maintain proper posture, have an ergonomic workstation, and take breaks. Stretching regularly is also a good idea.
3. Unnatural eating patterns
There was a time when humans had to roam, forage, and hunt for their food. They grazed on what they found as they went along. That was a long time ago, but human physiology didn’t get the memo.
People usually defer eating until breakfast, lunch, and dinner. But, in doing so, folks run the risk of having low blood sugar.
If that level dips, you might crave sugary, fatty, and salty processed foods. Regular snacking on whole, unprocessed foods can help prevent you from making those poor dietary choices.
A few last words on unhealthy lifestyle effects
Now you see why it is important to pay attention to an unhealthy lifestyle to prevent unwanted results! The suggestions above can help you be at your best all day long.
What are some other unhealthy habits you frequently see today?
25 thoughts on “Unhealthy lifestyle effects: Are you unknowingly doing harm?”
1. I especially like your comment on eating consistently throughout the day to maintain blood sugar levels and reduce cravings. My Dietician recommends that as well. Growing up my family was so set on breakfast, lunch, big dinner so I am learning a new way around food.
2. Ha.. I guess screen time and sitting is something bloggers, vloggers, etc. have to work on. I am also trying to get my daughter to do more outdoor activities as she is attached to her iPad..
|
Sergius and Bacchus: Sainted Same-Sex Military Couple
In LGBTQ History Month it’s worth recognizing how difficult it is not to read modern prejudices or hopes back into the lives of famous figures. Before the 19th-century category of sexual orientation labelled “homosexuality” was invented, there was no way to categorize the people we would today consider lesbian, gay, transgender, or queer.
The creation and subsequent demonization of modern categories of sexual orientation and gender identity that aren’t straight enough by today’s definitions has raised complaints about what has been ignored about LGBTQ people in history.
But it has also provoked negative responses to the reexamination of historical documents without modern homophobic assumptions: responses that expend a lot of effort trying to prove that such pairs weren’t really lovers but “friends, brothers, sisters” who expressed their love more intimately than siblings and friends would in today’s apparently more homophobic cultures.
LGBTQ History Month — interestingly, and to the chagrin of many “traditional” Christianists — includes four feast days for recognized “saints” in the pre-homophobic worship patterns of the Roman Catholic and Orthodox Churches, saints whom a non-homophobic reading suggests we would call LGBTQ today.
October 8 is the feast day of Saint Pelagia(os) the Penitent; October 9 is the feast day of Saint Athanasia (Athanasios) of Antioch; and October 29 is the feast day of Saint Anna the New (renamed Euphemianos) of Constantinople.
But first we come upon October 7, the traditional feast day of Sergius and Bacchus, two male saints depicted throughout the long history of their veneration as lovingly committed to each other in a depth unquestioned until the rise of that modern category of “homosexual” and the reactionary need to deny that they were lovers by those worried that the long-venerated duo might qualify.
This fourth-century same-sex couple was particularly popular throughout the Mediterranean area. For nearly a thousand years Sergius and Bacchus were the heavenly protectors and official patrons of the Byzantine army. References to their relationship were regularly invoked in rituals for same-sex partnerships.
As with so much purported “history” of saints and martyrs, we have little basis for authenticating the details of the historical claims made in the highly stylized and idealized devotional literature about these two. Like the other martyrs, so much is later, over-worked, and unverifiable.
What we can say is that this loving couple was taken seriously enough to be revered down through history as well as to have shrines built to them. The tomb of Sergius at Resafa became a famous shrine. In the year 431, Bishop Alexander of Hierapolis built a magnificent church in his honor.
In 434, the town of Resafa was raised to the rank of an episcopal see and was named Sergiopolis. Later, Emperor Justinian I enlarged and fortified it and it became one of the most popular pilgrimage sites in the East.
The construction of a Church of Saints Sergius and Bacchus in Istanbul in 527, was one of the first acts of the reign of Justinian I. In fact, another legend says that both saints appeared to Emperor Justin, Justinian’s uncle, to save Justinian by vouching for Justinian’s innocence in a plot against the throne.
Parts of Sergius’ relics were transferred to Venice where these saints were patrons of the ancient cathedral. And by the ninth century a church had been dedicated to them both in Rome.
Their “Acts” have been retold down through history and preserved in Latin, Greek, and Syriac. Though they’re said to have been martyred in the fourth century, the Greek text known as The Passion of Sergius and Bacchus is probably from a century later.
Sergius and Bacchus were military men of high rank according to the legend. Thus, they are not only examples of paired saints but of an ideal in the broader popular lore of the intimate male-male relationships between soldiers and warriors that has fascinated many cultures for ages. See, for just one example, the love of the Biblical warrior pair, David and Jonathan.
The received text is full of stylized and patterned material not unlike that found in the plethora of legends of Christian martyrs, but the details of the days of their torture and deaths emphasize not only their religious faith but the intimacy of their relationship.
Because they refused to worship Roman gods and extolled the Christ of Christianity, they were first humiliated by being paraded on the journey to their ultimate deaths in women’s clothing. Then they were separated and tortured with Bacchus murdered first.
That foundational text says that while Sergius waited in his cell the night following Bacchus’ death, Bacchus appeared to him, telling Sergius not to lose heart for not only were the joys of heaven greater than any suffering he would endure but that his reward would be to reunite with Bacchus in heaven.
Notice how the gist of the message Bacchus brings is framed in the text that’s been passed down through history in terms of the loss of each other:
Why do you grieve and mourn, brother? If I have been taken from you in body, I am still with you in the bond of union, chanting and reciting, “I will run the way of thy commandments, when thou has enlarged my heart.” Hurry then, yourself, brother, through beautiful and perfect confession to pursue and obtain me, when finishing the course. For the crown of justice for me is with you. (John Boswell’s translation)
No one worried about whether this was or was not a romantic sexual relationship between two warrior lovers — they apparently didn’t care — for 1,600 years. But then a distinguished Yale University medieval historian, John Boswell began looking at pre-modern documents without the modern institutional bias that dominated medieval Church historians, who were mostly Roman Catholic.
Boswell knew he was fighting the establishment’s entrenched homophobic traditions. Thus, all his writings are dominated by careful methodological historical discussions, extensive footnotes (almost half of each book), appendices, original documents, and translations as evidence for his upending of medieval studies.
The response to his work was both wide praise and expected and predictable conservative criticisms. But through it all, Sergius and Bacchus remained as icons of an intimate same-sex relationship that homophobia only tried to erase in this past quarter century.
We recognize that historians can’t know with certainty the real history of this couple, described in the oldest material we have about them as erastai (probably “lovers”). But what we do know is that their sainthood was celebrated down through history as a model of male-male love for each other without fear of what that meant about the intimate, romantic, or sexual nature of their relationship.
Only in the last decades with modern homophobia has anyone tried to argue that they weren’t as intimate as the documents we have say they likely were, though most of the criticism of Boswell’s work is meant to reject his suggestions that there were same-sex union ceremonies for romantic couples in the pre-modern church.
All in all, though, why not recognize the centuries-long idealization of Sergius’ and Bacchus’ deep, even romantic, love for each other? And why not celebrate such deep love wherever it is pictured?
Homophobia? Too subversive of anti-LGBTQ dogma? Too threatening to authorized anti-gay Church institutional historical claims?
Is the fear of such ideas too much that it means some have to reject even the possibility of such love? Or are the rejecters still products of a modern straight-acting macho culture where a male soldier can get a medal for killing another man but get killed for loving one?
|
Category Archives: Style
An Eye for Grammar and an Ear for Style
The metaphors for knowing style indicate that writing style touches on a number of different senses for aesthetics. We speak of having an ear for what sounds right as if writing were akin to music. We speak of having an eye for good writing as if it were a visual aesthetic in much the way painting or fashion are. The direct way of speaking about writing would be to comment on the structure of the grammar that goes into its composition but doing so misses so many of the effects writing style can have on a reader.
Good writing style can often be broken down into grammatical structures, but doing so tells us little about why it affects us as it does. Most writers do not set out to compose a piece with certain structures. Instead they try to tune their style to the subject matter in an intuitive manner.
This intuition is often misunderstood or ignored in much of what I have seen from writers trained in linguistics, or from those who put a good deal of value on writing grammatically. Most linguists I have read seem to lack any sense of style, though they write with exquisite grammatical clarity.
Employing an understanding of grammar to compose with a distinct and effective writing style appears to be related to the functioning of what is often called meta-cognition, which involves the ability to monitor your own thinking. Meta-cognition is much like the ability to make a judgment about the quality of effectiveness of one’s own learning or expression.
English instruction from Kindergarten through high school focuses on writing grammatically correct sentences without much attention to the effect the structure of the sentence creates. There is nothing wrong with working on grammar in isolation from style. Doing so probably allows students a greater ability to focus on particular issues and to gain a better understanding of the mechanics behind their own writing. A problem arises when grammar instruction becomes an issue of correctness rather than issue of writing style and clarity of thought.
Thinking In and About Phrases
Phrases are probably the most essential unit of language but so little is written about them.
One of my professors at UC Irvine wrote a book called The Differend, which placed the phrase not just at the center of writing but at the center of what is. Doing this always struck me as somewhat odd, since words were smaller units and letters smaller still. But he was not writing about the units we have broken our language down into so much as he was writing about the units of thought.
We just do not think word by word. We think in phrases and sometimes in clauses. This is where I think much of the discussion and instruction in grammar misses the mark. Without a doubt we run into problems at the level of individual words. We misspell words, not phrases and clauses. We look up words in the dictionary, not phrases and clauses. We search for just the right word, but rarely struggle with just the right phrase or clause. And we mock people who misuse words in their speech, not people who speak in phrases or dependent clauses.
People who write well tend to be able to compose not in words or even in sentences but in phrases that come to mind. I do not think in words. That would be silly and would result in a fairly scatterbrained approach to both thinking and writing. And I do not usually think in well-developed clauses. I usually do not know, for example, how a sentence is going to end when I have begun writing it. My mind thinks phrase by phrase. It might have words at its disposal and some notion of what kind of clause is going to come out at the end of all this cognition, but my thoughts tend to find themselves connecting phrase to phrase.
The centrality of phrases in thought can probably best be seen in the corrections we make to our own writing. There are usually a few errors in usage or spelling we correct as we compose, but it is far more common to erase or move or add whole phrases.
For this reason I see grammar instruction as misguided when it spends so much time on correcting the usage of words rather than the composition of phrases. We focus too much on the correctness of words rather than the style of the clauses and phrases that we write. Our thinking and our selves are not so well shown by the words we choose as they are by the phrases and clauses we use.
By moving away from words and on to phrases and clauses, we cannot get away from the issues of style that these units of writing bring up. It is in style that we probably have a better chance of getting students to think about how they think and how they write. Word choice has a good deal to do with style, but words also bring baggage with them. They are more often going to be corrected for being right or wrong. If we spend more time on phrases and clauses, we can spend more time on style and how it affects the reception of our writing. Style becomes the organizing principle of grammar.
Words Do Matter, But Not That Much
The most important elements of grammar have less to do with words than most people seem to think. Making correct word usage less important would allow grammar instruction to start working with the structure of our thinking and the development of our writing style.
The correct usage of words makes up a great deal of grammar instruction, but a word alone means next to nothing without the words immediately surrounding it. Most people think and speak in phrases and clauses, not words. Music works in much the same way in that a note suggests very little in nearly all cases, but a few played together create a tune and our minds begin to pay attention and hum the tune that musical phrase suggested.
When students become more adept at utilizing phrases and clauses, much of writing becomes significantly easier. The odd thing is that as grammar instruction progresses beyond identifying the basic parts of speech students are often encouraged to focus more and more on specialized issues with words rather than moving on to issues with phrases and clauses. They are essentially encouraged to become lexicographers rather than writing stylists.
This turns the focus away from the production of ideas and the communication of information and towards the generation of words. Many students who want to write well often get the mistaken idea that the use of unfamiliar vocabulary is the surest way to do so.
Many grammar books and blogs encourage this approach by focusing on the usage of words rather than the effects of style. They often explain the complex histories of words and their uses with an enviable precision. The Grammarphobia blog, for example, does an excellent job of responding to questions and explaining particular issues that students and professionals run into quite frequently. The blog is undeniably one of my favorites because it is well written and shows a sense of humor about these nuances between words and their quirky histories. But I also wonder how useful it is for students who want to write well rather than use certain words correctly.
Even when it is done incredibly well and with a playful and disarming style, such an approach to grammar can be quite intimidating, even for someone with a degree or two in English. For someone with limited knowledge of grammar and no real interest in the minutiae of the English language, grammar instruction becomes a long procession of rules for when to use “it’s” rather than “its” or when to use “lie” rather than “lay.”
These distinctions in word usage are important to grammar instruction but only if they come up frequently enough in most people’s writing. Too often they do not. The writing problems that come up more often for students and professionals deal with issues of phrases and clauses and are more closely connected to issues of style. These issues are always rooted in grammar, but to become better writers, students need to understand how their thoughts are constructed and can be reconstructed with phrases and clauses, and to develop that knowledge means developing an understanding of how grammar and style work together.
|
Basic math glossary-B
Basic math glossary-B define words beginning with the letter B
b: This letter is commonly used as an abbreviation for base
Bar graph: A graph that makes use of bars in order to give a visual representation that can be used to compare data or amounts
Base: In a polygon, the base represents one side of a polygon used to find area
Base: In percentage the base represents the amount you are taking a part or percent of
Base: In multiplication with exponents, the base represents the number being multiplied or a factor
Bisect: In geometry, to bisect is to divide a figure, such as an angle or a line segment, into two equal parts, often using a ruler and a compass
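The three senses of "base" defined above can be illustrated in a few lines of Python (the numbers are my own examples, not from the glossary):

```python
# Base of a polygon: area of a triangle = 1/2 * base * height
base, height = 10, 6
triangle_area = 0.5 * base * height   # 30.0

# Base in a percentage problem: in "what is 25% of 80?", 80 is the base
percent, pct_base = 25, 80
part = percent / 100 * pct_base       # 20.0

# Base in exponents: in 2**5, the base 2 is the factor being multiplied
power = 2 ** 5                        # 32

print(triangle_area, part, power)
```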
|
Why Is the Sun Red? Wildfire Smoke Spreads to New York City
U.S. | Why is the sun red? Wildfire smoke from a continent away spreads to New York.
An image from the National Oceanic and Atmospheric Administration earlier this week showed gray-brown wildfire smoke moving across North America.
John Schwartz
• July 20, 2021, 1:22 p.m. ET
New York City awoke this morning to a reminder of the Western wildfires: a reddish sun in the sky.
Smoke from the blazes burning in the Western United States and Canada made its way across the continent, contributing to the morning haze in the city and elsewhere on the East Coast, said John Cristantello, a meteorologist with the New York office of the National Weather Service.
“The haze or the smoke that you see there is coming from the wildfires out in the West,” he said, “and that’s helping scatter the light, which leads to those much more vivid sunrises and sunsets.”
People noticed.
Wildfires out west, mostly in Western Canada, have sent smoke all the way to the city. That smoke is creating the haze you're seeing this morning and giving the sun a reddish tint. 📸: Bronx - Soundview Neighborhood pic.twitter.com/Zx4AwZCOrY
— Erick Adame (@ErickAdameOnTV) July 20, 2021
Hazy mornings are nothing new during a New York City summer, Mr. Cristantello said. “That happens with or without smoke. You have those hazy days,” in part because of air pollution. But the long-traveling smoke is part of the mix, he said.
New York State issued an air quality health advisory for Tuesday, lasting until midnight, because of elevated levels of fine particulate matter in the air, which wildfires contribute to.
Climate change is causing wildfires to be larger and more intense, and the results are visible from satellites and on the ground. The Bootleg Fire in Oregon now covers more than 388,000 acres and is so intense that it is essentially making its own weather. Satellite imagery from the National Oceanic and Atmospheric Administration shows smoke from that fire and others making its way across wide swaths of the United States and Canada. It first reached New York City around July 15.
Mr. Cristantello said that a cold front pushing through the New York City area on Wednesday should clear out the haze, but it could return if the fires persist.
|
Understanding Good and Evil
– interpreting with Biblical allegory and metaphor
-from the teachings of Eugene Halliday
The Bible is not merely a “religious” book in the sense in which irreligious people use this word. It is a one-volume library of the wisdom of the ancient world, a gathering together of many works of men who had meditated deeply on the basic problems of humanity and of divinity. Throughout the Bible recur certain symbols of the most important principles that rule all living beings. Knowledge of the meaning of these symbols is essential to the true understanding of the real meaning of the word “religion“.
When in the primitive world wickedness had spread through most of mankind, so that their hearts were continually evil, there was a need for some protection of the few intelligent members of the human race from the rest. Only by selection of the most wise men from the others could the human race be saved from total degeneration and a fall backwards into the sub-human stage of life. The Bible symbolises this selection process by God’s command to Noah to build an ark of gopher wood. Because of the violence of men, God would have to destroy them, but because life must go on, Intelligence (Noah), and his three sons and their wives would be saved. A new world would have to be built, with intelligence as the preserver of that world.
Daily we see everywhere more and more violence in the world: in vandalism, air-plane high-jackings, political up-risings, military take-overs, race-riots and individual acts directed against other individuals. The world situation once again repeats the conditions of Noah’s day. But the cure to come will not be in the form of a universal flood, for this was an insufficient measure, and after it men returned again to their evil ways.
The cure for our time promises to come in the form of a world-wide nuclear war. Only the intelligence of a modern Noah, guided by the inner divine principle, can avert this threat. The hot, dark impulses of the unconscious pleasure seeking power symbolised by Ham, must be brought under the control of the higher levels of awareness signified by Shem and Japhet.
The three sons of Noah together symbolise the three powers which constitute intelligence as we know it in the human being. These three powers function respectively as feeling awareness (Shem), impulsive pleasure seeking (Ham), and intellectual analytic capacity (Japhet). The impulsive pleasure-seeking power, acting without contact with the highest intelligence, insensitive to the need to veil the real significance of what was meant by Noah being drunk, exposed his father’s nakedness to his two brothers.
Why should there be a need to veil the significance of Noah, the principle of intelligence, being drunk in his tent? To answer this, we must understand the nature of fallen man, and that of unfallen man as well.
Unfallen man was man as originally created, sensitive, intelligent and of unspoilt pure will. Fallen man, as a result of his error in succumbing to the temptation to know the nature of Good and Evil, lost his original purity of will, for in choosing to know evil as well as good, he exposed himself to the effects of evil.
Evil is a force acting against life. The effects of evil are, therefore, reduction of life-forces, degeneration of the organs of the living body, cessation of living processes and finally, death. The approach to death may be lingering, accompanied by a slow, painful corruption of body tissues, or there may be a swift, immediate departure from this world to the next.
Death can occur at different levels of being, physical, affectional, mentational, conceptual, volitional. It is possible for our physical organs to die from lack of food, or from intake of poison, or from deficient circulation resulting in shortage of oxygen to body-cells. It is possible for our affections to die from lack of kindness; our mind may die from insufficient mental stimulation. Our principles may die from non-application. Our will may die from experience of non-appreciation of our good intentions.
When Adam sinned, he became at once aware that he had in some way diminished his life, reduced his capacity for participation in the living process of all other creatures around him. In choosing to know evil as well as good, he had chosen to experience forces contra life.
Unless we actually experience things by participating in them with our own being, we cannot truly say that we know them. If we hear the word “evil”, and do not actually take part in the type of activity signified by this word, the word remains for us a mere sound, with no real meaning. Only if we engage in an action which manifestly results in a diminution of life do we actually know the meaning of the word “evil”.
How did Adam become aware of the meaning of evil? We human beings are wiser in our depths than our conscious mind comprehends. Modern psychology has accepted the idea that the human mind is rooted in unconscious forces that are capable of actions not only constructive. There are powers at work in our depths which may suddenly flash out in deeds of violence. As long as we are capable of conscious control of our energies, we are able to maintain harmonious relations with our fellow men, and we consider ourselves to this degree “good“. Our energies work constructively, creatively, not destructively.
All the energy in the Universe around us exhibits a two-fold process, building up and tearing down the innumerable forms of things and creatures that constitute for us our environment. As living beings with a love of life, we tend to view the tearing-down aspect of the world-process as “bad” or “evil” especially where it begins to threaten our own continuance. For us the worldwide present fear of a nuclear war illustrates this point very clearly.
Great religious thinkers of the world have seen that the world we live in is a “fallen” world. They have perceived that the tearing down processes of the universe indicate that our world has somehow, somewhere back in history, gone wrong, has fallen from a prior state of perfection, has been precipitated from an eternal perfectly harmonious state of inter-relatedness, into a temporal, imperfect, disharmonious state of disintegrated warring forces. The mystery of this fall has fascinated the greatest intellects of the world.
The Bible has something to say of tremendous importance about this problem. It is clear that the universe, as a product of one supreme power, must originally have been in perfect harmony with itself. Yet the world we live in, the universe around us, is torn apart, everywhere exhibits destructive tendencies, and threatens the lives of all living creatures. True, not all the forces of the universe are destructive, for if they were, we should not be here. Myriads of living creatures go about seeking sustenance and the continuance of their existence. But at any moment a careless boot may crush out the lives of minute ants pursuing their own modes of livelihood; or an earthquake may cast down man-made buildings and destroy thousands of human lives.
How the forces of destruction gained entrance into the universe is indicated in the words of Jesus, where he says that he saw Satan fall like lightning from heaven. Today few educated people believe in the “heaven” referred to in the world’s religious systems, the “after-world” in which, after this life’s fitful fever the “good” shall be eternally happy, unperturbed by the “bad” souls of fallen humans and devils, who have been forever excluded from that paradisical world.
Most people use words without adequate definitions, for the defining of terms can be a very delicate process. Thus the word “heaven” is not usually clearly defined. “Heaven” means the condition of perfectly balanced power, a state of being in which all the energies of that being are held in perfect, easy harmonious interplay, a state of spiritual bliss beyond all possible discordances. This was the state of Man before the fall. This was the condition which Adam suddenly lost as he accepted the serpentine suggestion that he should know both good and evil.
Prior to his fall Adam was in a state of perfect harmony with himself and his situation. He was in the condition of supreme good, but had experienced this without the contrast of evil. “Good” was simply his being as he experienced it, with no opposing or impeding forces to his unadulterated bliss.
Let us imagine that in his innocence, Adam did not understand the nature of evil, did not comprehend that in order to know evil he would have himself to experience evil. The good he already knew, but not as contrasted with evil. So far he had lived the good, but not comprehending it as any other than his own natural harmonious state of being, with no opposite with which to compare it. He lived and knew harmony. He had not yet lived and known disharmony or disruption. Suddenly he felt within himself that his own interest in evil had virtually cut him off from his relationship with his Creator, for now he felt that he would have to conceal this interest from God. Hence Adam hid himself, thus committing himself to the first fruits of his disobedience, the alienation of himself from the very source power of his own being.
Of course Man cannot actually completely cut himself off from God. But what man can do, led by his own sense of guilt, is to behave as if so cut off. We all know this type of behaviour. We do something wrong to someone, and at once feel the need to remove ourselves from the presence of the one we have wronged. We remove ourselves physically or mentally from the situation. We are afraid of condemnation, afraid of being proved guilty. So with Adam. At the very moment of his disobedience he experienced the evil of self-imposed alienation. Henceforth he would hide from the Creator who had brought him into being. His hiding would be ineffective, for creative power cannot be totally cut off from the creature it maintains. But from Adam’s standpoint the degree of separation he was able to sustain had to suffice. Again we all know the feeling. We cut ourselves off as much as we can from the person we have wronged. We know that we cannot do this completely, but we think we can do so sufficiently for our purpose. We can suppress our knowledge of the reality to a degree; push it down into the deepest depths of our being. We can create conditions of nearly total unconsciousness, but only nearly. In spite of all our efforts to repress our guilt, we cannot totally succeed; there still remains in us a degree of discomfort about our real position in relation to the person we have wronged, and not only in relation to him but also to all his friends, and others beyond who fear that we may do a similar wrong to them.
We now come again to the question why there should be over Noah, the principle of intelligence, a covering; why Shem and Japhet, respectively the principles of feeling sensitivity and intellectual analysis should, after Ham’s exposure of his father’s condition, cover up his nakedness.
Guilty people do not like their guilt to be exposed to the gaze of others. Prophets who, in the ancient world, spoke against inhumane rulers, were put to death. Intelligent men who criticised the bad behaviour of unintelligent men soon learned that direct criticism brought immediate reprisals, so they devised an indirect way of exposing the stupidities and cruelties of insensitive rulers. This indirect way led to the development of theatre, in which plays could be presented to expose the unintelligent ways of persons in high places. “The play’s the thing”, says Shakespeare, “wherein I’ll catch the conscience of the king”. The Greek word for an actor was “hypocrite“. It meant “one who criticises from below“, that is, “one who indirectly criticises”. Things can be said through the medium of a play that none would dare openly to express.
Thus it came about that Shem (the feeling sensitivity that knew how to name things) and Japhet (the intellectual capacity that knew to analyse correctly a situation) “dwelt together” and conjoined their gifts to control the dark impulsive behaviour of Ham and his descendants. The spiritual intelligence and purpose of Noah had to be covered over so that unfit men, of dark, uncontrolled, impulsive behaviour should be subjected to indirect control.
Naturally, people do not like to think that indirect methods of control are applied to them. They tend to cry out against every influence that may act upon them and determine their behaviour without their knowledge. Especially is this so with people who pride themselves on their own strength of will.
Such people are generally ready to react forcefully against anything that threatens to impede the attainment of their ambitions.
True will is not reactive in this way. It has respect not only for its own goals, but also for those of other beings.
Front. Clim., 13 October 2021
Technological Demonstration and Life Cycle Assessment of a Negative Emission Value Chain in the Swiss Concrete Sector
Johannes Tiefenthaler1, Lisa Braune1, Christian Bauer2, Romain Sacchi2 and Marco Mazzotti1*
• 1Separation Processes Laboratory, Institute of Energy and Process Engineering, Department of Mechanical and Process Engineering, ETH Zurich, Zurich, Switzerland
• 2Technology Assessment Group, Laboratory for Energy Systems Analysis, Paul Scherrer Institute, Villigen, Switzerland
1. Introduction
Limiting global warming to 1.5–2 degrees requires substantial and fast reduction of greenhouse gas (GHG) emissions in basically all economic sectors (Tollefson, 2018; Allen et al., 2018). Several countries and economic regions have announced the goal of “carbon neutrality” by around the middle of this century, the European Union and Switzerland among them (Runge-Metzger, 2018; Geden and Schenuit, 2019; Geden et al., 2019; Kirchner, 2020). However, it is virtually impossible to eliminate GHG emissions entirely from some economic activities, such as agriculture, and it is also very likely that in the long term some industrial processes will cause “residual emissions”, which cannot be reduced to zero (Allen et al., 2018; Waisman et al., 2019). Therefore, so-called “negative emission technologies” (NET) or “carbon dioxide removal” (CDR) options become important. Such NET or CDR options permanently remove carbon dioxide from the atmosphere; they act as carbon sinks and allow for compensation of residual GHG emissions, therefore enabling “net-zero” GHG emissions without actually reducing overall anthropogenic GHG emissions to zero (Van Vuuren et al., 2013; Rogelj et al., 2015; Geden and Schenuit, 2019; Geden et al., 2019; Kirchner, 2020). Model-based evaluations of energy scenarios have shown that reaching the 1.5 degree goal will be almost impossible without implementing NETs at scale (Gasser et al., 2015; Van Vuuren et al., 2017, 2018; Creutzig et al., 2019; Fuhrman et al., 2019). Without NETs in place, drastic reductions of energy demand and consumption in general would be required (Grubler et al., 2018).
A broad set of CDR options could be employed; the most frequently discussed ones are afforestation and reforestation, biochar, soil carbon sequestration, enhanced weathering, ocean fertilization, Bioenergy with Carbon Capture and Storage (BECCS) and Direct Air Carbon Capture and Storage (DACCS) (Fuss et al., 2018; Minx et al., 2018). All of them have their specific merits, but they can also face obstacles and implementation barriers—be it due to high costs, land, water (Rosa et al., 2020a,b) or energy use, or unknown, potentially harmful side-effects (Anderson and Peters, 2016; Fajardy and Mac Dowell, 2017; Fuss et al., 2018; Nemet et al., 2018; Bednar et al., 2019). To generate negative emissions at the required scale, a portfolio of CDR options—each suitable for specific regional conditions—needs to be deployed, and the corresponding obstacles in their value chains have to be addressed and overcome. A CDR option largely overlooked so far is related to the construction sector, which itself constitutes a large source of GHG emissions (CEMSUISSE, 2020; Miller and Moore, 2020): pilot and demonstration projects have shown that recycled concrete aggregate (RCA) can be carbonated, i.e., it can mineralize CO2, when it is exposed to a concentrated CO2 stream.1 This mineralized CO2 is permanently fixed, and if the CO2 is of biogenic origin, carbonating RCA results in a negative emission technology. Biogenic CO2 is readily available when biogas, the output of anaerobic digestion of biogenic waste, is “upgraded” to methane by removing the CO2 fraction, which is commonly between 30 and 45% (Angelidaki et al., 2019; Kober et al., 2019; Teske et al., 2019; Zhang et al., 2020). Today, the main sources of such biogenic waste are agriculture and wastewater treatment plants.
It is commonly agreed that, because this CO2 was initially removed from the atmosphere as a result of biomass growth, its release within a short time span does not contribute to the long-term radiative forcing of the atmosphere (Levasseur et al., 2010; Wiloso et al., 2016; Lueddeckens et al., 2020). Globally, 30 gigatonnes of concrete were used in 2020 (van Oss, 2021), and because of the economic growth that followed the Second World War, the amounts of available demolition concrete are rising quickly. As a result, CO2 mineralization in RCA could—depending on its actual negative emission effect—represent an important CDR option. As for any CDR option, the environmental performance of carbonating RCA must be evaluated from a life-cycle perspective before large-scale implementation (Goglio et al., 2020; Terlouw et al., 2021). The method at hand for this purpose is Life Cycle Assessment (LCA), which is used to quantify a comprehensive set of environmental indicators of products and services, considering their production, use, and end-of-life (Guinée and Lindeijer, 2002; Arvanitoyannis, 2008). Some of the environmental impacts of the concrete sector (Worrell et al., 2001; Miller and Moore, 2020), as well as its role on a national pathway toward carbon neutrality (Obrist et al., 2021), have been analyzed. Various potentially environmentally beneficial production and recycling pathways have also been examined by means of LCA (Knoeri et al., 2013; Gursel et al., 2014; Vieira et al., 2016; Hafez et al., 2019; Zhang et al., 2019; Colangelo et al., 2020; Farina et al., 2020). However, to the best of our knowledge, we present the first LCA of carbonated RCA potentially acting as a negative emission technology.
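To put the 30–45% CO2 fraction of raw biogas into perspective, a rough estimate of the biogenic CO2 that a single upgrading plant could supply can be sketched as follows. The plant throughput, operating hours and 40% CO2 fraction are illustrative assumptions, not figures from the study.

```python
# Rough estimate of biogenic CO2 available from biogas upgrading.
# All plant figures below are illustrative assumptions, not study data.
RHO_CO2 = 1.98            # kg/m3, CO2 density near ambient conditions
biogas_flow_m3_h = 500.0  # hypothetical raw-biogas throughput of one plant
co2_fraction = 0.40       # raw biogas is commonly 30-45% CO2 by volume
operating_hours = 8000    # assumed annual operating hours

co2_kg_h = biogas_flow_m3_h * co2_fraction * RHO_CO2   # kg CO2 per hour
co2_t_per_year = co2_kg_h * operating_hours / 1000.0   # tonnes CO2 per year

print(f"{co2_kg_h:.0f} kg/h -> {co2_t_per_year:.0f} t CO2 per year")
```

Under these assumptions a single mid-sized upgrading plant would vent on the order of a few thousand tonnes of biogenic CO2 per year, which indicates why such plants are attractive CO2 sources for mineralization.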
1.1. CO2 Mineralization in the Concrete Sector
Mineralization of CO2 in concrete is a reaction which occurs in concrete structures that are exposed to a CO2 source, e.g., ambient air. More precisely, the carbonation reaction takes place in the pore water. This phenomenon, termed natural carbonation, reduces the pH of concrete, thus triggering corrosion of reinforced steel bars—and then determining the end of life of structural concrete. For this reason, carbonation of concrete structures was intensively studied in the past. The thickness of the carbonated layer increases with exposure time, CO2 partial pressure and diffusivity of CO2 within the pore network of the cement matrix (Lagerblad, 2005; Leemann and Moro, 2017). The diffusivity of CO2 depends on the water saturation of the concrete's pore network. A relative humidity range of 50 to 70% was identified to provide the ideal pore water saturation for a fast progression of carbonation. Based on these relationships, models have been developed which have been experimentally validated (Papadakis et al., 1989, 1991).
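The dependence of the carbonated-layer thickness on exposure time, CO2 concentration and diffusivity is often captured by a simplified diffusion-controlled, square-root-of-time model. The sketch below is a generic textbook form under that assumption, not the validated model of Papadakis et al., and all parameter values used with it are purely illustrative.

```python
import math

def carbonation_depth_mm(d_eff, c_co2, binding, t_years):
    """Carbonated-layer thickness from a simplified diffusion model.

    x(t) = sqrt(2 * d_eff * c_co2 * t / binding): the depth grows with the
    square root of exposure time, and with the CO2 concentration and the
    effective diffusivity of CO2 in the pore network.

    d_eff   : effective CO2 diffusivity in the cement matrix [m^2/s]
    c_co2   : CO2 concentration at the exposed surface [kg/m^3]
    binding : CO2 binding capacity of the concrete [kg CO2 per m^3]
    """
    t_s = t_years * 365.25 * 24 * 3600.0  # years -> seconds
    return 1000.0 * math.sqrt(2.0 * d_eff * c_co2 * t_s / binding)
```

Quadrupling the exposure time only doubles the depth, while raising the surface CO2 concentration scales the depth by its square root; this is one way to see why accelerated carbonation with concentrated CO2 streams progresses so much faster than natural carbonation in ambient air.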
In the last decade, CO2 mineralization in concrete during the concrete batching phase (Monkman et al., 2016; Meyer et al., 2018), as well as natural carbonation throughout the service life of a concrete structure and at the end of life (Leemann and Hunkeler, 2016), gained attention, since these are viewed as options to mitigate part of concrete's climate impact. A group of researchers made a model-based attempt to determine the global annual amount of CO2 fixed by natural carbonation. They estimate that 43% of the global annual calcination emissions are mitigated by natural carbonation (Xi et al., 2016). However, other researchers experimentally studied the degree of carbonation of end-of-life concrete structures and determined that only 3.6% (Birolini, 2019) to 10% (Leemann and Hunkeler, 2016) of the process-based cement manufacturing emissions are reabsorbed, thus questioning the conclusions drawn by Xi et al. (2016).
After concrete structures are dismantled and broken into concrete aggregate, they can undergo accelerated carbonation that fixes even more CO2. The principles of accelerated carbonation and of natural carbonation are the same—with the difference that accelerated carbonation processes may profit from increased CO2 partial pressures. Furthermore, Seidemann et al. (2015) identified that a temperature increase from 25 to 50°C can increase the CO2 uptake within a processing time of 3–9 h by a factor of 3 to 10. As a concluding step, Seidemann et al. (2015) and Xuan et al. (2016) assessed the performance of carbonated concrete aggregate incorporated in fresh concrete. They measured an increase in the 28-day compressive strength of the concrete in cases where the RCA had been carbonated prior to its incorporation.
So far, research and development in accelerated carbonation has mainly focused on lab-scale activities. Results are promising, and have motivated an academic and industrial consortium of partners, namely the clean tech company neustark AG, the concrete recycling facility Kästli Bau AG in collaboration with the concrete plant Frischbeton Rubigen AG, the waste water treatment and biogas upgrading plant Ara Region Bern AG in collaboration with ETH Zurich, to demonstrate a negative emission value chain at scale in the scope of the RECARB2 project. For this purpose, all components of the value chain, namely the CO2 supply chain as well as the mineralization technology were constructed and operated—and the use of the material in concrete was tested.
1.2. Scope of This Work
Our analysis primarily answers the questions whether carbonated RCA using biogenic CO2 can indeed deliver negative GHG emissions from a technological and life-cycle perspective, i.e., whether it can permanently remove CO2 from the atmosphere, and what the driving factors determining the amount of potential CO2 removal are. Furthermore, we quantify the effect of using carbonated RCA in concrete on the environmental performance of such concrete compared to virgin concrete and concrete made of standard RCA. We also discuss the main driving forces behind the uptake of CO2 and options to increase uptake rates in the future by improved process design. Finally, we estimate the overall amount of potential CO2 removal, which could be enabled by this NET in Switzerland, as well as globally, based on current and future figures on concrete recycling and available appropriate sources of biogenic CO2.
2. Technology
The CO2 mineralization value chain (red box, Figure 1) consists of the liquefaction, transport and evaporation of CO2, as well as of the CO2 mineralization plant. The process is embedded into the concrete recycling and reuse value chain. The two components of major relevance for the overall system are the operation of the CO2 mineralization plant and the effect of carbonation on concrete mix designs; all other components of the value chain have been in commercial use for many years. Hence, lab-scale experiments were conducted to investigate the phenomenology of the carbonation process, particularly the effect of grain size and processing time. Moreover, an industrial-scale mineralization plant was designed, constructed and operated to demonstrate the technology in a relevant industrial setting. Finally, the carbonated RCA was incorporated in fresh concrete to investigate potential effects of carbonation on the quality of the concrete mix design.
Figure 1. System boundaries for the functional units “1 kg of carbonated RCA” and “1 m3 of concrete of a specific strength class”: material flows and processes of the virgin concrete (VC) system, the recycling concrete (RC) system and the carbonated recycling concrete (C-RC) system.
2.1. Experimental
RCA is concrete crushed into the size of aggregate and sand. It contains gravel and sand surrounded by cement paste. Two different types of concrete aggregate were used. At lab-scale, 80 liters of concrete (composition according to Supplementary Table 1) were batched by Swiss Federal Laboratories for Material Sciences and Technology (EMPA). In order to mimic RCA, the hardened concrete was crushed, sieved into size fractions and stored in airtight containers until it was used in the experiments.
The daily demand of 120 tons of RCA for the industrial-scale mineralization plant was met with commercially available RCA at the Kästli concrete plant. The origin and composition of this material are unknown, but it is very likely that the concrete was produced with Ordinary Portland cement (CEM I) (Jacobs, 2011) and that it is 60–80 years old. Hence, the same composition was assumed for both RCA types. The material was stored outdoors and weathered in stockpiles for several summer months before it was carbonated in the mineralization plant.
Industrial grade liquefied CO2 (Pangas, >99.5% CO2) sourced from bottles and a CO2 semitrailer was used for the experimental campaigns at lab and industrial scale, respectively. The liquefaction step itself reduces the level of impurities. These remaining impurities are typically oxygen, nitrogen and methane, which act as inert gases and thus have no influence on the process of concrete carbonation.
A lab-scale setup was designed and constructed to conduct the laboratory tests. The core of the lab setup is an 850 ml reactor, which can be sealed gas-tight. The reactor has a gas inlet at the bottom and a gas outlet at the top. In addition, the reactor is placed on a scale.
The mineralization plant uses two reactor containers with a total volume of 34 m3. One of the reactor containers is placed on a 40 t container balance. The gas injection rate and the gas outlet flow rate of the mineralization plant are measured by a gas flow meter. In addition, the gas outlet concentration is measured by a gas analysis device.
2.2. Methods
At the beginning of every lab-scale experiment, the setup was filled with a few hundred grams of RCA. The 850 ml reactor was sealed air-tight and the gas inlet and outlet pipes were connected. The experiments were conducted at 20°C and ambient pressure. At time zero, the gas inlet flow was set at 500 mg of CO2 per minute and reduced stepwise to 50, 25, 12 and 6 mg of CO2 per minute. This procedure was followed to minimize the amount of gas exiting the reactor. The weight of the material was measured as a function of time. After 72 h, the gas flow was switched off, the material was discharged and the experiment was completed.
At industrial scale, a five-axle truck picked up the empty reactor container with a hook and traveled to the RCA stockpile. The roof of the container was opened and a wheel loader heaved approximately 20 t of RCA into the reactor container. The roof was closed again, and the truck traveled back to the terminal and placed the container on a 40 t balance. The CO2 inlet and outlet pipes were connected and the amount of material was measured on the balance. CO2 was injected for 2 h, while the change in mass of the RCA was recorded by the balance to quantify the uptake of CO2. The initial CO2 injection rate of 180 kg of CO2 per hour was stepwise reduced such that the CO2 feed rate remained similar to the rate of CO2 mineralization. The carbonation experiment was conducted at ambient temperature and pressure. After 2 h, the injection was stopped and the RCA was discharged.
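The stepwise reduction of the feed toward the mineralization rate can be pictured as a simple control loop on the balance readings. The policy below is a hypothetical sketch of the idea; the actual plant procedure is not described in this detail in the text.

```python
def uptake_rate_kg_h(balance_readings_kg, dt_h):
    """Mineralization rate from successive balance readings.

    The container balance records the mass gain of the RCA, which equals
    the CO2 fixed, so differencing readings gives the uptake rate [kg/h].
    """
    return [(m1 - m0) / dt_h
            for m0, m1 in zip(balance_readings_kg, balance_readings_kg[1:])]

def next_feed_rate_kg_h(current_feed, measured_uptake, floor=10.0):
    """Step the CO2 feed down toward the measured uptake so that little
    CO2 leaves through the gas outlet (illustrative policy)."""
    return max(min(current_feed, measured_uptake), floor)
```

For example, with (hypothetical) half-hourly readings of 20,000, 20,050 and 20,080 kg, the uptake rates are 100 and 60 kg/h, so a feed starting at the 180 kg/h initial rate would be stepped down to 100 and then 60 kg/h.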
2.3. Raw Data and Performance Indicators
The amount of CO2 injected can be calculated as:

$$m_{\mathrm{in}}^{\mathrm{CO_2}} = \int_{0}^{t_{\mathrm{end}}} F_{\mathrm{in}}(t)\, y_{\mathrm{in}}^{\mathrm{CO_2}}\, \mathrm{d}t \tag{1}$$

where $m_{\mathrm{in}}^{\mathrm{CO_2}}$ is the total amount of CO2 fed into the reactor, at a gas inlet flow rate $F_{\mathrm{in}}$ and a CO2 mole fraction of $y_{\mathrm{in}}^{\mathrm{CO_2}}$; $t$ is the experimental time. The amount of CO2 leaving the system can be calculated as:

$$m_{\mathrm{out}}^{\mathrm{CO_2}} = \int_{0}^{t_{\mathrm{end}}} F_{\mathrm{out}}(t)\, y_{\mathrm{out}}^{\mathrm{CO_2}}(t)\, \mathrm{d}t \tag{2}$$

where $m_{\mathrm{out}}^{\mathrm{CO_2}}$ is the total amount of CO2 exiting the reactor through the exiting gas stream, $F_{\mathrm{out}}$ the gas outlet flow rate and $y_{\mathrm{out}}^{\mathrm{CO_2}}$ the CO2 mole fraction in the exiting gas stream. Moreover, when the RCA is discharged from the reactor containers, the residual CO2 ($m_{\mathrm{res}}^{\mathrm{CO_2}}$) in the void space $\epsilon$ between the particles is lost to the atmosphere; the corresponding amount is calculated as:

$$m_{\mathrm{res}}^{\mathrm{CO_2}} = \frac{\epsilon\, V_{\mathrm{R}}\, p\, M_{\mathrm{CO_2}}\, y_{\mathrm{void}}^{\mathrm{CO_2}}}{R\, T} \tag{3}$$

where $V_{\mathrm{R}}$ is the reactor volume, $p$ the pressure of the gas phase, $M_{\mathrm{CO_2}}$ the molar mass of CO2, $y_{\mathrm{void}}^{\mathrm{CO_2}}$ the CO2 mole fraction in the void space of the particles during the discharge of the material, $R$ the ideal gas constant and $T$ the temperature. The amount of CO2 stored in one batch of material is $m_{\mathrm{stored}}^{\mathrm{CO_2}}$:

$$m_{\mathrm{stored}}^{\mathrm{CO_2}} = m_{\mathrm{in}}^{\mathrm{CO_2}} - m_{\mathrm{out}}^{\mathrm{CO_2}} - m_{\mathrm{res}}^{\mathrm{CO_2}} \tag{4}$$

In order to make results better comparable and to mitigate the effect of different material loadings in the reactor, the CO2 mineralized per unit of concrete aggregate, $\omega_{\mathrm{RCA}}^{\mathrm{CO_2}}$, is determined as:

$$\omega_{\mathrm{RCA}}^{\mathrm{CO_2}} = \frac{m_{\mathrm{stored}}^{\mathrm{CO_2}}}{m_{\mathrm{in}}^{\mathrm{RCA}}} \tag{5}$$

where $m_{\mathrm{in}}^{\mathrm{RCA}}$ is the mass of RCA filled into the reactor. Moreover, the CO2 storage efficiency $\eta_{\mathrm{stored}}^{\mathrm{CO_2}}$ can be calculated as:

$$\eta_{\mathrm{stored}}^{\mathrm{CO_2}} = \frac{m_{\mathrm{stored}}^{\mathrm{CO_2}}}{m_{\mathrm{in}}^{\mathrm{CO_2}}} \tag{6}$$
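In practice, equations (1)–(6) amount to a discrete mass balance over logged gas flows. A minimal numerical sketch, using trapezoidal integration over hypothetical logger data rather than measurements from the campaign, could look like this:

```python
import numpy as np

def residual_co2_kg(eps, v_r, p, y_void, temp, m_co2=0.04401, r_gas=8.314):
    """Eq. (3): CO2 left in the inter-particle void space at discharge [kg]."""
    return eps * v_r * p * m_co2 * y_void / (r_gas * temp)

def co2_balance(t_h, f_in, f_out, m_res, m_rca):
    """Eqs. (1), (2) and (4)-(6) via trapezoidal integration.

    t_h          : sample times [h]
    f_in, f_out  : CO2 mass flow in/out at those times [kg/h]
                   (flow rate and CO2 mole fraction already combined)
    m_res        : residual CO2 lost at discharge [kg], from eq. (3)
    m_rca        : mass of RCA loaded into the reactor [kg]
    """
    dt = np.diff(t_h)
    m_in = float(np.sum(0.5 * (f_in[1:] + f_in[:-1]) * dt))     # eq. (1)
    m_out = float(np.sum(0.5 * (f_out[1:] + f_out[:-1]) * dt))  # eq. (2)
    m_stored = m_in - m_out - m_res                             # eq. (4)
    omega = m_stored / m_rca                                    # eq. (5)
    eta = m_stored / m_in                                       # eq. (6)
    return m_stored, omega, eta
```

For instance, a constant 10 kg/h feed for 2 h with no outlet flow, 2 kg of residual CO2 and 1,000 kg of RCA gives a stored mass of 18 kg, a specific uptake of 1.8% and a storage efficiency of 90%.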
2.4. Concrete Tests
Part of the RCA was used to compare the performance of the carbonated RCA with that of the non-carbonated RCA in concrete mix designs. For this reason, concrete mix designs with an RCA content of 40% and a cement content of 280, 295 and 315 kg of type CEM II B were batched in a 40 liter lab-scale mixer and a 3.5 m3 industrial-scale mixer according to SN EN 206. The fresh concrete properties (SN EN 12350) were measured. Afterwards, the concrete was molded into cubes and cured according to SN EN 12390-2. The compressive strength (SN EN 12390-3), the density (SN EN 12390-7) and the Young's modulus (SN EN 12390-13) were measured at day 28 in an external, certified laboratory.3
3. Method and Data for the Life Cycle Assessment
The ISO standards (International Organization for Standardization, 2006a,b) define LCA in terms of goal and scope, life cycle inventory, life cycle impact assessment and interpretation of the results. This work follows the regulatory framework. Moreover, a specific guideline for LCA about CCS and CCU technologies can be found elsewhere (Müller et al., 2020a).
3.1. Goal and Scope
The main aim of this paper is to investigate the negative emission potential of a CO2 mineralization value chain in the concrete sector. The environmental performance of the negative emission value chain, which has been demonstrated at scale in the RECARB4 project, will be critically assessed. In addition, results will be extrapolated to the Swiss concrete sector, and the evolution of the current, near- and long-term sink capacity generated by this technology will be discussed. To this end, we use the well-established method of Life Cycle Assessment (LCA), which aims at quantifying the environmental burdens generated along the entire life cycle—production, use, and end-of-life—of products and services (Guinée and Lindeijer, 2002; Arvanitoyannis, 2008). These burdens (i.e., emissions to air, water bodies, and soil, as well as resource consumption) are quantified with reference to a functional unit (FU) to provide a common basis for comparison between different products. The environmental burdens are characterized against indicators that represent damages borne by midpoint (e.g., Global Warming) and endpoint (e.g., Human health) recipients, respectively, via cause-effect pathways (e.g., from the emission of a greenhouse gas to the radiative forcing of the atmosphere). Within a product system in an LCA, so-called foreground and background data are distinguished. While foreground data represent energy and material flows as well as emissions associated with the product (chain) investigated, background data represent those exchanges associated with material and energy supply chains not directly investigated, but still relevant, e.g., steel supply for the construction of trucks used for the transport of concrete aggregate. The foreground system and data are discussed in the following sub-sections; as the source of background data we use the ecoinvent database, version v3.6, system model “allocation, cut-off by classification” (Wernet et al., 2016).
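The characterization step described above is, at its core, a weighted sum of elementary flows per functional unit. A toy example for the Global Warming indicator follows; all flow values, the factor set and the sign convention for removals are illustrative assumptions, not the study's inventory.

```python
# Toy LCIA characterization: multiply each elementary flow per functional
# unit by its characterization factor and sum. All numbers are illustrative.
flows_kg_per_fu = {            # kg emitted (+) or removed (-) per FU
    "CO2 (mineralized, biogenic)": -0.010,  # permanently stored -> removal
    "CO2 (fossil, energy use)":     0.004,  # liquefaction, transport, etc.
    "CH4 (fossil)":                 0.00001,
}
gwp100 = {                     # kg CO2-eq per kg (AR6-style factors)
    "CO2 (mineralized, biogenic)": 1.0,
    "CO2 (fossil, energy use)":    1.0,
    "CH4 (fossil)":                29.8,
}

gwp_score = sum(m * gwp100[name] for name, m in flows_kg_per_fu.items())
# A negative score means the functional unit acts as a net CO2 sink.
print(f"GWP100: {gwp_score:+.6f} kg CO2-eq per FU")
```

The sign convention is the crux for a negative emission technology: only if the permanently mineralized CO2 outweighs the fossil emissions of the supply chain does the score per functional unit turn negative.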
3.1.1. Negative Emission Value Chain

RCA Production
Figure 1 visualizes the negative CO2 emission chain we investigate by means of LCA. Old concrete infrastructure such as buildings and bridges is dismantled and the demolition concrete is transported to a concrete recycling facility in the vicinity. There, the demolition concrete is crushed into recycling concrete aggregate (RCA) of a specific grain size distribution. Reinforcing steel as well as non-mineral lightweight materials such as wood and plastics are sorted out.

CO2 Supply
The CO2 for the mineralization of the RCA comes from a biogenic source such as a wastewater treatment plant. These plants often produce biogas, consisting mainly of methane and carbon dioxide. To increase the calorific value of biogas, biogas upgrading is conducted whereby CO2 is removed (Angelidaki et al., 2019); this CO2 is normally released to the atmosphere. In case of mineralization, the CO2 is collected, liquefied, temporarily stored and finally transported to the mineralization plant located at the concrete recycling facility. This LCA only considers biogenic CO2 from biomass waste, or more precisely from the upgrade of biogas generated via anaerobic digestion of biogenic waste. In line with the background LCA database, this biogenic waste is supplied free of environmental burdens, since those are assigned to the service of treating this waste, or in other words to the agricultural sector generating this waste.

Mineralization of CO2
Two mineralization containers filled with RCA are transported next to the process center. The CO2 inlet pipe and the gas discharge pipe are attached to the front walls of the containers. The CO2 tank contains the liquid CO2, which has to be evaporated before it feeds the two containers for the process duration (typically 120 min) at a high flow rate. The CO2 reacts with the cement phase of the RCA to yield calcium carbonate; the CO2 uptake is measured. At the end of the process, the carbonated RCA is stockpiled.

Concrete Production
The carbonated RCA is used as sand and gravel substitute for the production of concrete. According to SN EN 206, concrete with a secondary aggregate content exceeding 25% is categorized as recycling concrete. The carbonated RCA exhibits a decreased porosity and water absorption compared to regular RCA. The reduction in porosity can be attributed to the formation of CaCO3 in the pore network, which occupies more space than the corresponding cement minerals. This effect may alter the properties of new concrete, which can result in concrete of increased 28-day compressive strength or Young's modulus (Seidemann et al., 2015; Xuan et al., 2016). In such a case, the clinker content can be reduced compared to conventional recycling concrete while still complying with the same quality standards. However, the exact extent of clinker reduction depends on many factors, including the quality of the primary and secondary material, the concrete mix design and the concrete chemistry. In this work, we demonstrate in a set of material tests that concrete made from carbonated RCA provides the same service as concrete made from regular RCA.
In many locations, it is more convenient to use RCA unbound, for instance as road base material. This use is not considered in the scope of this work. However, the negative emission value chain, and thus the associated environmental impacts, does not change for different uses.
3.1.2. Positive, Negative and Avoided Emissions
Global climate change mitigation policies rely on a combination of emission avoidance and negative emissions. Emission avoidance reduces emissions, for example by lowering the demand for clinker through a more efficient use of cement and concrete (Habert et al., 2020). Although emission avoidance is of key importance, it will not be sufficient to reduce emissions to zero. Negative emission technologies remove residual emissions from the atmosphere and fix them permanently. Tanzer and Ramírez (2019) established four criteria to determine whether a technology leads to negative CO2 emissions, i.e., permanent removal of CO2 from the atmosphere. The carbonated RCA process chain must comply with these criteria:
1. Greenhouse gases are physically removed from the atmosphere. The biogenic CO2 produced in the biogas upgrader and used for mineralization is normally released as a waste flow into the atmosphere. With the proposed technology, this CO2—originally taken up from the atmosphere by biomass—is stored in concrete aggregate instead of being released back to the atmosphere within a short period of time.
2. The removed gases are stored away from the atmosphere in a manner intended to be permanent. The CO2 reacts with the cement phase contained in RCA to CaCO3. Even extraordinary environmental conditions such as acidic rain (Teir et al., 2006) and temperatures of 100°C do not result in a release of the fixed CO2 into the atmosphere (Villain et al., 2007). The European Court of Justice, in one of its rulings, considers CO2 mineralized to CaCO3 as permanently stored (Siwior and Bukowska, 2018). Since the CO2 undergoes a chemical reaction and is fixed as CaCO3 mineral, the concrete aggregate can undergo further recycling loops or be landfilled without releasing the fixed CO2 to the atmosphere.
3. Upstream and downstream greenhouse gas emissions associated with the removal and storage process, such as biomass origin, energy use, gas fate, and co-product fate, are comprehensively estimated and included in the emission balance. The mineralization process has no significant impact on either the upstream or the downstream processes, i.e., the use of the material. Thus, only the emissions of the CO2 value chain have to be considered, which is the case in our LCA.
4. The total quantity of atmospheric greenhouse gases removed and permanently stored is larger than the total quantity of greenhouse gases emitted to the atmosphere. To determine whether the value chain is net-negative, the GHG emissions of the CO2 supply chain and of the carbonation of RCA have to be smaller than the amount of biogenic CO2 that can be stored. This evaluation is the core element of this work.
3.2. Definition of System Boundaries and Functional Unit
This LCA aims to validate different aspects of the negative emission value chain. In a first step, the CO2 sink potential in RCA is analyzed. Thus, 1 kg of carbonated RCA has been chosen as the functional unit. In a further step, the environmental performance of CO2 mineralized recycling concrete (C-RC) has been compared to conventional recycling concrete (RC) and concrete using primary material, virgin concrete (VC). For this comparison, the system boundaries are drawn differently and the functional unit is defined as 1 m3 of concrete of a specific strength class.
Figure 1 illustrates the system boundaries of the two functional units, namely in red for 1 kg of carbonated RCA and in yellow for 1 m3 of concrete. The system boundary of the functional unit 1 kg of carbonated RCA includes only the negative emission value chain from the liquefaction of the CO2 to the carbonation of the RCA. Since the carbonated and regular RCA are chemically identical (less than 1% of difference in their composition), and (as shown later in this paper, see Figure 4) fulfill the same service in the downstream processes, a cradle to gate approach is justified (Müller et al., 2020a).
For the comparison of the three different concrete types the system boundary encompasses the entire value chain of concrete production. The systems of the three different concrete types contain the supply of raw materials such as cement, water, admixtures and aggregate (sand and gravel). The amounts of the raw materials per unit volume of concrete vary between the three different concrete types. A certain amount of the primary material in recycling concrete (RC) and carbonated recycling concrete (C-RC) is substituted with recycled concrete aggregates (RCA). The RC and C-RC systems thus encompass concrete recycling. During this process, reinforcing steel is sorted out and recycled. For C-RC, the recycled concrete aggregates are carbonated with biogenic CO2. The CO2, which is used for carbonation, represents a waste flow of biogas upgrading, which is usually released to the atmosphere. Therefore, in line with the system model of our background database, its supply can be considered as “burden-free.” However, the C-RC system entails liquefaction, transport and evaporation of the CO2 and carbonation of RCA. It can be assumed that the processes after the production of concrete are the same for all three concrete types, since they exhibit equivalent properties. Therefore, the transport of the concrete from the concrete facility to the construction site as well as the construction of new infrastructure and the use-phase were excluded from the system boundaries. Also the demolition of old infrastructure was assumed to be the same for all three concrete types. Furthermore, it was assumed that the distance from the demolition site to the concrete recycling facility is the same as to the landfill site. The transport of demolition concrete was thus excluded from the system boundary.
3.3. Life Cycle Inventory (LCI)
In general, the electricity demand and material input of the technology and the devices along the CO2 supply chain are based on the fact sheets of the manufacturers. Transportation distances were estimated by analyzing different concrete facilities in Switzerland. The inventory data was first calculated for the functional unit 1 kg of carbonated RCA, and then extrapolated to the functional unit 1 m3 of concrete and compared with the production of conventional recycling concrete and of virgin concrete. Figure 2 represents a flowchart with material and energy flows of the negative emission value chain. The single components of the chain are described in detail in the following sub-sections. A general modeling choice concerns the end-of-life of the infrastructure of the value chain. This infrastructure (see Table 1) is made of steel and it can be assumed that these components are recycled at the end of their lifetimes. According to the system model of the ecoinvent background database used (Wernet et al., 2016), recycled materials enter their markets carrying the environmental burdens of the recycling activities by default; therefore, scrapping and recycling do not have to be specifically taken into account in our inventories.
Figure 2. Material, energy, and CO2 flows of the negative emission value chain for the production of 1 kg of carbonated RCA.
Table 1. Life cycle inventory data, based on ecoinvent data, for the main processing steps with respect to the functional unit of 1 kg RCA is listed in the table below.
3.3.1. Supply of RCA
The process of concrete recycling delivers more than one product or service (i.e., it is an example of joint production): 1) RCA is produced, 2) scrap iron is sorted out, and 3) the recycling of concrete provides a waste treatment service (Grieder et al., 2016) (Supplementary Table 2). In Switzerland, concrete recycling facilities are paid for receiving the demolition concrete. Consequently, the efforts for recycling the concrete are split between the three co-products using economic allocation. The allocation factors and the LCI data for the production of 1 kg of RCA are listed in Supplementary Tables 3, 4, respectively.
RCA is usually stockpiled for intermediate storage and afterwards processed by the mineralization plant. Therefore, a wheel loader loads the RCA from the stockpile into the mineralization container on the truck. The RCA is then transported to the carbonation plant. After the mineralization of CO2, the container with the carbonated aggregates is transported to the silo where the RCA is temporarily stored, to be used afterwards as sand and gravel substitute in concrete production. Then, the truck with the empty container drives back to the RCA stockpile, to be filled again. Primary materials and non-carbonated RCA are also loaded with a wheel loader from the stockpile into a truck and then transported directly to the silo. If the mineralization process is well integrated into the existing value chain of the concrete facility, the additional transport distance of the carbonated RCA is negligible. It is thus assumed that the logistics efforts for all sorts of sand and aggregates are the same. Different concrete facilities were analyzed to estimate an average transport distance of 1 km for the on-site logistics. A diesel fueled EURO 6 lorry, with a gross vehicle weight above 32 metric tons, transports the aggregates. The carbonation container is filled with 20 t of RCA. Further, it has been assumed that the wheel loader, with an hourly diesel consumption of roughly 22 liters, needs 3 min to fill one container.5
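The on-site logistics figures above translate into a very small diesel demand per kilogram of RCA; a short check of the arithmetic (input values from the text, the per-kg figure is our own derivation):

```python
# On-site logistics: diesel use of the wheel loader per kg of RCA.
loader_diesel_l_per_h = 22.0   # hourly diesel consumption of the wheel loader
fill_time_h = 3.0 / 60.0       # 3 min to fill one mineralization container
container_load_kg = 20_000.0   # one container holds 20 t of RCA

diesel_l_per_container = loader_diesel_l_per_h * fill_time_h      # 1.1 L
diesel_l_per_kg_rca = diesel_l_per_container / container_load_kg  # 5.5e-5 L/kg
```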
3.3.2. CO2 Supply

Liquefaction
The electricity consumption of the liquefaction plant needed to liquefy the CO2 prior to transport is 240 kWh per ton of liquefied CO2.6 It is made of 10 tons of stainless steel and contains 55 kg of the refrigerant R449A.7 A yearly leakage rate of 10% was assumed which is based on the average leakage rate of refrigerating and air conditioning units (Koronaki et al., 2012). It was further assumed that the amount of refrigerant that leaks to the environment is refilled. Over a lifetime of 20 years, 165 kg of refrigerant are required, the initial 55 kg plus 110 kg of leaked refrigerant.
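The refrigerant demand, together with the lifetime CO2 output and the specific steel demand derived in the next paragraph, can be reproduced as follows (a sketch of the arithmetic, with variable names of our own choosing):

```python
# Liquefaction plant: refrigerant and steel demand per kg of RCA.
lifetime_yr = 20

# Refrigerant: initial charge plus yearly leakage, which is refilled.
initial_charge_kg = 55.0
leakage_rate = 0.10            # assumed yearly leakage rate
total_refrigerant_kg = initial_charge_kg * (1 + leakage_rate * lifetime_yr)  # 165 kg

# Lifetime output of liquefied CO2 (operating figures as stated in the text).
co2_output_kg_per_h = 270.0
availability = 0.90            # operational hours reduced by 10% for maintenance
lifetime_co2_kg = co2_output_kg_per_h * 24 * 365 * availability * lifetime_yr
# about 42,574 t of liquefied CO2 over the plant lifetime

# Specific steel demand per kg of RCA (7.53 g CO2 needed per kg RCA).
steel_kg = 10_000.0
steel_per_kg_rca = steel_kg / lifetime_co2_kg * 7.53e-3
```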
The liquefaction plant serves more than one customer. Therefore, the CO2 output of 270 kg of CO2 per hour was used to calculate the total amount of CO2 that is liquefied over a plant lifetime of 20 years. It was assumed that the plant operates 24 h a day, 365 days a year; to take maintenance into account, the operational hours were reduced by 10%. The amount of stainless steel and refrigerant needed for the liquefaction plant was divided by the output of liquefied CO2 per lifetime (42,574 t) to obtain the specific amount of steel and refrigerant per kg of CO2. This value has then been multiplied by the demand of 7.53 g CO2 per kg RCA to get the amount of steel and refrigerant per kg of RCA. The inventory data of the liquefaction plant is listed in Table 1.

CO2 Storage and Transport
CO2 tanks, which weigh 8 tons each, are used for the storage and transport of the CO2. There is one CO2 tank that stays at the liquefaction plant to be filled with liquid CO2. The other CO2 tanks are located at the concrete facilities. As soon as the mineralization plant runs out of CO2, CO2 from a full tank, charged at the liquefaction plant, is supplied. The CO2 liquefaction operates at a higher CO2 production rate than the CO2 uptake rate at the mineralization plant. Consequently, the CO2 tank at the liquefaction plant supplies CO2 to different concrete facilities. It is assumed that four concrete facilities share one backup tank. Thus, 10 tons of steel, or five fourths of a tank, are allocated to one concrete facility. A concrete facility can mineralize on average 120 tons of RCA per day. For the assumed CO2 tank lifetime of 20 years, a total amount of 626,400 tons of RCA can be carbonated. Based on that, the amount of steel per kg of RCA was calculated.
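The stated lifetime throughput of 626,400 t is consistent with roughly 261 operating days per year (about the number of working days), an assumption on our part; a sketch:

```python
# CO2 tank: steel allocation per kg of carbonated RCA.
rca_per_day_t = 120.0        # average carbonation throughput of one facility
operating_days_per_yr = 261  # assumed: roughly the number of working days
lifetime_yr = 20

lifetime_rca_t = rca_per_day_t * operating_days_per_yr * lifetime_yr  # 626,400 t

tank_steel_kg = 10_000.0     # five fourths of an 8 t tank allocated per facility
steel_per_kg_rca = tank_steel_kg / (lifetime_rca_t * 1000.0)
```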
A EURO 6 diesel truck transports the CO2 from the liquefaction plant to the mineralization plant. The transportation distances from wastewater treatment plants to concrete facilities located in different agglomerations were measured and an average transportation distance of 10 km was determined. Supplementary Table 5 lists the distances for various Swiss urban areas. The CO2 tank increases the “empty” weight of the truck.8 The vehicle has a driving mass of 36 tons, with a CO2 load of 20 tons. This corresponds to a load factor of 100% (as the vehicle only departs when full), which is adjusted to a 50% average to account for the return trip driven empty.
The inventory data of the CO2 storage and transport components of the chain is listed in Table 1.

Evaporation
For the mineralization, the liquid CO2 has to be evaporated with a reboiler that is made of 243 kg of chromium steel. It is assumed that an atmospheric reboiler can be used from March to October which has an energy consumption of 7.9 kWh per ton of CO2.9 From November to February an electric reboiler has to be used which has a power output of 20 kW and evaporates 180 kg of CO2 per hour. This leads to an electricity consumption of 111.1 kWh per ton of CO2. From mid-December to mid-January the concrete plant undergoes annual maintenance, and thus the reboiler does not operate. Therefore, the electric reboiler runs for 3 months. The evaporation of the CO2 has thus an average energy consumption of 36 kWh per ton of CO2. Table 1 summarizes the inventory data of the evaporation.
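The stated 36 kWh per ton average can be reproduced from the seasonal weighting described above (a sketch; the monthly split is taken from the text):

```python
# Seasonal average electricity demand for CO2 evaporation.
atm_reboiler_kwh_per_t = 7.9                  # atmospheric reboiler, March-October
electric_power_kw = 20.0
electric_throughput_kg_per_h = 180.0
electric_kwh_per_t = electric_power_kw / electric_throughput_kg_per_h * 1000.0
# electric reboiler: about 111.1 kWh per ton of CO2

months_atm = 8       # March to October
months_electric = 3  # November to February, minus one month of plant maintenance

avg_kwh_per_t = (months_atm * atm_reboiler_kwh_per_t
                 + months_electric * electric_kwh_per_t) / (months_atm + months_electric)
# about 36 kWh per ton of CO2
```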
3.3.3. Mineralization
The mineralization plant comprises two mineralization containers, a process center and connecting pipes. Each mineralization container is made of 3.7 tons of low-alloyed steel. The process center of the mineralization consists of 6 tons of low-alloyed steel. Based on the power output of the different devices of the carbonation plant listed in Supplementary Table 6, a total power output of 2 kW was assumed for the mineralization. Further, it has been assumed that 120 tons of RCA are carbonated per day. Table 1 lists the life cycle inventory data of the mineralization plant.
Moreover, the amount of CO2 mineralized per unit of RCA as well as the CO2 storage efficiency are two main process performance indicators, which are required to conduct the LCA. For this purpose, the mineral carbonation plant was operated at industrial scale for several days to collect this data. The results of this campaign are reported in the following section.
3.3.4. Concrete Production
The concrete recipe of the virgin concrete is based on an ecoinvent data set.10 Based on the recycling concrete recipe of four different concrete facilities (Supplementary Table 7), a RCA share of 44% was assumed which corresponds to 860 kg/m3. It was further assumed that the remaining 1,095 kg of aggregates are 64% gravel and 36% sand. The aim of a specific concrete mix design is to meet characteristics as specified by the standard. Thus, the objective is to adjust the cement content so that the physical requirements are met. The use of RCA in concrete can alter its properties compared to the use of primary aggregate, which can be traced back to the heterogeneous nature of the material in terms of chemical composition, particle size distribution, particle shape and higher water absorption. Consequently, the cement input has to be increased from 290 to 315 kg/m3 to achieve the same material properties (Marinković et al., 2010). As stated above, the mineralization alters the properties of RCA positively, because it densifies the material (Seidemann et al., 2015). Depending on the concrete mix design, this can result in improved physical properties of the corresponding concrete; as a result less cement is needed (Xuan et al., 2016). For the concrete facility Frischbeton Rubigen AG, this improvement could be demonstrated for the concrete type NPK A. The cement content could be reduced to the regulatory minimum using carbonated aggregates. However, the extent of cement reduction is subject to significant variations, since concrete plants utilize different raw materials, different cements and different chemical admixtures. Therefore, two different recipes were considered for the carbonated recycling concrete. On the one hand, the worst case (WC) recipe represents the situation where the concrete facility uses carbonated RCA in their recycling concrete without optimizing the concrete mixture.
The best case (BC) recipe on the other hand corresponds to a recipe with carbonated RCA and with an optimized concrete mixture that allows a reduction of the cement content to 290 kg/m3, which is the average amount that is used for virgin concrete. The concrete recipes of the three different concrete types (VC, RC, C-RC) are listed in Supplementary Table 8.
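The aggregate split of the recipes follows from the stated 44% RCA share; a brief sketch (recipe values from the text, the dictionary layout is ours):

```python
# Aggregate split of the recycling concrete recipes.
rca_share = 0.44
rca_kg_per_m3 = 860.0

total_aggregate = rca_kg_per_m3 / rca_share           # about 1,955 kg/m3
primary_aggregate = total_aggregate - rca_kg_per_m3   # about 1,095 kg/m3
gravel = 0.64 * primary_aggregate
sand = 0.36 * primary_aggregate

# Cement contents per m3 of concrete for the compared concrete types.
cement_kg_per_m3 = {"VC": 290.0, "RC": 315.0, "C-RC (WC)": 315.0, "C-RC (BC)": 290.0}
```

Note that the best case (BC) recipe matches the cement content of virgin concrete, which is what drives its environmental advantage.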
3.4. Life Cycle Impact Assessment (LCIA)
The impact assessment regarding impacts on climate change was performed using the indicator Global Warming Potential with a time horizon of 100 years, calculated according to the characterization factors provided by the IPCC 2013 (Stocker, 2014). Additional indicators from the environmental footprint method EF 3.0 (Fazio et al., 2018) were used to quantify environmental burdens beyond impacts on climate change.
3.5. Projecting Biogenic CO2 Supply and CO2 Sink Capacities in RCA for Switzerland
In order to quantify the scale of negative emissions, which are enabled by the proposed value chain until 2050, one has to determine the sink capacity of demolition concrete over the upcoming decades and match them with the biogenic CO2 supply. For this reason, projections based on historic data of the corresponding industrial sectors are made.
3.5.1. Evolution of Amounts of Demolition Concrete Until 2050
Historical data about concrete demolition is sparse. However, the historic calcination related CO2 emissions of the Swiss cement sector are known (Boden et al., 2008). These CO2 emissions provide the theoretical maximal CO2 storage potential in the corresponding concrete—since every calcined CaCO3 molecule can bind one CO2 molecule. This theoretical storage potential applies, if the cement is fully hydrated, all cement mineral phases are in solid-liquid equilibrium with the aqueous pore solution and the carbonation time is sufficient for the CO2 to diffuse even to the cement minerals deep inside of the particles. Furthermore, to determine the annual CO2 storage potential in Swiss demolition concrete until 2050, an average service life of a concrete structure of 80 years is assumed. Switzerland has no significant cement trade surplus or deficit (CEMSUISSE, 2020), thus the reported emissions correspond to cement which was afterwards used in concrete within the Swiss national borders. Further, Portland Cement of type 1 with a clinker content of 95% was the main cement type used in the twentieth century (Worrell et al., 2001). To translate the emissions into a concrete output and into amounts of demolition concrete, the composition of the Kästli RCA was assumed to be the same as that of the EMPA RCA, as listed in Supplementary Table 1. Since the objective is to capture future trends, rather than to make very accurate projections, we assumed that the amounts of demolition concrete are constant until 2025, and grow linearly thereafter (see Figure 9).
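The 1:1 correspondence between calcination emissions and the theoretical CO2 storage potential can be made explicit; in the sketch below, the CaO mass fraction of clinker is our own assumption, not a value from the paper:

```python
M_CO2, M_CAO = 44.01, 56.08  # molar masses, g/mol
CAO_IN_CLINKER = 0.65        # assumed CaO mass fraction of Portland clinker

def theoretical_sink_t(calcination_co2_t: float) -> float:
    """Each calcined CaCO3 releases one CO2 and can later rebind exactly one CO2,
    so the theoretical storage potential equals the calcination emissions."""
    return calcination_co2_t

def clinker_t_from_calcination(calcination_co2_t: float) -> float:
    """Back-calculate the clinker mass that caused a given calcination emission."""
    co2_per_t_clinker = CAO_IN_CLINKER * M_CO2 / M_CAO  # about 0.51 t CO2/t clinker
    return calcination_co2_t / co2_per_t_clinker
```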
3.5.2. Evolution of Biogenic CO2 Sources Until 2050
The three main CO2 sources for negative emission technologies in Switzerland are biogas upgrading, waste incineration and direct air capture (DAC).

Biogas Upgrading
Today, CO2 from biogas upgraders is the only biogenic CO2 available in Switzerland in purified form and in larger quantities. In 2019, Swiss biogas plants produced roughly 1,000 GWh of biogas, about one third of which was upgraded to biomethane. Assuming that CO2 accounts for 40 mole % of the biogas, current Swiss biogas upgraders emit about 50,000 t of biogenic CO2 per year. Furthermore, the sector has seen an annual growth in capacity of about 10% in the past 3 years (Stamm, 2019). Moreover, it was estimated that the Swiss biogas upgrading sector can grow to a maximum size of 175,000 t of CO2 per year (Teske et al., 2019). For this reason, the projection considers an annual growth of the sector of 10,000 tons of CO2, starting at 50,000 t of CO2 in 2020, until the technical potential is reached. The annual biogenic CO2 emissions of biogas upgraders between 2010 and 2018 were calculated according to the methodology presented at the beginning of this paragraph.

Waste Incineration
Swiss waste incineration generates about 4 Mt of CO2 emissions with a fossil share of roughly 50% (Gross, 2018; Swiss Federal Council, 2021). To access the CO2, post-combustion CO2 capture facilities need to be installed. In such a case, the CO2 capture process would be within the system boundaries of the value chain and thus its environmental impacts would need to be considered; however, these can be expected to be smaller than for DAC, since the CO2 is available at high concentrations, and heat and electricity can be provided by the waste incineration itself (Müller et al., 2020b). The implementation of large scale CCS is currently hindered by the lack of CO2 storage sites. The first geologic storage sites accessible to European industries will start their operation in the North Sea around 2025.11,12 First feasibility studies have identified that a first CO2 capture plant may start its operation at a Swiss waste incinerator between 2025 and 2030.13 Each facility will generate on average 50,000 to 100,000 tons of biogenic CO2 per year—summing up to a total of 1.5 Mt for all Swiss waste incinerators in 2050.

Direct Air Capture
Unlike biogas upgraders and waste incinerators, the capacity of DAC is not limited by current point-source emissions, which are projected to experience no significant growth in the future. The limitations of DAC are rather related to the access to carbon-lean heat and electricity (Deutz and Bardow, 2021).
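The biogas-upgrading supply projection described above reduces to a capped linear function; a minimal sketch (waste incineration and DAC are omitted because their deployment timing is uncertain):

```python
def biogas_co2_supply_t(year: int) -> float:
    """Projected biogenic CO2 from Swiss biogas upgrading: 50,000 t in 2020,
    growing by 10,000 t per year until the technical potential of 175,000 t."""
    base_t, growth_t_per_yr, cap_t = 50_000.0, 10_000.0, 175_000.0
    return min(base_t + growth_t_per_yr * (year - 2020), cap_t)
```

Under this projection, the technical potential of 175,000 t per year is reached in the early 2030s.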
4. Experimental Results
The results of the lab tests and of the industrial demonstration are presented in this section. They provide input data for the LCA and confirm the key assumptions which have been made.
4.1. Phenomenology of Experiments
The results of the lab-scale carbonation experiments are reported in Figure 3. Since particle size plays an important role in carbonation processes, the cumulative mass-based particle size distribution of the 0–16 mm RCA after crushing is shown in Figure 3A. One can see that 50 wt.% of the particles are in the 0–4 mm size fraction, whereas the remaining 50 wt.% are in the 4 to 16 mm size fraction. Figure 3B shows the CO2 uptake of the 0 to 16 mm size fraction (black bold line) in g CO2 per kg of RCA as a function of the carbonation time. At time zero, when the CO2 injection starts, the 0 to 16 mm RCA experiences a rapid increase in mass, reaching 11.5 g CO2 per kg RCA after 2 h and 15.7 g of CO2 per kg RCA after 8 h. As the carbonation progresses, the rate of CO2 mineralization decreases, exhibiting a CO2 uptake of 18.5 g of CO2 per kg RCA after 24 h and 20.8 g of CO2 per kg RCA after 72 h. To understand why this is the case, one has to consider the underlying physical mechanisms of concrete carbonation. RCA consists of gravel and sand surrounded by and bound together with cement paste. The cement paste, a porous structure partially saturated with water, consists to a large degree of portlandite, C-S-H, monosulfate-type and ettringite-type phases (Soler, 2007). As CO2 reaches the surface of the particle, it is absorbed by the pore water, which is in a liquid-solid equilibrium with the cement minerals. The absorbed CO2 speciates to bicarbonate and carbonate ions and triggers the precipitation of the poorly soluble salt CaCO3. As a result, calcium ions are removed from the solution, which triggers further dissolution of the cement minerals and release of more calcium. At some point, the cement minerals are depleted in calcium, and the dissolution process stops. In this initial phase, the mineralization reaction is rate limiting.
As the mineralization progresses, the CO2 has to diffuse through a growing layer of already carbonated cement paste to find pore water rich in calcium ions. This effect slows down the rate of CO2 mineralization, as observed in the experimental campaign. Thus, the rate of mineralization is limited by the diffusion of CO2 through the carbonated layers of RCA (Thiery et al., 2013). Within the experimental time, the CO2 uptake curve does not flatten yet, indicating that CO2 mineralization continues and the carbonation of the cement minerals has not reached completion.
Figure 3. (A) The sieve curve is plotted over the particle size. (B) The CO2 uptake of different particle size fractions is plotted over the experimental time. (C) The CO2 uptake relative to the 72 h uptake is plotted over time. Furthermore, the relative contribution of the different size fraction to the total uptake is visualized. The orange dot indicates the current operating point in the industrial plant.
4.2. Effect of Particle Size and Carbonation Time
Moreover, the 0 to 16 mm size fraction was sieved into 8 sub-fractions and the corresponding results are shown in Figure 3B as a function of carbonation time. All size fractions exhibit the same trend. Initially, they experience a fast CO2 uptake, which slows down with time. Figure 3B shows that the fine material exhibits a more rapid CO2 uptake and in addition can store more CO2 per unit mass of the corresponding particle size within the experimental time. There are two major reasons for this effect. First, smaller particles have a significantly larger surface area per unit mass. As in many other processes, the reaction rate is proportional to the surface area of the particles. In case of the larger particles, the CO2 first needs to diffuse through the pore network to carbonate cement phases which are far away from the surface. Second, throughout the crushing process, concrete preferentially breaks along the phase boundaries between the aggregate (sand and gravel) and the cement paste. Thus, the crushing process leads to a classification of the material, where the finer fractions tend to be rich in hydrated cement, whereas the larger fractions tend to be rich in aggregate (Etxeberria et al., 2007). So far, it has been shown that all particle size fractions mineralize CO2. In a further step, one has to understand which size fractions contribute most to the total CO2 uptake. Depending on the results, one can selectively utilize certain size fractions so as to save reactor volume, or decide to process the concrete aggregate as it leaves the crusher. To quantify the total CO2 uptake per fraction, one can multiply the specific CO2 uptake of the individual fractions reported in Figure 3B by the corresponding weight fraction of the cumulative particle size distribution (Figure 3A). This results in Figure 3C. The black bold line corresponds to the total CO2 uptake of the 0–16 mm size fraction as a function of time relative to the 72 h CO2 uptake.
One can see that 55% of the relative uptake is reached after 2 h. This value increases to 77% after 8 h and 90% after 24 h. It is evident that increasing the processing time beyond 24 h does not increase the amount of CO2 stored significantly. Hence, one has to identify whether the reactor volume or the amount of RCA is the limiting factor to come up with the best decision about the processing time. The thin lines correspond to the total CO2 uptake of each fraction. Summing up the contributions of the single size fractions gives the thick black line. Moreover, due to their large contribution in mass, the 4–16 mm particles contribute roughly 20% to the total CO2 uptake despite their relatively low specific CO2 uptake.
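The construction of Figure 3C, multiplying specific uptakes by mass fractions, can be sketched with two illustrative fractions; the per-fraction uptake values below are our own illustrative numbers, chosen to be consistent with the 50 wt.% fines share and the roughly 20% coarse contribution at 2 h:

```python
# Total CO2 uptake as the mass-weighted sum of per-fraction uptakes.
# The two-fraction split and the per-fraction uptakes are illustrative.
fractions = {
    "0-4 mm":  {"mass_frac": 0.50, "uptake_g_per_kg": 18.4},
    "4-16 mm": {"mass_frac": 0.50, "uptake_g_per_kg": 4.6},
}

total_uptake = sum(f["mass_frac"] * f["uptake_g_per_kg"] for f in fractions.values())
coarse = fractions["4-16 mm"]
coarse_share = coarse["mass_frac"] * coarse["uptake_g_per_kg"] / total_uptake
```

With these numbers the weighted sum reproduces the measured 2 h uptake of 11.5 g CO2 per kg RCA, with the coarse fraction contributing about one fifth.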
4.3. Validation at Industrial Scale
So far, it has been demonstrated at lab scale that the investigated process can fix CO2 in commercial RCA within industrially applicable processing times. The technology was scaled by 4 orders of magnitude to validate the results at industrial scale. Key figures of the industrial operation are listed in Table 2. The phenomenology at industrial scale was identical to the one at lab scale. The orange dot in Figure 3B indicates the 2 h CO2 uptake of the industrial scale test. It is on average 8.2 g CO2 per kg RCA, which is below the value measured at lab scale. Since the processing time is the time between the start and stop of the injection, one reason for the lower uptake may be that at industrial scale, part of the RCA is submerged in CO2 for a much shorter time, thus reducing the effective carbonation time. Moreover, the material might slightly differ in chemical composition and humidity. Beyond that, the CO2 storage efficiency, i.e., the amount of CO2 stored over the amount of CO2 supplied to the reactor, needs to be determined for the LCA. Throughout the industrial operation, on average 2.3% of the CO2 injected was lost, as it was mixed with air initially and thus discharged from the reactor with the exiting gas stream. Moreover, as the RCA is discharged, part of the void space in-between the particles (about 40% of the reactor volume) is filled with a CO2 rich gas. Single measurements have shown that the CO2 concentration in the gas phase is about 35% at the time of discharge. Assuming ambient conditions (25°C and 0.95 bar for Switzerland), 236 g of CO2 per cubic meter of RCA, equivalent to an additional 2.7% of the total CO2 supplied to the reactor, are lost throughout discharge. This CO2 leakage as such is CO2-neutral, because it comes from a biogenic source. However, it increases the climate impact of the CO2 supply chain. Overall, the CO2 storage efficiency sums up to 95%. The results of the single tests are listed in Supplementary Table 9.
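The 236 g per cubic meter discharge loss follows from the ideal gas law under the stated conditions; a short check (our own arithmetic):

```python
# CO2 lost with the gas filling the void space at reactor discharge.
R = 8.314           # universal gas constant, J/(mol K)
M_CO2 = 0.04401     # molar mass of CO2, kg/mol
p = 0.95e5          # Pa, assumed ambient pressure in Switzerland
T = 298.15          # K (25 degrees C)

rho_co2 = p * M_CO2 / (R * T)   # ideal gas density, about 1.69 kg/m3
void_fraction = 0.40            # void space between the particles
co2_vol_fraction = 0.35         # measured CO2 concentration at discharge

loss_g_per_m3_rca = void_fraction * co2_vol_fraction * rho_co2 * 1000.0  # ~236 g

# Overall storage efficiency: injection losses plus discharge losses.
efficiency = 1.0 - 0.023 - 0.027  # 95%
```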
Table 2. Results of the experimental campaign at industrial scale, conducted at ambient conditions.
4.4. Material Tests
Figure 4 shows the physical properties of hardened concrete batched at lab scale and in the industrial scale mixer. The detailed mix designs are reported in Supplementary Table 10. At lab scale, 40 L of concrete were batched per mix design; at industrial scale, 1,000 L. The black bar corresponds to the 28-day compressive strength, read on the left y-axis; the orange bar to the 28-day Young's modulus, read on the right y-axis. All mix designs exceed the regulatory minimum compressive strength of 30 MPa. At lab scale, carbonation clearly increases the compressive strength of the corresponding mix design. This trend can be observed for mix designs using 280 and 315 kg cement per m3 of concrete. Moreover, the mix design incorporating carbonated RCA and using the lower amount of cement exhibits better physical performance than the reference mix design with the higher cement content. The Young's modulus seems to be affected to a lesser degree. The same trends, though less marked, were confirmed at industrial scale. In this section, it has been shown for specific concrete mix designs that carbonation can be used to store CO2 and may also reduce the cement consumption of the investigated mix design. In other words, carbonated RCA may replace primary aggregate in concrete mix designs without the need to increase the cement content. As a result of these findings, and in the scope of the LCA, the environmental performance of two mix designs with 290 kg and 315 kg cement per m3 of concrete, incorporating carbonated RCA, will be benchmarked against primary concrete batched with 290 kg cement per m3 of concrete. It is worth noting that the results of the material tests are specific to the concrete mix designs investigated and have to be validated for every change in, e.g., raw materials or grain size distribution.
Figure 4. The compressive strength of concrete of type A after 28 days (black bars, left y-axis) is plotted for the reference concrete mix designs incorporating regular RCA and for the concrete mix designs incorporating carbonated RCA, for tests conducted at lab and industrial scale. The Young's modulus is represented by the orange framed bars (right y-axis). The error bars represent one standard deviation.
5. Results of the LCA
5.1. LCIA Results for 1 kg of RCA
The impacts on climate change in terms of life-cycle GHG emissions of the four sub-processes of the negative emission value chain, namely CO2 liquefaction, transport, evaporation, and mineralization, are illustrated in Figures 5, 6. In Figure 5, GHG emissions are grouped into four categories for each sub-process, i.e., infrastructure, electricity demand, refrigerant needs, and transport. The left y-axis indicates the GHG emissions per kilogram of RCA, the right y-axis the GHG emissions per ton of CO2 stored. The left panel shows that the ranking of the sub-processes in terms of climate impact is CO2 liquefaction first, then transport, mineralization, and finally evaporation, with electricity demand being the dominant contribution in the case of liquefaction and evaporation, and infrastructure (the steel used to build the relevant CO2 vessels) in the case of transport and mineralization. The right panel shows the results of the left panel cumulatively and compares the total positive emissions of the four sub-processes to the corresponding amount of CO2 stored. The net effect, given by the difference between negative and positive emissions, amounts to 936.2 kg CO2-eq. of net negative emissions for every ton of CO2-eq. mineralized and stored, or, in other terms, 6.7 g CO2-eq. for every kg of RCA carbonated.
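The split between net negative emissions and process emissions can be reproduced from the two reported figures. The following is a minimal sketch of the arithmetic, not the LCA model itself; the variable names are ours:

```python
# Net-negative bookkeeping of Figure 5 (values from the text; this is an
# illustration of the arithmetic, not the LCA model itself).
co2_stored = 7.2       # g CO2 stored per kg RCA
net_per_ton = 936.2    # kg CO2-eq. net negative per ton of CO2 stored

removal_efficiency = net_per_ton / 1000.0           # ~0.936
net_negative = co2_stored * removal_efficiency      # g CO2-eq. per kg RCA
process_emissions = co2_stored - net_negative       # g CO2-eq. per kg RCA

print(f"net negative: {net_negative:.1f} g/kg RCA")        # 6.7
print(f"process emissions: {process_emissions:.1f} g/kg")  # 0.5
```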
Figure 5. (Left) The GWP of the sub-processes of the negative emission value chain are visualized along four main process categories. (Right) The cumulative GHG emission of the value chain, including the negative emissions of the mineralization plant.
Figure 6. Sensitivity analysis of different parameters of the LCA regarding impacts on climate change for the RCA. The left axis shows the carbon removal efficiency, i.e., the net amount of CO2 removed from the atmosphere compared to the total amount of CO2 stored, and the right axis indicates the net negative emissions per kg of RCA. The black dot illustrates the reference values. The blue lines in (A–C) indicate the carbon removal efficiency and the CO2 removed per kg RCA as a function of the investigated parameter. In (D), the blue line indicates the carbon removal efficiency, while the red line indicates the amount of CO2 removed per kg RCA.
Figure 6 illustrates the sensitivity of the life-cycle GHG emissions to three key parameters, namely transport distance (Figure 6A), carbon intensity of electricity (Figure 6B), and lifetime of infrastructure (Figure 6C), thus reflecting the possibly different boundary conditions of each specific implementation of mineralization of CO2 in RCA. Each effect is expressed both in terms of carbon removal efficiency, i.e., the net amount of CO2 removed from the atmosphere compared to the total amount of CO2 stored (left axis), and of net negative emissions (g CO2-eq. per kg RCA carbonated, right axis). Along each curve, a black dot indicates the values of the reference case considered in this work and used to determine the values plotted in Figure 5. The assumed reference values are listed in Table 3. In panel B, the carbon intensity of electricity of important European countries is also reported. Finally, panel D illustrates the effect of increasing the CO2 uptake of RCA on the process emissions (minor effect) and on the net negative emissions generated (obviously a major effect). As can be readily observed, transport distance and infrastructure lifetime play a minor role when varied within reasonable ranges (panels A and C), whereas the carbon intensity of electricity plays a major role, with a 20% reduction of net negative emissions when the carbon intensity reaches levels as high as 0.6 kg CO2-eq. per kWh (the reference value that we use is 0.1 kg CO2-eq. per kWh, which refers to today's Swiss consumption mix, including guarantees of origin).
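The linear dependence on the carbon intensity of electricity in panel B can be sketched as follows. Note that the electricity demand per kg RCA used here is back-calculated from the two data points quoted in the text and is therefore an inferred assumption, not a reported measurement:

```python
# Sensitivity of net negative emissions to grid carbon intensity (panel B).
# The electricity demand per kg RCA is back-calculated from the two data
# points in the text (6.7 g net at 0.1 kg CO2-eq./kWh; 20% less at 0.6);
# it is an inferred assumption for illustration only.
NET_REF = 6.7   # g CO2-eq. net negative per kg RCA (Swiss reference mix)
CI_REF = 0.1    # kg CO2-eq. per kWh (reference carbon intensity)

# implied electricity demand of the value chain, kWh per kg RCA
e_demand = (0.20 * NET_REF) / ((0.6 - CI_REF) * 1000)

def net_negative(ci):
    """Net negative emissions (g CO2-eq. per kg RCA) at carbon intensity ci."""
    return NET_REF - e_demand * (ci - CI_REF) * 1000

print(f"{net_negative(0.1):.1f}")   # 6.7 at the Swiss reference mix
print(f"{net_negative(0.6):.2f}")   # 5.36, i.e., 20% below the reference
```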
Table 3. Reference values of the performed LCA.
5.2. LCIA Results for 1 m3 of Concrete
Figure 7 illustrates the GWP of the four different concrete types, which all fulfill the same service. Cement is responsible for over 90% of the GHG emissions of the corresponding concrete mix design, followed by the primary aggregate and the concrete factory. CO2 mineralized recycling concrete stores around 6 kg of CO2-eq. per m3 of concrete. Thus, CO2 mineralized recycling concrete with an unimproved mix design, representing the environmental worst case (C-RC(WC)), emits about 6 kg CO2-eq. less per m3 of concrete than conventional recycling concrete, since the emissions of the negative emission value chain are negligible in comparison. However, cement is the main contributor to the overall GHG emissions of all concrete types, and for the CO2 mineralized concrete with an unimproved mix design a cement demand of 315 kg/m3 was assumed to fulfill the strength requirements. Consequently, it has a higher CO2 footprint than virgin concrete, which requires only 290 kg of cement/m3. However, based on the tests illustrated in Figure 4, it can be assumed that the cement content of CO2 mineralized concrete can be reduced to 290 kg/m3 and that such an optimized concrete mix design (C-RC(BC)) still complies with the same standards. Such C-RC(BC) exhibits the lowest carbon footprint, causing around 9 kg CO2-eq./m3 of concrete less than virgin concrete and as much as 21 kg CO2-eq./m3 less than conventional recycling concrete. As shown in Figure 8, regarding other life cycle impact assessment indicators such as particulate matter, land and water use, resource use of minerals and metals, and ozone depletion, virgin concrete performs worst most of the time and causes the highest burdens. This is due to the higher consumption of the primary aggregates sand and gravel, whose extraction causes substantial environmental burdens; land, water, and resource use are especially affected.
Differences among the other three concrete mix designs regarding the results for these impact categories are minor.
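As a side note, the pairwise GWP differences quoted above for Figure 7 also fix the difference between conventional recycling and virgin concrete. The short sketch below makes this implied arithmetic explicit; the resulting 12 kg CO2-eq./m3 figure is our inference, not a value stated in the text:

```python
# Pairwise GWP differences for Figure 7 as reported in the text
# (kg CO2-eq. per m3 of concrete; negative means "emits less").
diff_bc_vc = -9    # C-RC(BC) vs. virgin concrete (VC)
diff_bc_rc = -21   # C-RC(BC) vs. conventional recycling concrete (RC)
diff_wc_rc = -6    # C-RC(WC) vs. RC

# Implied difference RC - VC = (BC - VC) - (BC - RC); our inference.
rc_minus_vc = diff_bc_vc - diff_bc_rc
print(rc_minus_vc)  # 12: RC emits ~12 kg CO2-eq./m3 more than VC
```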
Figure 7. GWP of 1 m3 of concrete. The dots indicate the absolute GWP of the concrete taking into account the amount of CO2 stored. C-RC(BC), carbonated recycling concrete (best-case); VC, virgin concrete; C-RC (WC), carbonated recycling concrete (worst-case); RC, conventional recycling concrete.
Figure 8. Environmental performance of the four concrete types in other impact categories than GWP per m3 of concrete using the EF 3.0 Method (adapted) V1.00.
5.3. Projection of Negative Emission Potential in Demolition Concrete
The orange line in Figure 9, read on the left axis, shows the projection of the amounts of Swiss demolition concrete until 2050. At the same time, this line represents the theoretical storage potential, read on the right y-axis. One can see that the amount of demolition concrete remains constant at 5 Mt per year until 2025, and will then experience a rapid eight-fold growth between 2025 and 2050. The data was cross-validated: several studies of the Swiss Federal Office for the Environment estimate the annual amounts of demolition concrete between 1998 and 2018 to lie in the range from 3 to 9 Mt per year (Haag, 2008; Rubli, 2016; Schneider, 2016). Similar estimates can be found elsewhere (Hoffmann and Jacobs, 2007).
Figure 9. The projected theoretical amount of CO2 that can be permanently removed from the atmosphere is plotted over time (orange curve). Furthermore, the evolution of the technologically exploitable CO2 sink capacity in demolition concrete (black) and the amount of biogenic CO2 available from biogas upgrading (green) can be read on the y-axis as a function of time (Boden et al., 2008).
The theoretical sink capacity corresponds to the calcination related emissions of the cement embodied in the demolished concrete. This maximum potential is today in the range of 300,000 tons of CO2-eq. per year. It will increase rapidly over the upcoming 30 years, reaching an annual theoretical sink capacity of approximately 2.4 Mt of CO2-eq. in 2050. This theoretical sink capacity is technology independent and indicates how much CO2 can be removed from a thermodynamic point of view, assuming that the corresponding technologies do not have any associated emissions.
The technological sink capacity indicates how much CO2 can be permanently removed from the atmosphere if all demolition concrete is carbonated with the presented technology. In addition, it also considers the GHG emissions associated with operating the negative emission value chain, which reduce the potential by 6.4%. This translates into a CO2 removal capacity of about 35,000 t CO2-eq. in 2020, represented by the black line in Figure 9. Moreover, carbonation experiments revealed that the specific CO2 uptake of RCA can potentially be doubled by increasing the processing time from 2 to 24 h (Figure 3), while the CO2 emissions of the value chain are reduced. This effect is taken into consideration from 2025 onwards, which results in the kink in the black curve of Figure 9.
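The technological sink capacity can be approximated with a simple back-of-the-envelope function. This is a sketch under the stated assumptions of full carbonation and a 6.4% value chain emission share; it reproduces the order of magnitude of the quoted figures, whose exact values additionally reflect evolving demolition amounts and uptake:

```python
def technological_sink(demolition_mt, uptake_g_per_kg, chain_share=0.064):
    """Net CO2 removal (t CO2-eq. per year) if all demolition concrete is
    carbonated; chain_share is the fraction lost to value chain emissions."""
    stored_t = demolition_mt * 1e9 * uptake_g_per_kg / 1e6  # t CO2 stored
    return stored_t * (1 - chain_share)

# 2020: 5 Mt/yr at 7.2 g/kg -> ~34 kt/yr, the order of the quoted 35,000 t
print(round(technological_sink(5, 7.2)))
# 2050: 40 Mt/yr at a doubled uptake of 14.4 g/kg -> ~0.54 Mt/yr,
# the order of the quoted ~0.56 Mt
print(round(technological_sink(40, 14.4)))
```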
6. Discussion
So far, we have demonstrated the technological elements of the value chain at industrial scale. In addition, the LCA has shown that negative emissions are generated by operating the value chain already today. In this section, we discuss potential short-term improvements of the value chain (perspective 2025) and the role of the system with respect to the Swiss climate targets (perspective 2050). Moreover, the regional implementation of the value chain (regional perspective) and the deployment of the technology beyond the Swiss national borders (global perspective) will be discussed.
6.1. Perspective 2025
The presented negative emission value chain was demonstrated for the first time at industrial scale in summer 2020. Thus, the operational environment, namely the upstream and downstream processes, is not yet optimized for CO2 storage. In addition, experience in incorporating carbonated RCA into fresh concrete is still limited. As the technology is increasingly deployed, the ancillary processes as well as the technology itself will undergo a steep learning curve.
Advancement in upstream processes and the mineralization technology
In the foreseeable future, a higher level of system integration can be achieved; in particular, the technology may be integrated into existing silo infrastructure, which will increase the residence time to 24 h and more. Doing so will double the technologically exploitable storage capacity in concrete aggregate to about 80,000 t of CO2-eq. per year in 2025.
Smaller particles—bigger CO2 uptake?
It seems evident (Figure 3) that finely ground RCA yields the highest CO2 uptake in the given processing time. This is mainly because the diffusion distances decrease with decreasing particle size. In theory, the CO2 uptake potential of 50% of the calcination related emissions (Leemann and Hunkeler, 2016) can be reached within hours, which would result in an uptake of about 30 g CO2 per kg of RCA. However, one has to keep in mind that today, demolition concrete is crushed to serve as a sand and aggregate substitute in concrete and other applications. If the material is ground to micrometer size, it can no longer fulfill the same service. While a reduction of the maximum particle size in the sieve curve down to 4 mm may still be reasonable in order to reuse the RCA as sand, grinding it to powder transforms the RCA into a material that needs to be landfilled. Hence, when maximizing the CO2 uptake, both upstream and downstream implications, such as additional processing of the material or the generation of waste, have to be included in the system boundary to reach a qualified decision.
Advancement in concrete mix designs
With the evolution and dissemination of more know-how about carbonated RCA, the environmental and physical optimization of concrete mix designs may become state of the art by 2025. Ideally, carbonated RCA can replace primary aggregate without compromising the functional performance of the concrete mix design. This will enable a future where concrete mix designs made with carbonated RCA have similar clinker contents as concrete made from primary aggregate. In the case study of this paper, this advancement would avoid another 15 kg CO2-eq. of emissions per cubic meter of concrete compared to conventional recycling concrete. As a result, the recycling concrete mix design would exhibit a better GHG emission balance than primary concrete.
6.2. Perspective 2050
The Swiss climate policy target is to reduce net GHG emissions within the Swiss national borders (including international aviation and shipping) from 52.1 Mt CO2-eq. in 2018 to net-zero in 2050. This target should be achieved by a combination of classical emission avoidance, CO2 capture and storage, and negative emissions. The Swiss Federal Office for the Environment estimates that 6.8 Mt of negative CO2-eq. emissions will be required in 2050. About 2 Mt of these negative emissions should be generated within the Swiss national borders (Swiss Federal Council, 2021). In comparison, the 40 Mt of demolition concrete available in 2050 have a theoretical sink capacity of 2.4 Mt of CO2-eq. The part of the sink capacity that can be technically exploited with the presented technology is about 0.56 Mt of CO2-eq. per year, and can thus provide about 30% of the required domestic sink capacity. Since current demolition concrete recovery rates are in the range of 85% (Gauch et al., 2016), this target seems reachable. To build a carbon sink of the required scale, the integration of the technology into concrete recycling processes needs to start today. Moreover, the CO2 supply chains need to be scaled correspondingly. In particular, biogas upgrading will reach its limits in supplying sufficient amounts of biogenic CO2 between 2025 and 2030, as illustrated by the crossover of the green and black lines in Figure 9. For this reason, it is critical that waste incinerators are retrofitted with CO2 capture units in the upcoming 5–10 years. Moreover, it is key that biomass wastes or residues are utilized as the biogenic carbon source, since they can be considered free of environmental burdens.
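The quoted share of the domestic sink capacity follows directly from the two figures above. A one-line check (the text rounds the result to about 30%):

```python
# Share of the Swiss domestic negative-emission target that could be
# covered by carbonating demolition concrete in 2050 (values from the text).
exploitable_2050 = 0.56   # Mt CO2-eq. per year, technical potential
domestic_target = 2.0     # Mt CO2-eq. per year within Swiss borders
share = exploitable_2050 / domestic_target
print(f"{share:.0%}")  # 28%, rounded to "about 30%" in the text
```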
6.3. Regional Perspective
Every urban area in Switzerland has biogenic CO2 emitters, e.g., wastewater treatment plants or waste incinerators, as well as concrete demolition and a number of concrete recycling facilities as integral parts of its industry. Matching supply and demand on a regional level keeps transport distances short and generates regional economic growth, since a new sector is established. Typical biogas upgrading operations generate 1,000–10,000 tons of biogenic CO2, while concrete recycling plants can use and store 350–2,000 tons of CO2 per year. The small scale has the benefit of smaller capital costs per installation and compatibility with existing infrastructure. Furthermore, we could show that the demand for biogenic CO2 matches well with the amounts that could be supplied by biogas upgraders until 2030 (Figure 9). From 2030 on, part of the waste incinerators' CO2 emissions can be fixed in demolition concrete. Moreover, point source CCS was developed for large scale centralized operations, in order to decarbonize cement manufacturing and waste incineration. To enable its operation, the whole value chain needs to be scaled up to the size of megatonnes per year. However, the key elements of the CCS value chain, namely CO2 capture, liquefaction, and transport, are also an integral part of the presented negative emission value chain. These synergies will accelerate the large scale deployment of CCS.
6.4. Global Perspective
In the scope of this work, the environmental performance of a negative emission value chain in the Swiss industrial environment was evaluated. However, similar industrial clusters can be found all around the world. Moreover, the sink capacities of many countries, e.g., European countries, the US, Canada, Japan, and South Korea, are expected to exhibit similar characteristics to those presented here for Switzerland, due to the large amounts of demolition waste expected from the aging built stock. It has been shown that the electricity mix is the most sensitive factor in determining the environmental performance of the process. Even with carbon intensive electricity, such as in Germany, the presented value chain has a carbon removal efficiency exceeding 75%. Hence, the presented value chain enables these countries to activate concrete as one of their carbon sinks. In 2050, the globally available theoretical sink capacity in concrete will reach 500 Mt of CO2-eq. per year (Boden et al., 2008). This study tries to anticipate the key developments of the presented negative emission value chain within the upcoming 30 years. On the large scale, this is challenging, since the industrial environment is facing a rapid transformation that will impact the operational environment of the value chain. This work is intended to represent the basis for future evaluations at the overall system level, which will consider interdependencies on the market and within the construction sector. Moreover, as the market penetration increases, re-assessments of the negative emission technology are of critical importance in order to understand systemic changes and to consider further impacts of upscaling the technology.
7. Conclusion
In the scope of demonstrating a CO2 mineralization plant at industrial scale, a CO2 uptake of 7.2 g CO2 per kg RCA was measured. Concrete material tests confirmed that carbonated RCA fulfills at least the same service as regular RCA. In addition, lab scale carbonation experiments with RCA of 0–16 mm grain size revealed that 50% of the CO2 is stored in the 0–1 mm particles, which account for only 25% of the mass (Figure 3). Furthermore, this study also revealed that extending the processing time from the current 2 h to more than 24 h can roughly double the amount of CO2 stored.
In addition, the LCA has demonstrated that carbonating recycled concrete aggregates using biogenic CO2 represents an efficient permanent carbon sink to remove CO2 from the atmosphere. The carbon removal efficiency (de Jonge et al., 2019), i.e., the net amount of CO2 removed from the atmosphere compared to the total amount of CO2 stored, of the presented negative emission value chain is about 93.6% in Switzerland. Thus, 6.7 g CO2-eq. out of the 7.2 g CO2 stored per kg RCA can be considered net negative emissions. The remaining 0.5 g of CO2 stored per kg RCA compensates for the emissions related to the process, i.e., materials and energy. The value chain is most sensitive to the carbon intensity of the electricity mix used for liquefaction and evaporation of CO2 and other processes. However, even carbon intensive electricity mixes like Germany's allow operating the value chain at a carbon removal efficiency exceeding 75%. Since carbonated RCA can provide the same service as regular RCA, integrating CO2 mineralization into current concrete recycling processes is an effective climate change mitigation measure. The presented negative emission value chain does not cause major environmental burdens: environmental impact categories besides climate change, such as particulate matter formation and land use, are more sensitive to the use of primary and recycled materials as well as cement than to the negative emission value chain. The demonstrated negative emission value chain could currently generate about 35,000 tons of negative CO2 emissions annually in 5 Mt of demolition concrete within the Swiss national borders, the amount being limited by the rate of concrete demolition. The technical potential for negative emissions will grow, due to advancements along the value chain, to 80,000 t CO2-eq. removed per year in 2025. Until 2030, the required biogenic CO2 can be supplied by biogas upgraders.
Afterwards, more and more CO2 needs to be sourced from waste incineration or DAC to fully exploit the CO2 removal potential of carbonating RCA. Ultimately, the growing amounts of demolition concrete will allow storing about 560,000 t CO2-eq. in 35 Mt of Swiss demolition concrete in 2050, accounting for about 30% of the sink capacity within the Swiss national borders targeted by the Swiss Federal Office for the Environment to achieve overall climate neutrality.
Data Availability Statement
Author Contributions
LB performed the LCA under the guidance of JT. LB wrote the chapters 3.1–3.4, 5.2, and 5.3. JT wrote parts of the introduction, chapters 2, 3.5, 4.1, 5, and 6. All co-authors were involved in many detailed discussions about the content of the manuscript, and in reviewing and editing it.
Funding

This analysis was performed within the Swiss Competence Center of Energy Research-Energy and Industrial Processes (SCCER-EIP), from which it received its main funding. In addition, this work has partially received financial support from the Kopernikus Project Ariadne (FKZ 03SFK5A), funded by the German Federal Ministry of Education and Research. The pilot demonstrations were financially supported by the Swiss Federal Office for the Environment (BAFU), the Klimastiftung Schweiz, Neustark AG, and Kästli Bau AG.
Conflict of Interest
JT is a Ph.D. student in the Separation Processes Laboratory at ETH Zurich and a co-founder, board member, and shareholder of the ETH spin-off Neustark, which scaled and commercialized the presented value chain. MM is a Professor in the Separation Processes Laboratory at ETH Zurich and a member of the advisory board of Neustark.
Publisher's Note
Acknowledgments

We would like to acknowledge Marcel Eckstein and Luca Tschurtschenthaler for their efforts in designing, constructing, and operating the plant. We also thank Martin Reichen (Kästli Bau AG) for supporting us with the concrete material tests, and Daniel Kästli and Bernhard Hirschi for enabling the demonstration of the technology on-site at the concrete recycling plant of Kästli Bau AG. In addition, we want to thank Theodora Potsi for supporting us in the lab scale carbonation experiments and Daniel Trottmann for constructing and maintaining the lab-scale carbonation setup. Last, but not least, we want to thank Frank Winnefeld for providing 80 L of RCA for our lab-scale study, and Stephan Pfister for supporting us with his expertise in life cycle assessment.
Supplementary Material
The Supplementary Material for this article can be found online at:
2. ^Swiss FOEN environmental technology promotion project 2019 with the aim to demonstrate CO2 mineralization in RCA at scale.
5. ^The following ecoinvent data set is used as a proxy: “machine operation, diesel, >= 74.57 kW, steady-state” with a diesel consumption of approximately 20.5 l per hour.
6. ^Fact sheet CO2 recovery plant, Hypro Group, 2020.
7. ^The ecoinvent database does not have any data on that specific refrigerant. Thus, the background data of the refrigerant R134a with approximately the same GWP (Makhnatch et al., 2017) was used instead.
8. ^Inventories generated by “carculator_truck,” a LCA tool for heavy-duty vehicles, were used to model a diesel-powered truck (Sacchi et al., 2021).
9. ^Fact sheet Atmosphärischer CO2 Verdampfer, ASCO-Kohlensäure AG 2019.
10. ^Concrete, high exacting requirements CH| concrete production, for building construction, with cement CEM II/B.
Allen, M. R., de Coninck, H., Dube, O. P., Hoegh-Guldberg, O., Jacob, D., Jiang, K., et al. (2018). “Technical Summary,” in Global Warming of 1.5°C. An IPCC Special Report on the Impacts of Global Warming of 1.5°C Above Pre-industrial Levels and Related Global Greenhouse Gas Emission Pathways, in the Context of Strengthening the Global Response to the Threat of Climate Change, Sustainable Development, and Efforts to Eradicate Poverty, eds V. Masson-Delmotte, P. Zhai, H.-O. Pörtner, D. Roberts, J. Skea, P. R. Shukla, A. Pirani, W. Moufouma-Okia, C. Péan, R. Pidcock, S. Connors, J. B. R. Matthews, Y. Chen, X. Zhou, M. I. Gomis, E. Lonnoy, T. Maycock, M. Tignor, and T. Waterfield.
Anderson, K., and Peters, G. (2016). The trouble with negative emissions. Science 354, 182–183. doi: 10.1126/science.aah4567
Angelidaki, I., Xie, L., Luo, G., Zhang, Y., Oechsner, H., Lemmer, A., et al. (2019). “Biogas upgrading: current and emerging technologies,” in Biofuels: Alternative Feedstocks and Conversion Processes for the Production of Liquid and Gaseous Biofuels, (Amsterdam: Elsevier), 817–843.
Arvanitoyannis, I. S. (2008). "ISO 14040: life cycle assessment (LCA) principles and guidelines," in Waste Management for the Food Industries (Amsterdam), 97–132.
Bednar, J., Obersteiner, M., and Wagner, F. (2019). On the financial viability of negative emissions. Nat. Commun. 10, 1–4. doi: 10.1038/s41467-019-09782-x
Birolini, L. (2019). CO2 capture in concrete recycling residues (Master Thesis). ETH Zurich.
Boden, T., Marland, G., and Andres, B. (2008). Switzerland Geogenic CO2 Emissions From Cement Production. Carbon Dioxide Information Analysis Center (Oak Ridge National Laboratory).
CEMSUISSE (2020). Kennzahlen 2020. Bern: CEMSUISSE.
Colangelo, F., Navarro, T. G., Farina, I., and Petrillo, A. (2020). Comparative LCA of concrete with recycled aggregates: a circular economy mindset in Europe. Int. J. Life Cycle Assess. 25, 1790–1804. doi: 10.1007/s11367-020-01798-6
Creutzig, F., Breyer, C., Hilaire, J., Minx, J., Peters, G. P., and Socolow, R. (2019). The mutual dependence of negative emission technologies and energy systems. Energy Environ. Sci. 12, 1805–1817. doi: 10.1039/C8EE03682A
de Jonge, M. M., Daemen, J., Loriaux, J. M., Steinmann, Z. J., and Huijbregts, M. A. (2019). Life cycle carbon efficiency of direct air capture systems with strong hydroxide sorbents. Int. J. Greenhouse Gas Control 80, 25–31. doi: 10.1016/j.ijggc.2018.11.011
Deutz, S., and Bardow, A. (2021). Life-cycle assessment of an industrial direct air capture process based on temperature-vacuum swing adsorption. Nat. Energy 6, 203–213. doi: 10.1038/s41560-020-00771-9
Etxeberria, M., Vázquez, E., Marí, A., and Barra, M. (2007). Influence of amount of recycled coarse aggregates and production process on properties of recycled aggregate concrete. Cement Concrete Res. 37, 735–742. doi: 10.1016/j.cemconres.2007.02.002
Fajardy, M., and Mac Dowell, N. (2017). Can BECCS deliver sustainable and resource efficient negative emissions? Energy Environ. Sci. 10, 1389–1426. doi: 10.1039/C7EE00465F
Farina, I., Colangelo, F., Petrillo, A., Ferraro, A., Moccia, I., and Cioffi, R. (2020). "LCA of concrete with construction and demolition waste," in Advances in Construction and Demolition Waste Recycling (Amsterdam: Elsevier), 501–513.
Fazio, S., Castellani, V., Sala, S., Schau, E., Secchi, M., Zampori, L., et al. (2018). Supporting Information to the Characterisation Factors of Recommended EF Life Cycle Impact Assessment Methods. New Models and Differences with ILCD, EUR 28888.
Fuhrman, J., McJeon, H., Doney, S. C., Shobe, W., and Clarens, A. F. (2019). From zero to hero?: Why integrated assessment modeling of negative emissions technologies is hard and how we can do better. Front. Clim. 1:11. doi: 10.3389/fclim.2019.00011
Fuss, S., Lamb, W. F., Callaghan, M. W., Hilaire, J., Creutzig, F., Amann, T., et al. (2018). Negative emissions part 2: costs, potentials and side effects. Environ. Res. Lett. 13, 063002. doi: 10.1088/1748-9326/aabf9f
Gasser, T., Guivarch, C., Tachiiri, K., Jones, C., and Ciais, P. (2015). Negative emissions physically needed to keep global warming below 2°C. Nat. Commun. 6, 1–7. doi: 10.1038/ncomms8958
Gauch, M., Matasci, C., Hincapié, I., Hörler, R., and Böni, H. (2016). Material- und Energieressourcen sowie Umweltauswirkungen der baulichen Infrastruktur der Schweiz. St. Gallen: Empa Materials Science & Technology.
Geden, O., Peters, G. P., and Scott, V. (2019). Targeting carbon dioxide removal in the european union. Clim. Policy 19, 487–494. doi: 10.1080/14693062.2018.1536600
Geden, O., and Schenuit, F. (2019). Climate Neutrality as Long-Term Strategy: The EU's Net Zero Target and Its Consequences for Member States. Tech. rep., DEU.
Goglio, P., Williams, A. G., Balta-Ozkan, N., Harris, N. R., Williamson, P., Huisingh, D., et al. (2020). Advances and challenges of life cycle assessment (LCA) of greenhouse gas removal technologies to fight climate changes. J. Clean Prod. 244:118896. doi: 10.1016/j.jclepro.2019.118896
Grieder, A., Hubler, P., and Poell, M. (2016). Oekobilanz ausgewählter Betonsorten: Schlussbericht-Version 4.1. Tech. rep., Stadt Zürich.
Gross, C. (2018). Faktenblatt CO2 - Emissionsfaktoren für die Berichterstattung der Kantone. Tech. rep., Bundesamt für Umwelt BAFU.
CrossRef Full Text | Google Scholar
Guinée, J. B., and Lindeijer, E. (2002). Handbook on Life Cycle Assessment: Operational Guide to the ISO Standards, Vol. 7. Luxenburg: Springer Science &Business Media.
Gursel, A. P., Masanet, E., Horvath, A., and Stadel, A. (2014). Life-cycle inventory analysis of concrete production: a critical review. Cement Concrete Composites 51, 38–48. doi: 10.1016/j.cemconcomp.2014.03.005
CrossRef Full Text | Google Scholar
Haag, M. (2008). Bauabfälle Hochbau in der Schweiz Ergebnisse der Studie 2008. Bern: Bundesamt for Umwelt (BAFU).
Habert, G., Miller, S., John, V., Provis, J., Favier, A., Horvath, A., et al. (2020). Environmental impacts and decarbonization strategies in the cement and concrete industries. Nat. Rev. Earth Environ. 1, 559–573. doi: 10.1038/s43017-020-0093-3
CrossRef Full Text | Google Scholar
Hafez, H., Kurda, R., Cheung, W. M., and Nagaratnam, B. (2019). A systematic review of the discrepancies in life cycle assessments of green concrete. Appl. Sci. 9, 4803. doi: 10.3390/app9224803
CrossRef Full Text | Google Scholar
Hoffmann, C., and Jacobs, F. (2007). Recyclingbeton aus Beton-und Mischabbruchgranulat Sachstandsbericht. Zurich: EMPA.
International Organization for Standardization (2006a). Environmental Management - Life Cycle Assessment - Principles and Framework: ISO 14040. Tech. rep.
International Organization for Standardization (2006b). Environmental Management - Life Cycle Assessment - Requirement and Guidelines: ISO 14044. Tech. rep.
Jacobs, F. (2011). Proof of Concrete Quality by' Limiting Values' and/or Tests? Tech. rep., TFB Wildegg.
Kirchner, A. (2020). Energieperspektiven 2050+ Kurzbericht. Bern: Swiss Federal Office of Energy.
Knoeri, C., Sanyé-Mengual, E., and Althaus, H.-J. (2013). Comparative lca of recycled and conventional concrete for structural applications. Int. J. Life Cycle Assess. 18, 909–918. doi: 10.1007/s11367-012-0544-2
CrossRef Full Text | Google Scholar
Kober, T., Bauer, C., Bach, C., Beuse, M., Georges, G., Held, M., et al. (2019). Perspectives of power-to-x technologies in switzerland: A white paper. Technical report, ETH Zurich.
Koronaki, I., Cowan, D., Maidment, G., Beerman, K., Schreurs, M., Kaar, K., et al. (2012). Refrigerant emissions and leakage prevention across europe-results from the realskillseurope project. Energy 45, 71–80. doi: 10.1016/
CrossRef Full Text | Google Scholar
Lagerblad, B. (2005). Carbon Dioxide Uptake During Concrete Life Cycle: State of the art. Stockholm: Swedish Cement and Concrete Research Institute Stockholm.
Leemann, A., and Hunkeler, F. (2016). Carbonation of concrete: assessing the co2 uptake. 1–11.
Leemann, A., and Moro, F. (2017). Carbonation of concrete: the role of CO2 concentration, relative humidity and CO2 buffer capacity. Mater. Struct. 50, 1–14. doi: 10.1617/s11527-016-0917-2
CrossRef Full Text | Google Scholar
Levasseur, A., Lesage, P., Margni, M., Deschenes, L., and Samson, R. (2010). Considering time in lca: dynamic lca and its application to global warming impact assessments. Environ. Sci. Technol. 44, 3169–3174. doi: 10.1021/es9030003
PubMed Abstract | CrossRef Full Text | Google Scholar
Lueddeckens, S., Saling, P., and Guenther, E. (2020). Temporal issues in life cycle assessment'a systematic review. Int. J. Life Cycle Assess. 25, 1385–1401. doi: 10.1007/s11367-020-01757-1
CrossRef Full Text | Google Scholar
Makhnatch, P., Mota-Babiloni, A., Rogstam, J., and Khodabandeh, R. (2017). Retrofit of lower gwp alternative r449a into an existing r404a indirect supermarket refrigeration system. Int. J. Refriger. 76, 184–192. doi: 10.1016/j.ijrefrig.2017.02.009
CrossRef Full Text | Google Scholar
Marinković, S., Radonjanin, V., Malešev, M., and Ignjatović, I. (2010). Comparative environmental assessment of natural and recycled aggregate concrete. Waste Manag. 30, 2255–2264. doi: 10.1016/j.wasman.2010.04.012
PubMed Abstract | CrossRef Full Text | Google Scholar
Meyer, V., de Cristofaro, N., Bryant, J., and Sahu, S. (2018). “Solidia cement an example of carbon capture and utilization,” in Key Engineering Materials, Vol. 761 (St. Quentin Fallavier: Trans Tech Publ), 197–203.
Miller, S. A., and Moore, F. C. (2020). Climate and health damages from global concrete production. Nat. Clim. Change 10, 439–443. doi: 10.1038/s41558-020-0733-0
CrossRef Full Text | Google Scholar
Minx, J. C., Lamb, W. F., Callaghan, M. W., Fuss, S., Hilaire, J., Creutzig, F., et al. (2018). Negative emissions part 1: research landscape and synthesis. Environ. Res. Lett. 13, 063001. doi: 10.1088/1748-9326/aabf9b
CrossRef Full Text | Google Scholar
Monkman, S., MacDonald, M., Hooton, R. D., and Sandberg, P. (2016). Properties and durability of concrete produced using co2 as an accelerating admixture. Cement Concrete Composites 74:218–224. doi: 10.1016/j.cemconcomp.2016.10.007
CrossRef Full Text | Google Scholar
Müller, L. J., Kätelhön, A., Bachmann, M., Zimmermann, A., Sternberg, A., and Bardow, A. (2020a). A guideline for life cycle assessment of carbon capture and utilization. Front. Energy Res. 8:15. doi: 10.3389/fenrg.2020.00015
CrossRef Full Text | Google Scholar
Müller, L. J., Kätelhön, A., Bringezu, S., McCoy, S., Suh, S., Edwards, R., et al. (2020b). The carbon footprint of the carbon feedstock CO2. Energy Environ. Sci. 13, 2979–2992. doi: 10.1039/D0EE01530J
CrossRef Full Text | Google Scholar
Nemet, G. F., Callaghan, M. W., Creutzig, F., Fuss, S., Hartmann, J., Hilaire, J., et al. (2018). Negative emissions part 3: innovation and upscaling. Environ. Res. Lett. 13, 063003. doi: 10.1088/1748-9326/aabff4
CrossRef Full Text | Google Scholar
Obrist, M. D., Kannan, R., Schmidt, T. J., and Kober, T. (2021). Decarbonization pathways of the swiss cement industry towards net zero emissions. J. Clean Prod. 288:125413. doi: 10.1016/j.jclepro.2020.125413
CrossRef Full Text | Google Scholar
Papadakis, V. G., Vayenas, C. G., and Fardis, M. (1989). A reaction engineering approach to the problem of concrete carbonation. AICHE J. 35, 1639–1650. doi: 10.1002/aic.690351008
CrossRef Full Text | Google Scholar
Papadakis, V. G., Vayenas, C. G., and Fardis, M. N. (1991). Fundamental modeling and experimental investigation of concrete carbonation. Mater. J. 88, 363–373.
Rogelj, J., Luderer, G., Pietzcker, R. C., Kriegler, E., Schaeffer, M., Krey, V., et al. (2015). Energy system transformations for limiting end-of-century warming to below 1.5 c. Nat. Clim. Chang 5, 519–527. doi: 10.1038/nclimate2572
CrossRef Full Text | Google Scholar
Rosa, L., Reimer, J. A., Went, M. S., and DOdorico, P. (2020a). Hydrological limits to carbon capture and storage. Nat. Sustainabil. 3, 658–666. doi: 10.1038/s41893-020-0532-7
CrossRef Full Text | Google Scholar
Rosa, L., Sanchez, D. L., Realmonte, G., Baldocchi, D., and D'Odorico, P. (2020b). The water footprint of carbon capture and storage technologies. Renewable Sustainable Energy Rev. 138:110511. doi: 10.1016/j.rser.2020.110511
CrossRef Full Text | Google Scholar
Rubli, S. (2016). Bauabfälle in der Schweiz - Tiefbau Aktualisierung 2015. Bern: Bundesamt for Umwelt (BAFU).
Runge-Metzger, A. (2018). A Clean Planet for All, a European Strategic Long-Term Vision for a Prosperous, Modern, Competitive and Climate Neutral Economy. European Commission, 1–18.
Sacchi, R., Bauer, C., and Cox, B. L. (2021). Does size matter? the influence of size, load factor, range autonomy, and application type on the life cycle assessment of current and future medium-and heavy-duty vehicles. Environ. Sci. Technol. 5, 5224–5235. doi: 10.1021/acs.est.0c07773
PubMed Abstract | CrossRef Full Text | Google Scholar
Schneider, M. (2016). Das Kar-Modell for die Schweiz. Bern: Bundesamt for Umwelt (BAFU).
Seidemann, M., Müller, A., and Ludwig, H.-M. (2015). Weiterentwicklung der Karbonatisierung von rezyklierten Zuschlägen aus Altbeton: Abschlussbericht. Tech. rep., Bauhaus-Universität Weimar.
Siwior, P., and Bukowska, J. (2018). Commentary on european court of justice judgement of 19 january 2017 in case c-460/15 schaefer kalk gmbh &co. kg v bundesrepublik deutschland. Environ. Protect. Natural Resour. 29, 25–30. doi: 10.2478/oszn-2018-0011
CrossRef Full Text | Google Scholar
Soler, J. M. (2007). Thermodynamic Description of the Solubility of CSH Gels in Hydrated Portland Cement. Tech. rep., Posiva Oy, Finland.
Stamm, N. (2019). Schweizerische Statistik der Erneuerbaren Energien. Available online at:
Stocker, T. (2014). “Climate change 2013: the physical science basis,” in Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (Cambridge: Cambridge University Press).
Swiss Federal Council (2021). Switzerland's Long-Term Climate Strategy. Available online at:
Tanzer, S. E., and Ramírez, A. (2019). When are negative emissions negative emissions?. Energy Environ. Sci. 12, 1210–1218. doi: 10.1039/C8EE03338B
CrossRef Full Text | Google Scholar
Teir, S., Eloneva, S., Fogelholm, C.-J., and Zevenhoven, R. (2006). Stability of calcium carbonate and magnesium carbonate in rainwater and nitric acid solutions. Energy Convers Manag. 47, 3059–3068. doi: 10.1016/j.enconman.2006.03.021
CrossRef Full Text | Google Scholar
Terlouw, T., Bauer, C., Rosa, L., and Mazzotti, M. (2021). Life cycle assessment of carbon dioxide removal technologies: a critical review. Energy Environ. Sci. 14, 1701–1721. doi: 10.1039/D0EE03757E
CrossRef Full Text | Google Scholar
Teske, S., Rüdisüli, M., Bach, C., and Schildhauer, T. (2019). Potentialanalyse Power-to-gas in der Schweiz (version 1.0. 0). Dübendorf: EMPA.
Thiery, M., Dangla, P., Belin, P., Habert, G., and Roussel, N. (2013). Carbonation kinetics of a bed of recycled concrete aggregates: a laboratory study on model materials. Cement Concrete Res. 46:50–65. doi: 10.1016/j.cemconres.2013.01.005
CrossRef Full Text | Google Scholar
Tollefson, J. (2018). Ipcc says limiting global warming to 1.5 c will require drastic action. Nature 562, 172–173. doi: 10.1038/d41586-018-06876-2
PubMed Abstract | CrossRef Full Text | Google Scholar
van Oss, H. (2021). Minerals Yearbook: Cement. Tech. rep., United States Geological Survey.
Van Vuuren, D. P., Deetman, S., van Vliet, J., van den Berg, M., van Ruijven, B. J., and Koelbl, B. (2013). The role of negative co 2 emissions for reaching 2 c insights from integrated assessment modelling. Clim. Change 118, 15–27. doi: 10.1007/s10584-012-0680-5
CrossRef Full Text | Google Scholar
Van Vuuren, D. P., Hof, A. F., Van Sluisveld, M. A., and Riahi, K. (2017). Open discussion of negative emissions is urgently needed. Nat. Energy 2, 902–904. doi: 10.1038/s41560-017-0055-2
CrossRef Full Text | Google Scholar
Van Vuuren, D. P., Stehfest, E., Gernaat, D. E., Van Den Berg, M., Bijl, D. L., De Boer, H. S., et al. (2018). Alternative pathways to the 1.5 c target reduce the need for negative emission technologies. Nat. Clim. Chang 8, 391–397. doi: 10.1038/s41558-018-0119-8
CrossRef Full Text | Google Scholar
Vieira, D. R., Calmon, J. L., and Coelho, F. Z. (2016). Life cycle assessment (lca) applied to the manufacturing of common and ecological concrete: a review. Construct. Build. Mater. 124, 656–666. doi: 10.1016/j.conbuildmat.2016.07.125
CrossRef Full Text | Google Scholar
Villain, G., Thiery, M., and Platret, G. (2007). Measurement methods of carbonation profiles in concrete: Thermogravimetry, chemical analysis and gammadensimetry. Cement Concrete Res. 37, 1182–1192. doi: 10.1016/j.cemconres.2007.04.015
CrossRef Full Text | Google Scholar
Waisman, H., de Coninck, H., and Rogelj, J. (2019). Key technological enablers for ambitious climate goalsinsights from the ipcc special report on global warming of 1.5°c. Environ. Res. Lett. 14:111001 doi: 10.1088/1748-9326/ab4c0b
CrossRef Full Text | Google Scholar
Wernet, G., Bauer, C., Steubing, B., Reinhard, J., Moreno-Ruiz, E., and Weidema, B. (2016). The ecoinvent database version 3 (part i): overview and methodology. Int. J. Life Cycle Assess. 21, 1218–1230. doi: 10.1007/s11367-016-1087-8
CrossRef Full Text | Google Scholar
Wiloso, E., Heijungs, R., Huppes, G., and Fang, K. (2016). Effect of biogenic carbon inventory on the life cycle assessment of bioenergy: challenges to the neutrality assumption. J. Clean Prod. 125, 78–85. doi: 10.1016/j.jclepro.2016.03.096
CrossRef Full Text | Google Scholar
Worrell, E., Price, L., Martin, N., Hendriks, C., and Meida, L. O. (2001). Carbon dioxide emissions from the global cement industry. Ann. Rev. Energy Environ. 26, 303–329. doi: 10.1146/
CrossRef Full Text | Google Scholar
Xi, F., Davis, S. J., Ciais, P., Crawford-Brown, D., Guan, D., Pade, C., et al. (2016). Substantial global carbon uptake by cement carbonation. Nat. Geosci. 9, 880–883. doi: 10.1038/ngeo2840
CrossRef Full Text | Google Scholar
Xuan, D., Zhan, B., and Poon, C. S. (2016). Assessment of mechanical properties of concrete incorporating carbonated recycled concrete aggregates. Cement Concrete Composites 65, 67–74. doi: 10.1016/j.cemconcomp.2015.10.018
CrossRef Full Text | Google Scholar
Zhang, X., Witte, J., Schildhauer, T., and Bauer, C. (2020). Life cycle assessment of power-to-gas with biogas as the carbon source. Sustain. Energy Fuels 4, 1427–1436. doi: 10.1039/C9SE00986H
CrossRef Full Text | Google Scholar
Zhang, Y., Luo, W., Wang, J., Wang, Y., Xu, Y., and Xiao, J. (2019). A review of life cycle assessment of recycled aggregate concrete. Construct. Build. Mater. 209, 115–125. doi: 10.1016/j.conbuildmat.2019.03.078
CrossRef Full Text | Google Scholar
Astronomers spot 'dead' monster galaxy that existed 12 billion years ago
The galaxy created more than 1,000 solar masses of stars in a year during its prime days
In a new study, published in the Astrophysical Journal on Wednesday, astronomers report spotting an ultramassive monster galaxy, dubbed XMM-2599, that dates back to the early days of the universe and that lived fast and died young.
The astronomers used the Multi-Object Spectrograph for Infrared Exploration at the W. M. Keck Observatory in Hawaii to obtain detailed measurements suggesting that the galaxy rapidly formed a large number of stars and then died.
XMM-2599: the ultra massive galaxy
Benjamin Forrest, lead study author and a postdoctoral researcher in the University of California, Riverside's Department of Physics and Astronomy, said, "Even before the universe was 2 billion years old, XMM-2599 had already formed a mass of more than 300 billion suns, making it an ultra massive galaxy."
"More remarkably, we show that XMM-2599 formed most of its stars in a huge frenzy when the universe was less than 1 billion years old and then became inactive by the time the universe was only 1.8 billion years old," the study author added.
Created over 1,000 solar masses of stars in a year
The galaxy, which existed 12 billion years ago, created more than 1,000 solar masses of stars per year during its prime. According to the researchers, this star-formation rate is extraordinarily high compared with the Milky Way, which forms roughly one new star per year.
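A back-of-the-envelope check, using only the figures quoted in the article, shows why this rate is so striking:

```python
# Illustrative calculation based on the numbers quoted in the article;
# this is not part of the published study.

MASS_TOTAL = 300e9      # stellar mass of XMM-2599, in solar masses
SFR_PEAK = 1000         # peak star-formation rate, solar masses per year
SFR_MILKY_WAY = 1       # Milky Way rate quoted in the article

# At its peak rate, building the full stellar mass takes only ~300 million
# years, consistent with most stars forming before the universe was 1 Gyr old.
years_at_peak = MASS_TOTAL / SFR_PEAK
print(f"{years_at_peak:.0e} years at peak rate")   # 3e+08 years at peak rate

# At the Milky Way's rate, the same mass would take far longer than the
# age of the universe (~13.8 billion years) to assemble.
years_at_mw_rate = MASS_TOTAL / SFR_MILKY_WAY
print(f"{years_at_mw_rate / 13.8e9:.0f}x the age of the universe")  # 22x
```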
"XMM-2599 may be a descendant of a population of high star-forming, dusty galaxies in the very early universe that new infrared telescopes have recently discovered," said Danilo Marchesini, study co-author and an associate professor of astronomy at Tufts University in Massachusetts.
Should XMM-2599 still form stars?
However, the astronomers are not sure of the evolution of the unusually huge galaxy. Based on their models, they said that XMM-2599 should still be forming stars. Gillian Wilson, a professor of physics and astronomy at the University of California, Riverside, said, "What makes XMM-2599 so interesting, unusual and surprising is that it is no longer forming stars, perhaps because it stopped getting fuel or its black hole began to turn on."
"Our results call for changes in how models turn off star formation in early galaxies. We have caught XMM-2599 in its inactive phase," Wilson added. The astronomers said that the galaxy is not forming any more stars, but it can't lose its mass. Wilson said, "As time goes by, could it gravitationally attract nearby star-forming galaxies and become a bright city of galaxies?"
Co-author Michael Cooper, an associate professor of astronomy at UC Irvine, said, "Perhaps during the following 11.7 billion years of cosmic history, XMM-2599 will become the central member of one of the brightest and most massive clusters of galaxies in the local universe. Alternatively, it could continue to exist in isolation. Or we could have a scenario that lies between these two outcomes."
The astronomers will continue to study the strange galaxy at the observatory, and they are hopeful they will find answers to the unanswered questions prompted by XMM-2599.
|
Excepciones (DCH) (Exceptions (Historical Dictionary of Canon Law in Latin America and the Philippines))
No. 2021-09
The exception, a legal instrument by means of which the defendant tried to exclude the plaintiff's action, arose in Roman procedural law; it passed into canon law and Castilian law, and was adapted to the canon law of the Spanish territories according to local conditions. This paper analyses in detail the regulation of procedural exceptions in the legal system of the Catholic Church in Hispanic America and the Philippines between the 16th and 18th centuries, according to the most widely used and circulated normative sources. The author provides an analysis of the concept, its classifications and forms of processing, as well as the interaction between the civil and ecclesiastical forums. It covers the applicable norms, both in the field of universal canon law and those specific to the Church in Spanish America. In this respect, the regulation of exceptions took into account the enormous geographical distances of the New World, which demanded measures to mitigate the duration and costs of lawsuits. These exceptions also considered the difficulties the Spanish Crown faced in knowing with certainty the circumstances of a decision such as the granting of an ‘encomienda’, as well as the need to protect indigenous people, among other topics. The paper concludes with a historiographic overview of this legal institution.
|
State governments are key to solve pollinator health crisis, study finds
Insect pollinators are vital to the existence of almost 90 percent of the world's flowering plants, including a large portion of food products. Blueberries and cherries, for instance, depend on honey bee pollination. But pollinator populations are falling amid what has been termed an "insect pollinator health crisis," and in the absence of sweeping international or federal action on this issue, it falls to state legislatures to come up with innovative solutions.
For the first time, researchers at the University of Missouri have cataloged every pollinator protection policy enacted by state governments from 2000-2017. The resulting database of information allows everyone from legislators to the general public to study how state lawmakers have addressed the issue over time.
"To monitor a problem of this scale, we need to be able to see what kind of progress we are making across the country," said Damon Hall, an assistant professor jointly appointed in MU's School of Natural Resources and the College of Engineering. "Until now, no one had put together a complete collection of legislation covering all 50 states. This creates a problem, because how do you write effective laws without knowing what has come before in other states?"
"We are seeing encouraging policy innovations, but there is no momentum in state legislatures to adequately monitor this crisis," Hall said. "Wild pollinating insects, like native bees, are wildlife to be managed like any other kinds of wildlife, and that means we need data to track population declines and to start experimenting with different types of land-use programs. Without data, we don't have answers because we don't know which questions to ask, and without a legislative push to fund that data collection, we are spinning our wheels."
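As a purely hypothetical sketch of how such a catalog of state legislation could be queried (the schema and sample records below are invented for illustration, not taken from the MU database):

```python
# Hypothetical records: the (state, year, topic) schema is invented here
# to illustrate the kind of cross-state comparison the database enables.
policies = [
    {"state": "MN", "year": 2016, "topic": "habitat"},
    {"state": "CA", "year": 2014, "topic": "pesticide"},
    {"state": "MN", "year": 2008, "topic": "pesticide"},
]

def enacted_between(records, start, end):
    """Group policies enacted in [start, end] by state."""
    by_state = {}
    for rec in records:
        if start <= rec["year"] <= end:
            by_state.setdefault(rec["state"], []).append(rec["topic"])
    return by_state

print(enacted_between(policies, 2010, 2017))
# {'MN': ['habitat'], 'CA': ['pesticide']}
```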
|
December 26, 2020
By Ted Noel
Article II, Section 1 of the Constitution gives state legislatures "plenary authority" as enunciated in Bush v. Gore. This is key, since the counting of votes is discussed in Article II, the 12th Amendment, and 3 USC 15. To this we must add the history of counting and objections recounted by Alexander Macris (here and here). Put bluntly, it's as clear as mud. Add to that the fact that the contested states of Arizona, Georgia, Michigan, New Mexico, Nevada, Pennsylvania, and Wisconsin have sent dueling slates of electors to D.C. This means that the V.P. has to decide how he will handle the situation when two sealed envelopes are handed to him from any of those states.
Macris points out that in 1800, even with constitutional deficiencies in Georgia, Thomas Jefferson blithely counted defective electoral votes from Georgia, effectively voting himself into the presidency. This demonstrates that the president of the Senate is the final authority on any motions or objections during the vote-counting. There is no appeal. That doesn't mean there won't be any outrage. Whatever Pence does, people will be angry. But what does the law demand?
Seven contested states clearly violated their own laws. Rather than list the facts, which have been detailed in multiple articles, we must consider the following:
An election is a process of counting votes for candidates. Only valid, lawful votes may be counted. A valid lawful vote is:
• Cast by an eligible, properly registered elector as prescribed by laws enacted by the state Legislature.
• Cast in a timely manner, as prescribed by laws enacted by the state Legislature.
• Cast in a proper form as prescribed by laws enacted by the state Legislature.
Any process that does not follow these rules is not an election. Anything that proceeds from it cannot be regarded as having any lawful import.
Most commentators suggest that a process of collecting pieces of paper with marks on them is an election regardless of errors, omissions, and even deliberate malfeasance.
This is a mistake. Imagine a golf tournament where every bad shot by one player gets a do-over, while the competing player has to follow USGA rules to the letter. One player gets to drop freely out of hazards; the other has to play every embedded ball as it lies. The result is a travesty.
The same thing applies to elections.
If there are a handful of improper votes, we can suggest that there was in fact an election, perhaps tainted, but the election wasn't materially harmed. But when the people charged with managing the election decide to ignore the law, whatever process they supervise is not the process defined by the law. Therefore, it is not an election.
This leaves V.P. Pence with a dilemma. He is a gentleman who regards our governmental traditions with a degree of reverence, so he will be reluctant to take any bold action. But as an honorable man, faced with massive illegality, he must act to protect the law.
Consider how things might go down as the two closed envelopes from Georgia are handed to the V.P. Rather than opening them, he says:
In my hand are envelopes purporting to contain electoral votes from Georgia. They are competing for consideration, so it is essential that I consider the law that governs this. That law, according to the Legislature of Georgia and Article II, Section 1 of the U.S. Constitution is the Georgia statute that includes procedures for signature-matching on absentee ballots, a requirement that all absentee ballots be first requested by a legitimate voter, and that election monitors be meaningfully present at all times while votes were counted.
The Georgia secretary of state, who is not empowered by the U.S. Constitution to make changes to election law, entered into a Consent Decree that gutted these protections enacted by the Georgia Legislature. The processes that he prescribed and were ultimately followed were manifestly contrary to that law.
Further, the State of Georgia, in unprecedented concert with other states, suspended counting of ballots in the middle of the night, covering its conspiracy with a false claim of a "water main break." We now know from surveillance video that many thousands of "ballots" were counted unlawfully in the absence of legally required observers.
Finally, the State of Georgia, under the authority of secretary of state Brad Raffensperger, a non-legislative actor, used fatally flawed Dominion voting machines that have been demonstrated to be unreliable. In testing, the error rate of Dominion machines has exceeded 60%, far in excess of legal limits. They are designed to facilitate fraud without creating the legally required paper trail.
This alone is far more than enough to swing an election.
Since the state of Georgia has failed to follow the election law established by its legislature under Article II, Section 1 of the Constitution, it has not conducted a presidential election. Therefore, no "presidential electors" were appointed in Georgia. Further, "electors" "certified" by non-legislative actors pursuant to this process are in fact not "presidential electors." The competing slate of "electors" is similarly deficient, having not been elected through a presidential election.
Therefore, the chair rules that Georgia has not transmitted the votes of any presidential electors to this body. Georgia presents zero votes for Donald Trump and zero votes for Joseph Biden.
The central point is that the V.P., as the presiding officer and final authority, has the unquestionable authority to declare that the states in question have not conducted presidential elections. There will be wailing and gnashing of teeth, but no one has the authority to override his decision.
The statement says nothing about who might or might not have "won" the contested states. Rather, by not following their own laws, as enacted by their own legislatures, they have violated Article II, Section 1. Thus, they have not conducted an election, and their results are void.
If the votes of all seven contested states are registered as zero, President Trump will have 232 votes, and Joe Biden will have 222. The 12th Amendment says, "[T]he votes shall then be counted[.] ... The person having the greatest number of votes for President, shall be the President[.]"
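The arithmetic behind these totals can be checked directly; the per-state elector counts below are the standard 2020 allocation, and the 306/232 starting figures are the certified result that this scenario modifies:

```python
# Verifying the 232/222 totals claimed above. All seven contested states
# were certified for Biden, so zeroing them out subtracts their electors
# from his total and leaves Trump's unchanged.

CONTESTED = {    # seven contested states and their 2020 electoral votes
    "Arizona": 11, "Georgia": 16, "Michigan": 16, "New Mexico": 5,
    "Nevada": 6, "Pennsylvania": 20, "Wisconsin": 10,
}

biden_certified, trump_certified = 306, 232

zeroed = sum(CONTESTED.values())     # 84 votes removed
biden = biden_certified - zeroed     # 222
trump = trump_certified              # 232

print(trump, biden)   # 232 222
```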
In plain language, Donald Trump will be re-elected, since he has a majority of the actual electoral votes.
There will be no need to involve the House of Representatives to resolve a contingent election.
Richard Nixon chose not to contest the 1960 election because he felt that winning that way would lead to an ungovernable country.
If V.P. Pence does this, that same argument might be made. But is the country governable even now?
Blue states such as California, Oregon, Washington, New York, New Jersey, and Michigan are already operating in an openly lawless manner with their "emergency" "COVID-related" restrictions.
Their denial of the civil rights of law-abiding citizens is horrific. Their refusal to do basic policing and law enforcement is a recipe for open war.
How much worse would things be if the V.P. lived up to his oath and upheld the law?
Ted Noel posts on multiple sites as DoctorTed and @vidzette.
|
Ice Bucket Challenge funds gene discovery in ALS (MND) research
The Ice Bucket Challenge that went viral in 2014 has funded an important scientific gene discovery in the progressive neurodegenerative disease ALS, the ALS Association says. Scientists have identified a new gene contributing to the disease, NEK1. The Ice Bucket Challenge has raised $115m (£87.7m).
It was criticised as a stunt, but has funded six research projects. Research by Project MinE, published in Nature Genetics, is the largest-ever study of inherited ALS, also known as motor neurone disease (MND).
The identification of NEK1 means scientists can now work on gene therapies targeting it. Although only 10% of ALS patients have the inherited form of the disease, researchers believe genetics contribute to a much larger proportion of cases.
Social media was awash with videos of people pouring cold water over their heads to raise money for ALS in the summer of 2014.
What is amyotrophic lateral sclerosis (ALS), also known as motor neurone disease (MND)? It is a fatal, rapidly progressive disease that affects the brain and spinal cord. It attacks the nerves that control movement, so muscles refuse to work (sensory nerves are not usually affected).
|
Microgrids can power the transition to a net-zero economy by 2050
by John Gould, Enchanted Rock
The movement to decarbonize the grid by 2050 marks the largest shift in global energy philosophy since the Industrial Revolution. The journey to that goal has challenged our long-held views of the electricity system, tested our political will and revealed how flaws in our current technology mean that our goals do not align with our best environmental intentions. However, a hybrid electricity solution, backed by microgrids, can bridge the gap between our ultimate net-zero goals and the current market’s shortcomings.
President Biden has set a goal of creating a net-zero emissions economy by no later than 2050. We’ve witnessed some tremendous strides in recent years, including decarbonization and sustainability commitments from grid operators and large commercial entities. We’ve also incorporated more renewables in many markets, with upwards of 35% of the energy mix coming from solar, wind and hydro generation.
However, progress in the name of sustainability often overlooks one critical factor: resiliency. As our society focuses on sustainability, we need to expand our definition of the term “sustainable” to mean both sustainably sourced power, along with sustainable (i.e., resilient) power availability for all human society. This thinking will help us avoid the kinds of disasters we’ve seen recently with Winter Storm Uri in Texas and heat storms in California and the West.
When we evaluate current thinking and common approaches, we see a rational combination of renewables and battery storage. This great short-duration solution leverages renewables and shifts a day’s solar or wind production into the evening hours of peak demand. However, batteries do not offer long-duration options beyond four or eight hours, are expensive and are not considered good resiliency options. And since wind and solar power are dependent upon the sun shining and the wind blowing, we have a need for a longer-term resiliency solution.
How do we strike the right balance of carbon neutrality and renewables adoption with low-emission resiliency solutions that back this up?
The answer is hybrid microgrids. A hybrid solution for backup generation – a microgrid – consists of a combination of renewables, battery storage and highly resilient natural gas or renewable natural gas (RNG) generation. Ideally, we deploy dual-purpose microgrids that both support commercial customer’s long duration resiliency requirements along with services that support underlying grid stability and accommodate additional renewable generation. This hybrid approach enables us to protect communities while preserving the planet’s long-term viability.
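To make the long-duration gap concrete, here is a toy dispatch sketch in which battery storage covers the first hours of an outage and a gas/RNG generator carries the remainder. All sizes and the outage length are illustrative assumptions, not figures from the article:

```python
# Toy model of the hybrid backup described above: battery first, then an
# RNG generator for the long tail. Sizes are illustrative assumptions.

BATTERY_KWH = 4 * 1000   # 4 hours of storage at the site load (assumed)
SITE_LOAD_KW = 1000      # constant critical load during the outage (assumed)

def backup_dispatch(outage_hours):
    """Return (battery_kwh, generator_kwh) supplied during an outage."""
    demand = SITE_LOAD_KW * outage_hours
    battery = min(BATTERY_KWH, demand)   # battery empties first
    generator = demand - battery         # generator covers the remainder
    return battery, generator

# A multi-day outage (as in Winter Storm Uri) far exceeds battery duration;
# nearly all the energy must come from the long-duration resource.
print(backup_dispatch(72))   # (4000, 68000)
```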
What is exciting is that this technology is available today and something we can implement now. However, it will require disruptive new thinking, a willingness to forego the status quo, and collective work to prioritize this new approach to clean, resilient grid solutions. These innovative steps and investments will help the United States become a global leader in our successful path to the net-zero economy.
How do we get there? Some key steps need to occur to achieve the proper momentum. For starters, corporate and commercial entities need to continue to push hard to attain an improved mix of green energy sources. The largest commercial and industrial power consumers need to continue to think differently and be willing to re-evaluate the status quo of blind dependency on diesel generators for local resiliency. Organizations such as water treatment facilities, data center providers, and large-scale manufacturers and distributors — sites that often have 10 megawatts to over 100 megawatts — need to lead this charge.
A key industry with the potential to shape this future is the data center and hyperscale provider sector. Many of these large power consumers have corporate objectives of carbon neutrality (or even carbon negativity) but find themselves having to achieve this by buying carbon credits to offset their carbon output. You'll often hear this referred to as "creative carbon accounting," and it's called out in many regions. Many of these facilities still operate large footprints of diesel backup generators to maintain emergency backup power. As these organizations design and build new facilities, a concerted effort should be made to evaluate alternative approaches, like RNG microgrids, that can provide similar back-up performance with improved emissions and lower overall cost.
Next, it's also incumbent upon our public sectors to legislate to drive change and make organizations rationalize why they're unwilling to evaluate alternative, cleaner solutions. Two organizations in California, the California Energy Commission and the California Air Resources Board, are examples of how public policy can influence commercial organizations. They have made it more difficult for data center providers to get the necessary permitting to complete their data center builds if they choose to use diesel generators. The more red tape and the longer the delay, the more difficult it is for the operator, who strands capital (e.g., a partially built data center) and delays the return on investment for shareholders. This additional financial pressure is driving innovative data center operators to explore alternative solutions, such as RNG microgrids, for their back-up systems.
In short, it's time for all of us to "get real" about what does not work anymore and what can work right now to move us down the path of energy transition. We can continue to progress our society and be a global climate change leader with cooperation across businesses and government to embrace renewables, battery storage, and resilient RNG microgrids. In the future, technology and chemistry will hopefully come together to provide longer-duration storage options of five days or more. But in the near and mid-term, as we bridge to a net-zero economy, a hybrid solution that incorporates the benefits of renewables and battery storage with the resiliency of natural gas or RNG microgrids is the right balanced approach to achieve carbon neutrality now along with underlying grid resiliency.
In the world of energy transition, this RNG microgrid solution allows all of society, metaphorically, to have our cake and eat it too.
Data center veteran John Gould joined Enchanted Rock in 2021 as Chief Revenue Officer, responsible for scaling Enchanted Rock’s commercial and industrial operations.
Prior to joining Enchanted Rock, Gould served as EVP, chief commercial officer for CyrusOne, a global leading data center provider to the world’s largest cloud and enterprise customers. In his career, Gould served as President, Americas for Statasys, SVP and chief revenue officer for ReachLocal, and held various senior executive positions at Dell for more than 14 years. Gould has a bachelor’s degree in economics from Connecticut College and an MBA from Vanderbilt University – Owen Graduate School of Management.
|
Hand Pain
Hand pain can be caused by various disorders such as arthrosis, trauma, infection, tendonitis, and inflammation of the nerves. These disorders can affect daily life activities.
Pain in the back of the hand
Wrist cysts
The cysts on the back of the hand are soft, fluid formations that develop on the back of the hand for no apparent reason.
They are also called synovial cysts and constitute the most common non-cancerous mass of the soft tissues of the hand and wrist. They usually affect the right hand, but if the patient is left-handed the formation can be observed on the left. People who play instruments (for example, the guitar) develop these formations more easily.
Symptoms include:
• Pain in the wrist that is aggravated by repeated use, for example, when writing;
• A slow and localized growth with swelling, mild pain and numbness in the wrist;
• An apparent smooth protuberance, firm, rounded and flat.
Symptoms of the wrist cyst may resemble other conditions or diseases. Always consult your doctor for proper diagnosis. Initially, when the cyst is small and painless, no therapy is required. Only when it begins to grow and interfere with the function of the hand is it recommended to treat the wrist.
The cures recommended by physicians for wrist cysts consist of:
• Home care (rest)
• Immobilization in an orthosis
• Non-steroidal anti-inflammatory remedies
• Aspiration
• Cortisone injections
• Surgery to remove the cyst, though in most cases the cyst re-forms within a few months.
Inflammation of the tendons of the hand
The two major tendon diseases are tendonitis and tenosynovitis.
Tendonitis is the acute inflammation of a tendon (the resistant strands of fibrous tissue that attach muscles to bones). It can affect any tendon, but most often occurs at the wrist and fingers. When the tendons become irritated, there is pain, swelling and stiffness.
Tenosynovitis is the inflammation of the tendon sheaths surrounding the tendons. Normally only the sheath becomes inflamed, but the tendon can also become inflamed simultaneously. The cause of tenosynovitis is often unknown, but it is usually brought on by tears, excessive use, injuries or very strenuous exercise.
Tendonitis can also be linked to a disease (for example, diabetes or rheumatoid arthritis).
Tendon disorders include:
1. De Quervain's tenosynovitis is the most common type of tenosynovitis, characterized by inflammation of the sheath of the tendons of the thumb, which causes pain and swelling.
2. Trigger finger is a tenosynovitis in which the tendon sheath becomes inflamed and thickened, preventing flexion or extension of the thumb and the other fingers. The finger may suddenly lock or snap straight.
3. Tendonitis of the extensor or ulnar flexor of the carpus is characterized by inflammation of the tendons that insert between the wrist and the hand and allow extension and flexion of the hand.
Tendonitis causes pain with movement, with pressure, and with stretching of the affected tendon; it can also cause swelling and limitation of movement.
The pain may spread along the forearm to the elbow.
The treatment of hand tendonitis includes rest, ice, non-steroidal anti-inflammatory drugs, and instrumental physiotherapy such as laser treatment and ultrasound.
Extensor tendon injuries
The extensor tendons lie just under the skin on the back of the hand and fingers. Due to their location, they can easily be injured even by a small cut. A stretch can cause a rupture where the tendons attach to the bone.
With this type of injury, you may have difficulty straightening one or more joints. The treatment necessary to recover the use of the tendon is the splint, but in severe cases serves the surgical suture.
Arthrosis of the metacarpophalangeal joints
The joints of the hand at the base of each finger are known as metacarpophalangeal joints. They are key to grasping objects, as in a precision pinch.
The most common pathology affecting the metacarpophalangeal joints is rheumatoid arthritis, while gout, psoriasis, and infections are less common.
Pain in the palm of the hand
Flexor tendon injuries
The deep cuts on the palmar side of the wrist, hand and fingers can damage the flexor tendons and possibly the nerves and blood vessels.
When a flexor tendon is completely torn, the finger remains straight because the muscle can no longer transmit force through the tendon to bend it.
Symptoms include pain, swelling, stiffness and loss of function. Treatment consists of rest in a rigid splint, but in the most severe cases surgery to suture the tendon is necessary.
What is Dupuytren's syndrome?
Dupuytren's disease is a thickening of the fibrous tissue called fascia that lies beneath the skin of the palm. It's a hereditary disorder.
Small lumps or fibrous bands form that can pull the fingers in toward the palm of the hand. Dupuytren's disease may be associated with smoking, vascular disease, epilepsy, and diabetes. Small lumps or nodules in the palm need not be removed unless they become very large or interfere with the functioning of the hand.
Surgical treatment may be recommended if there is a progressive curvature of the fingers of the hand.
The bands of fibrous tissue may reappear or may occur on other fingers.
Pain at the base of the thumb
Hand Scaphoid Fracture
The fracture of the scaphoid bone occurs most frequently during a fall onto an outstretched hand. In general there is severe pain at first, but it may decrease after a few days or weeks. Bruising from a scaphoid fracture is rare and swelling may be minimal. Because there is no deformity, many people with this injury think they have a wrist sprain, which causes a delay in diagnosis.
It is common for people with a fractured scaphoid to see a doctor only months or years later.
|
Many patients come into my Bronx office with complaints of blurry vision. This can often be due to a myriad of reasons including astigmatism, glaucoma, and dry eyes. But as a patient gets older the lens in the eye can be cloudier which is called cataracts.
Cataract is a condition where the lens becomes more opaque. The analogy we often use is instead of looking through a clear window, you are looking through one that is dirty. Often it is tough to see the small print.
Patients will often notice a gradual decrease in vision, especially with reading and watching television. Depending on the cataract, patients can also find that sunlight bothers them as well.
Cataracts can be monitored if they are not affecting a patient's vision, though with time most cataracts do progress and get denser. Wearing polarized sunglasses that protect against the sun's rays is one way to try to slow the progression. When the cataract is significant, surgery is often suggested. Modern surgery has advanced tremendously: it is a same-day procedure, and the majority of patients have excellent results. It is important to discuss with your doctor all the risks and benefits when talking about any eye issue. At South Bronx Eyes, Dr. Alevi will take you step by step through the procedure and your options.
David Alevi MD
|
Early Humans
What kinds of foods did the early humans gather? Fruits, grains, nuts, eggs, insects and fish
What did the early people use fire for? They used fire for protection, light, cooking and heat.
What type of weapons did the early humans use to hunt? They used stone, clubs, spears, and knives.
Why did early man begin to band together? Early man started to band together for safety
What was the earliest form of shelter? Early man lived in caves and trees.
Why did early man live in trees? To protect themselves from the animals below.
Who was Louis Leakey? A famous anthropologist and archaeologist who studied the origins of humans.
What were the lasting contributions of Old Stone Age man? Fire and socialization.
What is gathering? Collecting foods found naturally in an environment.
What is the earliest form of hunting? They began to use rocks and clubs to kill animals for meat.
What is the earliest form of record keeping? Cave painting.
Created by: mrssimona
|
Negative Communications: The Process of Absorption and Transmission
TASA ID: 2156
To truly comprehend the impact of negative communications on the recipient(s), one must understand how such negative input is received and processed by those exposed to it directly, as well as how such information can be transmitted unintentionally and the harmful effects that follow.
The way individuals process negative information when it is received has to do not only with their own frame of reference, base of knowledge and predispositions, but also with basic human nature. For instance:
• People will generally believe something to be true if it seems plausible and there is no counterbalancing input that refutes or challenges that information. Moreover, even if a dissenting view is later received, a person who has accepted the negative view may retain an element of doubt, and the new information will be filtered through the initial negative input. There may not be a "clean slate" for reconsideration.
• In a situation where someone is making a choice, if there is equally positive and negative input on a particular option, many will take the conservative approach (i.e., not take a chance) and go with another choice that has no hint of negativity attached to it.
Negative information can be disseminated in two ways:
(1) the sender directs it to specific people or outlets; and
(2) the sender uses a “shotgun” approach and disseminates it for mass distribution and all to see.
In the first case, the sender may wish to confine the information, but that is not always possible, as the recipient(s) may choose to pass it on with or without permission. When that happens, not only can the distribution no longer be controlled, but the content and context of the information can change due to the personal interpretation of those who transmit it. Add to this that distribution channels can vary from a very select one-to-one passing of the information to placement on the internet for worldwide visibility.
Unfortunately, there is no “magic bullet” to combat, minimize or eliminate negative communications (think “Humpty Dumpty” and the genie out of the bottle) but it is important to be aware and informed of what transpires in the aftermath.
|
Reducing Blue Light in the Office
In your search for the best office lights, you have probably come across warnings against too much blue light. In moderation, blue light wavelengths are healthy. They exist within natural sunlight, keeping us alert and awake during the day.
Too much blue light in the office, and employees will struggle to relax and sleep well when they go home. Artificial light sources like light bulbs and computer screens typically have large spikes of blue light, which is much different from the color spectrum found in healthy natural sunlight.
To keep employees happy and healthy, look into reducing blue light. Sunlight-mimicking LEDs, blue light glasses, and healthy vision exercises can all help your whole office stay safe from the harmful effects of blue light.
Harmful Effects of Blue Light
Excess blue light is all too common in modern offices. After all, so much of our work is completed digitally, from video conferencing to digital paperwork. But all that exposure to blue light can have negative effects on our health if we don’t take protective measures.
Digital Eye Strain
If you stare at electronic screens for a long time at work, you have probably experienced digital eye strain. You may feel fatigue behind your eyes, along with soreness, itchiness, light sensitivity, and even blurred vision. This is a sign that you are probably receiving too much blue light exposure.
Restless Sleeping
Blue light can also cause disruptions in natural sleep cycles. If you use a laptop, phone, or tablet after sundown, troubled sleep is especially likely.
Why? The body is wired to receive some level of blue light from natural sunlight during the day. In fact, blue light during the day keeps us alert and awake, which is a good thing! But since blue light keeps us alert, too much blue light at night can signal to our bodies that it’s time to wake up and stay awake—not ideal when you’re trying to get a good night’s sleep.
Increased Risk of Depression
Blue light, in moderation, can actually be good for our moods. But when it starts to keep us up at night and disrupt our body’s natural systems, the disruption can start to cause mental health problems.
Blue light can interfere with many of the body’s natural systems. This includes sleep, neurotransmission, hormone secretion, and brain plasticity. Taking extra precautions to reduce blue light exposure, especially in the evening, can keep your employees feeling happy and reduce their risk of experiencing depression.
Change to Low Blue Lights
Natural sunlight mimicking bulbs are a great way to reduce the harmful effects of blue light. These low blue light lamps give off light that is similar to the sun, our healthiest source of light.
SOListic products by TCP feature an advanced LED chip that mimics the characteristics of natural sunlight. This means your light will cover the full color spectrum evenly, unlike typical LEDs that produce a spike of blue light.
By bringing sunlight indoors, you give your employees the closest thing to natural sunlight. This encourages health and well-being by reducing blue light exposure and following closely with natural human circadian rhythm.
How to Minimize Blue Light Exposure
In addition to low-blue-light bulbs, you can tailor your employees' work environments to support their overall health with a few simple changes.
Blue Light Computer Glasses
Blue light-blocking glasses are common in workplaces where employees spend much of their time on computers or looking at screens. They are even available with prescription lenses! Consider providing a stipend for these tinted glasses to help your employees minimize blue light to stay safe, healthy, and productive.
20/20/20 Rule
For overall eye health, remember the 20/20/20 rule. Every twenty minutes, look at something twenty feet away for twenty seconds or more. This helps counteract the effects of staring at a screen up close for long hours—especially when that screen is emitting lots of blue light.
The 20/20/20 rule allows your eyes to rest and recover from strain. You can also add in a few forceful blinks to encourage your eyes to rehydrate, which is especially helpful if you struggle with dry eyes.
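As a small illustration (not from the article; the interval and workday times are assumptions), the reminder cadence can be computed ahead of time:

```python
# Illustrative sketch: compute when 20/20/20 eye-break reminders should
# fire during a block of screen work. Interval and hours are assumptions.
from datetime import datetime, timedelta

def reminder_times(start: datetime, end: datetime, interval_min: int = 20) -> list:
    """Return every time between start and end (inclusive) at which a
    20-second look-away break is due, spaced interval_min apart."""
    times = []
    t = start + timedelta(minutes=interval_min)
    while t <= end:
        times.append(t)
        t += timedelta(minutes=interval_min)
    return times

day = datetime(2023, 1, 2)
breaks = reminder_times(day.replace(hour=9), day.replace(hour=12))
print(len(breaks))  # prints 9 (one break every 20 minutes over 3 hours)
```

A schedule like this could drive a desktop notifier or calendar reminders, making the habit automatic rather than something employees must remember.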
Lutein Vitamins
Lutein is a nutrient (a carotenoid) that helps our eyes filter high-energy blue light wavelengths. Humans get lutein from the diet, including dark leafy greens and eggs. To add to your lutein intake for healthy eyes, you can look into lutein supplements for a boost.
SOListic Lighting by TCP
For the most natural, sun-like LED bulb on the market, choose SOListic bulbs from TCP. Our innovative LED technology lets you enjoy the energy-saving benefits of eco-friendly LEDs, without sacrificing eye health or light quality.
The light from SOListic bulbs spans the whole color spectrum, avoiding the spike in blue light that can cause so many health and wellness issues.
Keep your office healthy with SOListic.
|
A surgical microscope camera is a type of medical device used to obtain high-resolution live images. These cameras simplify the imaging process from capture to processing. A basic surgical microscope is an optical instrument (electric, mechanical, or both) consisting of a combination of lenses. It provides the surgeon with a high-quality, stereoscopic, magnified image of small structures within the surgical area. Surgical microscope cameras offer several advantages, such as increased specificity and sensitivity, natural visualization, and removal of autofluorescence. A surgical microscope camera offers spectral and temporal multiplexing, lacks moving parts, and provides modular add-on illumination and camera technology. Important developments in molecular imaging have resulted in fluorescence contrast agents that can highlight the pathology, function, and anatomy of a tissue to help doctors in interventional imaging. A surgical microscope camera provides optical excellence for both the assistant and the surgeon. Safety features in its design provide protection against exposure to harmful thermal radiation and UV rays.
Rising geriatric population and increasing incidence of chronic disorders among the people increase the demand for surgical microscope cameras. Increasing inclination toward minimally invasive surgeries boosts the demand for surgical microscope cameras. Technological advancements and the rise in research and development investments drive the global surgical microscope cameras market. Rise in the number of surgical procedures performed and technological advancements in surgical devices are factors augmenting the surgical microscope cameras market. However, imposition of excise tax on medical devices restrains the global surgical microscope cameras market to a certain extent.
Increasing prevalence of diseases leads to increase in the demand for surgeries and diagnostics tests, which consequently boosts the demand for surgical microscope cameras. Leading manufacturers of surgical microscope cameras across the globe are increasing their investment in new product development so as to sustain their market share. On the other hand, new players are launching advanced products to achieve market penetration. These factors fuel the global surgical microscope cameras market. Mergers and acquisitions are major growth strategies adopted by players operating in the market.
The global surgical microscope cameras market can be segmented based on product, specialty, end-user, and region. Based on product, the market can be divided into television style surgical microscope cameras, computer surgical microscope cameras, and commercial surgical microscope cameras. In terms of specialty, the global surgical microscope cameras market can be classified into ophthalmology, ENT, neurosurgery, and others. Based on end-user, the market can be segmented into hospital laboratories, diagnostic laboratories, and physician’s offices.
Geographically, the market can be divided into Asia Pacific, Latin America, Europe, North America, and Middle East & Africa. North America held a leading market share in 2016, owing to increase in technological advancements and high adoption of innovative and latest surgical microscope cameras. Europe held the second-largest market share in 2016, due to increased awareness about and high prevalence of diseases in the region. The market in Asia Pacific is anticipated to expand at a significant pace during the forecast period, owing to increase in research and development activities in the region. The demand for surgical microscope cameras has been increasing for the research purpose. Apart from medical applications, the demand for surgical microscope cameras is increasing in research and educational fields. This is expected to propel the global surgical microscope cameras market during the forecast period.
Key players operating in the global surgical microscope cameras market include Olympus Corporation, Danaher Corporation, Nikon Corporation, Stryker Corporation, SPOT Imaging Solutions, and Allied Vision GmbH.
|
Can Wood Stoves Impact The Environment?
August 4, 2021
We're an affiliate
Can Wood Burning Stoves Impact the Environment?
Answer: yes. Like most human activities, they have an environmental impact. Compared to many other heating methods, however, wood burning stoves have a far lower impact.
How big an impact wood burning stoves have on the environment is fairly easy to figure out. In this article we will compare the different heating methods commonly used in the United States and abroad to determine which have the least environmental impact. We will also look at why different heating methods are used across the world. Finally, we will review changes that need to be made to progress toward a greener and cleaner future.
How do wood burning stoves impact the environment?
The biggest misconception is that burning wood creates net new pollution in the form of carbon dioxide. This is not quite true. Yes, carbon dioxide is released when wood is burned, but CO2 is also released when a dead tree decomposes. It does not matter in which manner the carbon dioxide is released; it is roughly the same amount either way.
The largest environmental impact of wood burning stoves is the harvesting of the trees themselves. Deforestation is already a big problem across the world. Day after day, more trees are cut to support human survival. Because trees produce so many goods humans use, they get cut down all the more rapidly.
Follow this link for a lot more information on deforestation: Deforestation facts and information (
Ways we can make wood burning stoves more environmentally friendly
The best way to make wood burning stoves more environmentally friendly is to burn things that would otherwise have ended up in the trash can. Here are a few items you can burn in your wood stove safely to keep you warm:
• Old news papers
• Old Books
• Old Magazines
• Food waste
Can you Burn Pressure Treated Wood in a Fire Pit? - ZeusFire (
You get the point. I hope this list sparked some ideas about things you can burn for heat. Just confirm they are safe to burn! Here is a good resource for you. Best Wood-Burning Practices | US EPA
The biggest takeaway is to make sure what you burn is 100% natural.
What parts of the world are wood burning stoves mostly used?
Well... If you live in the southern parts of America, you may have never seen a wood stove before. If you're from the northern parts of America, you are probably shocked to think someone has never seen a wood stove before. In the south, when you see a wood stove or even a fireplace in a home, it is merely a decoration.
The history of wood burning stoves
The first wood stove was patented in 1557 but did not become popular until the one and only Benjamin Franklin got his hands on one. Franklin wanted to improve on the design of the fireplace to make it more efficient. He named his design the Franklin stove, and it became common around the era of the Industrial Revolution.
Can wood burning stoves' impact on the environment be improved?
Yes. The biggest improvement on the wood burning stove is the pellet stove. You may have seen something called a Traeger grill; this uses the same concept and the same kind of fuel, just typically flavored for that nice cherry taste on a brisket. Pellet stoves burn a compacted organic material made from trees, biomass or even food waste, which is dried and then compacted firmly into small pellets.
Since a pellet is compacted into a small form, it can reach a higher temperature than a regular old log. The fuel source from a pellet is much more controlled, so we can manage the burn rate and temperature much more easily.
Are wood burning stoves allowed in every home?
Mostly yes. Wood stoves, fireplaces and pellet stoves typically fall under the same local ordinances and codes as each other. You will of course have to speak with local agencies to find out more, but the odds are in your favor. The biggest reason these home and cabin heating devices are regulated is self-explanatory: the safety of your family and your neighbors. Do some digging. The EPA's website is again a great source and a place to start: Ordinances and Regulations for Wood-Burning Appliances | US EPA
When would a wood burning stove not work so well?
The main reason a wood stove wouldn't work for your home is in large multi-story structures. Unless you plan to have multiple wood stoves around the home, do not plan on it being a reliable, consistent source of heat. That wood be... I mean, that would be a lot of wood.
Wood Stove vs Fireplace - In Depth Guide - ZeusFire (
What is the best wood to use for a wood burning stove?
For a wood stove, the best fuel source will be whatever grows in and around your property. Past that, the logs that burn hottest and longest are maple, oak, ash, birch, and most fruit trees.
Can wood burning stoves be dangerous?
Wood stoves can be dangerous, so it is important to make sure all safety measures are taken. Here is a list of safety measures to take with wood stoves.
• Make sure the wood stove is installed to manufacturer recommendations and local requirements, so no homes are burnt in the pursuit of heat.
• Make sure carbon monoxide sensors are in and around your home/house.
• Have an action plan of who to call and what to do in case of a fire.
• Inspect the wood stove for any defects prior to lighting
• Have a gate around the stove to protect young children or pets.
Do wood burning stoves increase the risk of a home fire?
Wood stoves do not increase the risk of a home fire any more than an oven, fireplace, or central heating system does, provided the stove is high quality and not out of date. Among many other issues, an old wood stove can cause house fires or even carbon monoxide poisoning from leaking smoke.
Are there ways to limit the amount of indoor pollution wood burning stoves make?
If you have issues with smoke pollution in your home, it is important to confirm the wood you are burning is dry. Wet logs will create much more smoke than dry logs. The example I tend to give people is from grilling on a propane BBQ pit: I buy small wood chips from the hardware store, and to increase the flavor of the meat I am smoking, I soak the chips in water. This increases the smoke output by roughly 150%.
If everyone switched to wood burning stoves, would it have an impact on our forests?
Yes, our forests would be impacted by an increase in wood burning stove users.
Environmental impacts from other heating sources will go down as well. As far as the impact to our planet I wouldn’t recommend everyone switching to wood stoves or even pellet stoves. Trees are a far too important renewable resource. It is important the world continues to innovate with truly renewable energy to supply heat to our homes.
|
The Japanese Church ready to celebrate Takayama Ukon, "samurai of Christ"
The Bishops' Conference of Japan sent to Rome the documents needed to open the cause of beatification of the feudal lord who challenged the empire by keeping his faith. His example and teachings paved the way for the evangelisation of Japan. "He led a life appropriate to a saint."
Osaka (AsiaNews) - The Japanese Church has finished preparing the application for the beatification of Takayama Ukon, a feudal lord or daimyo who, after his conversion, played a pioneering role in the spread of Christianity in Japan in the 16th century.
The Catholic Bishops' Conference of Japan presented a 400-page application to the Congregation for the Causes of Saints, with all the relevant information about the case. The bishops hope to celebrate the new Blessed in 2015, the 400th anniversary of his death.
Takayama Ukon was born in 1552 in what is now Osaka Prefecture to Takayama Tomoteru, lord of Sawa Castle. When he turned 12, his father converted taking the name of Darius whilst he was baptised with the name of Justo.
Both father and son were daimyo, feudal lords appointed by the imperial court, entitled to raise a private army and hire samurai.
Before his conversion, Justo practiced bushido, the "way of the warrior," a code of conduct for the Japanese warriors.
Towards the end of the 16th century, in the 1580s, Japan was ruled by Toyotomi Hideyoshi, known as the country's second "great unifier".
Through their political activity, the Takayamas came to dominate the Takatsuki region. During the rule of the two daimyo, many local residents converted to Christianity.
In 1587, Hideyoshi was convinced by some of his advisers to ban the 'western religion'. Whilst many feudal lords chose to abjure their Catholic faith, Justo and his father chose instead to give up land and honours to maintain their faith.
During subsequent years, Justo Takayama lived under the protection of aristocratic friends. However, when Christianity was definitively banned in 1614, the former daimyo chose the path of exile and led a group of 300 Christians to the Philippines, where they arrived on 21 December and were welcomed by Spanish Jesuits and the local Catholics.
Here some exiles proposed to seek Spanish support to overthrow the Japanese government, but Justo refused.
On 4 February 1615, 40 days after his arrival in the Philippines, he died and was buried with full military honours in a Catholic ceremony. Today a statue of him dominates Manila's Plaza Dilao (pictured).
The current application is not the first time the Japanese Church has tried to get him beatified. The first attempt was made in the 17th century by the clergy of Manila. Unfortunately, due to the isolationist policy of the Tokugawa shogunate, which prevented foreigners from entering Japan, it was impossible to get the necessary documents for a canonical investigation. A second attempt was made in 1965, but failed because of several formal errors.
"The application was not accepted because no one knew how to put it together nor how to best publicise his case," said Fr Hiroaki Kawamura, head of the Diocesan Commission that sent the papers to Rome. Learning their lessons, this time church officials have been much better prepared.
Last October Mgr Leo Jun Ikenaga, archbishop of Osaka and president of the Bishops' Conference of Japan, sent a letter to Pope Benedict XVI asking for approval of the cause. The Vatican answered, saying that it would take the cause "into special consideration."
It would do so because the daimyo would be the first individual Japanese to receive such a high honour. There are 42 saints with some connection to Japan as well as 393 blessed. All of them were martyred together during the Edo Period (1603-1867) and are celebrated as groups.
"Takayama was never misled by those around him. He persistently lived a life following his own conscience," Fr Kawamura said. "He led a life appropriate to a saint and continues to encourage many people even today."
|
Machine Translation [Part 2]
To date, machine translation—a major goal of natural-language processing—has met with limited success. A November 6, 2007, example illustrates the hazards of uncritical reliance on machine translation.
Machine translation has been brought to a large public by tools available on the Internet, such as Yahoo!'s Babel Fish, Babylon, and StarDict. These tools produce a "gisting translation" — a rough translation that, with luck, "gives the gist" of the source text.
Claude Piron
Relying exclusively on unedited machine translation ignores the fact that communication in human language is context-embedded and that it takes a person to comprehend the context of the original text with a reasonable degree of probability. It is certainly true that even purely human-generated translations are prone to error. Therefore, to ensure that a machine-generated translation will be useful to a human being and that publishable-quality translation is achieved, such translations must be reviewed and edited by a human. The late Claude Piron wrote that machine translation, at its best, automates the easier part of a translator's job; the harder and more time-consuming part usually involves doing extensive research to resolve ambiguities in the source text, which the grammatical and lexical exigencies of the target language require to be resolved. Such research is a necessary prelude to the pre-editing necessary in order to provide input for machine-translation software such that the output will not be meaningless.
|
Foot Tournament
Originally, the tournament was a competition between mounted warriors. Gradually these changed from wild mock wars that destroyed entire villages into a regulated form of sport, in which it was forbidden to cause any actual bodily harm to the opponent, even though weapon use was practiced. Finally the joust between two men became the main event.
During tournament events, there were also official duels between men of the knightly class. According to ancient Germanic tradition, such mortal combat was fought on foot. Official licence to fight a duel was not given lightly, and the combatants sometimes had to await permission to fight from their liege lords for years. Only the gravest offences and slander were reason enough to challenge someone to a duel. God himself was called upon to judge the result of the combat, as it was widely believed that only the just cause could win.
However, even the heavy cavalry often had to fight on foot, for example during sieges. Sometimes, to help pass the time and ease the tensions of a long siege, the besieged and the besiegers would hold a tourney against each other. These tournaments had to be arranged at the gate of the fortress in such a way that the sports combat would not turn into a charge through the gate into the fortress, or a sortie out of it. To prevent this, a sturdy barricade or a fence was erected in front of the gate. The contestants would then fight each other across the fence: the besieged within the fence and the besiegers on the outside.
During peacetime, tournaments were sometimes arranged so that they represented sieges, with the erection of a wooden mock fortress. In front of this set-piece fortress gate, a fence was placed between the contestants, just as at an actual siege. Later, even though no mock fortresses were erected, the fence between the combatants in the sports fights of the foot tournament remained, more to prevent accidents than to prevent any sorties, in much the same way as the tilt barrier between mounted jousters. The fence also made all the difference between friendly sports combat and a personal duel to the death.
The men-at-arms who belong to the Armour Smiths Guild now call upon all knights and men-at-arms of ready hand to take part in the foot tournament in front of the very gates of the set representing the “Fortress of Love” and, according to good tradition, to fight across a fence. The Challengers defend the gate and the Challenged may take part if they have ‘four points’ of armour (head, torso, arms and hands). Shins do not need to be armoured, since the fight is on foot and strikes may only be given over the fence. The Challenged may choose as weapons either wooden clubs, provided by the organizers, or blunt steel weapons of equal measure. The marshals and the varlets must approve the weapons beforehand. If a contestant does not have an approved weapon and carries only his sharp weapons of war, the organizers and challengers of the Armour Smiths Guild shall provide him with safe weapons.
The prize of victory belongs to the one who, out of three strikes, gives the best strike. The best strike is decided by the noble ladies who act as the referees of the tourney. The ladies are advised in their duty by a “Knight of Honour” or by one of the marshals. The best strike is the most skilled one, not the most powerful, nor the fastest, as the use of excess power or terrible haste are not the hallmarks of a skilled warrior, but those of a coward terribly afraid of losing. The one who breaks the rules of the tourney shall not win the prize. The one who by taking an unnecessary risk, in hate, or otherwise deliberately harms their opponent shall not receive the prize.
The Role of Women in Tournament
Whether they were noble ladies, fair maidens, or the wives of knights and men-at-arms, women had a significant role in the tourney. They acted as the referees and they handed over the prizes to the victors of the combat. They were there to bear witness to the brave deeds of the men of their families and noble houses. A contestant who dishonoured a lady would face a most severe punishment, such as being disgraced by being forced to sit on the fence in full armour to be ridiculed by all, or even being beaten by all the other contestants until a lady begged for mercy for the beaten man.
The Armour Smiths Guild calls out for all the noble ladies, fair maidens, and wives and widows of good standing and reputation to join in as referees for the foot tournament and to escort one of the contestants from the tournament encampment to the field of glory.
Equipment requirements
Equipment of the participants should fit the time period of the event as well as guarantee the safety of the participants. The allowed time frame for the equipment is A.D. 1350–1410.
• Helmet with visor (for example a bascinet and maille aventail or an early armet)
• Neck protector (separate maille standard or bevor)
• Mail shirt with cuirass or coat of plates/brigandine
• Leg armour
• Arm harness
• Also recommended
• Heraldic surcoat
• Groin protector
• Wooden shield
Main rules of engagement in tournament combat
1. Do not hurt your opponent.
2. Do not take risks that might lead to hurting your opponent, or anybody at all.
3. Do not strike, or stab at the face, neck or groin of your opponent.
BNF Français 20090 Bible Historiale de Jean de Berry, 1380-1390
|
Using COVID-19 and Connecticut’s Primary Elections to Teach Political Science Concepts
Connecticut’s 2020 primary election is a great example of electoral politics to include in a variety of classroom lessons. For public policy courses, it showcases how focusing events, defined as “an event that is sudden; relatively uncommon; can be reasonably defined as harmful or…potentially harmful…and that is known to policy makers and the public simultaneously,” influence policy. The primary is set for August. Connecticut could cancel its primary, as New York did, since Biden is the presumptive nominee, but this option could potentially damage the integrity of the electoral system. The pandemic has ‘focused’ Connecticut’s primary election conversations on the need for election reform. Both the Secretary of the State and the governor are pushing election reform measures onto the agenda. There are talks of moving to a pseudo all-mail election: all eligible Connecticuters would receive an absentee ballot application and an absentee ballot, with postage paid for by the government. Other ideas include an online absentee ballot application, new election technologies, early voting, and no-excuse absentee balloting. This unprecedented election cycle is an excellent classroom example to illustrate the complicated machinations of electoral politics in Connecticut.
For election administration courses, Connecticut is a great example of the complexities that surround state election laws.
Connecticut has fairly antiquated election laws. It is one of nine states that has not adopted early voting and still requires a justified excuse to vote absentee. Nutmeggers must meet one of six excuses to vote absentee. The statute that governs absentee ballot applications says “his or her illness.” While seemingly mundane, it implies that if a voter is not actually sick, they do not qualify for an absentee ballot. Therefore, fear of contracting the virus that comes with in-person Election Day voting does not qualify. If a voter still opts to vote absentee, they could be subject to civil and criminal penalties.
However, the Secretary of the State has the authority to interpret the statute, and she has asked the legislature and governor to fix the statute or, at the very least, offer guidance on its interpretation. The state legislature could amend current language in the state statute. The public health emergency declared by the governor grants him permission to modify any state statute during said crisis. But, there could be hesitation to issuing an executive order to amend election laws because it leaves out the state legislature, and any long lasting election reform would need to be done by the legislature.
Fairfield University professors and students at Hartford State Capitol
This example ties into courses like public administration that examine the politics surrounding policy making. If the state fails to adopt any election reform measures, elected officials could receive backlash similar to that faced by Wisconsin’s elected officials for holding an in-person primary. Moreover, Wisconsin saw COVID-19 cases linked to voters who voted in its primary. It is fair to say that no elected official wants to be in a similar position. Before the pandemic, Connecticut legislators could, and did, vote against various election reform measures. The pandemic puts state legislators in a Catch-22. In early May, Connecticut’s Secretary of the State decided to mail every voter an absentee ballot application for the primary election, noting the complications that surround the language of “his or her illness.” Since the Secretary of the State does have the authority to interpret the statute’s language, she could accept absentee ballots from voters who preferred to vote absentee out of fear of getting sick. But she is asking the legislature to act to adopt longer-term solutions. Couple the power struggle over this issue with polls that show overwhelming support for election reform ideas like those Connecticut is considering, and it creates the ‘perfect storm’ for discussions on the role of politics in policy making.
Those teaching voting, campaigns, and election courses can use Connecticut’s 2020 primary election to discuss the calculus of voting theory, which states that reducing voting costs leads to higher participation. Traditional Election Day voting places the voting costs on the voter, which can lead to low turnout. For example, Connecticut conducts primary elections in person on Election Day. In the last presidential primary election (2016), turnout in Connecticut was 21%. In the same primary election, Oregon, which conducts all-mail elections, had 36% turnout. This is not surprising, since there the government, not the voter, incurs the majority of voting costs. Shifting the voting costs from Connecticut voters to the government, which will likely happen for the 2020 primary election, means voter turnout will likely rise. It is much easier to cast a ballot from one’s home than a polling place.
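The calculus of voting invoked above is commonly written R = pB − C + D (the Riker–Ordeshook formulation): a citizen turns out when the probability-weighted benefit pB plus the civic-duty payoff D outweighs the cost of voting C. A minimal sketch, with purely illustrative numbers rather than estimates for Connecticut:

```python
def turns_out(p, B, C, D):
    """Riker-Ordeshook calculus of voting: a citizen votes when the
    expected reward R = p*B - C + D is positive.
    p: probability the vote is decisive, B: benefit if the preferred
    candidate wins, C: cost of voting, D: civic-duty payoff."""
    return p * B - C + D > 0

# Lowering C (e.g., a postage-paid mail ballot) flips marginal voters.
print(turns_out(p=0.001, B=100, C=0.50, D=0.30))  # in-person: False
print(turns_out(p=0.001, B=100, C=0.05, D=0.30))  # by mail:   True
```

The model makes the blog's point concrete: nothing about the voter changes between the two calls except the cost term.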
Fairfield University students at the election night results show.
A lesson learned from Wisconsin’s 2020 primary is that voters will go to the polls; 34% of Wisconsinites voted. An analysis conducted by Hides and Stewart shows that, had there been no pandemic, Wisconsin’s primary turnout would have been roughly 26%. If Election Day were a holiday, keeping more voters out of the office and moving them to the polling station much as the pandemic does, would turnout increase in other elections, such as local elections or the general election? Using the calculus of voting theory, the answer is yes. This is a great question not only for students of political behavior courses, but also for research methods and statistics. The modeling aspect of such a question would be timely and likely interesting to students.
How does Connecticut conduct a primary election during a pandemic? That story is yet to be written. The hard truth is that almost three in four American voters do not want to risk their health to vote in person. Well over half of American voters are open to other ways to vote, such as all-mail elections. In order to hold the Connecticut primary election, state officials are likely going to adopt, even temporarily, some type of election reform. At minimum, Connecticuters will hopefully see the legislature, governor, and Secretary of the State work together to provide much-needed clarity on the term ‘his or her illness.’ Should the Nutmeg State adopt any of the election reform measures on the table right now due to the pandemic, it could be the catalyst for election reform and change the way Connecticuters vote in future elections.
Gayle Alberda, PhD is an Assistant Professor of Politics and Public Administration at Fairfield University. Her research focuses on election administration, participation, and civic engagement. Currently, she is working on her forthcoming book, Early Voting’s Impact on U.S. Local Elections. Before her career in academia, Alberda worked in the political arena. Her work experience in the political field includes lobbying, party, and campaign experience. She lobbied in Washington D.C. as well as at the state and local levels. She also held various statewide and regional positions and worked on numerous campaigns, including state house, city council, governor, president, and U.S. Senate, in multiple states. Alberda often serves as a political analyst for the media. She comments on local, state, and national politics and elections for local, regional, national, and international print and broadcast media. She has appeared in the Washington Post, NY Times, Good Morning America, Al Jazeera America, Hartford Courant, CT Post, WNPR, WICC, Associated Press Radio, NBC CT’s Face the Facts, WTHN’s Capitol Report, Channel 12’s Power and Politics, and many others.
|
How do I know I have anxiety?
Racing heart, panic attacks, or a shaky voice are all physical signs of anxiety. But there are other signs of anxiety that occur commonly yet are often missed. Find out what our doctors say on how to know if you have anxiety.
Replaying past scenarios in your mind and thinking about how to “correct” them
If you have gone through an unpleasant experience, you may ruminate about it at some point, and to most of us that is entirely normal.
But when you have anxiety, overthinking or regretting how you handled or responded to past situations makes it worse.
Replaying scenarios over and over again will only cause more anxiety and completely overwhelm you.
Being over-concerned about future events and trying to think of surprises that may jeopardise your plans
This is called anticipatory anxiety, which revolves around fears and worries that you think can happen in the future. Whether at work or in your relationship, focusing a lot more than required on things you can’t predict is a sign of being anxious.
These looping feelings can take a toll on your mental health because you’re only predicting “what could happen,” but it doesn’t necessarily mean that your predictions will come true.
Using social isolation as a coping mechanism
Social withdrawal can be anything like distancing yourself from friends and family, disconnecting from social apps, not responding to calls or messages, and even finding it way too hard to reply to emails.
The feeling of withdrawal is itself a symptom of anxiety. Some people who deal with stress prefer to be left alone. But although you may give yourself some private time, isolation will only add to your worries when you are dealing with anxiety. Dealing with any sort of mental health issue goes best around people you trust.
Being able to recognise your withdrawal as a sign of anxiety is the first step. Thinking of getting help is where your wellness begins.
Second-guessing all the time
When second-guessing becomes habitual, it can disrupt your peace and drive you to over-analyze practically everything.
Constantly thinking about what is “right” and what is “wrong,” or believing there is a “perfect solution” to a problem, can demonstrate a fear. And when this becomes repetitive, even while ordering food at a restaurant, it could be a sign of anxiety.
Repetitive conflicts
Whether at work or in your relationship, when conflicts often lead to disagreements and arguments, it can be a sign of anxiety.
Conflicts can trigger more anxiety, and you may find it challenging to resolve issues.
Get involved with your mental wellness
There are myriad other things that you might do when suffering from anxiety; most often these are only the tip of the iceberg of how anxiety can affect you.
At Duff Street Medical Clinic, our doctors work alongside our psychologist to provide the best care plans to help patients with a wide variety of mental health issues including anxiety.
If you’re anxious about your mental health and wellness, make an appointment to meet with one of our doctors for treatment.
They can develop a Mental Health Plan and refer you to a specialist to help overcome your feelings and better manage your symptoms.
|
Global Warming a Global Threat
“Global warming, a global threat.” We have heard a lot about global warming in the news, in debates, in general discussion and in international discussions too. But are we really serious about it? We just hear it and leave it; we never try to understand its seriousness. It has simply turned into a debate topic for us.
The temperature of the Earth is increasing drastically. Even Antarctica, whose normal summer temperature remains around +10°C, has experienced a record 18.3°C, which is an alarming sign. Even Canada has faced temperatures of about 50°C, the highest in its recorded history.
One cause of global warming is the CFC gases released by the air conditioners, refrigerators and cars that we use. The clearing of forests, or deforestation, is another cause, as the carbon dioxide can no longer be absorbed by trees. Global warming leads to the melting of glaciers, which ultimately raises sea levels, a main cause of tsunamis and other problems.
We always hear that actions speak louder than words, and this is a real problem to take action upon. Many steps have been taken internationally for this cause, but we should contribute from our side too.
We can contribute by planting a tree or any small plant, or by reducing our use of ACs, refrigerators and cars. Just think about it: it might be a very small step from your side, but it will make a great difference to this Earth. Just take the first step forward; others will be inspired by you, and this will ultimately result in a reduction of the global temperature.
|
Alfred Chaston Chapman
The brewing of beer is regarded by many as a more or less mechanical operation, yet there is much more to it. Great is its debt of gratitude to the labours of scientific men. The aim of this work is therefore to show the number of scientific investigations of the first order of importance, which have given rise to the brewing industry.
Alfred Chaston Chapman (1869-1932) was a British chemist whose work was especially focused on brewing and fermentation. In 1920, he was elected into the Royal Society. Throughout his career, he was sought after by many institutions, spending time working at the Universities of London and Leeds, and at the Royal Microscopical Society, amongst many others. He is most commonly remembered today for his book "Brewing".
Saga Egmont
|
When To Use The Suffix ER And OR?
Is Ness a suffix?
The suffix “-ness” means “state : condition : quality” and is used with an adjective to say something about the state, condition, or quality of being that adjective.
For example, redness means “the quality of being red.”
Is Dr A suffix?
Academic. Academic suffixes indicate the degree earned at a college or university. … In the case of doctorates, normally either the prefix (e.g. “Dr” or “Atty”) or the suffix (see examples above) is used, but not both.
What do the suffixes ER and OR mean?
When to use “-er,” “-or,” or “-ar” at the end of a word. The suffixes “-er,” “-or,” and “-ar” are all used to create nouns of agency (indicating “a person or thing that performs an action”) from verbs.
What are word endings called?
Description. A suffix (also called ending) is an affix that is placed after the stem of a word. Common examples are case endings, which indicate the grammatical case of nouns or adjectives, and verb endings, which form the conjugation of verbs.
What are the two meanings of the suffix er?
Definition for -er (7 of 13): a suffix regularly used in forming the comparative degree of adjectives: harder; smaller.
How does suffix er change meaning of word?
These suffixes change the meaning or grammatical function of a base word or root word. For example, by adding the suffixes -er and -est to the adjective fond, you create the comparative fonder and the superlative, fondest. … This verb can be turned into a noun by adding the suffix -er, and so read becomes reader.
What words end with suffix er?
13-letter words that end in -er: microcomputer, granddaughter, supercomputer, schoolteacher, whistleblower, intelligencer, accelerometer, quartermaster. More items…
Who can write ER before name?
What is the root and suffix?
What does ER mean before name?
Whats the difference between ER and OR?
The -or suffix is primarily found in words derived from Latin, whereas -er can be put on the end of just about any verb that involves an agent (a ‘doer’ of the ‘action’).
What words have the suffix or?
Pages in the category “English words suffixed with -or”: abductor, aberuncator, ablator, abnegator, abominator, abrogator, absquatulator, abstractor. More items…
What does the suffix -or mean?
A suffix forming nouns indicating state, condition, or activity: terror; error. Also the US spelling of -our.
What are the two types of suffixes?
There are two primary types of suffixes in English: a derivational suffix (such as the addition of -ly to an adjective to form an adverb) indicates what type of word it is; an inflectional suffix (such as the addition of -s to a noun to form a plural) tells something about the word’s grammatical behavior.
What is the rule for adding er to a word?
Words ending in e: drop the e before adding ‘er’ or ‘est’. Words ending in y: change the y into an i before adding ‘er’ or ‘est’. Words that end with one vowel and one consonant: double the final consonant before adding ‘er’ or ‘est’.
What type of suffix is er?
2: emergency room. -er: adjective suffix or adverb suffix. Definition of -er (Entry 4 of 5): used to form the comparative degree of adjectives and adverbs of one syllable (hotter, drier) and of some adjectives and adverbs of two or more syllables (completer, beautifuller).
What are ER and EST words called?
The comparative ending (suffix) for short, common adjectives is generally “-er”; the superlative suffix is generally “-est.” For most longer adjectives, the comparative is made by adding the word “more” (for example, more comfortable) and the superlative is made by adding the word “most” (for example, most comfortable) …
When the positive ends in E only R and ST are added?
When the positive form ends in -e, only -r and -st are added to form comparatives and superlatives respectively.
What are the most common suffixes?
What’s the full meaning of ER?
ER (iː ɑːʳ); word forms: plural ERs; countable noun. The ER is the part of a hospital where people who have severe injuries or sudden illnesses are taken for emergency treatment. ER is an abbreviation for ‘emergency room’.
What are the rules for adding suffixes?
|
user_caps(5) File Formats Manual user_caps(5)
user_caps - user-defined terminfo capabilities
tic -x, infocmp -x
The tables of capability names differ between implementations.
While ncurses' repertoire of predefined capabilities is closest to Solaris, Solaris's terminfo database has a few differences from the list published by X/Open Curses. For example, ncurses can be configured with tables which match the terminal databases for AIX, HP-UX or OSF/1, rather than the default Solaris-like configuration.
In SVr4 curses and ncurses, the terminal database is defined at compile-time using a text file which lists the different terminal capabilities.
In principle, the text-file can be extended, but doing this requires recompiling and reinstalling the library. The text-file used in ncurses for terminal capabilities includes details for various systems past the documented X/Open Curses features. For example, ncurses supports these capabilities in each configuration:
(meml) lock memory above cursor
(memu) unlock memory
(box1) box characters primary set
The memory lock/unlock capabilities were included because they were used in the X11R6 terminal description for xterm. The box1 capability is used in tic to help with terminal descriptions written for AIX.
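As a sketch of how such extensions fit together, a terminal description can add a user-defined capability and be compiled with "tic -x". The entry name "myterm" below is hypothetical; Smulx, the styled-underline capability recognized by some terminal emulators, serves as the example value:

```
# myterm.ti -- terminfo source with a user-defined (extended) capability
myterm|example terminal with an extended capability,
	use=xterm-256color,
	Smulx=\E[4:%p1%dm,
```

Compiling with "tic -x myterm.ti" stores the unrecognized Smulx name instead of discarding it; "infocmp -x myterm" then lists it among the extended capabilities.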
This is a feature recognized by the screen program as well.
The command “tput clear” does the same thing.
The capability type determines the values which ncurses sees:
Because there are several RGB encodings in use, applications which make assumptions about the number of bits per color are unlikely to work reliably. As a trivial case, for example, one could define RGB#1 to represent the standard eight ANSI colors, i.e., one bit per color.
Set this capability to a nonzero value to enable it.
ncurses sends a character sequence to the terminal to initialize mouse mode, and when the user clicks the mouse buttons or (in certain modes) moves the mouse, handles the characters sent back by the terminal to tell it what was done with the mouse.
The mouse protocol is enabled when the mask passed in the mousemask function is nonzero. By default, ncurses handles the responses for the X11 xterm mouse protocol. It also knows about the SGR 1006 xterm mouse protocol, but must be told to look for this specifically. It will not be able to guess which mode is used, because the responses are enough alike that only confusion would result.
The XM capability has a single parameter. If nonzero, the mouse protocol should be enabled. If zero, the mouse protocol should be disabled. ncurses inspects this capability if it is present, to see whether the 1006 protocol is used. If so, it expects the responses to use the SGR 1006 xterm mouse protocol.
The xterm mouse protocol is used by other terminal emulators. The terminal database uses building-blocks for the various xterm mouse protocols which can be used in customized terminal descriptions.
The terminal database building blocks for this mouse feature also have an experimental capability xm. The “xm” capability describes the mouse response. Currently there is no interpreter which would use this information to make the mouse support completely data-driven.
xm shows the format of the mouse responses. In this experimental capability, the parameters are
state, e.g., pressed or released
y-ordinate starting region
x-ordinate starting region
y-ordinate ending region
x-ordinate ending region
Here are examples from the terminal database for the most commonly used xterm mouse protocols:
xterm+x11mouse|X11 xterm mouse protocol,
xterm+sm+1006|xterm SGR-mouse,
Name Description
kDC special form of kdch1 (delete character)
kDN special form of kcud1 (cursor down)
kEND special form of kend (End)
kHOM special form of khome (Home)
kUP special form of kcuu1 (cursor-up)
These are the suffixes used to denote the modifiers:
Value Description
2 Shift
3 Alt
4 Shift + Alt
5 Control
6 Shift + Control
7 Alt + Control
8 Shift + Alt + Control
9 Meta
10 Meta + Shift
11 Meta + Alt
12 Meta + Alt + Shift
13 Meta + Ctrl
14 Meta + Ctrl + Shift
15 Meta + Ctrl + Alt
16 Meta + Ctrl + Alt + Shift
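The suffix values in the table above follow a simple pattern: the value is one more than a bitmask, with Shift = 1, Alt = 2, Control = 4, and Meta = 8. A short sketch decoding a suffix back into its modifier names:

```python
# Decode an xterm modifier suffix (as used in kUP3, kDC5, ...) into
# modifier names. The suffix is 1 plus a bitmask:
# Shift = 1, Alt = 2, Control = 4, Meta = 8.
MOD_BITS = [(1, "Shift"), (2, "Alt"), (4, "Control"), (8, "Meta")]

def decode_modifier(value):
    """Return the list of modifier names for a capability suffix (2..16)."""
    if not 2 <= value <= 16:
        raise ValueError("modifier suffix must be in 2..16")
    mask = value - 1
    return [name for bit, name in MOD_BITS if mask & bit]

if __name__ == "__main__":
    for v in (2, 5, 8, 16):
        print(v, "+".join(decode_modifier(v)))
```

For example, suffix 8 decodes to Shift + Alt + Control (mask 7), matching the table.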
tic(1M), infocmp(1M).
Thomas E. Dickey
beginning with ncurses 5.0 (1999)
|
Understanding Your Paycheck
Picture this: you started a new job, the pay rate is almost twice what you were making before, and you can't wait to treat yourself and celebrate. Then, when the direct deposit hits your account, it seems like half the money you thought you were getting is missing! HR and payroll give you a copy of your pay stub and tell you everything is correct: that is your NET amount, not your GROSS pay. All you see is a paper full of boxes and numbers. How do you make sense of all this?
Let's start untangling this mystery. Although some pay stubs might look slightly different, most of them will include at least three key sections: how much money you are being paid, how much you are paying in federal and state taxes, and other deductions taken from your pay.
Section 1: Gross vs. Net Pay
This is usually located at the top of your paystub, after your name and personal information. For pay rate, if you work by the hour, you will see the number of hours worked and how much you are being paid per hour. If working overtime, the hours and rate will appear in a separate line. The main figures to focus on are:
1) Gross pay: The amount you are paid before taxes or deductions are taken out
2) Net pay: The amount you actually receive after all taxes and deductions have been taken out
3) Year To Date (YTD): The total amount received since the beginning of the year. Depending on the pay stub, it might show YTD net and gross pay, as well as hours and other aggregate categories.
Section 2: Federal and State Taxes You Pay
This is an important section in your paystub. These are some of the lines you may see:
a) Federal withholdings (federal taxes), Medicare, and Social Security. These estimated contributions are mandatory; the Social Security and Medicare portion alone takes 7.65% of your gross income.
b) State and local taxes. These are your estimated payments toward your state income tax obligations.
Keep in mind that these tax deductions are not fixed; they are based on the answers you submitted on a form called the W-4 when you first started working. On this form, you report your dependents and related information, and your employer uses it to estimate how much to withhold from your paycheck for taxes.
c) It is also important to keep in mind that the withholdings or contributions taken from your paycheck for taxes may not represent your total tax debt. In other words, when you file your tax return, the IRS will consider various factors, such as your household size and eligibility for deductions and credits, to recalculate your tax liability and determine whether you will be getting a refund or whether you underpaid your taxes and will have to pay the IRS and the state back.
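To see how the pieces of a pay stub fit together, here is a simplified sketch. The 7.65% Social Security + Medicare rate is the real mandatory combined rate; the federal and state rates below are made-up placeholders, since your actual withholding depends on your W-4 and local tax rules:

```python
def paycheck_summary(gross, fed_rate=0.12, state_rate=0.05):
    """Sketch of gross-to-net pay stub math with illustrative rates.

    fed_rate and state_rate are hypothetical placeholders; only the
    7.65% FICA rate (Social Security 6.2% + Medicare 1.45%) is real.
    """
    fica = round(gross * 0.0765, 2)      # mandatory Social Security + Medicare
    federal = round(gross * fed_rate, 2) # estimated federal withholding
    state = round(gross * state_rate, 2) # estimated state/local withholding
    net = round(gross - fica - federal - state, 2)
    return {"gross": gross, "fica": fica, "federal": federal,
            "state": state, "net": net}

print(paycheck_summary(2000.00))
```

On a $2,000 gross check, this sketch would show $153.00 of FICA withheld before any federal or state tax, which is why the net amount lands well below the gross.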
Section 3: Other Deductions
Outside of taxes, you also pay other deductions, most of them voluntary (though some might be mandatory). The voluntary ones typically include payments for benefits you choose, like health insurance, retirement savings, and transit. The amounts for these withholdings vary significantly, depending on the costs of the plans your employer offers and which options you chose. Health insurance premiums can be especially costly. While not as common, you might also notice certain mandatory deductions which can reduce your payment significantly, for example child support payments, or consumer debts (like credit cards and personal loans) if you were sued in court and had a judgment entered against you.
Keep in mind that you should always get a notice from your payroll or HR department for these types of debts, and you should be given a chance to challenge or dispute them, if they are not correct.
Paychecks, although at times confusing, are key to knowing why you are making the amount of money you are making and whether any adjustments can be made. If you find yourself wondering about your pay, consider scheduling an appointment with a financial counselor, who can help you review and understand your pay stub, verify that everything is correct, and even request changes where possible.
|
What is Telepathy?
In today's world, we can meet, connect, or talk with other people in only a couple of ways. One is in person: we go in front of them, speak, and interact, sitting next to them to do anything together. The other is talking or meeting through the phone or the internet. But telepathy is not like either of these.
Many searches are rolling around the internet about telepathy, like "What is telepathy?" and "How does telepathy work?" and so on. Today, in this article, we will discuss only the first: What is telepathy?
But exactly, What Is Telepathy?
Telepathy Definition:
Telepathy is the purported vicarious direct transference of data and thought from one person (sender) to another (receiver) without using any known human sensory channels or physical interaction. It’s a sort of extrasensory perception (ESP).
What Is Telepathy?
It's the most basic way that we all can interconnect. Supposedly (who's to know?) it was the way humans connected before there was even speech. One could call it mind-to-mind communication.
The term was first coined in 1882 by the classicist Frederic W. H. Myers, the Society for Psychical Research (SPR) founder, as a result of his joint investigation with Edmund Gurney, Henry Sidgwick, and William F. Barrett into the possibilities of thought-transference.
Telepathy experiments have historically been criticized for lack of proper controls and repeatability. There’s no convincing evidence that telepathy exists, and therefore the scientific community generally considers the subject to be pseudoscience.
Origin of Telepathy
According to historians like Roger Luckhurst and Janet Oppenheim, the origin of the concept of telepathy in Western civilization can be traced to the late 19th century and the formation of the Society for Psychical Research. As the physical sciences made significant advances, scientific concepts were applied to mental phenomena (e.g., animal magnetism), hoping that this would help to understand paranormal phenomena. The modern idea of telepathy emerged in this context.
Let’s bring it down into simple words.
Telepathy Meaning:
It's the familiar experience: you think of somebody, and then you bump into them in the street. You were on the same wavelength. You're thinking of calling somebody, and they're already on the phone. And we think, oh, that's very strange! It isn't strange. Indeed, we have all heard stories of mothers who become aware that their children, be they little or grown, are in some major trouble.
If we could interact with others using our minds over long distances, we would be beating the cell phone companies at their own game. But we have to understand that the telepathy connection is not just about the mind. It's almost the fundamental way of being in touch with someone in dimensions other than the physical.
Telepathy is a quantum leap from the physical to the non-physical dimension. Also, if you share a flat with a stranger, after a while you will develop telepathy between the two of you. It's not that you particularly care for them or not. I believe that the Aborigines of Australia still use this method.
And they can communicate over thousands of miles, mind-to-mind. In the more supposedly modern, sophisticated world, we have forgotten how to do that, but every one of us still, whether we know it or not, deals with each other with telepathy.
The public concept of telepathy became a rival of the spirit hypothesis. This misconception spread so widely that many people came to consider telepathy merely a thought-transference and mind-reading technique. But this is completely wrong.
“In telepathy connection, the transmitter is often unaware that he acts as an agent, and the receiver does not consciously prepare himself for the reception. Telepathy cannot be made a subject of experiments, while thought-transference can. Thought-transference is a rudimentary faculty. Telepathy is a well-developed mode of supernormal perception and is usually brought into play by the influence of extreme emotions.”
So is it a possibility? In other words, is telepathy possible?
Definitely, it is. We are on the way to adopting many technologies and devices that can help us do telepathy. One of them is the so-called Neuralink.
In fact, with the help of technology, we have already achieved a basic form of brain-to-brain communication in humans.
Brain-to-brain communication first happened in 2014, during an experiment conducted by Starlab, Axilum Robotics, and Harvard Medical School. But participants were only able to send and receive simple messages, and they had to use binary codes to spell out each word. It was slow: it could take up to 70 minutes to send and receive a four-letter message. It was more like Morse code than real telepathy. But in the future, could these technologies give us fully functional telepathy equipment? We have a long way to go before we can share our thoughts seamlessly with someone else. But it's not impossible.
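To get a feel for why that 2014 experiment was so slow, consider what "spelling out each word in binary" means. A toy sketch (this encoding is my own illustration, not the experiment's actual scheme), using a four-letter word such as "hola":

```python
def to_bits(word):
    """Encode a word as a bit string, 8 bits per ASCII character."""
    return "".join(format(ord(ch), "08b") for ch in word)

def from_bits(bits):
    """Decode an 8-bit-per-character bit string back into text."""
    chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return "".join(chr(int(c, 2)) for c in chars)

bits = to_bits("hola")   # a four-letter word becomes 32 bits
print(bits, len(bits))
assert from_bits(bits) == "hola"   # the receiver must decode every bit
```

If 32 bits took up to 70 minutes to transmit and decode, that is roughly two minutes per bit, which is why the result felt like Morse code rather than telepathy.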
This is the very first step down a long road of brain-to-brain networking, which is really exciting. There may come a day when you can control your computer with your thoughts, shoot a quick brain message to your boyfriend, transmit a complicated emotion to your friend, or even live as a Borgian collective with everyone, but for now, this is what we have got.
What If Telepathy Were Real?
Once, your mind was a place that only you could go; now imagine anyone being able to drop in and hear your thoughts. Mind-to-mind communication has finally arrived. Telepathy is real, but not that type of telepathy. I am talking about a scientifically created telepathy connection that uses electroencephalography, or EEG.
Scientists don't yet fully understand how the brain encodes its electrical signals, but if we can decode the brain's language, we will unlock our full telepathic potential. With devices that can send and receive electrical signals, we could transmit not only words but images and feelings too. If scientists knew how the brain works, different parts of your brain could be triggered for different types of communication.
Once that communication channel is open, anything could be sent through it, even those snippets of daydreams that you don't want anyone else to hear or see. With widespread telepathy, our world would change drastically. You would see team sports where the players are linked up together, moving in synchronized motion through complex gameplays. Creators and innovators would be able to share their ideas more easily. Instead of trying to put thoughts that are hard to communicate into words, you could share them directly with others.
And when other people can access your thoughts, and you theirs, you would need a way to disconnect. Your brain would get pretty cluttered with people sending thoughts all day and all night, and sometimes you might not want to know what another person is thinking. Privacy would become rarer, and new mental health issues could arise from extended use of telepathy. If scientists could genuinely unlock the mysteries of the brain, then the possibilities are endless. Virtual reality simulations could be sent into your mind. You could record your dreams, and maybe you could even upload fake memories.
Will Telepathy Become Reality?
Unfortunately, this is where things get tricky, and reality is still a long way off.
First, how exactly does one read brain activity? Implanted electrodes can do it: they sense the change in electrical currents as brain cells activate. Elon Musk's Neuralink is based on the same principle.
The second problem is: how can we decode the signals we've got? Scientists still don't understand how the brain codes electrical information. It would be like hearing the brain thinking while having no clue about the language it is speaking, except that it is bound to be really complicated.
Now for the hard part: how can we beam an idea back into someone else's head? If we went with the electrodes option from the first step, we could do this by passing some current back through the electrodes, although we'd have little or no control over what kind of thoughts we stimulated.
If you are not wired up for that, then you could do it magnetically. For that, you will need a transcranial magnetic stimulation (TMS) wand, which creates a strong magnetic field at its tip. Rest it on your head and switch it on, and a magnetic field momentarily passes across the brain tissue directly beneath it, inducing a current that activates that tissue. Unfortunately, this offers even less control than the electrodes, but if you place it over the visual area, it can reliably be used to trigger little flashes of light called phosphenes.
Put all of this together, and you have the first demonstration of telepathy: one person concentrates on something in particular, this is read as specific EEG brain activity that is sent by wire to a TMS wand, which stimulates another person's brain, and they see a flash of light. That was done back in 2014, as I mentioned above.
All of this is technologically possible. But I feel we have been doing telepathy for far longer than that. In fact, if the definition of telepathy is sending messages from brain to brain, we have been sending messages to each other since we squelched out of the primordial ooze, culminating in the most sophisticated communication systems we know: language and gesture, then mobile phones.
But maybe that doesn't really count. That's not "real" telepathy, since it is not sent brain-to-brain directly: it has been filtered through our senses.
The technology to bring all those steps up to scratch for proper thought transmission still doesn't exist. And even if it did, we would first need to work out how to understand the brain's language.
But with projects like Elon Musk's Neuralink on the horizon, that day may come sooner than we expect. What does a thought without language sound like? It is strange even to think about. Our future is bright and technologically advanced: we may be on the way to becoming superhumans, even gods, to some people.
What did you think? Share your thoughts in the comment section.
So there you have it: I have explained almost everything about what telepathy is.
Leave a Comment
|
Need help? We are here
Uncle Joe is a 78-year-old male who has lived in a nursing home for the past 10 years, since his retirement. He has Alzheimer's disease and has not been sleeping enough at night. He has no recollection of his family, and all the contacts on record for his relatives were outdated. He was last visited 4 months ago by his only son, who has since died in the war. Because of his mental status, he cannot make decisions for himself about his care or other issues.
At night, Uncle Joe wanders into other resident’s rooms, which often ends in a fight with residents being injured. The facility was recently sued by another resident, and management had to settle with a considerable amount of money.
Uncle Joe was recently placed on one-to-one for close monitoring by a nursing assistant. Most of the nursing assistants assigned to Joe have not been patient with him. It has been challenging to redirect him or persuade him to follow instructions. Often, the nursing assistants have been restraining Joe with belts, whereas other assistants pinch him to gain control. The other day, a social worker came in and saw the bruises and spots on Uncle Joe and requested an immediate investigation into the matter. Two of the nursing assistants who were caught on camera abusing Uncle Joe were terminated immediately. The issue was not reported to the state as required by law.
Prepare at least a 5-page observation and assessment outlining protocols and procedures for dealing with residents like Uncle Joe. In your memo, make sure that you address the points listed below. Cite at least 5 sources.
1. Explain the rights and responsibilities that Uncle Joe has as a resident and why he has not been allowed to make decisions for himself.
2. Discuss the procedures of decision making for a person with mental impairment.
3. Define ethical and legal issues at play in this scenario. How would you address them?
4. Describe and state the importance of a resident advocate and why one should be assigned to Uncle Joe.
5. Explain how resident advocates help to bring change in resident care in facilities.
6. Explain the abuses that Uncle Joe underwent, and share how such issues should be handled correctly.
7. Discuss the political and social impact of advocacy on resident care.
8. Identify how the government can help improve the quality of care of residents.
|
Corn Snake Care Sheet
Scientific Facts
Common Name: Corn Snake, Red corn snake, English corn snake, red rat snake
Scientific Name: Pantherophis guttatus
Life Span: 15 to 20 years (in captivity)
Mass: Around 900 grams
Length: 61-182 cm (2.0-5.97 ft.)
Size: 2.5 to 5 feet in length
Clutch Size: 10 to 30
Habitat: Wooded groves, rocky hillsides, woodlots, meadowlands, rocky open areas
Country of Origin: Eastern United States
Physical Description
Image Source
Corn snakes are slender, brown or orangish-yellow snakes with a pattern of large red blotches outlined in black running down their backs. Their bellies are marked with distinctive rows of alternating black and white marks, similar to a checkerboard.
The name corn snake may have been given to it due to the similarity of these markings to maize kernels, or Indian corn. These snakes come in a variety of patterns and colors, depending on their geographic range and age. New hatchlings usually lack the bright colors that are typical among adults.
Life Span
Corn snakes usually live up to 20 years when cared for by humans, though their lifespan is generally shorter in the wild.
Baby corn snakes usually thrive inside a plastic vivarium about the size of a large shoebox. This can serve as their home for the first few months of their lives. Adult corn snakes need a cage at least the size of a 20-gallon reptile terrarium, and a bigger cage is even better.
Note that corn snakes are not social reptiles, which means that adding a cage mate can be a burden to them. One cage should only house one corn snake. It is also important to remember that corn snakes, by nature, are escape artists, so ensure that the cage is escape-proof.
To add spice to the life of your pet, you may also want to add some habitat products such as climbing branches. Corn snakes also feel secure when you add tight reptile hides.
Breeders typically use aspen shavings as bedding because of their soft and absorbent nature. These shavings also hold their shape when the snakes burrow. You may also use cypress mulch, but avoid aromatic woods like cedar or pine. Reptile carpet and newspaper may also work, though the corn snake may tend to get beneath them. Do not use sand, as it may cause impaction when swallowed.
Lighting and Temperature
Corn snakes do not need special lighting, though natural light from nearby windows helps them adjust their day-and-night and seasonal cycles. It is recommended not to place the enclosure in direct sunlight, since the resulting temperatures could be fatal to the snake.
When caring for corn snakes at home, prepare a temperature gradient with a light, heat cable, or under-tank heat pad. The ideal warm-end temperature for corn snakes is 85 degrees Fahrenheit, while room temperatures in the low 70s are fine on the cooler end.
You may want to install a long PVC pipe, hollow log, or skinny hide so that one end is cool while the other is warm. Check the temperature regularly inside the warm end, not on the glass. Even within just a few inches the temperature can vary, which means that the placement of the hide box and the thermometer is important.
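For keepers using Celsius thermometers, the targets above convert with the standard formula. A quick arithmetic sketch:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# Warm-end target and a low-70s cool-end temperature from the text above
print(f"85 F = {f_to_c(85):.1f} C")  # warm end of the gradient
print(f"72 F = {f_to_c(72):.1f} C")  # cooler end (room temperature)
```

So the warm end should sit just under 30 C, with the cool end around 22 C.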
When the corn snake starts shedding its skin in pieces, it is better to increase the humidity inside the hide box. This can be done by adding paper towel, or damp moss whenever the snake is preparing to shed. Remove in between sheds in order to avoid the buildup of mold, bacteria, and other dangerous elements.
The main natural food of corn snakes is rodents. Younger corn snakes also eat lizards and frogs, and adult snakes, on the other hand, may also feed on birds or their eggs. Avoid feeding your corn snake crickets, because corn snakes do not recognize them as food.
Hatchlings usually feast on newborn mice. Feed bigger mice for a bigger adult corn snake. Most captive corn snakes usually learn to eat frozen, but completely thawed out mice. When transferring baby corn snakes to a new home, you may want to feed it with a live, newborn mouse because they are not used to eating thawed mice just yet.
Put your corn snake, along with a completely thawed mouse, in an empty container with a few air holes. Close the lid to allow your corn snake to focus on its meal. Make sure the lid is secure and the container is kept away from heat sources, which could overheat and kill the snake.
Cutting the skin of a thawed mouse encourages faster and better digestion. Feed young corn snakes once in every five to seven days, while adult corn snakes should be fed once every seven to ten days.
Eating Habits
Corn snakes usually bite their prey, getting a strong grip, and then coiling themselves quickly around their food. They continue to squeeze tightly until the prey is subdued. Lastly, they swallow their food completely, headfirst. Corn snakes usually enjoy a good meal every few days.
Sleeping Habits
Corn snakes are primarily crepuscular, meaning they are most active around dawn and dusk. They love to climb trees and enter abandoned buildings in search of prey. They are also very secretive, spending much of their time underground in rodent burrows. During the day, they usually hide beneath logs or under loose bark, rocks, and other cover.
Make sure fresh water is always available in a heavy, shallow reptile water bowl. Clean the bowl every few days, or immediately when soiled. Put the bowl in a corner of the cage so that the snake can find it easily when it crawls around the perimeter of the cage at night.
Development and Reproduction
The breeding season for corn snakes usually occurs from March to May. They are oviparous, which means that they lay eggs. Usually, around May to July, female corn snakes lay a batch of 10 to 30 eggs in piles of decaying vegetation, rotting stumps, or other similar places with sufficient humidity and heat to incubate the eggs.
Adult snakes do not take care of their eggs, which usually require up to 60 to 65 days to hatch at a temperature of around 82 degrees Fahrenheit. The eggs usually hatch between the months of July and September, with the hatchlings growing up to 25 to 38 centimeters (10 to 15 inches) in length. These hatchlings usually reach maturity in around 18 to 36 months.
Corn snakes are relatively docile and rarely bite, but some care should still be observed when handling them. Be careful not to smell like their food, such as rabbits or rodents, so wash your hands well before holding a snake. Support the body, neck, and head of the snake so that it will not be stressed during handling.
Mating Season
As soon as the colder months give way to warmer temperatures, female corn snakes start to produce strong scents that encourage male corn snakes to seek them out and mate. After mating, the female corn snake will lay her eggs within a month, choosing a place that is moist and warm for the eggs to incubate. After laying her eggs, the mother snake leaves them. The hatchlings emerge after about 10 weeks, using an egg tooth to break out of their shells.
Corn snakes are regarded as the most commonly bred species of snake in the USA. Breeders of domestic corn snakes have created hundreds of morphs or variations through selective breeding. Among them include albino corn snake, okeetee corn snake, snow corn snake, black corn snake, and lavender corn snake among several others.
Common Health Problems
Here are some of the most common health problems among corn snakes:
Mites are small black parasites living on corn snakes, feeding on their blood. They usually lay their eggs in the substrate placed on the tank and are usually visible around the mouth, eyes, and under their scales.
When you spot these mites, bathe your snake in warm water. Afterwards, disinfect the tank thoroughly using an insecticide which is designed for snakes. This will kill the mites breeding. This may need to be repeated several times.
This condition, also called mouth rot, usually occurs when bacteria in the mouth of a corn snake get into an open wound, causing infection in the lining of the mouth and gums. Symptoms include color change or swelling in the mouth and gums, frothy discharge, or frequent mouth rubbing. If you suspect this illness, consult your reptile vet for advice.
The digestive process of your snake depends on its metabolism and size. However, when you are already used to their pattern, you can notice changes in their poos right away. Some signs of constipation include lethargy, bloating and loss of appetite.
To alleviate constipation, you can bathe your snake in warm water for about 15 minutes a day. This will encourage them to defecate. If it does not work, you can bring your snake to a reptile vet, as your snake may be suffering from blockage in their digestive system.
Preventing Illness
Just like with humans, the best way to prevent illness in your snake is to maintain its health, though there is no assurance that your corn snake will never get sick. Early detection of any illness, however, can help a lot. Contact your reptile vet right away if you suspect that your snake is suffering from the conditions mentioned above, or if you see symptoms including loss of appetite, changes in behavior, lethargy, wrinkled or retained skin, difficulty breathing, lumps or swelling, vomiting, weight loss, or discharge from the nose or eyes.
How to Breed
One important step when breeding corn snakes is learning how to properly sex your snakes. Check them right after getting them, so you do not end up caring for a same-sex pair for several years before finding out.
Most snake breeders follow a schedule of environmental conditioning for their corn snakes. After around three to four weeks, the female snakes shed for the first time; this is considered the pre-breeding shed. You may want to start introducing the male snake to the female at this point.
After making sure that successful breeding took place, you can start keeping the female snake separated in her own cage. Feed your snake heavily, adding calcium d3 supplement. Within three to four weeks, she will start shedding. This is the pre-egg laying shed.
When this happens, you can now place a nesting box in her cage. Add substrate as well. Almost always, the female snake will begin laying up to eight to twelve days after shedding. Avoid disturbing your snake until you are completely sure that she is done.
Possible Danger to Humans
Because of their attractive colors, and the fact that they are calm and docile, corn snakes are among the most popular choices as pets. Although corn snakes are non-venomous, if you are bitten it is still better to see a doctor and have the bite checked, because some people may have an allergic reaction to it.
Variations of Corn Snakes
After several generations of performing selective breeding, captive or domesticated corn snakes are now available in various patterns and colors. They come as a result of combining together recessive and dominant genes which code for the proteins that are usually involved in the development of chromatophore. New morphs, or variations, usually become available each year as breeders continue to gain more understanding on the involved genetics.
Availability – Where to Get One?
Corn snakes are readily available at pet shops, online reptile stores, reptile exhibits and expos, as well as directly from breeders. Some wild-caught specimens are kept as pets, but captive-bred snakes are the better option because of their beautiful pattern and color morphs. Buying captive-bred also makes it possible to get healthy, parasite-free snakes, and owners can get details about the history, age, and parentage of the snakes.
Fun Facts About Corn Snake
• Corn snakes are actually constrictors, which means that they love to wrap themselves around their prey, squeezing and subduing it before completely swallowing it.
• The color of a corn snake's body usually depends on its habitat, so its coloration provides camouflage and protection.
• Aside from the fact that the pattern of the belly of a corn snake resembles that of the Indian corn kernel, they are usually found near corn plants, which attracts their favorite prey, rodents.
• Humans benefit from the presence of corn snakes because they help in controlling the number of rodents. They also help in preventing the spread of diseases and crop damage which are typically associated with the presence of numerous rodents.
• Corn snakes also love sleeping. In fact, they hibernate during the colder periods of the year.
• Corn snakes do not provide parental care. The babies are only 10 to 15 inches long at hatching, and from the first day of their lives they must take care of themselves.
How to Care for a Corn Snake?
Newly hatched corn snakes are naturally defensive and nervous. Even though it is quite normal for baby corn snakes to hide, flee, or defend themselves, they have no real ability to harm people. In fact, a cat or a white mouse that plays roughly can cause more damage than even the biggest corn snake.
As such, it is very important to give your baby corn snake a few weeks to settle into its new home and get used to a regular feeding schedule before possibly stressing it with careless handling. After about three or four successful feedings, you can begin handling your corn snake for short periods, but avoid handling it for the first two or three days after a meal.
Make sure that you approach your corn snake from the side instead of from above, since approaching from the top is what a predator would do. Lift your corn snake gently but confidently; corn snakes can sense hesitation, and a nervous grip may frighten them into biting or hiding. If necessary, use light cotton gloves to build your confidence. Once your corn snake recognizes you as its owner rather than a predator, it will tame quite quickly.
FAQ Section
I just received my new corn snake, what do I need to do?
Even before receiving your pet corn snake, it is very important to prepare its cage. The most important factor to consider is heating. Since snakes are not capable of producing enough body heat to support their digestion and appetite, it is vital to make sure that the cage is heated properly in order to ensure the welfare and health of your pet.
How often do corn snakes shed?
As your pet corn snake continues to grow, it will shed its skin. This typically occurs once every few weeks, and the gap between sheds increases with age. Once they reach adulthood, corn snakes shed only once every few months.
How often do corn snakes eat?
Baby corn snakes should be fed once every five to seven days. Adult corn snakes, on the other hand, should be fed once every seven to ten days.
Do corn snakes like to climb?
In general, corn snakes are partially arboreal, meaning they spend some of their time in trees and love to climb. For this reason, you can add climbing plants and branches in areas where the snake can climb. Make sure that any branch can support the snake's weight and is small enough for it to curl around.
Do corn snakes have a personality?
In terms of behavior, corn snakes are among the most docile and calm species in the reptile world. They are not prone to defecating, biting, or constricting, even when under stress, and they tolerate being handled. Baby corn snakes have a tendency to nip, though they settle down with gentle handling.
How should I heat up the cage for my corn snake?
Regardless of where you live, note that the indoor temperature you prefer is likely too cool for your pet. It is recommended to provide an adequately private hide on the warmer end of the cage, which should maintain a range of 80 to 84 degrees Fahrenheit (27 to 29 degrees Celsius) at all times. The other end of the cage should also have a private hide, with a temperature below 80 degrees Fahrenheit (27 degrees Celsius) at all times.
What role do cage “hides” play in the care of my pet?
A hide is any furnishing placed inside the cage that allows your snake to feel as if it is hidden from everything in its environment, letting it instinctively feel safe from predation. Since corn snakes have no eyelids, and since it is natural for them to feel safe in darkness, hides are beneficial to their mental health.
I have been hearing about “morph” from other corn snake owners. What does it mean?
In herpetoculture, a morph is roughly what a "breed" is in the canine world. All corn snakes belong to the same species, Pantherophis guttatus, but each morph has a distinct set of heritable characteristics that sets it apart from wild-types and other morphs and mutations.
My corn snake looks different from the pictures I always see from others. Why?
Corn snakes change dramatically in appearance from hatchling to adult, and the pictures you usually see depict the average adult appearance. Since most owners start with younger snakes, their pet often looks different from those pictures. For this reason, it is not recommended to select a corn snake, especially when purchasing online, based on pictures of young hatchlings.
How soon should it be before I start feeding a rodent to my pet after delivery? How big, or how small, should I feed it?
At a normal cage temperature, corn snakes usually digest their food within three days of being fed, and most sellers ship snakes that last ate at least 72 hours before delivery. It is recommended, therefore, to feed your corn snake about 2 to 7 days after receiving it.
One important thing to note is to avoid feeding your new pet until you are sure that cage temperatures are appropriate for digestion. For the first two or three meals, offer food that is half the normal serving size. After that, if normal digestion is observed, you can move up to normal-sized meals.
Can multiple corn snakes be housed together with no problem?
It is not recommended to house multiple young corn snakes together; keep one corn snake per cage. This is due to the possibility of feeding confusion between snakes. Although the likelihood of injury or fatality in a community of housed corn snakes is less than 1%, separate cages are still recommended.
Is it possible to handle a corn snake while shedding?
Yes, it is possible to handle a corn snake immediately after shedding, and there is no need to worry about doing so. It is before the shed, not after, that handling is best avoided.
Which gender of a corn snake is better?
This depends on your preference. One way to tell the genders apart is size: both male and female corn snakes grow from 27.5 to 47 inches long, but males are typically bigger than females. In terms of behavior, both genders are docile, making either a good pet.
Do corn snake bites hurt?
These snakes rarely bite, even when stressed, hurt, or frightened. If they do bite, the bite of a young corn snake is usually not painful and may not even be noticed because of the small size of their teeth. A bite from an adult corn snake, on the other hand, may leave tiny pricks and draw a little blood.
Is there a need to bathe my corn snake?
Bathing helps relieve issues such as constipation. It also promotes shedding and kills mites. When bathing your snake, make sure to use warm filtered or spring water.
How often should the cage be cleaned?
It is recommended to change the water at least once a day. Enclosures and cages should be cleaned frequently using a 5% bleach solution and allowed to air-dry afterwards. Spot cleaning is easy because corn snakes only defecate once or twice a week.
|
James Whistler—Portrait of the Artist’s Mother
James Abbott McNeill Whistler, the American painter who chose to live abroad for most of his adult life, was a skilled portrait artist and printmaker. Whistler's fame derived from many chance happenings, all bound together by the artist's unconventional studio process and his seemingly driven desire to magnify his vision and create. Six years after the cessation of hostilities known as the American Civil War, the expatriate American artist succeeded in painting a masterpiece. The artist himself was quite pleased with his vision of his ailing mother and deeply satisfied with the effort he directed to her portrait. When he finished his labors on the portrait, Whistler set his brushes down, turned toward his mother and said, "Oh mother, it is masterful and beautiful".
It is admirable that Whistler was pleased with his effort to capture more than just his mother's likeness. After all, the artist's gaze had to overcome many challenges, as his ailing mother posed, seated, for three months; the success is owed in no small degree to Whistler's skill and his driven personality to work so diligently. Whistler tells us quite clearly about his work ethic and vision in his journals and diaries, with comments such as the following: "To say to the painter that Nature is to be taken as she is, is to say to the player that he may sit on the piano…Work alone will efface the footsteps of work…An artist is not paid for his labor but for his vision."
What are your thoughts on the artist’s 1871 vision of his mother? Does this painting deserve being iconic and globally famous?
James Whistler, Portrait of the Artist’s Mother, 1871
Published by: roberttracyphd
6 thoughts on “James Whistler—Portrait of the Artist’s Mother”
1. Whistler’s painting of his mother served a much greater purpose than being displayed in an art gallery. As discussed in class, this was one of Whistler’s best pieces; however, it didn’t get him much recognition at first. It was seen as dreary and unconventional for its time. Even though Whistler only intended it as a study in tone and form, it was his mother’s portrait that ended up becoming an iconic symbol for the American people during the harsh times of the Great Depression.
When you analyze the painting, you get almost this sense of suffering. Suffering is what many had to endure in order to survive the brutal conditions of the economy during the Great Depression. This applied to both men and women. But to see your own mother hurt, is the worst feeling ever. For those who had no choice but to live out those excruciating years, it motivated them to not give up — To keep fighting — To fight for your loved ones.
It’s a good, strong message. (At least how I interpreted it)
One that associates itself with a reputable painting — So why wouldn’t it be symbolic?
2. I think when we are talking about “deserving” to become globally or internationally famous, we start treading into subjective waters. Of course there will be some that will say “Yes! Absolutely” and there will be others who will say “Of course not”, because they do not understand the piece. But who truly is to say whether or not a piece “deserves” to be recognized? Art is subjective.
Personally, I think the work is incredibly strong and quite moving. I agree with Elham’s observation that the work conveys a feeling of suffering; Whistler’s personal situation was translated through his brush. He was close enough and invested enough to express his mother in the best way he knew how (which I think goes back to the idea from a couple of blog posts ago about having to be a “gentleman of sound mind” to paint an accurate portrait). The piece is obviously striking, with its muted color tones and strong approach to capturing a realist view of his mother. I definitely think Whistler was effective in capturing his mother’s portrait.
3. Throughout history, paintings have been used to convey feelings of wonder, splendor, and excitement. Rarely, at the time, was art used to show something dark and emotionally heavy. Whistler’s painting of his mother conveys her at her worst time. Her husband had passed many years prior, and she still wore mourning clothes because of her pain towards her loss. Whistler paints her anyway, not afraid to shy away from the darkness that his mother feels. Everything from the colors to her positioning are used to convey her sense of loneliness and loss.
I think it is a remarkable piece, especially given the context that it was created in. At the time of its creation, it was almost not allowed in an exhibition because it went against the norm of portraiture. Because of its historical significance, I do believe it deserves its iconic status/fame.
4. I feel like he portrayed his mother very well. She was a woman who didn’t change, especially after her husband’s death; it seemed like she was in a constant state of mourning over his loss. I believe the painting is great and worth being iconic. It gives us all the information we need without having to delve into the details. It deserves to be known as one of his best pieces because it works extremely well. You can tell the attention to detail he put into it, probably just as much as he cared for his actual mother.
5. James Whistler’s piece titled “Portrait of the Artist’s Mother” did not gain a lot of popularity because of his idealistic style of painting. This woman posed for this painting numerous times over the span of three months. Talk about artist and model dedication. The attention to detail and mood is there which adds a sense of ‘realism’, something that was not in-style at the time. His mother also appears to look frail and cold, the result of ailment. I believe that James Whistler did a great job at capturing this very moment with his mother. It is delicate and beautiful.
6. I think that it is a picture worth being iconic. This is not a traditional portrait or pose, and I believe those unique qualities are what help this soft-colored painting be so strong. When I look at this painting I can see the strength of this woman who sits alone in a room and fills it up with her presence. One can tell she is the subject, yet it is so human and emotional because one can feel how vulnerable she is and how solemn the moment is. Although she cannot stand, we know that this woman was someone strong in her life, and that her strength comes not from her physical body but rather from the mind and will that drive her to be that iconic image.
|
In this installment I will discuss alcohol.

How alcohol acts in the central nervous system is still poorly understood. Two of the best-known effects of alcohol, however, are its actions on GABA and glutamate. Alcohol increases GABA activity at a subtype of the GABA receptor known as GABAa. The mechanism by which this occurs is still not clear, but it is thought that alcohol may act as a positive allosteric modulator, meaning it binds to a site on the receptor that is separate from where GABA binds and increases the effect GABA has when it binds to the receptor itself. The immediate effect of this action typically is the inhibition of neural firing.

Alcohol also inhibits the activity of glutamate receptors. Again, the mechanism for this is not fully understood, but because glutamate is generally excitatory, inhibition by alcohol initially leads to a reduction of neural activity.

A long list of other synaptic actions has been linked to alcohol, including (but not limited to): activation of serotonin receptors, enhancement of glycine receptor function, inhibition of adenosine reuptake, inhibition of calcium channels, activation of potassium channels, and modulation of nicotinic acetylcholine receptor function. It's not clear, however, how relevant each of these effects is to the human use of alcohol.

There are also some large-scale effects associated with alcohol. For example, alcohol stimulates dopamine transmission in the mesolimbic dopamine pathway, an action thought to be associated with the reinforcement of alcohol consumption. Alcohol affects motor coordination and balance, potentially in part through its influence on neurons in the cerebellum. And it inhibits long-term potentiation and other mechanisms of synaptic plasticity in the hippocampus, which may contribute to its memory-disrupting effects.
|
1. Direct Liquid Level Measurement :
Directly, by using the varying level of the liquid itself as the means of obtaining the measurement. Examples: sight glass, floats, dip rod, bob and tape, etc.
2. Indirect Liquid Level Measurement :
Indirectly, by using a variable signal such as pressure, radio waves, ultrasonic waves, or mass/weight as a proxy for the liquid level. There are many types of indirect level-measuring devices.
Examples: pressure gauge, bubbler method, level transmitter (PT & DPT), radioactivity, ultrasonic waves, weighing of the entire vessel, etc.
The pressure at the base of a vessel containing liquid is directly proportional to the height of the liquid in the vessel. This is termed hydrostatic pressure. As the level in the vessel rises, the pressure exerted by the liquid at the base of the vessel will increase linearly.
Mathematically, we have: P = d H
P = hydrostatic (gauge) pressure (Pa), d = weight density of the liquid (N/m³), H = height of liquid column (m); P(absolute) = P(atm) + hydrostatic pressure of liquid
d = weight density = weight of liquid / volume of liquid (equal to the mass density ρ multiplied by g)
The level of liquid inside a tank can be determined from the pressure reading if the weight density of the liquid is constant.
Differential Pressure (DP) capsules are the most commonly used devices to measure the pressure at the base of a tank.
Density of water = 1.00 kg/litre (1,000 kg/m³)
Density of mercury = 13.6 kg/litre (13,600 kg/m³)
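The hydrostatic relationship above can be sketched numerically. The snippet below is an illustrative sketch only (the function name is mine, not from any instrument vendor); it computes gauge pressure as ρ·g·h, using the two densities quoted above converted to kg/m³:

```python
G = 9.81  # standard gravitational acceleration, m/s^2

def hydrostatic_pressure(density_kg_m3: float, height_m: float) -> float:
    """Gauge pressure (Pa) at the base of a liquid column of the given height."""
    return density_kg_m3 * G * height_m

# 2 m of water (1000 kg/m^3) versus 2 m of mercury (13600 kg/m^3):
p_water = hydrostatic_pressure(1000.0, 2.0)     # 19620 Pa
p_mercury = hydrostatic_pressure(13600.0, 2.0)  # 13.6 times higher
print(p_water, p_mercury)
```

Because the pressure rises linearly with level, a pressure reading can be divided back into a height: H = P / (ρ·g), which is exactly what a DP-based level transmitter does.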
1. Open Tank Level Measurement :
If the tank is open to atmosphere, the high-pressure side of the level transmitter will be connected to the base of the tank while the low-pressure side will be vented to atmosphere. In this manner, the level transmitter acts as a simple pressure transmitter. We have:
Phigh = Patm + d H
Plow = Patm
Differential pressure P = Phigh – Plow = d H
The level transmitter can be calibrated to output 4 mA when the tank is at 0% level and 20 mA when the tank is at 100% level.
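That calibration maps the level linearly onto the standard 4–20 mA current loop. A minimal sketch of the mapping (the function name and range check are my own additions for illustration):

```python
def level_to_current(level_pct: float) -> float:
    """Map a tank level of 0-100 % onto a 4-20 mA transmitter output."""
    if not 0.0 <= level_pct <= 100.0:
        raise ValueError("level must be between 0 and 100 %")
    # 4 mA live zero plus a 16 mA span
    return 4.0 + (level_pct / 100.0) * 16.0

print(level_to_current(0))    # 4.0 mA, empty tank
print(level_to_current(50))   # 12.0 mA, half full
print(level_to_current(100))  # 20.0 mA, full tank
```

The 4 mA "live zero" is what lets the receiving end distinguish an empty tank (4 mA) from a broken loop (0 mA).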
2. Closed Tank Level Measurement :
In a closed tank, the gas pressure above the liquid adds to the hydrostatic pressure at the base and would otherwise produce a false high reading. Compensation can be achieved by applying the gas pressure to both the high- and low-pressure sides of the level transmitter.
Phigh = Pgas + d H
Plow = Pgas
P = Phigh – Plow = d H
DRY LEG : When the low-pressure impulse line is connected directly to the gas phase above the liquid level, it is called a dry leg. The effect of the gas pressure is cancelled and only the pressure due to the hydrostatic head of the liquid is sensed.
If the gas phase is condensable, say steam, condensate will form in the low-pressure impulse line resulting in a column of liquid, which exerts extra pressure on the low-pressure side of the transmitter.
A technique to solve this problem is to add a knockout pot below the transmitter in the low-pressure side as shown in Figure.
One example of a dry leg application is the measurement of liquid poison level in the poison injection tank, where the gas phase is non-condensable helium.
WET LEG : In a wet leg system, the low-pressure impulse line is completely filled with liquid (usually the same liquid as the process) and hence the name wet leg.
At the top of the low pressure impulse line is a small catch tank. The gas phase or Vapour will condense in the wet leg and the catch tank. The catch tank, with the inclined interconnecting line, maintains a constant hydrostatic pressure on the low-pressure side of the level transmitter. This pressure, being a constant, can easily be compensated for by calibration.
It would be idealistic to say that the DP cell can always be located at the exact bottom of the vessel whose fluid level we are measuring. In practice, the measuring system has to account for the hydrostatic pressure of the fluid in the sensing lines themselves. This leads to two required compensations: zero suppression and zero elevation.
Zero Suppression
In some cases, it is not possible to mount the level transmitter right at the base level of the tank. Say for maintenance purposes, the level transmitter has to be mounted h meters below the base of an open tank.
Phigh = Patm + d.hm + d.H ,
Plow = Patm
∆P = Phigh – Plow = d.hm + d.H
The pressure on the high-pressure side is always higher than the actual pressure exerted by the liquid column in the tank (by a value of d.hm). This constant extra pressure would cause an output signal higher than 4 mA when the tank is empty and above 20 mA when it is full. The transmitter therefore has to be negatively biased by a value of –d.hm so that its output is proportional to the tank level (d H) only. This procedure is called zero suppression, and it can be done during calibration of the transmitter.
Zero Elevation
When a wet leg installation is used the low-pressure side of the level transmitter will always experience a higher pressure than the high-pressure side. This is due to the fact that the height of the wet leg (h) is always equal to or greater than the maximum height of the liquid column (H) inside the tank. When the liquid level is at H meters,
we have:
Phigh = Pgas + d H
Plow = Pgas + d h
P = Phigh – Plow = d H – d h = – d (h – H)
The differential pressure P sensed by the transmitter is always a negative number (i.e., low pressure side is at a higher pressure than high pressure side).
P increases from P = -d h to P = -d (h-H) as the tank level rises from 0% to 100%.
Note : If the transmitter were not calibrated for this constant negative error (-d h), the transmitter output would read low at all times. To properly calibrate the transmitter, a positive bias (+d h ) is needed to elevate the transmitter output. This positive biasing technique is called zero elevation.
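Both corrections come down to a few lines of arithmetic. In this sketch (function names are illustrative only), d is the weight density of the liquid, i.e. the pressure contributed per metre of column:

```python
D_WATER = 9810.0  # weight density of water, Pa per metre of column (rho * g)

def dp_zero_suppression(d: float, H: float, h_m: float) -> float:
    """Open tank, transmitter mounted h_m metres below the tank base.
    Without a bias of -d*h_m the output would read high at all levels."""
    return d * h_m + d * H

def dp_zero_elevation(d: float, H: float, h_wet: float) -> float:
    """Closed tank with a wet leg of height h_wet (>= max level H).
    The DP is always <= 0, so a bias of +d*h_wet elevates the output."""
    return d * H - d * h_wet

print(dp_zero_suppression(D_WATER, H=0.0, h_m=1.0))  # 9810.0 Pa offset even when empty
print(dp_zero_elevation(D_WATER, H=0.0, h_wet=3.0))  # -29430.0 Pa, the most negative reading
```

In both cases the bias is a constant, which is why it can be dialled out once during calibration rather than computed at run time.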
A sensor in the form of a tuning fork is made to vibrate at its resonant frequency by a piezoelectric crystal drive. The frequency changes when the fork comes into contact with the liquid. The change is evaluated and converted into a switching signal.
The probe and vessel wall form the two plates of a capacitor, whose capacitance is determined by their surface areas, the distance between them, and the type and dielectric properties of the product to be measured. When the vessel is filled, the capacitance increases. The capacitance is measured, and a level-proportional signal is produced in the electronic insert of the probe. This signal is then evaluated by the other electronic units connected to the system.
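As an illustration only (modelling the probe and wall as an ideal coaxial capacitor; real probe electronics are calibrated empirically), the capacitance of a partly submerged probe can be estimated as two capacitors in parallel, one section in liquid and one in air:

```python
import math

EPS0 = 8.854e-12  # permittivity of free space, F/m

def probe_capacitance(level_m, probe_len_m, r_wall, r_probe, eps_r_liquid):
    """Coaxial model: C per section = 2*pi*eps0*eps_r*L / ln(r_wall/r_probe).
    The submerged section sees the liquid's dielectric; the rest sees air."""
    k = 2.0 * math.pi * EPS0 / math.log(r_wall / r_probe)
    air_len = probe_len_m - level_m
    return k * (eps_r_liquid * level_m + 1.0 * air_len)

empty = probe_capacitance(0.0, 2.0, 0.5, 0.01, 80.0)  # water, eps_r ~ 80
full = probe_capacitance(2.0, 2.0, 0.5, 0.01, 80.0)
print(empty, full)  # capacitance rises steadily as the vessel fills
```

The model makes the key dependence visible: the higher the product's relative permittivity, the larger the capacitance swing between empty and full, and hence the easier the level is to resolve.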
The difference in conductivity of liquids is measured with air as the reference point. A very small alternating voltage (AC) is applied between two probe tips, or between the probe and the vessel wall. The circuit is closed, and the level is indicated, when the liquid reaches the probe tip. The voltage and current used are so small that no dangerous shock-hazard voltages can occur, and the use of alternating voltage prevents electrolysis from occurring.
The Ultrasonic measuring systems measure the level of all kinds of liquids and solids including those in hazardous areas. The sensor is not directly in contact with the material, thus the unit is wear and maintenance free.
The emitter in the sensor is excited electrically and sends an ultrasonic pulse toward the surface of the product, which partially reflects the pulse. This echo is detected by the same sensor, now acting as a directional microphone, and converted into an electrical signal. The time between transmission and reception of the pulse (the sonic run time) is directly proportional to the distance between the sensor and the product surface. This distance is determined from the velocity of sound c and the run time t using the formula: D = ( c.t ) / 2
D= Distance from sensor to surface of material
c = Velocity of sound
t = The sonic run Time
E = Zero Point of measurement ( 0 % Empty)
F= Maximum Level ( 100 % Full)
BD = Blocking Distance
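The run-time formula above can be sketched in a few lines. The speed of sound and the example echo time are assumed values (air at roughly room temperature); a real unit also compensates for temperature and ignores echoes inside the blocking distance BD:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)

def distance_from_echo(run_time_s: float) -> float:
    """D = (c * t) / 2 -- the pulse travels to the surface and back."""
    return SPEED_OF_SOUND * run_time_s / 2.0

def level_from_distance(empty_distance_m: float, measured_m: float) -> float:
    """Level = the 0 % (empty) distance E minus the measured distance D."""
    return empty_distance_m - measured_m

t = 0.02                            # a 20 ms round-trip echo
d = distance_from_echo(t)           # about 3.43 m down to the surface
print(level_from_distance(4.0, d))  # about 0.57 m of product in the vessel
```

Note the division by two: the run time covers the trip down to the surface and back, so using t directly would double the distance.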
Electro Mechanical Type Level Sensor
A measuring tape with a sensing weight attached to the end is driven down into the bunker. When the weight touches the surface of the material, the tape slackens and the motor reverses, returning the weight to its start ("parked") position. During the weight's downward and upward journey, the transducer emits pulses proportional to the length of the extended tape. The pulses are decoded by a D/A converter, and the measurement is stored until the next measurement cycle, which is initiated by a timer circuit or the start button on the timer.
Radiometric Type Level Sensor
The gamma source, either a caesium or cobalt compound, emits radiation which is attenuated as it passes through materials.
A detector mounted on the opposite side of the vessel or pipe converts this radiation into an electrical signal. The strength of the signal is determined by the distance between the radiation source and the detector, and by the thickness and density of the material. The distance and the vessel or pipe walls through which the radiation penetrates are constant values; these must be taken into account when selecting the strength of the radiation source. The measuring principle itself is based on the absorption of the radiation by the product to be measured.
The tank gauging system is based upon the principle of displacement measurement.
A small displacer is accurately positioned in the liquid medium using a servo motor. The displacer is suspended on a stainless steel wire which is wound onto a finely grooved drum housed within the transmitter unit.
The drum is driven via two coupling magnets which are completely separated by the drum housing. One magnetic ring is connected to the wire the other is connected to the drive motor. As the inner ring turns, its magnetic attraction causes the outer ring to turn as well, thus turning the entire drum assembly.
The weight on the wire puts torque on the outer ring and this torque is detected by a unique electromagnetic transducer on the inner ring.
The change of magnetic flux generated between the drum assembly and the servo-driven coupling magnets is converted into a voltage.
At the operator's command, the displacer is lowered, and as it touches the liquid its apparent weight is reduced by the buoyant force of the liquid. As a result, the torque in the magnetic coupling changes, and this change is measured by a Hall detector. The signal, an indication of the position of the displacer, is sent to the motor control circuit. As the liquid level rises or falls, the position of the displacer is adjusted by the drive motor, giving continuous measurement accurate to 0.9 mm.
The weight of a column of liquid generates a hydrostatic pressure. At constant density, the hydrostatic pressure is a function of the height h of the column of liquid only.
Phydrostatic = h.d
h = distance between the surface of the liquid and the centre of the process diaphragm.
d = density
Bubbler Level Measurement System
If the process liquid is corrosive or radioactive, or contains solid particles, it is desirable to prevent it from coming into direct contact with the level transmitter. In such cases a bubbler system, which utilizes a purge gas, can be used.
A bubbler tube is immersed to the bottom of the vessel in which the liquid level is to be measured, and a gas (called the purge gas) is allowed to pass through it. Consider that the tank is empty: in this case the gas escapes freely at the end of the tube, so the gas pressure inside the bubbler tube (called the backpressure) is at atmospheric pressure. However, as the liquid level inside the tank increases, the pressure exerted by the liquid at the base of the tank (and at the opening of the bubbler tube) increases. The hydrostatic pressure of the liquid in effect acts as a seal, restricting the escape of purge gas from the bubbler tube. As a result, the gas pressure in the bubbler tube continues to increase until it just balances the hydrostatic pressure (P = d⋅H) of the liquid. At this point the backpressure in the bubbler tube is exactly the same as the hydrostatic pressure of the liquid, and it remains constant until the liquid level changes; any excess supply pressure escapes as bubbles through the liquid. The backpressure therefore tracks the level proportionally, since the density of the liquid is constant.
A level transmitter (DP cell) can be used to monitor this backpressure. In an open tank installation, the bubbler tube is connected to the high-pressure side of the transmitter, while the low-pressure side is vented to atmosphere. The output of the transmitter will be proportional to the tank level.
A constant-differential-pressure relay is often used in the purge gas line to ensure that constant bubbling action occurs at all tank levels. The relay maintains a constant flow rate of purge gas in the bubbler tube regardless of tank level variations or supply fluctuations. This ensures that bubbling will occur up to the maximum tank level and that the flow rate does not increase at low tank level in such a way as to cause excessive disturbances at the surface of the liquid. Note that the bubbling action has to be continuous or the measurement signal will not be accurate.
An additional advantage of the bubbler system is that, since it measures only the backpressure of the purge gas, the exact location of the level transmitter is not important; the transmitter can be mounted some distance from the process. Open-loop bubblers are used to measure levels in spent fuel bays.
Radar level transmitters work on the time-of-flight (TOF) measuring principle or time-domain reflectometry (TDR). The meter sends a high-frequency signal from an antenna, or along a probe, and from the returning echo determines the distance from a reference point to the surface of the liquid.
Radar level transmitter’s working principle
When the product surface reflects the pulse, the meter receives the reflection. Then the device calculates how long it took the pulse to return and translates that time delay into a level measurement.
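The time-to-level translation is the same arithmetic as in the ultrasonic case, except the pulse travels at the speed of light, so the round-trip times are nanoseconds rather than milliseconds. A hedged sketch (the 3 m example distance is assumed):

```python
C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def radar_distance(time_of_flight_s: float) -> float:
    """D = (c * t) / 2 -- the microwave pulse makes a round trip."""
    return C_LIGHT * time_of_flight_s / 2.0

# A surface 3 m below the antenna returns the pulse in roughly 20 ns:
tof = 2.0 * 3.0 / C_LIGHT
print(tof)                  # about 2e-8 seconds
print(radar_distance(tof))  # recovers the 3 m distance
```

The tiny time scale is why real instruments measure the delay with specialised sampling techniques rather than a simple timer: a 1 mm level change shifts the round trip by only a few picoseconds.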
Before we apply a radar meter, we need to know the dielectric constant (DC) of a product, as that has a direct impact on the quality of the reflections. In fact, products with high DC values will reflect strong, clear pulses. On the other hand, a product with a low DC value will absorb more of the pulse, reflecting less and reducing accurate readings.
Radar level measurement is a safe solution even under extreme process conditions (pressure, temperature) and vapours. Radar level transmitters can also be used in hygienic applications for non-contact level measurement. Radar level transmitters versions are available for different industries like for water/wastewater, the food industry, life sciences or the process industry. Various antenna versions for every kind of radar applications are available.
Basic radar level transmitter setup
The basic radar level transmitter setup isn’t too hard, and nowadays these devices come with a “setup wizard,” which makes them even easier. Usually, the wizard will walk us through the setup. For example, it often starts by asking which product we want to measure, then asks for the dielectric of the product, then the type of tank, and so on.
A radar level detector basically includes:
• A transmitter with an inbuilt solid-state oscillator
• A radar antenna
• A receiver along with a signal processor and an operator interface
The operation of all radar level detectors involves sending microwave beams emitted by a sensor to the surface of the liquid in a tank. After hitting the surface of the fluid, the electromagnetic waves return to the sensor, which is mounted at the top of the tank or vessel. The time taken by the signal to return, i.e. the time of flight (TOF), is then used to determine the level of fluid in the tank.
Yes, devices that use radar can have significant problems with buildup on the antenna: as the buildup increases, the signal strength drops, giving bad measurements.
Proper cleaning of the antenna will fix this problem and restore reliable measurements. Depending on the device, we can also control the cleaning from the device itself or from our programmable logic controller.
If a device doesn't have an automated option, then we'll have to clean it manually, so we may want to consider upgrading the radar device. Today's transmitters can measure even with buildup and perform their own cleanings when necessary.
Also, some devices on the market have algorithms to reduce the interference caused by buildup. That means we can maintain the device’s accuracy, even with high levels of contamination, without the need for cleaning. The ability of the transmitter to control the cleanup process using compressed air will also save us money and reduce unplanned downtimes.
Types of radar level transmitters
We have two kinds of radar level transmitters:
• Noninvasive or Non-contact Systems
• Invasive or Contact Systems
Noninvasive radar level measurement
Radar level measurement is based on the principle of measuring the time required for the microwave pulse and its reflected echo to make a complete return trip between the non-contacting transducer and the sensed material level. Then, the transceiver converts this signal electrically into distance/level and presents it as an analogue and/or digital signal. The transducer’s output can be selected by the user to be directly or inversely proportional to the span.
Pulse radar has been used widely for distance measurement since the very beginnings of radar technology. The basic form of pulse radar is a pure time-of-flight measurement: short pulses, typically of nanosecond duration, are transmitted and the transit time to and from the target is measured.
Everything inside the tank that conducts energy, such as level switches or heater systems, can reflect the signal. If the product has a low dielectric level, then the radar may find a false level. We may also wind up with bad readings from vapour, foam, or other product conditions.
We can find many solutions to avoid this issue – high-frequency radar level transmitters, echo analysis, stilling wells, and more. This type of radar level measurement can be very accurate.
Invasive or contact radar level measurement
The invasive method used for liquid level measurement is called guided-wave radar (GWR). In this method, a cable or rod acts as a waveguide and directs the microwave from the sensor to the surface of the material in the tank and then straight to its bottom. "The basis for GWR is time-domain reflectometry (TDR), which has been used for years to locate breaks in long lengths of cable that are underground or in building walls. A TDR generator develops more than 200,000 pulses of electromagnetic energy that travel down the waveguide and back."
The dielectric constant of the process material causes a variation in impedance and reflects the wave back to the radar. The time taken by the pulses to travel down and reflect back is measured to determine the level of the fluid.
In this method, signal degradation is very low, since the waveguide offers an extremely efficient path for the signal to travel. Hence, level measurement of materials with very low dielectric constants can be done effectively. Also, because the pulses are directed via a guide, factors like surface turbulence, foams, vapours, or tank obstructions do not influence the measurement.
The GWR method is capable of working with different specific gravity and material coatings. However, there is always a danger that the probe or rod used as a waveguide may get impaired by the agitator blade in the fluid under measurement. A typical guided wave radar system is shown in the figure below.
Effect of Pressure on Level Measurement
Level measurement systems that use differential pressure (ΔP) as the sensing method are also affected by pressure, although not to the same degree as by the temperature effects mentioned in the previous section.
Again the measured height H of a column of liquid is directly proportional to the pressure PL exerted at the base of the column by the liquid and inversely proportional to the density d of the liquid.
H ~ PL/d
Density (mass per unit volume) of a liquid or gas is directly proportional to the process or system pressure Ps.
d ~ Ps
Thus, for any given amount of liquid in a container, the pressure PL (liquid pressure) exerted at the base of the container by the liquid will remain constant, but the height will vary inversely with the process or system pressure.
H ~ 1/Ps
Most liquids are fairly incompressible and the process pressure will not affect the level unless there is significant vapour content.
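The H ~ PL/d relation above can be checked numerically. A small sketch with assumed values:

```python
RHO_WATER = 1000.0  # kg/m^3, density of water
G = 9.81            # m/s^2, gravitational acceleration

def column_height(p_liquid_pa, density_kg_m3):
    """Height of a liquid column from the pressure at its base: H = P / (rho * g)."""
    return p_liquid_pa / (density_kg_m3 * G)

# 49.05 kPa at the base of a water column corresponds to 5 m of level:
print(column_height(49_050, RHO_WATER))      # 5.0 m
# If the density doubled (d ~ Ps), the same pressure would read half the height:
print(column_height(49_050, 2 * RHO_WATER))  # 2.5 m
```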
Any given instrument is prone to errors either due to aging or due to manufacturing tolerances. Here are some of the common terms used when describing the performance of an instrument.
The range of an instrument is usually regarded as the difference between the maximum and minimum reading. For example, a thermometer that has a scale from 20 to 100°C has a range of 80°C. This is also called the FULL SCALE DEFLECTION (f.s.d.).
The accuracy of an instrument is often stated as a % of the range or full scale deflection. For example, a pressure gauge with a range of 0 to 500 kPa and an accuracy of ±2% f.s.d. could have an error of ±10 kPa. When the gauge indicates 10 kPa, the correct reading could be anywhere between 0 and 20 kPa, so the actual error in the reading could be 100%. When the gauge indicates 500 kPa, the error could be 2% of the indicated reading.
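The numbers in that example can be reproduced with a short sketch (the helper name is our own):

```python
def error_band(range_low, range_high, accuracy_pct_fsd):
    """Absolute error implied by an accuracy stated as % of full-scale deflection."""
    fsd = range_high - range_low
    return fsd * accuracy_pct_fsd / 100.0

# 0-500 kPa gauge with +/-2% f.s.d. accuracy:
err = error_band(0, 500, 2)   # +/-10 kPa anywhere on the scale
print(err)                    # 10.0
# Reading 10 kPa: up to 100% relative error. Reading 500 kPa: only 2%.
print(100 * err / 10, 100 * err / 500)  # 100.0 2.0
```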
If an accurate signal is applied and removed repeatedly to the system and it is found that the indicated reading is different each time, the instrument has poor repeatability. This is often caused by friction or some other erratic fault in the system.
Instability is most likely to occur in instruments involving electronic processing with a high degree of amplification. A common cause of this is adverse environment factors such as temperature and vibration.
For example, a rise in temperature may cause a transistor to increase the flow of current which in turn makes it hotter and so the effect grows and the displayed reading DRIFTS. In extreme cases the displayed value may jump about. This, for example, may be caused by a poor electrical connection affected by vibration.
In any instrument system, it must take time for a change in the input to show up on the indicated output. This time may be very small or very large depending upon the system, and is known as the response time of the system. If the indicated output is incorrect because it has not yet responded to the change, then we have time lag error.
A good example of time lag error is an ordinary glass thermometer. If you plunge it into hot water, it will take some time before the mercury reaches the correct level. If you read the thermometer before it has settled down, then you will have time lag error. A thermocouple can respond much more quickly than a glass thermometer, but even this may be too slow for some applications.
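A common way to model that lag is as a first-order system. The sketch below assumes an illustrative 5-second time constant for the thermometer:

```python
import math

def first_order_response(t_s, initial, final, tau_s):
    """Indicated reading of a first-order instrument (e.g. a glass thermometer)
    t_s seconds after a step change: T(t) = final + (initial - final) * exp(-t/tau)."""
    return final + (initial - final) * math.exp(-t_s / tau_s)

# Thermometer at 20 C plunged into 80 C water, assumed time constant tau = 5 s:
print(round(first_order_response(5, 20, 80, 5), 1))   # after one tau: ~57.9 C
print(round(first_order_response(25, 20, 80, 5), 1))  # after five tau: ~79.6 C
```

Reading the thermometer at t = 5 s would under-report the water temperature by more than 20 °C, which is exactly the time lag error described above.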
When a signal changes a lot and quite quickly, (speedometer for example), the person reading the dial would have great difficulty determining the correct value as the dial may be still going up whe n in reality the signal is going down again.
Most forms of equipment have a predicted life span. The more reliable it is, the less chance it has of going wrong during its expected life span. The reliability is hence a probability ranging from zero (it will definitely fail) to 1.0 (it will definitely not fail).
Drift occurs when the input to the system is constant but the output tends to change slowly. For example, when switched on, the system may drift due to the temperature change as it warms up.
|
Objectives Of Ionic Cpd And Two Step Mole
ChemistryWiki | RecentChanges | Preferences
Objectives of Ionic Cpd and Two Step Mole
Essential Questions (EQ2): Are there any basic particles?
The student should be able to:(bolded number are MA Framework Standard numbers)
1. Draw correct Lewis Dot Structures of ionic compounds.(4.2)
2. Correctly set up and calculate One Step or Two Step Mole Calculation Problems using the Mole Wheel in your folder, including molar mass calculation and the sig fig rules.[5.3,5.4](5.3)
3. Understand and be able to calculate percent composition problems, both from mass data and from a chemical formula.
4. Understand the difference between an Empirical Formula (EF) and Molecular Formula (MF) and how it relates to the subscripts of the chemical formula.[5.4](5.4)
5. Calculate EF and MF. This includes determining EF from mass data and percent composition info, and MF from mass data, percent composition, and EF data. [5.4](5.4)
6. Provide the correct chemical name from the chemical formula or vice versa for both ionic (IC) and molecular compounds (MC), including:(4.6)
2019-2020 Added:
7. Understand how to calculate the number of moles of a solute in a solution and vice versa (called Molarity calculations). Molarity calculations also make another "spoke" on the Mole Wheel.
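As an illustrative sketch of objectives 3 and 7 (the atomic masses and solution values below are assumed for the example):

```python
# Percent composition of water, H2O (objective 3):
MM_H, MM_O = 1.008, 16.00         # g/mol
molar_mass_h2o = 2 * MM_H + MM_O  # 18.016 g/mol
pct_H = 100 * 2 * MM_H / molar_mass_h2o
pct_O = 100 * MM_O / molar_mass_h2o
print(round(pct_H, 2), round(pct_O, 2))  # 11.19 88.81

# Molarity as a "spoke" on the Mole Wheel (objective 7):
# moles of solute = molarity (mol/L) x volume (L)
moles = 0.50 * 2.0  # a 0.50 M solution, 2.0 L of it
print(moles)        # 1.0 mol of solute
```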
Last edited December 5, 2019 1:44 pm (diff)
|
2007-08 AM Patterns of Participation: Literature and Criticism in the 19th and 20th Centuries
From Angl-Am
Revision as of 10:12, 8 December 2007 by Florian Gubisch (Talk | contribs)
• Time: Thursdays 2-4 pm
Introduction. Technicalities.
[meeting postponed, Akkreditierung]
Nineteenth-Century Concepts of Criticism: Oscar Wilde, “The Critic as Artist” (1889)
Questions for next week's discussion (15.11.07):
Wilde: The Critic as Artist
How does Wilde arrive at the position in the final paragraph?
What are the implications of this essay for public society? (For example, do they participate or not?)
Arnold: The Function of Criticism at the Present Time
What is Arnold's definition of literature in this essay?
What is his definition of criticism?
What other definitions of criticism are there? What is he against?
What does Arnold say about the relation of criticism to the public, politics, practice, and creativity?
What differences are there between England and the continent?
P.S. My comments for each session can be found on Stud.IP under "discussion".--Lindsay 21:58, 12 November 2007 (CET)
Nineteenth-Century Concepts of Criticism: Matthew Arnold, “The Function of Criticism at the Present Time” (1864)
General questions for session on 29.11.07:
How does this poem try to address a public event and discussion? What kind of effect does it hope to create? Is it successful? If so, how is this effect achieved?
[meeting postponed, Conference]
Public Poetry: Alfred Tennyson, “The Charge of the Light Brigade.” (1854) (presented by Andreas Sprenkel & Gordon Barnard)
Secondary Reading:
The handout can be downloaded following this link:
Handout to the presentation "The Charge of the Light Brigade"
Public Poetry: W. B. Yeats, "Easter 1916." (1919)
Questions for our discussion (Dec.06.2007)
1. What impression do the rhyme and the metre create?
2. What is the main theme of the poem? Can you explain how it develops?
3. Can you describe the speaker's position on the event "The Easter Rising"?
Public Poetry: W. H. Auden, "Spain 1937". / "September 1, 1939".
Etexts and Other Links on Auden
Questions: Spain - Spain 1937
1) "Yesterday, Today, Tomorrow" structure the poem. What perception of time does Auden have?
2) The poem can also be read as a call to action. What's your opinion?
3) How does the poem discuss the ethics of killing?
Secondary Reading:
Melanie Williams, 2004
The following item needs to be re-scheduled.
Poets as Critics: Political Journalism by T.S.Eliot and Ezra Pound; [alternatively/ additionally]: Eliot and the Poetics of Modernism: T. S. Eliot, “Tradition and the Individual Talent” vs. Wordsworth, “Preface to the Lyrical Ballads”
Writers as Critics:
Woolf, Virginia. Mr. Bennett and Mrs. Brown (1924)
Presentation by Julia Jung, Ann-Katrin Ahlers
Virginia Woolf, The Common Reader (1925)
Virginia Woolf’s contributions to the Times Literary Supplement
Exclusionist Writing? V. Woolf. Selected Texts from Monday and Tuesday (1921) <http://ebooks.adelaide.edu.au/w/woolf/virginia/w91m/>
Implicit and Explicit Politics in James Joyce, A Portrait of the Artist as a Young Man (1916)
Implicit and Explicit Poetics in James Joyce, A Portrait of the Artist as a Young Man (1916)
Course Evaluation. – Final Discussion.
Feedback on Course Evaluation. – Discussion of Term Paper Projects.
|
import sympy as sym  # library for symbolic computing

# Make two symbolic variables x and y
x, y = sym.symbols("x y")
print(x)  # this will simply print the text "x"

# Make matrices and vectors using symbols
F = sym.Matrix([x*y, x**2 - y])                  # vector of length 2
print(F)
A = sym.Matrix([[x*y, x + y], [x - y, x*y**2]])  # matrix of size 2x2
print(A)

# Evaluate symbolic expressions, using numbers instead of x and y
print(F.subs({x: 1, y: 2}))  # set x=1 and y=2 (NB: only in this line)
print(A.subs({x: 1, y: 2}))  # set x=1 and y=2

# Compute the Jacobian of a symbolic expression
J = F.jacobian([x, y])  # must give the variables we differentiate w.r.t.
print(J)

# Solve Ax = b
b = sym.Matrix([x + y, x - y])
print(A.solve(b))                     # symbolic solution
print(A.solve(b).subs({x: 1, y: 2}))  # evaluated solution

# Can also evaluate first, and then solve the system (this is faster)
A_s = A.subs({x: 1, y: 2})
b_s = b.subs({x: 1, y: 2})
print(A_s.solve(b_s))
print(A_s.solve(b_s).norm())  # compute the norm of the answer
|
Documentation:CHBE Exam Wiki/Final Exam 2016W/Question 1
From UBC Wiki
CHBE 241
Exam resources wiki
Chemical and Biological Engineering
Welcome to the CHBE Exam Resources Wiki!
This wiki is intended to host past exams
with fully worked-out hints and solutions
Past Exams
Final Exam 2016W
Midterm Exam 1 2016W
Midterm Exam 2 2016W
Problem Sets
Module 1 - Process Basics
Module 2 - Reactors
Module 3 - Separations 1
Module 4 - Separations 2
Module 5 - Non-reactive Energy Balances
Module 6 - Reactive Energy Balances
The water gas shift reaction is commonly used for producing hydrogen and follows the reaction given below:
CO + H2O ↔ CO2 + H2
Say we have a water gas shift reactor with 3 entering streams. The first stream (#1) contains 10 weight% CO2, 30 weight% CO and the remainder as nitrogen (N2) and enters at 100 kg/h. The second stream (#2) contains 40 mol% hydrogen and the remainder as CO. The third entering stream (#3) contains only water. Only one product stream exits and it contains a mixture of all the entering species. The reactor achieves an 80% conversion of the entering CO. The reactants and products enter and leave at 300 °C and 1 atm and the reactor exchanges energy by giving off or taking in heat to achieve this. The exiting stream is not under equilibrium.
Physical Data
MW CO2 : 44 g/mol
MW CO : 28 g/mol
MW N2 : 28 g/mol
Question 1a [5 points]
Draw a flowchart for the process labeling all streams and components.
Question 1b [5 points]
Indicate the mole fractions of all components and the total molar flow in kmol/h for stream #1.
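One way to sketch the stream #1 arithmetic, using the mass fractions and molar masses given in the problem (variable names are our own):

```python
# Stream #1: 10 wt% CO2, 30 wt% CO, balance N2, at 100 kg/h.
feed = 100.0  # kg/h
mass = {"CO2": 0.10 * feed, "CO": 0.30 * feed, "N2": 0.60 * feed}  # kg/h
MW = {"CO2": 44.0, "CO": 28.0, "N2": 28.0}                         # kg/kmol

moles = {sp: m / MW[sp] for sp, m in mass.items()}  # kmol/h of each species
total = sum(moles.values())
x = {sp: n / total for sp, n in moles.items()}      # mole fractions

print(round(total, 2))                           # ~3.44 kmol/h total
print({sp: round(v, 3) for sp, v in x.items()})  # CO2 ~0.066, CO ~0.311, N2 ~0.623
```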
Question 1c [5 points]
Do a degrees of freedom analysis on the reactor. Is the problem over-specified, under-specified or adequately specified?
Question 1d [5 points]
The pressure gauge for the reactor was found to not be accurate, so a manometer was installed instead, with one end attached to the reactor and the other end open to the atmosphere. A liquid with a specific gravity of 5.27 (relative to water at 4°C) was used in the manometer. The atmospheric pressure was found to be 100 kPa. The fluid was higher on the side open to the atmosphere by 50 cm. What is the absolute pressure inside the reactor in kPa?
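A hedged sketch of the 1d arithmetic (assuming that fluid standing higher on the open side means the reactor pressure exceeds atmospheric):

```python
# Manometer reading for question 1d; values from the problem statement.
G = 9.81             # m/s^2
rho = 5.27 * 1000.0  # specific gravity 5.27 relative to water -> kg/m^3
h = 0.50             # m, fluid height difference
P_atm = 100.0        # kPa

# Reactor side pushes fluid up on the open side, so add the column's pressure:
P_reactor = P_atm + rho * G * h / 1000.0  # kPa absolute
print(round(P_reactor, 1))                # ~125.8 kPa
```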
|
Skip to content
Message Passing Interface
Message Passing Interface (MPI) is the principal method of performing parallel computations on all CHPC clusters. Its main component is a standardized library that enables communication between processors in distributed processor environments.
MPI Distributions
There are numerous MPI distributions available, and thus CHPC supports only some of them: those we believe are best suited for the particular system.
More information: MPI standard page.
The CHPC clusters utilize two types of network interconnects, Ethernet and InfiniBand. Except for some nodes on Lonepeak, all clusters have InfiniBand and users should use it since it is much faster than Ethernet.
We provide a number of MPI distributions with InfiniBand support: MVAPICH2, OpenMPI, MPICH, and Intel MPI. All have fairly similar performance; however, MVAPICH2, MPICH, and Intel MPI seem to be more stable with multi-threaded programs. For each of these MPI distributions, there is a general build that works on all CHPC clusters, and also cluster-specific builds of OpenMPI and MVAPICH2 which target the cluster-specific CPUs for optimal performance. The CPU optimizations in the MPI calls should have a minor effect on performance, which is why we are moving towards the general builds. On top of that, the Intel compiler allows for multiple CPU target optimizations in one library, which is also how we build the MPIs. Builds with the GNU and PGI compilers are optimized for the lowest common denominator, which is the Ember cluster.
The general builds of OpenMPI, MPICH and Intel MPI also support multiple network interfaces in a single build; usage is described on this page.
More information from the developers of each MPI distribution can be found here:
Sourcing MPI and Compiling
Before performing any work with MPI, users need to source the MPI distribution appropriate for their needs. Each of the distributions has its pros and cons. Intel MPI has good performance and very flexible usage, but it is a commercial product that we have to license. MVAPICH2 is optimized for InfiniBand, but it does not provide flexible process/core affinity in multi-threaded environments. MPICH is more of a reference platform which has InfiniBand support through the relatively recent LibFabrics interface. Its feature set is the same as that of Intel MPI and MVAPICH2 (both of which are based on MPICH). Intel MPI, MVAPICH2 and MPICH can also be freely interchanged thanks to their common Application Binary Interface (ABI), whose main advantage is that there is no need to build separate binaries for each distribution.
Finally, OpenMPI is quite flexible, but we have seen its performance to be slightly below Intel MPI and MVAPICH2. It is also not ABI compatible with the other MPIs that we provide. If you have any doubts on what MPI to use, contact us.
Note that in the past we have provided separate MPI builds for different clusters. Since these days most MPIs provide flexible interfaces for multiple networks, we provide single builds for all clusters and allow for changing the default network interface at runtime.
To set up general MPI package, use the module command as:
module load <compiler> <MPI distro>
<MPI distro>= mvapich2, openmpi, mpich, impi
<compiler> = gcc (GNU), intel (Intel), pgi (Portland Group)
Example 1. If you were running a program that was compiled with the Intel compilers and uses mvapich2 :
module load intel mvapich2
Example 2. If you were running a program that was compiled with the PGI compilers and uses OpenMPI :
module load pgi openmpi
The CHPC keeps older versions of each MPI distribution, however, the backwards compatibility is sometimes compromised due to network driver and compiler upgrades. When in doubt, please, use the latest versions of compilers and MPI distributions as obtained with the module load command. These older versions can be found in the respective directory for each distribution, or module loaded by explicitly specifying the compiler and MPI version:
/uufs/<MPI distro>/<version><g,i,p>
Different version are indicated by version numbers, and what compiler was used (GNU, Intel, PGI, respectively). If no compiler tag is given, assume that it is GNU.
Compiling with MPI is quite straightforward. Below is a list of MPI compiler commands with their equivalent standard version:
Language        MPI Command        Standard Commands
C               mpicc              icc, pgcc
C++             mpicxx             icpc, pgCC
Fortran 77/90   mpif90, mpif77     ifort, pgf90, pgf77
When you compile, make sure you record what version of MPI you used. The std builds are periodically updated, and programs will sometimes break if they depend on the std builds.
Note that Intel MPI supplies separate compiler commands (wrappers) for the Intel compilers, in a form of mpiicc, mpiicpc and mpiifort. Using mpicc, mpicxx and mpif90 will call the GNU compilers.
Running with MPI
mpirun command launches the parallel job. For help with mpirun, please consult the manpages (man mpirun) or run mpirun --help. The important parameter is the number of MPI processes specification (-np).
To run on a cluster, or on a CHPC supported Linux desktop desktop:
mpirun -np $SLURM_NTASKS ./program
The $SLURM_NTASKS variable corresponds to SLURM task count requested with the #SBATCH -n option.
Multi-threaded MPI
For optimal performance, especially in the case of multi-threaded parallel programs, there are additional arguments that must be passed to the program. Specifically, the variable OMP_NUM_THREADS (number of threads to parallelize over) needs to be set. When running multi-threaded jobs, make sure to also link multi-threaded libraries (e.g. MKL, FFTW), and vice versa, link single threaded libraries to single threaded MPI programs.
The OMP_NUM_THREADS count can be calculated automatically by utilizing SLURM provided variables, assuming that all nodes have the same CPU core count. This can prevent accidental over or under-subscription when node or task count in the SLURM script changes:
# find number of threads for OpenMP
# find number of MPI tasks per node
set TPN=`echo $SLURM_TASKS_PER_NODE | cut -f 1 -d \(`
# find number of CPU cores per node
set PPN=`echo $SLURM_JOB_CPUS_PER_NODE | cut -f 1 -d \(`
@ THREADS = ( $PPN / $TPN )
mpirun -genv OMP_NUM_THREADS $THREADS -genv MV2_ENABLE_AFFINITY 0 -np $SLURM_NTASKS ./program
Task/thread affinity
In the NUMA (Non Uniform Memory Access) architecture, which is present on all CHPC clusters, it is often advantageous to pin MPI tasks and/or OpenMP threads to the CPU sockets and cores. We have seen up to 60% performance degradation in high memory bandwidth codes when process/thread affinity is not enforced. The pinning prevents the processes and threads from migrating to CPUs which have a more distant path to the data in memory. Most commonly we would set the MPI task to be pinned to a CPU socket, with OpenMP threads allowed to migrate over this socket's cores. All MPIs except for MPICH automatically bind MPI tasks to CPUs, but the behavior and adjustment options depend on the MPI distribution. We describe MPI task pinning in the relevant MPI section below; for more details on the problem see our blog post, which provides a general solution using a shell script that pins both tasks and threads to cores.
Running MVAPICH2 programs
MVAPICH2 by default binds MPI tasks to cores, so optimal binding of a single-threaded MPI program (one MPI task per CPU core) is achieved by plainly running:
mpirun -np $SLURM_NTASKS ./program
For multi-threaded parallel programs, we need to disable the task-to-core affinity by setting MV2_ENABLE_AFFINITY=0. This also means that we need to pin the tasks manually, which can be done using Intel compilers with KMP_AFFINITY=verbose,granularity=core,compact,1,0. To run multi-threaded MVAPICH2 code compiled with Intel compilers:
module load intel mvapich2
# find number of threads for OpenMP
# find number of MPI tasks per node
# find number of CPU cores per node
@ THREADS = ( $PPN / $TPN )
mpirun -genv OMP_NUM_THREADS $OMP_NUM_THREADS -genv MV2_ENABLE_AFFINITY 0 -genv KMP_AFFINITY verbose,granularity=core,compact,1,0 -np $SLURM_NTASKS ./program
For other compilers, the suggestions listed in the MVAPICH2 user's guide don't seem to be appropriate for multi-threaded programs. However, we have found that using MPICH's process affinity options will do the trick (as MVAPICH2 is derived from MPICH). That is, for example on a 16-core, 2-socket cluster node, running 2 tasks with 8 threads each:
mpirun -genv MV2_ENABLE_AFFINITY 0 -bind-to numa -map-by numa -genv OMP_NUM_THREADS 8 -np 2 ./myprogram
taskset -cp 8700
pid 8700's current affinity list: 0-7,16-23
taskset -cp 8701
pid 8701's current affinity list: 8-15,24-31
Jump to top of page
Running OpenMPI programs
Generally, our tests show that for InfiniBand, OpenMPI performance is slightly below that of MVAPICH2. Nevertheless, OpenMPI has a number of appealing features that have led us to provide it to CHPC users. Again, see the manpages for OpenMPI for details.
Running OpenMPI programs is straightforward, and the same on all clusters:
mpirun -np $SLURM_NTASKS $WORKDIR/program.exe
The mpirun flags for multi-threaded process distribution and binding to the CPU sockets are -map-by socket -bind-to socket.
To run an OpenMPI program multithreaded:
mpirun -np $SLURM_NTASKS -map-by socket -bind-to socket $WORKDIR/program.exe
OpenMPI will automatically select the optimal network interface. To force it to use Ethernet, use --mca btl tcp,self mpirun flag. Network configuration runtime flags are detailed here.
Running MPICH programs
MPICH (formerly referred to as MPICH2) is an open source implementation developed at Argonne National Laboratory. Its newer versions support both Ethernet and InfiniBand, although we do not provide a cluster-specific MPICH build, mainly since MVAPICH2, which is derived from MPICH, provides additional performance tweaks. MPICH should only be used for debugging on interactive nodes, single-node runs, and embarrassingly parallel problems, as its InfiniBand build does not ideally match our drivers.
mpirun -np $SLURM_NTASKS ./program.exe
Since by default MPICH does not bind tasks to CPUs, use the -bind-to core option to bind tasks to cores (equivalent to MV2_ENABLE_AFFINITY=1) in the case of a single-threaded program. For multi-threaded programs, one can use -bind-to numa -map-by numa, with details on the -bind-to option obtained by running mpirun -bind-to -help, or by consulting the Hydra process manager help page. The multi-threaded process/thread affinity seems to work quite well with MPICH; for example, on a 16-core Kingspeak node with core-memory mapping:
numactl -H
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 4 5 6 7 16 17 18 19 20 21 22 23
node 1 cpus: 8 9 10 11 12 13 14 15 24 25 26 27 28 29 30 31
taskset -cp 8595
pid 8595's current affinity list: 8-15,24-31
mpirun -bind-to core:4 -map-by numa -genv OMP_NUM_THREADS 4 -np 4
taskset -cp 9549
pid 9549's current affinity list: 0-3,16-19
Notice that the binding is also correctly assigned to a subset of CPU socket cores when we use 4 tasks on 2 sockets. Intel MPI is also capable of this, MVAPICH2 (unless using MPICH's flags) and OpenMPI don't seem to have an easy way to do this.
MPICH by default uses the slower Ethernet network for communication, to take advantage of InfiniBand, set environment variable MPICH_NEMESIS_NETMOD=ofi. However, please note that based on our tests MPICH's implementation of the InfiniBand does not seem to be highly optimized so you may get better performance with Intel MPI or MVAPICH2.
Note that all the examples above only pin MPI tasks to cores, allowing the OpenMP threads to float freely across the task's cores. Sometimes it is advantageous to also pin threads, which is described here.
Running Intel MPI programs
Intel MPI is a high performance MPI library which runs on many different network interfaces. Apart from its runtime flexibility, it also integrates with other Intel tools (compilers, performance tools). For a quick introduction to Intel MPI, see the Getting Started guide.
Intel MPI by default works with whatever interface it finds on the machine at runtime. To use it, run module load impi.
For best performance we recommend using the Intel compilers along with Intel MPI, so to build, use the Intel compiler wrappers mpiicc, mpiicpc, and mpiifort.
For example
mpiicc code.c -o executable
Jump to top of page
Network selection with Intel MPI
Since Intel MPI is designed to run on multiple network interfaces, one just needs to build a single executable, which should be able to run on all CHPC clusters. Combining this with the Intel compiler's automatic CPU dispatch flag (-axCORE-AVX512,CORE-AVX2,AVX,SSE4.2) allows building a single executable for all the clusters. The network interface selection is controlled with the I_MPI_FABRICS environment variable. The default should be the fastest network, in our case InfiniBand. We can verify the network selection by running the Intel MPI benchmark and looking at the time it takes to send a message from one node to another:
srun -n 2 -N 2 -A mygroup -p ember --pty /bin/tcsh -l
mpirun -np 2 /uufs/
# Benchmarking PingPong
# #processes = 2
#bytes #repetitions t[usec] Mbytes/sec
0 1000 1.74 0.00
It takes 1.74 microseconds to send a message there and back, which is typical for an InfiniBand network.
Intel MPI provides two different MPI fabrics for InfiniBand, one based on the Open Fabrics Enterprise Distribution (OFED), and the other on the Direct Access Programming Library (DAPL), denoted by ofa and dapl, respectively. Moreover, one can also specify intra-node communication, of which the fastest should be shared memory (shm). According to our observations, the default fabric is shm:dapl, which can be confirmed by setting the environment variable I_MPI_DEBUG to a value larger than 2, e.g.
mpirun -genv I_MPI_DEBUG 2 -np 2 /uufs/
[0] MPI startup(): shm and dapl data transfer modes
The performance of OFED and DAPL is comparable, but it may be worthwhile to test both to see if your particular application gets a boost from one fabric or the other.
If we'd like to use the Ethernet network instead (except for Lonepeak, not recommended for production due to slower communication speed), we choose I_MPI_FABRICS tcp and get:
mpirun -genv I_MPI_FABRICS tcp -np 2 /uufs/
# Benchmarking PingPong
# #processes = 2
#bytes #repetitions t[usec] Mbytes/sec
0 1000 18.56 0.00
Notice that the latency on the Ethernet is about 10x larger than on the InfiniBand.
Single and multi-threaded process/thread affinity
Intel MPI pins processes and threads to sockets by default, so no additional runtime options should be needed unless the process/thread mapping needs to be different. If that is the case, consult the OpenMP interoperability guide. For the common default pinning:
mpirun -genv OMP_NUM_THREADS 8 -np 2 ./myprog
taskset -cp 10085
pid 10085's current affinity list: 0-7,16-23
mpirun -genv OMP_NUM_THREADS 4 -np 4 ./myprog
taskset -cp 9119
pid 9119's current affinity list: 0-3,16-19
Based on our investigation detailed here, Intel MPI does the best job of pinning MPI tasks and OpenMP threads, but in the case of more exotic MPI task/OpenMP thread combinations, use our task/thread pinning script.
Common MPI ABI
From Intel MPI 5.0 and MPICH 3.1 (and MVAPICH2 1.9 and higher, which is based on MPICH 3.1), the libraries are interchangeable at the binary level, using the common Application Binary Interface (ABI). In practice this means that one can build the application with MPICH but run it using the Intel MPI libraries, thus taking advantage of the Intel MPI functionality. See details about this at
Last Updated: 6/10/21
|