Rise in eating disorders among teens
Cases among 13-to-19-year-olds rose from 959 in 2010/11 to 1,815 in 2013/14.
“The reasons behind why more children and young people have eating disorders are complex but the situation is made much worse by the onslaught of pressure that they face to have the ‘perfect body’.”
“We’re getting increasingly concerned about the pressure of social media. Literally with one click of a button very vulnerable young people are able to access 10,000 images of ‘perfect looking’ people which places them under a lot of pressure. Young people who look at these images often develop body image dissatisfaction, quite low self esteem, because they’re constantly comparing themselves to these perfect images. This is a risk factor for disordered eating and more serious eating disorders which can prove fatal.”
If you want to find out more or need support, there are lots of places that can help:
|
Jhansi holds a special place in Indian memory as the home of the brave Lakshmi Bai, later renowned as Jhansi ki Rani (the queen of Jhansi). When King Gangadhar Rao died in 1853, the British forcibly annexed the state from his widow and successor, Rani Lakshmibai. During the Indian uprising of 1857, also known as the First War of Independence, Rani Lakshmibai was at the forefront of Jhansi's rebellion. The rebels initially forced the British onto the back foot, but the following year the British exploited internal conflict among the rebel forces and defeated the Jhansi rebels. The Rani fled to Gwalior, where she fought to her last breath; in her final stand she rode out against the British disguised as a man, and she subsequently became a heroine of Indian independence.
Sights And Activities
Jhansi Fort
The fort was established by Maharaja Bir Singh Deo of Orchha in 1653. Local guides can tell you exciting stories about the bloodletting that took place within its double walls and its moats, which were once inhabited by crocodiles. The fort sits on a hilltop called Bangra and is an architectural wonder for both its beauty and its size. During the reign of the Peshwas, Jhansi Fort was the stronghold of the area because of its strategic central location.
The Jhansi museum, or state museum, is located below the fort. It is a good place to learn about the history and heritage of Jhansi and the Bundelkhand region. The museum showcases beautiful miniature paintings, illustrated manuscripts, terracotta sculptures, weapons, dresses, and photographs from the Chandela dynasty. It also houses portraits of Rani Lakshmibai, cricketers, and politicians.
Rani Mahal
This palace was the home of Rani Lakshmibai. It is a beautiful building adorned with colorful paintings and artwork on its walls. The state government has now converted Rani Mahal into a museum housing a wide collection of sculptures and paintings from the 9th to 12th centuries AD. The palace comprises arched chambers around an open courtyard, giving it a splendid outlook, and it has the distinctive character of Bundelkhand architecture.
The Jhansi Festival, a locally organised cultural program of music, arts, and dance, commences on 28th February. During the St. Jude Festival, thousands of Christian pilgrims converge on St Jude's Church to plead their case to the patron saint of lost causes.
Maha Lakshmi Temple
The presiding deity of this temple is the goddess Devi Mahalakshmi. The temple was constructed in the 18th century. It is a rewarding place to visit, as you will witness many devotees coming to seek the blessings of Mahalakshmi.
|
Concrete slabs are fundamental to modern construction. Slab thickness directly affects a building's strength and weatherproofing, and a well-designed slab can serve a structure reliably for decades.
Concrete slabs have many uses today. They support bridges, buildings, roads, and pavements, and help protect them from wind and flood damage. Thickness is crucial when building infrastructure: a slab must be sturdy enough to withstand heavy loads without cracking, yet light enough to be practical. With many types available for different uses, choosing the appropriate one can make or break a project.
Environmental conditions, local regulations, and design requirements determine the best concrete slab for a job. Understanding how each factor affects performance helps engineers choose the optimal option; a type that is excessively expensive or impracticable may have to be rejected on budget grounds.
What Is The Thickness Of A Concrete Slab?
When it comes to determining the thickness of a concrete slab, there is no one-size-fits-all answer. How thick a concrete slab should be depends on several factors including its purpose and what type of load it will bear. This article will provide an overview of how thick a concrete slab should be for various applications.
The minimum thickness for any residential or commercial building is 4 inches (10 cm). For heavy equipment, such as forklifts, the recommended thickness ranges from 6 to 8 inches (15–20 cm). The thickness also needs to take into account potential soil movement, seismic activity, and other environmental conditions that may affect the structural integrity of the foundation. Additionally, if you are laying a driveway, walkway, or patio, then 5–6 inches (12–15 cm) is usually sufficient, depending on the weight being supported by the structure.
To ensure your project’s success and longevity, it is essential to weigh all of these elements when deciding on the correct thickness of a concrete slab. Note that thicker slabs require more material and labor, but they can offer greater support over time than thinner slabs, so they may prove worth their cost in the long run. With this information in hand, we can move on to the question: how thick should a concrete slab be?
How Thick Should A Concrete Slab Be?
When constructing a building, it is essential to consider the thickness of the concrete slab. This vital component serves as a foundation and provides structural integrity, so getting it right is paramount.
Understanding how thick a concrete slab needs to be can aid in making an informed decision about which construction materials to use for any given project. Various types of concrete slabs are available, in different sizes, depths, and weights, depending on their purpose. For example, a strong foundation for commercial purposes such as parking garages or warehouses requires thicker slabs than one intended for a residential home. Fortunately, many resources are available, including concrete slab thickness charts, that can guide users toward the appropriate material for each situation.
In terms of regular home use cases, most experts recommend using at least 4 inches when constructing a house’s concrete foundation slab. However, this might vary based on factors such as soil composition or climate conditions; thus it is important to consult with specialists before deciding upon specific measurements and weight specifications. Ultimately, by taking into account the variables involved and following guidelines from reliable industry sources like concrete slab thickness charts, one can make sure they get the best results from their investment without compromising safety standards.
What Is The Minimum Thickness Of A Concrete Slab?
The structural integrity of a concrete slab depends largely on its thickness. It is estimated that up to 80% of all residential foundations are constructed using 4-inch-thick slabs. The minimum thickness of a concrete slab varies from one application to another; there is no absolute answer. Generally speaking, 3-4 inches is typically enough for a basic driveway or patio, whereas industrial applications such as large warehouses may require 8-10 inches.
When selecting the proper concrete floor thickness for your project, you’ll need to consider several factors, including its purpose, the type of loading (static or dynamic), the size and shape of the area being poured, soil conditions, and climate extremes in temperature and moisture. Concrete slab thicknesses can range anywhere from 2″ to 12″ depending on load requirements, but the most commonly used are 4″, 6″, and 8″. For instance, standard garage floors use 4″ slabs, while driveways might require 6″-8″. Ultimately, when deciding how thick a concrete foundation slab should be, the best practice is to consult local building codes and professional contractors who specialize in this field before beginning work.
What Is The Thickness Of A Concrete Foundation Slab?
Concrete slabs are a versatile and durable building material commonly used in foundation construction. According to the National Ready Mixed Concrete Association, over 500 million cubic yards of concrete is produced each year for use on our roads, bridges, buildings, and other infrastructure projects. The thickness of a concrete slab for foundations depends on several factors such as soil type, load-bearing capacity, climate conditions, etc.
When determining the minimum thickness of a foundation slab for residential applications it’s important to consider these four points: 1) local code requirements; 2) frost depth; 3) weight distribution; 4) structural integrity. Local codes set forth specific guidelines that must be adhered to when constructing any new structure. Frost depth varies regionally and should be taken into account when designing a foundation slab due to potential heaving or shifting caused by freezing temperatures. Weight distribution plays an essential role in reinforcing the strength of a concrete slab so proper calculations must be made before installing a slab. Finally, structural integrity is key to ensure that your foundation will last throughout its lifespan while providing support against wind gusts or seismic activity if located in areas prone to them.
Ultimately, the ideal thickness of a concrete foundation slab can vary depending on site-specific characteristics like soil composition and weather patterns but typically ranges between 6 inches (152 mm) and 8 inches (203 mm). To achieve maximum stability and longevity it’s recommended to consult with an experienced engineer who can advise you about what’s best for your particular project needs.
What Is The Thickness Of A Concrete Slab For A House?
When it comes to constructing a house, the thickness of a concrete slab is one of the most important considerations. Like an intricate puzzle piece that needs to fit perfectly into its place, this element can make or break your dream home. Thus, when deciding on what should be the ideal width of a concrete floor, many factors come into play.
Generally speaking, the recommended thickness for residential slabs usually ranges between 4 and 6 inches. However, depending on the application and load-bearing capacity required in the specific location where you are building your house, this range may vary significantly. For instance, if you are looking for a thicker slab to support heavier loads such as those generated by hot tubs or additional rooms like a basement or garage, then 8” will be more suitable. Additionally, if you are laying down tiles over your concrete slab then you might need 10” of extra thickness so that they do not crack easily due to movement at ground level. Ultimately, consulting with a professional builder before making any decisions could save you time and money in the long run.
Given all these factors, it is clear that there are multiple variables which must be taken into account when determining what is the best thickness for your particular project. A thorough understanding of all associated details allows for informed decisions about how thick each slab should be and ultimately helps ensure that your dream home becomes reality.
What Is The Ideal Thickness Of A Concrete Floor?
A concrete floor is a great choice for an array of applications, from residential homes to industrial buildings. When installed properly, it can last decades without needing repairs or maintenance. But how thick should the slab be? This is an important question that needs to be answered in order to ensure safe and effective use of this material.
The ideal thickness of a concrete floor depends on its intended purpose. For residential homes, a 4-inch slab provides ample support while being cost-efficient. Thicker slabs may also be necessary if heavy equipment will be used extensively; in such cases, 6 inches or more are usually recommended. Additionally, thicker slabs often provide better insulation against noise and temperature fluctuations, which can make them even more attractive for homebuilders. Furthermore, when installing a concrete floor outdoors – especially those exposed to extreme weather conditions – reinforcing mesh and other measures may need to be taken to increase its durability and strength.
Ultimately, the optimal thickness of a concrete slab varies depending on its usage requirements and local building codes; consulting with experienced professionals can help you determine what works best for your project’s specific needs.
How Thick Should Concrete Slab Be For Heavy Equipment?
Making the right choice of concrete slab thickness is essential for ensuring durability and longevity of a structure. As such, it is important to consider the purpose and function of the space before settling on an ideal measurement. When dealing with heavy equipment, such as manufacturing machinery or industrial tools, architects and engineers should take into account these factors when deciding how thick a concrete slab should be.
When selecting materials for use in high-traffic areas where large loads will be placed upon them, there are certain industry standards that must be taken into consideration. This includes looking at load bearing capacity and compression strength which greatly vary depending on the type of material used. In addition, taking into account any additional insulation needs may also be necessary due to fluctuations in temperature or sound dampening requirements. With all these considerations in mind, a good rule of thumb for heavy equipment would be to select slabs no thinner than 125mm (5 inches) thick for optimal safety and performance. By doing so, users can rest assured that their structures are secure enough to support whatever weight they require while remaining aesthetically appealing.
What Are The Standard Sizes Of Concrete Slab?
Concrete slabs are like the foundation of a building, providing stability and support. They come in various sizes to meet different needs and requirements. Here is a list of standard concrete slab sizes:
- 4 inches thick for residential patios, pathways and driveways;
- 6 inches thick for commercial sidewalks and light traffic areas with occasional heavy vehicle loads;
- 8-10 inches thick for heavier equipment and more frequent loading from vehicles;
- 12-14 inches thick for industrial structures that need extra strength.
When making your choice about the thickness of your concrete slab, it’s important to consider what kind of load will be placed on the slab, how often it will be used, and any local regulations or codes that may apply. Consider consulting an expert to determine which size best fits your project needs; it can help you make sure you get the most out of your investment by getting the right type and size of concrete slab. (The sketch below gathers these figures in one place.)
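Here is a minimal sketch in Python consolidating the guideline thicknesses quoted in this article. The application labels and inch ranges simply restate the article's rules of thumb and are illustrative only; any real slab must be sized to local codes and an engineer's review.

```python
# Illustrative consolidation of the guideline thicknesses quoted in this
# article. Values are rules of thumb only -- local codes, soil conditions,
# and an engineer's review govern any real design.

GUIDELINE_THICKNESS_IN = {
    "patio_or_pathway": (4, 4),          # residential patios, pathways
    "residential_foundation": (4, 6),    # typical house slab
    "driveway": (5, 6),                  # the article quotes 3-4 in elsewhere
    "commercial_sidewalk": (6, 6),       # occasional heavy vehicle loads
    "heavy_equipment": (6, 8),           # forklifts and similar
    "frequent_vehicle_loading": (8, 10),
    "industrial_structure": (12, 14),
}

def recommended_thickness(application: str) -> str:
    """Return the guideline thickness range, in inches, for an application."""
    try:
        low, high = GUIDELINE_THICKNESS_IN[application]
    except KeyError:
        raise ValueError(f"No guideline recorded for {application!r}")
    return f"{low} in" if low == high else f"{low}-{high} in"

print(recommended_thickness("driveway"))         # 5-6 in
print(recommended_thickness("heavy_equipment"))  # 6-8 in
```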
Frequently Asked Questions
What Are The Pros And Cons Of Different Thicknesses Of Concrete Slab?
The thickness of concrete slabs is an important factor to consider when constructing a building. Thickness affects the insulation, strength and durability of the finished structure. It is therefore crucial that the most suitable slab thickness be chosen based on the specific application and environment. This article will examine the pros and cons of different thicknesses of concrete slab.
Thicker slabs offer higher levels of stability and resistance to environmental factors such as moisture or temperature changes. The increased weight also adds to their resilience against seismic activity or other extreme events. On the downside, thicker slabs require more material, which can lead to greater costs in terms of labor and resources needed for construction. Furthermore, they are heavier than thinner slabs and may need extra support during installation.
When choosing a slab thickness, it is essential to take into account all relevant factors including cost considerations, space requirements, structural integrity needs as well as environmental conditions present at the site where it will be installed. By doing so, one can make sure that an optimal balance between performance and cost-effectiveness is achieved.
How Long Does A Concrete Slab Last?
Concrete slabs are like the foundation of a house: they provide support and stability, allowing it to last for many years. Understanding how long a concrete slab lasts is essential to making sure a construction project meets its expected lifespan. To help with this, consider an analogy: imagine the concrete slab as a racecar, capable of great speed and power but needing regular maintenance to keep running at its best for as long as possible.
The longevity of a concrete slab depends on several factors such as weather conditions, quality of materials used and environmental stressors. Proper use and care can significantly extend the life of the slab, ensuring optimal performance over time. Here’s what we need to do to ensure our ‘race car’ runs well:
- Install reinforcements such as rebar or mesh reinforcement;
- Apply waterproofing agents during installation;
- Maintain proper drainage around the area where the slab will be placed.
When these three elements are taken into account, a properly constructed concrete slab can be expected to have a lifespan of 25-50 years, depending on usage and environment. With adequate preventive measures in place, the lifetime can even exceed 50 years in some cases! The key is to understand when your ‘race car’ needs service and repairs so you can maintain it accordingly.
So there you have it – by taking necessary precautions and employing appropriate techniques while constructing your concrete slabs, you should be able to enjoy their benefits for many decades without worrying about premature deterioration or failure. Don’t forget – regular checkups are important in keeping them running smoothly!
What Is The Cost Difference Between Thicker And Thinner Slabs?
Similar to an artist’s canvas, the thickness of a concrete slab can affect both its cost and longevity. Cost differences between thicker and thinner slabs are not always obvious; however, they should be taken into account when considering one for any project.
When evaluating the cost difference between thicker and thinner slabs, it is important to consider five key factors: strength, durability, ease of installation, insulation properties and aesthetic appeal. Strength plays a vital role in ensuring that the slab will provide adequate support. Durability ensures that the slab will withstand all weather conditions over time without deteriorating quickly or easily cracking. Ease of installation helps determine how much labour and materials are needed during construction as well as any additional costs associated with special tools or techniques. Insulation properties help to regulate temperatures within a building which can result in lower energy costs over time. Finally, the aesthetic appeal influences how desirable the finished product looks depending on colour variations or specific design requirements.
These factors need to be analysed carefully so that informed decisions about slab thicknesses can be made prior to purchase. As such, contractors must take many elements into consideration before determining what type of slab best meets their needs while still staying within budget constraints. Ultimately, this process requires careful research and evaluation with sound judgement being applied throughout each step; only then can customers ensure they get the most value out of their investment while also achieving desired results from their projects.
Is It Better To Use Reinforced Or Unreinforced Concrete For Slabs?
When deciding between reinforced and unreinforced concrete slabs, it is important to consider both the cost and performance benefits. Reinforced concrete offers a longer-lasting product with better durability, while unreinforced concrete may be more economical for certain applications. Here are five key points to help you decide which option is best for your project:
- Reinforced concrete can provide greater strength than unreinforced concrete due to its steel reinforcement bars or mesh that increase tension resistance. This makes it more suitable for heavier loads such as driveways, loading docks, and parking garages.
- Unreinforced concrete is less expensive than reinforced options because of the additional materials associated with reinforcing the slab. It also has higher thermal mass properties in comparison to other materials, so it can be beneficial in sun-exposed areas where heat gain needs to be reduced.
- Rebar typically provides superior fire protection since it increases the temperature at which structural failure occurs; however, this should not replace standard fireproofing measures like sprinklers or smoke detectors. Additionally, rebar adds considerable weight to a structure and must be considered during design phases if building size restrictions apply.
- Both types of slabs require proper engineering support for successful installation and long-term use; therefore, consulting an experienced engineer before starting any project will ensure compliance with local codes and standards. Proper maintenance should also take place regularly following completion of a slab project as well as monitoring over time to detect early signs of cracks or damage from load impact.
Finally, understanding the differences between reinforced and unreinforced concrete will allow homeowners and builders alike to determine which type of slab works best for their particular application without sacrificing quality or longevity to budget concerns.
What Is The Difference Between A Concrete Slab And A Poured Concrete Foundation?
A picture is worth a thousand words, and it’s no different when considering the differences between concrete slabs and poured concrete foundations. Many people are unaware of how these two building materials differ, but they differ in several ways.
The primary distinction lies in their application: while both can be used for flat surfaces such as floors, walkways and patios, the slab is generally thinner than the foundation. Concrete slabs are typically 2 to 4 inches thick whereas poured concrete foundations often measure 8 or more inches in thickness. Furthermore, most poured concrete foundations have reinforcement rods or grid-work which helps them support greater amounts of weight compared to regular concrete slabs.
When making your selection one should consider:
- Strength:
  - Poured concrete foundation – greater strength, due to thicker material with reinforced steel grids inside
  - Concrete slab – thinner material without additional reinforcement
- Cost:
  - Poured concrete foundation – more expensive, due to the deeper excavation needed and the extra reinforcement required
  - Concrete slab – less expensive, because less excavating is necessary and no extra reinforcing is needed
To summarize, understanding what you want to use each type of product for is key to choosing the right option for your project. Though cost may factor into this decision, strength should remain the core focus when selecting either a poured concrete foundation or a concrete slab.
When choosing a concrete slab thickness, one must weigh the benefits and drawbacks. Thicker slabs last longer and need less reinforcement, although they cost more because of their weight and materials. Thinner slabs are cheaper to install and can be strengthened with steel mesh or rebar, but they have shorter lifespans because they are weaker.
Remember that dwellings generally call for poured concrete foundations rather than simple slabs. Before committing to a concrete slab building project, consider cost, durability, and longevity.
In conclusion, many factors affect concrete slab thickness, so careful thought should always precede the choice. By researching and understanding all the options, one can choose the best financial and structural solution; thinking before acting is the surest route to long-term success.
|
Table of Contents
What Is Intellectual Property?
Intellectual property (IP) comprises ideas, information, and knowledge. In an academic context, IP can be seen as the product of research: “intellectual” because it is an innovative product, and “proprietary” because it is considered a marketable one.
Intellectual property rights (IPRs) are specific legal claims that protect intellectual property owners. Intellectual property rights can be classified into the following main categories.
A patent is a legal monopoly, valid for 20 years, granted by the patent office in exchange for a full description of an invention and payment of fees.
A patent position is destroyed by making the idea public before filing a patent application (except for a short grace period in the United States). Consider patenting before publication.
Copyright applies to literary and dramatic works, artistic and musical works, sound and video recordings, broadcasts, and cable transmissions.
Copyright is also the standard way to protect software, although some programs can be patented if they are part of an invention.
Copyright arises automatically, does not have to be claimed, and lasts until 70 years after the author’s death.
Database rights
Database rights apply to databases not protected by copyright (a European right only; the maximum term is 15 years).
Design law applies to aspects of the shape or configuration of an article. Unregistered design rights may protect internal or external features (e.g., of computer chips). For registered designs, the features must be attractive and judged by eye. (Registered design rights last for a maximum period of 25 years.)
A trademark is a mark (such as a logo) or another distinctive sign applied to or associated with products or services, and it must not merely describe those products or services. (Once registered, a trademark can last indefinitely.)
Confidential information is knowledge that only you have and that you disclose only under a confidentiality/nondisclosure agreement.
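As a compact summary of the protection terms listed above, here is a minimal sketch in Python. The durations simply restate this article's figures; actual terms vary by jurisdiction, so treat the mapping as illustrative rather than legal reference.

```python
# Protection terms as stated in this article; actual terms vary by
# jurisdiction and are subject to conditions such as renewal fees.

IP_TERMS = {
    "patent": "20 years from filing",
    "copyright": "life of the author + 70 years",
    "database right": "15 years maximum (European right only)",
    "registered design": "25 years maximum",
    "trademark": "unlimited once registered (subject to renewal)",
    "trade secret": "as long as the information stays confidential",
}

for right, term in IP_TERMS.items():
    print(f"{right:>17}: {term}")
```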
Four Types Of Intellectual Property
- Copyright ©
- Patents
- Trademarks
- Trade secrets
What Is Copyright
According to Copyright.gov’s copyright guide, “Copyright is a form of protection established by United States law (Title 17, US Code) for authors of ‘original works of authorship.’” Copyright protects writing, pictures, music, art, and other forms of intellectual creation. It means that if you have written or created something, others may not reuse it without your permission; you hold the copyright in that work. If users want to use, reuse, or rearrange your work, they must first contact you, attribute it to you as the owner, and obtain your permission for the intended purpose. There remains one exclusion to this rule: fair use. Anyone may use limited parts of a work for educational purposes, parody, commentary, or news.
What Are Patents
According to the United States Patent Office, “a patent for an invention is the grant of a property right to the inventor.” Typically, a patent is valid for 20 years from the date the inventor files the application with the US government. The list of things that can be patented is quite long and open to interpretation: any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may be patented.
What Are The Trademark
According to the USPTO, a trademark is a “word, phrase, symbol, or design, or a combination thereof, which identifies and distinguishes the origin of some goods from those of others.” If, for example, your company name, logo, or slogan is distinctive, it can be registered as a trademark. A trademark serves as brand identification for your business or products. Images, slogans, and even colors may be trademarks. For example, Tiffany Blue is a brand color that Tiffany & Co. uses in promotional materials, boxes, bags, and more.
What Are Trade Secrets
In general, any sensitive business information that gives a company a competitive advantage can be considered a trade secret. For example, Coca-Cola’s secret formula could be regarded as a trade secret; if you were to obtain that formula and use it to make identical soft drinks, you would be violating Coca-Cola’s trade secret. That is the classic example, but trade secrets can also include distribution methods (Walmart), sales methods, consumer information, advertising campaigns and strategies, supplier lists, customer lists, and production methods. Trade secrets are usually revealed through corporate (industrial) espionage, breach of contract, or something as simple as leaving a prototype iPhone in a bar.
|
The Black Guard (Arabic: عبيد البوخاري, meaning "servants of al-Bukhari") were the corps of West African and black Moroccan soldiers assembled by the Alaouite sultan of Morocco, Moulay Ismail (reigned 1672–1727). The Black Guard descended from black tribes brought to Morocco from sub-Saharan Africa, who were settled with their families in special colonies at Mechra er-Remel to have children and to work as indentured servants. At age 10 the children were trained in certain skills: the girls in domestic life or entertainment, and the boys in masonry, archery, horsemanship, and musketry. At age 15, those who were chosen entered the army. They would marry, have children, and continue the cycle. Considered more loyal than Arab or Berber warriors because they lacked tribal affiliations, Ismail's black soldiers formed the bulk of his standing army, numbering 150,000 at their peak.
The Black Guard were mainly in charge of collecting taxes and patrolling Morocco's unstable countryside; they crushed rebellions against Ismail's rule not only by dissident tribes but also by Ismail's seditious sons, who defected from service as his provincial governors to insurrection as would-be usurpers of his throne. The Black Guard were the personal guard and servants of Sultan Ismail. They may also have participated in campaigns against the European-controlled fortress enclaves dotting his empire's coast (such as Tangier, taken over after the English abandoned and destroyed it in 1684), although tasks of this kind were often allocated to European slaves (called Aluj, Arabic: العلوج, plural of Alj, meaning "white Christian slave") and to loyal Moroccan tribal soldiers, who were considered better suited to military and cavalry roles. The Black Guard were well respected, well paid, and politically powerful. Around 1697-1698 they were even given the right to possess property.
Moulay Ismail always went about his court surrounded by a bodyguard of eighty black soldiers, with muskets and scimitars at the ready in case of any attempt on the sultan's life. At his throne, Ismail was attended by a servant charged with twirling a parasol above the sultan at all times (a legend says that on at least one occasion, Ismail pulled out his sword and murdered an attendant who had allowed the sun to briefly fall upon his skin).
Despite endless civil wars and slaughter, the Black Guard remained fiercely loyal and disciplined through the turmoil of Ismail's reign. More than any other factor, they enabled the sultan to remain on Morocco's throne for half a century.
After Ismail's death, the quality of the 'Abid declined, as they were no longer paid as well. Some became brigands; others quit and moved to the cities. Subsequent rulers attempted, and some managed, to resurrect the corps, but it was never as formidable as in Ismail's time. The main body was dissolved in the 19th century, with only a handful retained as personal bodyguards to the king.
The Black Guard was renamed the Moroccan Royal Guard after Morocco gained its independence in 1956, but this unit is not composed of descendants of the black slaves, since its members are selected from elite units within the Moroccan Army. Descendants of the Black Guard still work as servants at the King's palace; they were considered the personal possession of the king, inherited from father to son, until Morocco abolished indentured servitude at the start of the 20th century.
|
Highway and traffic signs use a number of different types of supports.
All sign supports on highways within the clear zone must either be of a breakaway type meeting crashworthiness criteria of either NCHRP Report 350 (for sign supports installed prior to December 31, 2019) or the AASHTO Manual for Assessing Safety Hardware (MASH), or be protected by guardrail, barrier, or an energy absorbing system meeting NCHRP Report 350 or MASH criteria.
Wood and steel are the two primary materials used for small sign supports. Larger sign supports, such as cantilever structures or sign bridges, are usually made of steel. Other metals such as aluminum may be used.
Wood posts are used in a number of states for permanent or temporary signs. These posts usually come in sizes from 4x4 inches to 6x8 inches, with larger box beam sizes made of laminated plywood used for larger guide signs.
All posts above 4x6 inch nominal size must be drilled perpendicular to traffic flow to allow the post to break away when struck by a motor vehicle, yet still maintain rigidity against wind forces and other non-impact loads.
Permanent wood sign posts are made of redwood or pressure-treated softwoods. Temporary sign posts can be made of non-treated wood.
There are four primary types of steel shapes used for sign supports:
U-Channel: This post type is often preferred for maintenance and light construction applications, since it is easy to drive directly into the ground and assemble, and has low cost. Sizes are defined by weight per foot, e.g., a "3 lb/ft post".
Disadvantages of U-channel include its lower loading capacity and the inability to mount signs at right angles on the same post.
No more than 3 U-channel posts should be used for a single sign without addition of a breakaway feature.
Square Tube: This post type is characterized by a square shape, sometimes perforated by mounting holes at 1 inch spacings. Sizes are defined by outside square dimensions. Standard sizes range from 1 1/2 inches to 2 1/2 inches in 1/4 inch increments.
A square tube post can be made much stronger or spliced by inserting another smaller square tube post inside. This is called creating a "telescoping" post.
The advantages of square tube posts are excellent flexibility in mounting options, greater loading capacity over U-channel, the ability to mount signs on 4 sides of the post, and the ability to quickly replace damaged posts by pulling the old post out of the foundation and inserting a new post.
Disadvantages include a moderately higher cost per installation over U-channel.
Some sizes and combinations of square tube posts may require the use of slip bases or other breakaway features to meet crashworthiness criteria.
Tubular Steel: This post type is characterized by a round steel tube or pipe shape. Sizes are defined by nominal steel tube diameters. The sign panels are either bolted directly through the post or clamped to the outside of the post.
The advantages of tubular steel posts are low cost, the ability to use commonly available steel pipe, and the ability to mount signs at any desired angle.
Disadvantages include the need to field-drill the post or use special clamps to attach signs, and the requirement to use special proprietary breakaway hardware for larger post sizes if installed in the clear zone.
There are variants on tubular posts offered by vendors that use a different shape and cross-section for the post, but share many of the same characteristics as standard tubular posts.
Breakaway: This post type is characterized by an I-beam type shape. Sizes are defined by standard AISC beam designations. The upper portions of steel breakaway posts are often "coped" or diagonally cut away to reduce the total weight of the post system.
The advantages of large steel I-beam posts are the ability to support very large sign panels, and the ability to quickly replace damaged posts by bolting a new post onto the existing breakaway foundation.
Disadvantages include a difficulty in breaking away safely except in a single direction, unless special proprietary hardware is used.
All large steel posts use a breakaway feature, unless protected by barrier or placed out of the clear zone. This is usually accomplished by adding a slotted plate to the top of the foundation post and another slotted plate to the bottom of the sign post, and by cutting the post just below the sign panel and adding a hinge system. The bottom plates are then bolted together at a specified torque. When struck, the post slips off the foundation at the bottom, and rotates around the hinge plate below the sign panel (see diagram). This allows the vehicle to safely pass under the sign after impact.
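The crashworthiness rules scattered through this page can be collected into a simple decision helper. The sketch below is hypothetical: the function name, parameters, and thresholds merely paraphrase the rules stated above, and real installations are governed by NCHRP Report 350 / MASH criteria and agency policy.

```python
# Hypothetical decision helper paraphrasing the rules on this page:
# wood posts above 4x6 in nominal must be drilled; more than 3 U-channel
# posts on one sign need a breakaway feature; large steel posts need a
# breakaway base unless shielded by barrier or outside the clear zone.

def post_treatment(kind: str, in_clear_zone: bool = True,
                   shielded: bool = False, wood_size=(4, 4),
                   u_channel_count: int = 1) -> str:
    if not in_clear_zone or shielded:
        return "no breakaway treatment required"
    if kind == "wood":
        width, depth = wood_size
        if width * depth > 4 * 6:          # above 4x6 in nominal size
            return "drill holes perpendicular to traffic flow"
        return "standard installation"
    if kind == "u_channel":
        if u_channel_count > 3:
            return "add breakaway feature"
        return "standard installation"
    if kind == "steel_breakaway":
        return "slip base plus hinge below sign panel"
    return "check crashworthiness criteria (NCHRP 350 / MASH)"

print(post_treatment("wood", wood_size=(6, 8)))         # drill holes ...
print(post_treatment("u_channel", u_channel_count=4))   # add breakaway feature
```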
Whenever reasonably feasible, existing structures such as overpasses should be used for support of overhead signs. This reduces cost and improves safety by minimizing the number of separate structures needed.
There are two major types of overhead signs:
Cantilever: Cantilever signs are used to place a single sign or multiple smaller signs over one side of a highway.
Different styles of cantilever signs include tapered steel tube, single steel tubular, and truss type construction.
Sign Bridge: Sign bridges are used to place single or multiple signs over specific lanes or portions of a highway.
Different styles of sign bridges include tapered steel tube, single steel tubular, and truss type construction.
|
BY DR. STEPHANIE NANI
Nearly everyone experiences muscle pain from time to time, but it is usually temporary and resolves on its own. However, when a muscle is injured or overstressed, small contractions known as trigger points may form, causing a wide variety of chronic pain conditions.
Trigger points are highly irritated, painful spots in muscles that result from injury, overuse, or chronic stress. Trigger points can be found by careful diagnosis. They are usually painful to the touch and contain nodules (or knots) and tight bands that can often be felt under the skin. When trigger points irritate the nerves around them, they cause “referred pain”: they send their pain to some other site in the body, often far from the actual trigger point itself. This can be very misleading to health care providers and is often the reason why so many conventional treatments for chronic pain fail. Conditions that may result from trigger points include neck or back pain, joint pain, pain in the limbs, sciatica, headaches, migraines, sinus irritation, heartburn, dizziness, nausea, irritable bowel, and many others. Some experts even believe that trigger points are the beginning stage of fibromyalgia.
|
Getting more protein in your diet, though not red meat, may reduce your risk for stroke, a review of studies found.
Scientists reviewed seven prospective studies involving more than 250,000 people and found that, after adjusting for various stroke risks and for other nutrients consumed, those with the highest protein consumption had a 20 per cent lower risk for stroke than those with the lowest.
Each increase of 20 grams per day in protein — about the amount in a 3-ounce serving of chicken or fish or a cup of beans — was associated with a 26 per cent decrease in risk, a dose-response relationship that further strengthens the association.
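As a quick arithmetic illustration of that dose-response figure, assuming (as such meta-analyses typically do) that the per-20-gram relative risk compounds multiplicatively on a log-linear scale:

```python
# Illustrative only: a 26% decrease per 20 g/day corresponds to a
# relative risk of 0.74 per 20 g increment. Assuming increments compound
# multiplicatively (a log-linear dose-response), an extra 40 g/day gives
# 0.74**2, i.e. roughly a 45% decrease. Extrapolation beyond the studied
# range would not be justified.

RR_PER_20G = 1 - 0.26

def relative_risk(extra_grams_per_day: float) -> float:
    return RR_PER_20G ** (extra_grams_per_day / 20.0)

for grams in (20, 40):
    rr = relative_risk(grams)
    print(f"+{grams} g/day: RR = {rr:.2f} ({1 - rr:.0%} decrease)")
```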
The finding does not apply to red meat, which has been shown to increase the risk for stroke and was not evaluated in the studies reviewed. Some evidence suggested that animal protein was slightly more effective than vegetable protein, although there was not enough data on vegetable consumption to reach a definitive conclusion about the exact difference.
|
Echoes of Oregon History Learning Guide
Indian Agent Regulations, 1855
Indian Superintendent Joel Palmer negotiated treaties with Oregon Indians which placed them on reservations. The U.S. Senate delayed ratification of the treaties, however, and Indian-white tensions increased. On October 8, 1855, a band of white volunteers surrounded a camp of reservation Indians and killed twenty-three men, women, and children. These men then scalped their victims and returned to Jacksonville. Indians began to attack whites the next day. At the same time, Superintendent Palmer finished these instructions to his agents. The instructions treated male Indians on reservations over the age of twelve as prisoners, while Indians off the reservation were to be treated as outlaws. Palmer was willing to permit Indians to work for white settlers if the whites would guarantee good conduct. These instructions show how whites regarded all Indians as at least potentially hostile.
For Further Discussion
1. How do these regulations restrict Indian freedom of movement?
2. Why do you think Indian females were excluded from this enrollment?
3. How would you feel if you were required to live by similar rules? If you were required to enforce these rules?
4. Can you think of situations similar to the system of temporary reservations Superintendent Palmer is establishing?
|
2. Global Warming
Heating by the greenhouse effect
Our planet's surface is now kept at a comfortable temperature because the atmosphere traps some of the radiant heat from the Sun and keeps it near the surface, warming the planet and sustaining living creatures. Jean Baptiste Joseph Fourier (1768-1830) first conceived the mechanism in the 1820s, while wondering how the Sun's heat could be retained to keep the Earth hot. Fourier's idea, still accepted today, is that the atmosphere lets some of the Sun's radiation in, but it doesn't let all of the radiation back out. Visible sunlight passes through our transparent atmosphere to warm the Earth's land and oceans, and some of this heat is reradiated in infrared form. The longer infrared rays are less energetic than visible ones and do not slice through the atmosphere as easily as visible light.
So our atmosphere absorbs some of the infrared heat radiation, and some of the trapped heat is reradiated downward to warm the planet's surface and the air immediately above it. Fourier likened the thin atmospheric blanket to a huge glass bell jar, made out of clouds and gases, that holds the Earth's heat close to its surface.
The warming by heat-trapping gases in the air is now known as the "greenhouse effect", but this is a misnomer. The air inside a garden greenhouse is heated because it is enclosed, preventing the circulation of air currents that would carry away heat and cool the interior. Nevertheless, the term is now so common that we will also sometimes designate the heat-trapping gases as greenhouse gases, and let greenhouse effect designate the process by which an atmosphere traps heat near a planet's surface.
Right now, the warming influence is literally a matter of life and death. It keeps the average surface temperature of the planet at 288 degrees kelvin (15 degrees Celsius or 59 degrees Fahrenheit). Without this greenhouse effect, the average surface temperature would be 255 degrees kelvin (-18 degrees Celsius or 0 degrees Fahrenheit); a temperature so low that all water on Earth would freeze, the oceans would turn into ice and life, as we know it, would not exist.
The gases that absorb the most infrared heat radiation are minor ingredients of our atmosphere. They are water vapor and carbon dioxide, with water vapor absorbing the most. Sixty to seventy percent of the Earth's greenhouse warming is now caused by water vapor, while carbon dioxide provides just a few degrees.
The main constituents of the atmosphere, nitrogen (77 percent) and oxygen (21 percent), play no part in the greenhouse effect. The two atoms in these diatomic molecules are bound tightly together and are therefore incapable of absorbing significant infrared radiation. In contrast, water vapor and carbon dioxide molecules consist of three atoms that are less constrained in their motion, so they absorb the heat radiation.
Why doesn't the atmosphere just keep heating up until it explodes? The greenhouse warming rises to a fixed temperature that balances the heat input from sunlight and the heat radiated into space. The level of water in a pond similarly remains much the same even though water is running in one end and out the other.
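That balance can be checked with the Stefan-Boltzmann law: setting absorbed sunlight equal to emitted thermal radiation yields the no-greenhouse temperature quoted earlier. A minimal sketch, assuming standard textbook values for the solar constant and Earth's albedo (neither is given in the text):

```python
# Effective (no-greenhouse) temperature of Earth from radiative balance:
# absorbed sunlight  S(1 - A) * pi R^2  =  emitted  4 pi R^2 * sigma T^4,
# so T = [S(1 - A) / (4 sigma)]**(1/4). Textbook values assumed below.

S = 1361.0        # solar constant, W/m^2
A = 0.30          # Earth's albedo (fraction of sunlight reflected)
sigma = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

T_eff = (S * (1 - A) / (4 * sigma)) ** 0.25
print(f"Effective temperature: {T_eff:.0f} K")       # ~255 K, as in the text
print(f"Greenhouse warming:    {288 - T_eff:.0f} K") # ~33 K boost to 288 K
```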
|
Black and Hispanic Americans are exposed to more of the air pollution that non-Hispanic white Americans generate, according to new research.
Poor air quality is the largest environmental health risk in the United States. Fine particulate matter pollution is responsible for more than 100,000 deaths each year from heart attacks, strokes, lung cancer, and other diseases.
But not everyone is equally exposed to poor air quality—nor are all people equally responsible for generating it.
The new finding, which quantifies for the first time the racial gap between who generates air pollution and who breathes it, appears in the Proceedings of the National Academy of Sciences.
“Similar to previous studies, we show that racial-ethnic minorities are exposed to more pollution on average than non-Hispanic whites,” says first author Christopher Tessum, a research scientist in the University of Washington’s civil and environmental engineering department and a recent University of Minnesota graduate.
“What is new is that we find that those differences do not occur because minorities on average cause more pollution than whites—in fact, the opposite is true.”
The team compared what people spend their money on (like buying groceries or gas or getting clothes dry-cleaned) to the pollution these activities generate. Then the researchers overlaid these results on a map of where people live to see the difference between the pollution that specific racial-ethnic groups generate and the pollution they experience.
“Someone had to make the pen you bought at the store,” says coauthor Julian Marshall, a professor of civil and environmental engineering. “We wanted to look at where the pollution associated with making that pen is located. Is it close to where people live? And who lives there?”
The team found that on average, non-Hispanic white Americans spend more money on pollution-intensive goods and services, which means they generate more pollution than other groups. But white Americans also experience a “pollution advantage” in that they face exposure to approximately 17 percent less pollution than they generate.
At the same time, black and Hispanic populations bear a “pollution burden.” On average, black Americans experience about 56 percent more pollution than they generate. For Hispanics, it is slightly higher: 63 percent.
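The headline numbers follow from a simple ratio. Here is a minimal sketch of the pollution-inequity metric, assuming it is defined as the difference between a group's exposure and the pollution it causes, relative to the pollution it causes; the input figures below are made up to reproduce the percentages quoted above.

```python
# Pollution inequity, assumed here to be (exposure - caused) / caused:
# positive values mean a group breathes more pollution than its
# consumption generates; negative values mean the reverse.
# Exposure/caused figures below are made-up illustrative units.

def pollution_inequity(exposure: float, caused: float) -> float:
    return (exposure - caused) / caused

groups = {
    "non-Hispanic white": (0.83, 1.00),  # ~17% less exposure than generated
    "black":              (1.56, 1.00),  # ~56% more
    "Hispanic":           (1.63, 1.00),  # ~63% more
}

for name, (exposure, caused) in groups.items():
    print(f"{name:>18}: {pollution_inequity(exposure, caused):+.0%}")
```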
Longstanding societal trends, such as income inequality, may influence these disparities. Also, racial patterns in where people live often reflect segregation or other conditions from decades earlier. Black and Hispanic Americans are more likely to live in locations with higher concentrations of pollution compared to white Americans, which means they have increased average daily exposure to it.
In general, the US has made strides to reduce air pollution across the country: Average exposure to particulate pollution declined approximately 50 percent between 2003 and 2015. But pollution inequity remained high over that same period.
“Our work is at the intersection of many important and timely topics such as race, inequality, affluence, environmental justice, and environmental regulation,” says corresponding author Jason Hill, an associate professor of bioproducts and biosystems engineering at the University of Minnesota.
The team hopes this pollution-inequity metric can provide a simple and intuitive way for policymakers and the public to see the disparity between the pollution that population groups generate and their pollution exposure.
“The approach we establish in this study could be extended to other pollutants, locations, and groupings of people,” Marshall says. “When it comes to determining who causes air pollution—and who breathes that pollution—this research is just the beginning.”
Additional coauthors are from the University of Texas at Austin, the University of New Mexico, Carnegie Mellon University, Lumina Decision Systems, the University of Washington, and the University of Minnesota.
The US Environmental Protection Agency, the US Department of Energy, the US Department of Agriculture, and the University of Minnesota Institute on the Environment funded the study.
Source: University of Washington
|
Evolution has presented itself throughout history for centuries. The world is never at rest; things are constantly changing and improving. From the very beginning, remarkable leaders have altered the world in search of reform and enhancement. Many know, for example, the intelligent Benjamin Franklin, who conducted famous experiments with electricity, or the studious Thomas Edison, who gave the world the light bulb. Moreover, the founding fathers gave an entire nation to the people, with a series of rules and restrictions to ensure peace for society. Other examples include the Wright Brothers, Thomas Paine, Martin Luther King Jr., Rosa Parks, Susan B. Anthony, and John F. Kennedy. All of these famous figures altered the world for different reasons and made history through their actions. These leaders created and fought for the ideas they carried in their hearts because they were eager to see change in the world. Long ago, dating back to as early as 1850, women across the nation were seized by the idea of change. For decades, women had no rights and were often alienated because of the stigma they faced from society. This stigma held women to a standard under which they were not respected by the public. But it did not frighten them, and it could not compete with their perseverance and determination. The Women’s Suffrage Movement, also known as the liberation movement, was the struggle for women to obtain rights, gain the ability to run for office, and vote in political elections. The Women’s Suffrage Movement was part of the Women’s Rights Movement, which, surprisingly, involved both men and women. This movement for equality was not simple or easy; the fight went on for over one hundred years. Women gave speeches, created petitions, and marched in protests and parades. The movement was based on the belief that all women deserve all the rights and responsibilities of citizenship.
The Women’s Suffrage Movement was one of the most life-changing events in U.S. history, altering the course of basic human rights and the equality of women throughout the nation and around the world. Before the movement, women were entitled to almost no basic human rights and were often treated like maids. The role of a woman was essentially to do as she was told: follow strict rules and maintain the image society assigned her. Women were considered to have one role, that of wife and mother. They were prohibited from venturing out of domestic life and were even deemed “too emotional” for most real jobs. In the nineteenth century, many women were seen as delicate and not capable of manual labor; they especially could not participate in politics. Almost all women were looked at as inferior to men. In fact, society even believed women were incapable of managing finances, and at this point in time all property was entitled to the husband or man of the home. Women had very little say in anything, and if a husband divorced his wife, she was forced to give him custody of the children. Although women had no freedom, they worked especially hard at maintaining a clean and polished home for the family. Their daily chores included caring for the children, carrying water to the home from a stream or other source, cleaning, cooking, and making their own cleaning supplies. This last task was very difficult: they had to craft their own brooms, polishes, and cleaning solutions. Women had many other duties to fulfill in their everyday lives, but the above are just a few. They were also expected to behave accordingly, to be charming and entertaining to guests, and to cater to every one of the family’s needs.
“…women were considered a physically weaker sex, less capable than their male counterparts.”
Women were stereotyped in society and looked upon as weak. It was believed that they could not participate in any activities outside home life, and it was strictly forbidden to go against a husband’s rule. A woman living in the U.S. in the nineteenth century did not have the same rights as her husband, her brother, or even her son. Women were allowed to hold paying jobs, but many did not seek one; mostly it was women of color, in households with low incomes, who took small jobs to help their families survive. Moreover, any money or wage a woman made belonged to her husband, and none of her earnings were awarded to her. Women struggled greatly at this time and faced both external and internal conflicts. Externally, they were constrained by the rights the government provided, which, again, were little to none. Internally, being a woman at this point in time was increasingly difficult, and women were forced to conceal their emotions no matter how bad they felt.
The Women’s Suffrage Movement was a decades-long fight to win women the right to vote in the United States, and it was one of the most difficult movements to see through to completion.
“At the same time, all sorts of reform groups were proliferating across the United States - temperance leagues, religious movements, moral-reform societies, anti-slavery organizations - and in many of these, women played a prominent role.”
As women became more involved, many questions arose, and women began to think about what it really meant to be a woman in the United States. One idea sparked another, and women rose toward a new movement, a big step for women everywhere. Many questioned society’s strict rules about “womanhood” and began to develop the idea of rights. This eventually led to discussion of the inequality all women faced. Women gathered together to create a new change for the future, inspired to change the world for themselves and their children. Not all women participated in the movement, as many people frowned upon such acts; at this point in time, most women strictly followed their orders.
“We hold these truths to be self-evident,” proclaimed the Declaration of Sentiments that the delegates produced, “that all men and women are created equal, that they are endowed by their creator with certain inalienable rights, that among these are life, liberty, and the pursuit of happiness.”
In 1848, a group of abolitionist activists gathered at Seneca Falls, New York, to discuss the matter of women’s rights. Elizabeth Cady Stanton and Lucretia Mott held the convention and invited the delegates to talk about these matters. The delegates together officially declared their belief that women should be treated with the same equity and respect as men and were entitled to individual rights, such as the right to vote. Believe it or not, men were part of this convention as well. They did not play a big role in the movement later on, but men were actually part of the suffrage movement at its very beginning. As more ideas were developed and discussed, the movement grew stronger, and its foundation became more than just participation in political events. It grew into a new era and a new beginning for women all over the world.
|
A governor (French: gouverneur) is the leader of a civil or military administration in a geographically limited area. In earlier times, stadtholders and regents performed tasks similar to those of today’s governors.
The word goes back to the Greek κυβερνήτης (kybernetes), “helmsman” (from which cybernetics and the English cyber also descend). Among the Romans, the function was called gubernator, “helmsman”, from which the modern words eventually emerged: French gouverneur, English governor, Spanish gobernador, Italian governatore, Russian губернатор (gubernator).
In its original sense, a governor is thus someone who sets the direction.
Today, there are civilian governors, for example:
- In those member countries of the Commonwealth of Nations in which the British Queen is head of state (except the UK): in these so-called Commonwealth Realms, the Queen is represented by a governor-general, and at the level of the Australian and Canadian member states by a governor
- In the British overseas territories as a representative of the British Crown
- In the provinces of the People's Republic of China, see Governor (People's Republic of China)
- In the states of India, as the representative of the President
- In the modern prefectures of Japan
- In Russia as the heads of the federal subjects (administrative units), see Governor (Russia)
- In the provinces of Turkey
- In the Argentine provinces
- In the states of Brazil
- In the states of Mexico
- In the states and overseas territories of the United States as leaders, see Governor (United States)
- In the Angolan provinces
- In the 13 regions of Namibia
Previously, there were governors:
- In the colonies of European countries and America
- In the historical provinces of France (before the French Revolution), at the head of the gouvernements
- In the old provinces of Japan, see Kokushi
Military governors are the supreme commanders of a fortress, a garrison, a military facility, or a militarily occupied territory (country); see occupation, occupation zone.
Governor is also:
- The French translation of the word (gouverneur)
- The head (president) of a central bank (e.g. the Oesterreichische Nationalbank)
- The former term for a male tutor in princely households
|
Teaching Boxes create links between Earth system resources and classroom-ready instructional units (often 5-6 lesson plans and associated activities and procedures) that are designed to bridge the gap between educational resources and how to implement them in the classroom. Materials model scientific inquiry, allowing teachers to build classroom experiences around data collection and analysis from multiple lines of evidence, and engaging students in the process of science. Features include an overview, goals, prerequisites, technology requirements, time requirements and concepts and standards. Suggestions for homework and assessment are available.
Collection is intended for:
Middle (6-8), High (9-12)
Try searching on these terms (type in keyword box):
Plate tectonics, Earthquake, Volcano, Fossil, Sea floor spreading, Weather, Clouds, Wind, Food web, Coastal upwelling, Marine ecology, Climate change, Ice melt, Topography, Sea level change, Fault, Seismic wave, Landslide, Liquefaction, Structural failure, Mountain building, Erosion, Rock cycle.
|
I am what some would consider a “professional napper.” If napping were a major, I could have graduated three years ago.
Like most people, I used to believe napping was unproductive and a sign of laziness.
But really, napping is an art form that promotes productivity.
Napping is a difficult task to master that requires much dedication, but the benefits are more than worth it.
According to The National Sleep Foundation, “Naps can restore alertness, enhance performance and reduce mistakes and accidents.”
Naps can also have health benefits, like preventing heart disease. A study by Dimitrios Trichopoulos showed people who nap at least three times a week are 37 percent less likely to die of heart disease compared to the people who don’t regularly nap.
Napping helps reduce stress, which is linked to health issues. So, why not start napping?
Dr. Sara C. Mednick, author of “Take a Nap! Change Your Life,” explained napping is a great way to relax your mind and allow for new creativity to enter. She also mentioned napping can restore the sensitivity of hearing, taste and light.
There are different types of naps one can take. The first type is the planned nap, a nap you schedule in advance for particular circumstances.
Some types may include planning on napping when you get off of work or planning to nap before you go out.
The second type of nap is an emergency nap. The emergency nap happens more than you think, and can happen at any time.
These naps occur when you cannot function because of drowsiness. You must nap in order to finish out the rest of the day or complete whatever task you are working on.
The last type is a habitual nap. This type of nap happens every day around the same time, which makes it a routine.
Routine naps train your body to be revitalized and focused for the second half of your day, making you more productive.
The best type of naps are said to be 20 to 30 minute “power naps.” These short nap breaks can help “improve alertness and performance without leaving you feeling groggy or interfering with nighttime sleep.”
There are some things you should avoid if you decide to take naps: do not take naps after 4 p.m. unless you are planning on staying out late.
Later naps can interfere with your nightly sleep schedule and make you groggy and distracted the next day from lack of sleep.
As I mentioned above, try to stay away from the 2- to 3-hour knockout naps. I know it’s hard, especially if you have the time to do so, but this can also really affect your nightly sleep.
If you are the type of person who feels guilty while taking a nap, just know successful people, including Thomas Edison, George W. Bush, John F. Kennedy, Albert Einstein and Winston Churchill, all benefitted from taking daily afternoon naps.
I plan my entire day around what time I will be able to take my nap.
I constantly lie to my friends when they ask me to lunch or to go work out by saying I’m busy, when I really just need to fit my nap in.
Naps are my only way to function throughout the day and napping has been proven to be one of the best daily routines you can do for your overall health and sanity. So, when in doubt, just nap it out.
|
Title: Second Treatise of Government and A Letter Concerning Toleration (Oxford World's Classics)
Locke's Second Treatise of Government (1689) is one of the great classics of political philosophy, widely regarded as the foundational text of modern liberalism. In it Locke insists on majority rule, and regards no government as legitimate unless it has the consent of the people. He sets aside people's ethnicities, religions, and cultures and envisages political societies which command our assent because they meet our elemental needs simply as humans. His work helped to entrench ideas of a social contract, human rights, and protection of property as the guiding principles for just actions and just societies.
Published in the same year, A Letter Concerning Toleration aimed to end Christianity's wars of religion and called for the separation of church and state so that everyone could enjoy freedom of conscience. In this edition of these two major works, Mark Goldie considers the contested nature of Locke's reputation, which is often appropriated by opposing political and religious ideologies.
|
Volvo says "road train" technology trials are a success
The concept of vehicle platooning is based on a convoy of vehicles where a professional driver in a lead vehicle drives a line of other vehicles, with each vehicle determining its distance, speed and direction based on the car in front. Drivers are left free to focus on other things, for example reading the paper, and can leave the convoy at any time.
Reported benefits include improved road safety – platooning removes the need for human participation in driving, and human error is said to be the cause of at least 80% of accidents. It also benefits the environment through a reduction of up to 20% in fuel consumption and CO2 emissions. It could also help relieve traffic congestion.
Volvo says that the technology development is well underway and could go into production in a few years' time but admits that public acceptance and European legislation (at least 25 EU governments must pass similar laws) could take a lot longer.
|
Law – The Pesticide Contamination Prevention Act
DPR began addressing pesticide contamination of ground water in the early 1980s after the discovery of contamination from the legal application of the fumigant dibromochloropropane (DBCP). Reports of additional pesticides in ground water resulted in the passage of the Pesticide Contamination Prevention Act (PCPA) in 1985.
The PCPA (Food and Agricultural Code Sections 13141-13152) requires DPR to:
- Obtain environmental fate and chemistry data for agricultural pesticides before they can be registered for use in California.
- Identify agricultural pesticides with the potential to pollute ground water.
- Sample wells for presence of agricultural pesticides in ground water.
- Obtain, report and analyze the results of well sampling for pesticides conducted by public agencies.
- Formally review a detected pesticide to determine if its continued use can be allowed.
- Adopt use modifications to protect ground water from pollution if the formal review indicates that continued use can be allowed.
Reports mandated by the PCPA:
For content questions, contact:
1001 I Street, P.O. Box 4015
Sacramento, CA 95812-4015
Phone: (916) 324-5144
|
Mar 25 2019
“The robots are coming. Hide the WD-40. Lock up your nine-volt batteries. Build a booby trap out of giant magnets; dig a moat as deep as a grave. “Ever since a study by the University of Oxford predicted that 47 percent of U.S. jobs are at risk of being replaced by robots and artificial intelligence over the next fifteen to twenty years, I haven’t been able to stop thinking about the future of work,” Andrés Oppenheimer writes, in “The Robots Are Coming: The Future of Jobs in the Age of Automation” (Vintage). No one is safe. ”
Source: The New Yorker
Michel Baudin‘s comments:
In this article, Jill Lepore skewers the countless gurus who, for the past 100 years, have been predicting a future in which robots have eliminated all jobs, manufacturing or not. While Lepore does not go back that far, “Robot” is a word from science fiction, specifically Karel Čapek’s 1920 play Rossum’s Universal Robots. In this play, robots actually kill off humans.
Unemployment since 1991
If robots had actually had a massive impact on jobs, it would certainly show in the employment statistics. As seen in the following figure, it doesn’t, at least in the world’s six largest economies.
While the curves of different countries are at different levels, they don’t show an increase over the past 30 years. Comparisons between the levels of unemployment among countries, however, are not as meaningful as they appear.
The national statistics bureaus don’t define “unemployed” the same way. The maximum number of hours worked per month to be counted as unemployed is not the same, women’s participation in the work force varies, and the bureaus have varying levels of political independence.
Changes in unemployment rates since 1991
On the other hand, we can assume that, in any given country, they calculate these numbers the same way every year. As a result, we are better off comparing their variations over time.
In the following figure, we pinch all the country curves to a common starting point in 1991. Then we plot the relative changes from that year on.
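As a concrete illustration of this “pinching,” here is a minimal pandas sketch. The file name unemployment.csv and its layout (one unemployment-rate column per country, indexed by year) are assumptions made up for the example, not the actual data source behind the charts.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: one unemployment-rate column per country,
# indexed by year, covering 1991 onwards.
rates = pd.read_csv("unemployment.csv", index_col="year")

# "Pinch" every curve to a common starting point by dividing each
# series by its own 1991 value, then plot the relative change.
relative = rates / rates.loc[1991]
relative.plot(title="Unemployment relative to 1991 level")
plt.ylabel("Ratio to 1991 rate")
plt.show()
```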
Of the six countries, China is the only one to show a near doubling, as a one-step rise in the early 2000s. According to the ILO, however, “in China, the indicator of employment in the aggregate economy-wide sense is of limited value.”
Japan’s unemployment is slightly higher in 2019 than in 1991, with wild swings in between, due to the long recession of the 1990s and the financial crisis in the late 2000s, not robots. The other 4 countries all show lower unemployment in 2019 than in 1991.
During these decades, manufacturers closed factories and laid off workers. This disrupted the lives of individuals and communities but mostly for causes unrelated to robots.
All the above charts say is that society as a whole created more jobs than it destroyed. What we have been living through is an example of Schumpeter’s creative destruction, not a science fiction dystopia.
Your job and my job may be threatened by robots, but there is no evidence, even from the recent past, that they increase overall unemployment. This, however, does not prevent anyone from predicting it yet again.
|
Unequivocal Evidence Discovered that Sea Levels Were Once 70 Feet Higher
Clues Found in Bermuda
Almost 10 years ago, a team of geologists and zoologists published a study based on preliminary evidence showing that sea levels were almost 70 feet higher about 400,000 years ago. This was met with a good dose of skepticism. But now the same team has published a new study based on new "unequivocal evidence" that confirms the timing and extent of the sea's rise and fall.
Read on for more details. The evidence was found in Bermuda. From the Smithsonian release:
Storrs Olson, research zoologist at the Smithsonian’s National Museum of Natural History, and geologist Paul Hearty of the Bald Head Island Conservancy discovered sedimentary and fossil evidence in the walls of a limestone quarry in Bermuda that documents a rise in sea level during an interglacial period of the Middle Pleistocene in excess of 21 meters above its current level. [...]
The nature of the sediments and fossil accumulation found by Olson and Hearty was not compatible with the deposits left by a tsunami but rather with the gradual, yet relatively rapid, increase in the volume of the planet’s ocean caused by melting ice sheets.
The more we learn about past events like this, the better we can understand them and try to apply that to modeling the future. Hopefully, we can avoid this kind of catastrophe, but if not, it will be very useful to know what can be done to mitigate its impacts.
A lot of people will see the "400,000 years ago" figure and think that it can't possibly happen again, or quickly. But extrapolating from past events must take into account what has changed since then: six billion humans burning over 80 million barrels of oil each day, plus millions of tons of coal and huge quantities of natural gas, cutting down forests, and raising billions of methane-producing animals. That has to be an important variable.
Again, from the Smithsonian:
This particular interglacial period is considered by some scientists to be a suitable comparison to our current interglacial period. With future carbon dioxide levels possibly rising higher than any time in the past million years, it is important to consider the potential effects on polar ice sheets.
Photo: Ministry of Tourism & Transport, Bermuda, with permission.
|
More than a third of Americans are affected by astigmatism, which occurs when the corneal surface of the eye is misshapen, causing light to focus incorrectly on the retina. Although most cases are so mild that no corrective action is necessary, others cause blurry vision, poor depth perception, distortion, eye strain, and sometimes headaches.
There are many things the average person may not understand about astigmatism. What follows are some of the issues eye patients bring up with their doctors, and how you can address them.
Astigmatism affects children too.
In most cases, astigmatism is inherited. Unlike “over 40” vision (presbyopia), astigmatism can occur in children, who may not realize their vision is distorted or may be unable to describe their discomfort.
What you can do: Stay ahead of changes in your child’s eyes. Annual eye exams are so important for youngsters, particularly in the era of the coronavirus when many of them are spending more time in front of computer screens.
Yes, you can wear contact lenses.
At one time, people with astigmatism were told they could not wear contact lenses; however, newer lens types, such as toric lenses and gas permeable contact lenses, stay stably oriented on the eye and correct the refraction errors caused by the misshapen cornea.
Depending on your prescription, corrective lenses could be a pricier proposition for you than for a typical nearsighted wearer, and possibly less comfortable. In the case of gas permeable (GP) lenses, the stiffer lens can be uncomfortable for some users.
What you can do: If you are wearing hard GP contact lenses that are causing you discomfort, discuss options with your eye doctor.
Eye rubbing makes astigmatism worse.
It may feel good for a second or two, but rubbing your eyes puts pressure on the cornea, and the entire eyeball, which can cause damage and changes in shape.
What you can do: Keep soothing eyedrops at hand to relieve dryness and irritation that may tempt you to rub. If you suffer from dry eye, it may also help to gently swipe or dab your eyes with a cool, wet washcloth to remove grit and discharge from the eye.
LASIK is a great option.
There’s a common myth that LASIK is not an option for those with astigmatism. To be 100% clear: Yes, we can correct astigmatism with the modern LASIK procedures we perform. With our surgeons’ vast experience paired with our advanced technology, we are able to correct not just astigmatism but also nearsightedness or farsightedness that may accompany it.
Once eyes heal after LASIK surgery, many people with astigmatism experience fewer headaches and depth-perception problems than ever before. You may even find that over the years your LASIK procedure actually saves you money compared to a lifetime of glasses and contact lenses, making LASIK a good choice for people with astigmatism.
Missouri Eye Institute has helped thousands of patients attain freedom from glasses and contact lenses. Contact us at (800) 383-3831 to schedule a thorough eye exam or visit MissouriEye.com to learn more about our services.
|
The Anatomy of the Assignment Operator
- In depth discussion of writing solid assignment operators. Explains exception safety and memory management issues.
The Assignment Operator Revisited
- Looks at how difficult it is to copy state from one object to another. (Richard Gillam)
C++ in the Real World
- This article explores the strengths of C++, and how to exploit them in projects.
- Discusses implementation details like multiple inheritance, virtual functions and exception handling.
- List of C++ source code optimizations that can give big returns, especially when used in tight loops. By Andrew S. Winter.
C++: Beyond The Standard Library
- Takes a look at Blitz++, MTL (Matrix Template Library), ACE, Loki and Boost.
Constant Objects and Constant Expressions
- Explains why constant objects are not as useful as some people would like them to be.
Contracts: From Analysis to C++ Implementation
- Describes a set of techniques and tools (an environment) facilitating prototyping of, providing general mechanisms for, object-oriented architectures based on the idea of assertion checking and Design by Contract (DbC) in C++.
Create Movie from HBitmap
- Describes creating AVI/WMV/QuickTime movies from sequences of bitmaps with source code examples.
Creating Truly Maintainable Class Factories
- Presents a solution that is easily extensible and maintainable; what's more, it is particularly well suited to creating objects from XML data.
The Design of C++0x
- Bjarne Stroustrup provides insights on what will likely be added or changed in the upcoming version of the C++ standard.
Dynamic C++ Classes
- Describes a lightweight mechanism to update code in a running program. (Gísli Hjálmtýsson, Robert Gray) [PDF]
Enumeration Constants vs. Constant Objects
- Presents insights on choosing between symbolic constants as either enumeration constants or constant objects.
Functional Style in C++
- Discusses closures, late binding, and lambda abstractions.
Herb Sutter - Publications
- Over 80 in-depth articles about object-oriented software development and C++ design and programming have been published in C/C++ Users Journal, C++ Report, Dr. Dobb's Journal, Java Report, Visual C++ Developer's Journal, and other magazines.
An Introduction to Garbage Collection, Part I
- Presents an introduction to garbage collection, presenting the subject in enough detail to show the various tradeoffs and advantages between various techniques and what goes on under the hood in a typical garbage collector.
The Miseducation of C++
- Modern C++ is a more expressive, simpler language than C, and a language in its own right, so why do so many people insist on teaching it historically? Kevlin Henney appeals for a reform of the C++ education system. [PDF]
Optimizing Software in C++
- An optimization manual for advanced C++ programmers on Windows, Linux and Mac platforms. Topics include: the choice of platform and operating system, choice of compiler and framework, finding performance bottlenecks, the efficiency of different C++ constructs, multi-kernel systems, parallelization with vector operations, CPU dispatching and efficient container class templates. [PDF]
Publications by Bjarne Stroustrup
- Reasonably complete list of Bjarne's publications.
- Shows all choices for defining symbolic constants.
What is Koenig Lookup?
- A definition of argument-dependent name lookup with examples for application with HP aC++.
C++0x Feature Support in GCC 4.5
- This article focuses on a subset of several C++0x features that GCC version 4.5 supports, including static assertions, initializer lists, type narrowing, newer semantics of the auto keyword, lambda functions, and variadic templates. (March 01, 2011)
Simpler Multithreading in C++0x
- The new standard will support multithreading, with a new thread library. This article explains how this will improve porting code, and reduce the number of APIs and syntaxes used. (August 18, 2008)
An Introduction to XML Data Binding in C++
- This article looks at XML Data Binding at a new alternative to automate much of the task of processing XML data by presenting the information stored in XML as a statically-typed, vocabulary-specific object model. (Boris Kolpackov) (May 04, 2007)
Introducing the Catenator
- This article introduces a very sophisticated and useful data structure for efficient string processing, while at the same time revealing some interesting features of C++. (Adam Sanitt) (September 30, 2005)
The Design and Implementation of SPECS: An Alternative C++ Syntax
- By B.M. Werther and D.M. Conway, Dept. Computer Science, Monash University, Melbourne. [PDF] (January 01, 1996)
|
This image depicts the relationship between Russia and Transcaucasia as the “courtship” between a bride and groom.
The man on the right is holding a copy of the pro-Russian newspaper Golos kavkaza (The Voice of the Caucasus) which was published between 1906 and 1917. The man on the left is depicted in a bridal dress with a sash bearing the inscription “Transcaucasia.”
This caricature was accompanied by a short poem about Kote Tumanishvili, a Georgian public figure. The poem’s statement that Tumanishvili “flirts with the north” can be interpreted as both a criticism of Tumanishvili and a generalization about the political situation of the Caucasus vis-à-vis Russia.
Image: The National Parliamentary Library of Georgia
|
Holding Unpopular Opinions Ended the Public Career of Atomic Bomb's Inventor
After the dropping of atomic bombs on Hiroshima and Nagasaki, the United States government did not pursue the development of the hydrogen bomb in the years after World War II. But after the Soviets successfully detonated an atomic bomb in 1949, President Harry S. Truman ordered the creation of a hydrogen bomb project.
In November 1952, sixty years ago, the United States detonated the world's first thermonuclear weapon, a hydrogen bomb, in the Pacific. The test gave the United States an advantage in the nuclear arms race with the Soviet Union. The new weapon was approximately 1,000 times more powerful than conventional nuclear devices.
Opponents of development of the hydrogen bomb included J. Robert Oppenheimer, one of the fathers of the atomic bomb. He argued that little would be accomplished except the speeding up of the arms race. This turned out to be true. The Soviet Union exploded a thermonuclear device the following year and by the late 1970s, seven nations had constructed hydrogen bombs. The nuclear arms race had been launched.
The origins of the hydrogen bomb date to the early 1940s, when the Italian-born physicist Enrico Fermi suggested to the Hungarian-born Edward Teller of the U.S. Manhattan Project that a weapon based on nuclear fusion might be possible. Dr. Teller was assigned instead to help build the atomic bomb.
Oppenheimer (left) and Groves (right) at the remains of the Trinity test in September 1945. The white canvas overshoes prevent fallout from sticking to the soles of their shoes.
The first atomic bomb was detonated on July 16, 1945, in the Trinity test in New Mexico; Oppenheimer (1904-67) was shaken and remarked that it brought to mind words from the Bhagavad Gita: "Now, I am become Death, the destroyer of worlds."
After the war he became a chief adviser to the newly created United States Atomic Energy Commission and used that position to lobby for international control of nuclear power to avert nuclear proliferation and an arms race with the Soviet Union. After provoking the ire of many politicians with his outspoken opinions, he had his security clearance revoked in a much-publicized hearing in 1954, and was effectively stripped of his direct political influence.
Oppenheimer said the bomb had "dramatized so mercilessly the inhumanity and evil of modern war. In some sort of crude sense which no vulgarity, no humor, no overstatements can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose."
He also said, "I carry no weight on my conscience" regarding the atomic bombing of Hiroshima and Nagasaki.
"Scientists are not delinquents. Our work has changed the conditions in which men live, but the use made of these changes is the problem of governments, not of scientists."
Oppenheimer’s later problems started in the late 1930s when he became involved in leftist groups. He supported Communist, trade union and liberal causes.
He explained his interests this way. "I had had a continuing smoldering fury about the treatment of Jews in Germany. I had relatives there, and was later to help in extricating them and bringing them to this country. I saw what the Depression was doing to my students. Often they could get no jobs, or jobs which were wholly inadequate. And through them, I began to understand how deeply political and economic events could affect men's lives. I began to feel the need to participate more fully in the life of the community."
Oppenheimer denied that he was ever a member of the Communist party.
In 1953, the Atomic Energy Commission suspended his security clearance. Despite testimonials from scores of witnesses during the hearings, his clearance was not reinstated. Oppenheimer returned to academic life, but, as one colleague would say, the public ordeal had "broken his spirit," according to The New York Times.
He continued to lecture, write and work in physics. President John F. Kennedy awarded (and Lyndon B. Johnson presented) him with the Enrico Fermi Award as a gesture of political rehabilitation.
The New York Times wrote, “The action against Dr. Oppenheimer dismayed the scientific community and many other Americans. He was widely pictured as a victim of McCarthyism who was being penalized for holding honest, if unpopular, opinions. The A.E.C., Mr. Strauss and the Eisenhower Administration were accused of carrying out a witch hunt in an attempt to account for Soviet atomic successes and to feed a public hysteria about Communists.”
Visit your local library for these resources:
The Bomb: A History of Hell on Earth
Gerard J. De Groot (2005).
J. Robert Oppenheimer: A Life
Abraham Pais; Robert P. Crease (2006).
American Prometheus: The Triumph and Tragedy of J. Robert Oppenheimer
Kai Bird; Martin J. Sherwin (2005).
Dark Sun: The Making of the Hydrogen Bomb
Richard Rhodes (1995).
In the Shadow of the Bomb: Bethe, Oppenheimer, and the Moral Responsibility of the Scientist
S. S. Schweber (2000).
Science and the Common Understanding
J. Robert Oppenheimer (1954).
The Open Mind
J. Robert Oppenheimer (1955).
The Flying Trapeze: Three Crises for Physicists
J. Robert Oppenheimer (1964).
J. Robert Oppenheimer; N. Metropolis; Gian-Carlo Rota; D. H. Sharp (1984).
Atom and Void: Essays on Science and Community
J. Robert Oppenheimer (1989).
Nuclear weapon test Romeo (yield 11 Mt) on Bikini Atoll. The test was part of Operation Castle. Romeo was the first nuclear test conducted on a barge. The barge was located in the Bravo crater.
Oppenheimer at the Guest Lodge, Oak Ridge, c. 1946.
February 1946, by Ed Westcott (U.S. Government photographer).
In September 1945, many participants returned to the Trinity Test site for news crews. Here Oppenheimer and Groves examine the remains of one of the bases of the steel test tower. U.S. Army Corps of Engineers.
|
(WFP NVP Augusta Y. Thomas sits in front of the lunch counter where she once protested segregation.)
Black History in the United States is both long and short. It begins with the settlement of America, but due to the brutal oppression of slavery, the stories of our Black American heroes have only recently started being etched into our history books.
Black History Month is a time to recognize those who changed the paradigm of American life -- from Frederick Douglass and Rosa Parks to Jackie Robinson and Martin Luther King, Jr. -- to those who are not widely known by all Americans -- like Blanche Kelso Bruce, John Sturdivant, and Ella Baker -- but whose deeds have contributed to the prosperity and freedom of all of us.
(Three pioneering African American AFGE leaders appear in this photo: Barbara Hutchinson, John Sturdivant, and Rita Mason all helped make AFGE into the organization it is today.)
The official theme of 2016's Black History Month is Hallowed Grounds: Sites of African American Memories. The aim is to highlight locations that prompt all Americans to remember that "the imprint of Americans of African descent is deeply embedded in the narrative of the American past."
Today, it's easy to forget how recently slavery existed and we fail to see its ongoing impact. But, consider this:
Now compare that to our own AFGE history. It had only been 67 years from the end of the Civil War when AFGE was founded in 1932 -- not even a full lifetime for most Americans today. It would take 30 years after AFGE was founded for a young woman named Augusta Y. Thomas to sit at a segregated lunch counter in Greensboro, North Carolina. From there, it would be nearly three decades before an African American would be elected AFGE National President.
What's your story of Black history? Use this form to add your story to Black History Month. We may select it to highlight on our website and on social media.
Freedom and equality are not won quickly. It wasn't until the late 1960s that African Americans' right to vote began to be legitimately enforced. Although the 15th Amendment granted all Black men the right to vote in 1870 (Black women would have to wait for the 19th Amendment in 1920), white supremacist lawmakers passed Jim Crow laws to continue to oppress their former slaves after the Civil War. Although "literacy tests" and poll taxes are gone, voter suppression continues today.
AFGE's Women's & Fair Practices Departments (WFP) are working to make progress move just a little faster. Founded at the height of the Civil Rights Movement and expanded in the 1970s, today's WFP promotes the civil, human, women's, and workers' rights of federal and D.C. government workers through four program areas: education and training, member mobilization/organizing, representation through litigation, and legislative/political action.
Black History Month is a reminder not just of the past but also our potential. AFGE is proud that African Americans serve at every level of our union and play an integral role in our future. We celebrate Black History Month to stand in solidarity with Black heroes of the past, present, and future.
|
According to reports, Cochin International Airport Limited (CIAL), operator of the world's first solar-powered airport, is all set to take part in this year's UN Framework Convention on Climate Change (UNFCCC) Conference of Parties (COP-21), the 2015 Paris Climate Summit, to be held in December.
Nearly 196 countries are expected to participate in the meet. Prime Minister Narendra Modi has also evinced interest in presenting the CIAL model at the conference to drive home the message of renewable energy. Besides, the Centre is exploring ways to replicate the CIAL model of solar power generation in at least 30 airports in the country.
Speaking to Express, ACK Nair, CIAL airport director, said: "We have received a communication from the Civil Aviation Ministry to make a video presentation on the solar power project of the airport and send it to the office of the Prime Minister of India (PMO). Besides, the central government has also directed the other PPP (Public Private Partnership) airports coming up in the country to pursue the CIAL model after its successful implementation of various projects."
CIAL has also received queries from Liberia, Ghana, and Vietnam to study the project, and some of them have also sought technical help to set up similar projects in their countries, said the airport officials.
Since renewable energy has become a focal point in many countries, the CIAL model will be one of the effective tools for the PM to stress the need to develop decentralized renewable energy plants to reduce greenhouse emissions by up to 35 per cent by 2030 from 2005 levels, said the officials. After commissioning of the new solar power station with a capacity of 12 MW at the airport, CIAL generated 29.72 lakh units of solar energy in 55 days of operation, while the airport required only around 27.5 lakh units to meet its normal energy needs, said the airport spokesperson.
The 12 MWp solar PV plant, set up as part of the airport's green initiatives, is generating nearly 48,000 units per day.
Source : Panchabuta, 18th October 2015
URL : http://panchabuta.com/2015/10/18/cials-solar-power-model-to-go-places/
|
Earth’s inner-inner core and Earth’s uniform glow are natural phenomena that scientists have studied for decades. Here is recent scientific research offering useful information about the Earth, along with methods to reduce CO2 and help protect it.
Seismologists from the Australian National University found evidence that inside the Earth’s core is a smaller, innermost core.
“The existence of an internal metallic ball within the inner core, the innermost inner core, was hypothesized about 20 years ago. We now provide another line of evidence to prove the hypothesis,” said Thanh Son Pham, coauthor of a paper in Nature Communications.
Pham and his collaborator studied the reverberations of large earthquakes that traveled through the planet’s core and “bounced” back to their point of origin. Comparing the travel times of the seismic waves, they inferred that there is a center region that is different from the outer layer of the core — a newly identified fifth layer of the Earth.
Scientists at the University of Washington discovered two previously unknown forms of frozen salt water.
The finding may help researchers who are working to make sense of unfamiliar chemical structures on other planets in the solar system. While trying to study how salt affects the formation of ice on much colder planets under much higher atmospheric pressure, Baptiste Journaux and his team made what he called a rare “fundamental” discovery.
They compressed salt water at up to 25,000 times Earth’s atmospheric pressure while lowering the temperature below –190 degrees Fahrenheit. Surprisingly, ice crystals began forming in arrangements never before seen on Earth. When salt water freezes naturally on Earth, it arranges into a lattice structure of one salt molecule for every two water molecules. In the experiment, published in PNAS, salt water froze in different arrangements (two salt molecules for every 17 water molecules and one salt molecule for every 13 water molecules), consistent with the chemical signatures observed on other planets’ ice moons.
An international team of scientists explained why planet Earth has a uniform glow.
When seen from space, Earth is uniformly bright, despite differences in the hemispheres that should affect how much sunlight gets reflected. The new study, in PNAS, solves a decades-old mystery and could help scientists implement geoengineering solutions to combat climate change, said co-author Yohai Kaspi.
Albedo is how much sunlight is reflected from the planet’s surface. Lighter areas, such as the snow cover of the northern hemisphere, reflect more light than dark surfaces, like the oceans that cover much of the southern hemisphere. To determine why these differences in albedo aren’t apparent from space, Kaspi and his collaborators cross-referenced satellite, storm, and cloud-cover data. They determined that while the southern oceans absorb more sunlight, they also produce more storms and storm clouds, which reflect solar radiation.
GE Research and the GE Vernova business have been partnering closely with the U.S. Department of Energy (DOE), the Advanced Research Projects Agency (ARPA-E), and a host of other industry and academic research partners to accelerate new advancements in CO2 removal.
GE’s Carbon Capture Technology Breakout Team has developed a unique direct air capture (DAC) system that couples its decades of experience designing thermal management solutions with its materials expertise to develop innovative sorbent materials for CO2 capture. With GE’s DAC system, the thermal management design provides an optimal environment for the sorbent materials to remove CO2 from the air. The team is employing a similar approach in a project with the Defense Advanced Research Projects Agency (DARPA) to capture clean, potable water from extremely arid, desert-like air.
|
About Sampling Rates in Digital Audio
We continue to dive into the foundations of digital audio. I hopefully cleared up a thing or two about the choice of bit depth already. And today I’d like to take you on a short trip into the unwieldy territory of the sampling theorem.
A source of constant debate in the world of digital audio is the choice of sampling rate. I have taken part in a lot of these debates myself, and in most cases I’d rather prefer the standard rates of 44.1 kHz or 48 kHz. As a DSP developer, I’m notoriously cheap on CPU cycles, and lower sampling rates obviously take up much less of this precious resource.
But there’s also the high-end bunch that would rather prefer working with 192 kHz all over the place. That’s understandable, because intuitively, more is better (except for the increased CPU load). However, this conclusion is mostly based on the misconception that sampling results in a coarse, stairstepped representation of a continuous audio signal. In reality, this is not the case. But fully trusting this requires understanding of the sometimes very counterintuitive sampling theorem.
In essence, a continuous signal can be exactly reconstructed from measurements taken at regular intervals (the sampling rate). With the limitation that the signal must not have any frequency content exceeding the so-called Nyquist limit or Nyquist frequency, which is at half of the sampling rate. Let’s take that for granted right now and first assume that this requirement is always fulfilled for the audio converters we use. The question would be then: which sampling rate do we need?
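To make “exactly reconstructed” a little more concrete, here is a minimal numerical sketch of Whittaker-Shannon (sum-of-sincs) interpolation, the reconstruction formula behind the sampling theorem. The 8 Hz rate and the 1 Hz test tone are arbitrary values chosen purely for illustration:

```python
import numpy as np

fs = 8.0                                     # sampling rate in Hz, for illustration
n = np.arange(-64, 64)                       # sample indices around t = 0
samples = np.sin(2 * np.pi * 1.0 * n / fs)   # a 1 Hz sine, well below Nyquist (4 Hz)

def reconstruct(t):
    # Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc(fs*t - n).
    # np.sinc(x) computes sin(pi*x)/(pi*x), exactly the kernel needed here.
    return np.sum(samples * np.sinc(fs * t - n))

t = 0.123                                    # an instant between two sample points
print(reconstruct(t), np.sin(2 * np.pi * 1.0 * t))  # the two values agree closely
```

There is no stairstep anywhere: the continuous waveform between the samples falls straight out of the interpolation.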
Choosing the sampling rate
As we did in the bit depth article before, we need to choose some kind of requirement we want to fulfill. This way we can find out which sampling rate we would need according to that requirement. Again, the most reasonable choice is to look at the capabilities of human hearing. To choose a useful bit depth, we looked at the dynamic range of hearing and especially at the level of the softest sound we can perceive. Analogously, to choose a reasonable sampling rate, we need to look at the frequency range of hearing. Here’s a plot of the absolute hearing threshold, which is the lowest sound pressure level at which a sine wave of a given frequency can be reliably perceived by a healthy 20 year old person.
As we can see, there is a very steep rise of the threshold towards higher frequencies that starts at about 10 kHz. Although an often quoted ballpark number for the human hearing range is 20-20000 Hz, the levels required to perceive such frequencies are already quite insane. A more accurate number would be around 18 kHz. And over 30 years of age, the upper limit decreases even more. So in practice, the usual 20 kHz number does already include a bit of margin.
The lowest common sampling rate used in pro audio is 44.1 kHz, so this is theoretically enough to handle frequency content up to 22.05 kHz. All in all, using this standard CD sampling rate is around 4 kHz higher than would be needed in an ideal case, which gives us a nice security margin.
So far, we just assumed that the Nyquist criterion is fulfilled. But what happens if this is not the case? Well, if the original signal contains frequency content above the Nyquist limit, this content is effectively folded into the frequency range below this limit (this is a little shortcut to save you from a complete lecture on digital signal theory). That means that, if we have a sine wave at 30 kHz and sample it at a rate of 44.1 kHz without further countermeasures, the reconstructed signal would be a sine wave at 14.1 kHz. That is below the Nyquist limit by the difference between the original frequency and the Nyquist frequency. This phenomenon is called aliasing. Note that if we increase the frequency of the original sine, the aliased result decreases in frequency.
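To see these numbers fall out of an actual computation, here is a toy sketch (my own illustration) that samples the 30 kHz sine from the text at 44.1 kHz without any antialias filter and locates the spectral peak:

```python
import numpy as np

fs = 44_100.0            # sampling rate
f_in = 30_000.0          # sine frequency, above the 22.05 kHz Nyquist limit

n_samples = 4096
t = np.arange(n_samples) / fs
x = np.sin(2 * np.pi * f_in * t)   # naive sampling, no antialias filter

# The spectrum of the sampled signal peaks near 14.1 kHz, not 30 kHz:
# the out-of-band content has folded below the Nyquist frequency.
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)
peak = freqs[np.abs(np.fft.rfft(x)).argmax()]
print(peak)  # roughly 14100 Hz, i.e. fs - f_in
```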
To avoid that, the original signal must be low-pass filtered before sampling it. This removes all signal components that would result in aliasing. In the earlier days of digital audio, this filtering had to be performed using analog circuitry in front of the actual converter, which always requires some margin due to the limited steepness of analog filters. Modern delta-sigma converters work differently. The conversion is done at a much higher sampling rate internally, but with only one bit resolution (information theory makes it possible to trade sampling rate and bit depth for each other). Then, most of the antialias filtering can be performed digitally, which is much more efficient. In this case, less margin is required in the analog domain.
Modern converter hardware is set up so that aliasing is not an issue, which essentially means that the level of any aliasing artifacts is below the noise level of the converter. The lowpass filters needed for that are very steep, so the slope of these doesn’t take up too much of the signal bandwidth. So far, it looks like with reasonably modern hardware, no sampling rate higher than 44.1 kHz or 48 kHz is needed for audio recording and playback.
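As a toy sketch of the oversample-then-filter-digitally idea (grossly simplified: a real delta-sigma converter runs at far higher rates with one-bit resolution, whereas this only demonstrates the digital antialias step, using scipy):

```python
import numpy as np
from scipy import signal

fs_hi, factor = 352_800, 8     # 8x oversampled input, to be brought down to 44.1 kHz
t = np.arange(fs_hi) / fs_hi   # one second of signal at the high rate

# An in-band 1 kHz tone plus an out-of-band 30 kHz component that would
# alias to 14.1 kHz if we simply kept every 8th sample.
x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 30_000 * t)

# decimate() applies a digital low-pass (antialias) filter before
# downsampling, so the 30 kHz component is suppressed instead of folding.
y = signal.decimate(x, factor, zero_phase=True)   # y is now at 44.1 kHz
```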
But we have barely scratched the surface yet. There are a lot more implications of the choice of sampling rate, especially when it comes to further processing. Also, we do not yet know exactly how our individual converters actually perform. In the future, you will learn more about how digital signal processing is affected by the choice of sampling rate. We’ll also investigate the often-debated question of whether it might be possible, under certain circumstances, to hear frequency content above the 20 kHz limit. And we’ll put our converters on the test bench to see if they really live up to their promises.
Have you tried using double sampling rates like 88.2 or 96 kHz for recording? Or maybe even 192 kHz? What are your experiences with that? Discuss in the comments!
|
The heart is an important organ that pumps blood around your body. If it’s not working well, other organs will suffer. Problems with your heart and blood vessels, driven by conditions such as high blood pressure, diabetes, and high cholesterol, harm your heart. The same factors that cause cardiovascular disease also increase your risk of cognitive decline and dementia.
Risk Factors for Heart Disease and Dementia
Research in the Journal of the American Geriatrics Society found that brain function is linked to levels of bad cholesterol (LDL), along with cardiovascular risk. Studies have found that high blood pressure, type 2 diabetes, and high blood cholesterol raise the risk of cognitive decline and dementia. Chronic cardiovascular disorders are even more likely to contribute to Alzheimer's disease. Inflammation further accelerates brain degeneration.
Reduce Your Dementia Risk by Keeping Fit
The Centers for Disease Control and Prevention (CDC) see exercise as vital for good health. Keeping fit can prevent various diseases and help your muscles, heart, and brain work better. Exercise also provides the brain with oxygen and stimulates various chemicals in the nervous system, reducing the risk of heart disease, blood vessel disease, and type 2 diabetes. Research has shown that endorphins released from exercise can reduce pain, inflammation, and have mental health benefits.
What Exercise Is Best for Your Brain?
Which type of exercise gives the best results? Research suggests that short periods of high-intensity exercise (6 minutes) may help delay the onset of Alzheimer's and Parkinson's disease. Other studies have found that cycling and stretching can reduce the incidence of dementia among the elderly. But whatever exercise you do is better than not exercising. In addition to reducing the risk of heart disease and strokes, you’re protecting yourself from dementia and Parkinson's. Exercise also helps slow disease progression, keeping the brain healthy.
Older adults should exercise carefully to avoid injuries. The National Institute on Aging (NIA) recommends starting with slow or low-intensity exercise and warming up and cooling down appropriately.
Studies show a link between heart disease risk factors, such as high blood pressure, smoking, high cholesterol levels, and obesity, and damage to the brain. Help your heart stay healthy with exercise to reduce your risk of dementia. Holistic health will help you have both a healthy heart and a healthy brain.
A Healthy Body, Mind, and Brain Is the Best Gift at 50+ at The Aspen Tree The Forestias Operated by Baycrest with Lifetime Care
Medical technology helps us live longer but what matters most is good health to live a free, meaningful life, free from worries. The Aspen Tree at The Forestias has been developed with research and eldercare leaders including Canada’s Baycrest to meet your every need.
You can live in a multi-generation community at The Forestias with Holistic Lifetime Care plus full facilities and a Health & Wellness program tailored to older adults, with yoga, swimming, singing, playing music, meditation, outdoor activities, hydrotherapy, and lots more to safeguard your physical, mental, and brain health.
The Aspen Tree at The Forestias also has a Health & Brain Center with health services to counter dementia. The team of specialists looks after your health 24 hours a day to give you peace of mind and holistic good health.
Live free from care in the free time of life. Come and discover your perfect lifestyle.
Find out more https://mqdc.com/aspentree
LINE OA: @TheAspenTree or click https://mqdc.link/3Emhkde
|
HHS family shares impact of terrorism in Canada
Hospital sites lower flags for June 23 National Day of Remembrance for Victims of Terrorism
Rob Alexander and his family know first-hand the devastating impact of terrorism in Canada.
His father, Dr. Anchanatt Mathew Alexander, was among the 329 people killed in the June 23, 1985 bombing of Air India flight 182. The Hamilton surgeon worked at the former Chedoke hospital and McMaster hospital, now McMaster University Medical Centre, a Hamilton Health Sciences (HHS) site. He went on to become chief of staff at the West Haldimand General Hospital in Hagersville.
Dr. Alexander was flying to India to visit his elderly mother when the plane exploded over the coast of Ireland, killing everyone on board.
National Day of Remembrance
The bombing is the largest mass murder and terrorist act in Canada’s history, and it’s the reason organizations across the country, including HHS, lower flags to half-mast each June 23 in recognition of Canada’s National Day of Remembrance for Victims of Terrorism.
The Air India Victims’ Families Association successfully lobbied the federal government for this official day of recognition back in 2005 as a result of the judicial public inquiry into the tragedy. Rob is a member and a current director of the association.
“I’m pleased to see HHS and other organizations lower their flags in remembrance,” says Rob, an insurance and benefits consultant who lives with his wife Linda and their two children in Hamilton.
Linda is a 21-year HHS employee who works as a medical secretary in the anatomical pathology lab at Juravinski Hospital and Cancer Centre. “It’s an important recognition of this national tragedy,” she says.
Deeper understanding needed
The family hopes that the Day of Remembrance will help educate Canadians about Air India flight 182, and its significance in Canada’s history.
“Many Canadians still view the bombing as a foreign act because it happened on an Indian airline,” says Rob. “In fact, this terrorist plot was hatched in Canada and carried out on Canadian soil. The bomb was planted on the plane in Vancouver, and almost 80 per cent of the people onboard would have been Canadian citizens and permanent residents.”
From Vancouver the plane travelled to Toronto, picking up passengers including Dr. Alexander. It stopped next in Montreal, and was en route to London–Delhi–Mumbai when it exploded over the Irish coast.
Rob, now 52, was 15 years old at the time. His two siblings, Tania and Jamie, were just 12 and 9. The family didn’t know for days that there were no survivors. Rob remembers imagining that his dad survived and was heroically saving others, just like in the movies.
“I pictured him helping people into life rafts, and helping injured survivors. He had hospital emergency department experience, and was one of those good men who liked to help.”
Rob was a small child when he immigrated to Canada in 1971 with his father and mother, Esmie. His siblings were born here.
“My dad could have worked anywhere in the world but he chose Canada, and Hamilton,” says Rob, who attended Hillfield Strathallan College along with his sister and brother. From Hillfield Strathallan, all three siblings attended McMaster University.
“My parents liked this community and wanted their kids to grow up here.”
A show of respect
Lowering flags to half-mast on June 23 recognizes the largest mass murder and terrorist act in Canada’s history, and is also a reminder of the ongoing role Canadians can play in creating inclusive, safe communities.
“I am so proud to know that my incredible employers at HHS are interested in recognizing this day and to show how much it means to all our communities who have endured even more recent tragedies, including Canadian lives lost on Ukraine International Airlines flight 752 and the terrible events in London, Ontario,” says Linda.
She’s referring to the Ukrainian plane shot down by an Iranian surface-to-air missile in January 2020, killing 55 Canadian citizens and 30 permanent residents. And one year ago, four Muslim family members were killed in London, Ontario, in what police call a hate-motivated truck attack. A 21-year-old man faces four counts of first-degree murder.
These recent tragedies show that terrorist acts continue to take place on Canadian soil, as well as against Canadians, say Rob and Linda.
Meanwhile, Rob continues working to raise awareness of the Air India tragedy and promote the National Day of Remembrance through his advocacy efforts with the Air India Victims’ Families Association.
“Though the bombing happened 37 years ago, it’s a significant event in Canada’s history that should be front-and-centre in people’s minds, especially on June 23,” says Rob.
“The National Day of Remembrance is an important reminder that we must not become complacent. These types of mass-casualty events can change the course of a country’s history and have a deep, lasting impact on citizens, especially since these are avoidable events that have wasted hundreds of lives.”
|
Physicists at the Hong Kong University of Science and Technology have shown that no machine will ever allow a person to travel through time.
The problem is not that we don't have the technology yet. Simply put, travelling through time is beyond the limits of the physical laws of the universe.
A group of physicists at the Hong Kong University of Science and Technology (HKUST) led by Prof Shengwang Du reported the direct observation of optical precursor of a single photon and proved that single photons cannot travel faster than the speed of light in vacuum. HKUST's study reaffirms Einstein's theory that nothing travels faster than light and closes a decade-long debate about the speed of a single photon.
Prof Du's study demonstrates that a single photon, the fundamental quantum of light, also obeys the traffic law of the universe, just like classical EM waves. Einstein claimed that the speed of light was the traffic law of the universe; in simple language, nothing can travel faster than light. HKUST's team is the first to experimentally show that optical precursors exist at the single-photon level, and that they are the fastest part of the single-photon wave packet, even in a so-called 'superluminal' medium.
"The results add to our understanding of how a single photon moves. They also confirm the upper bound on how fast information travels with light," said Prof Du. "By showing that single photons cannot travel faster than the speed of light, our results bring a closure to the debate on the true speed of information carried by a single photon. Our findings will also likely have potential applications by giving scientists a better picture on the transmission of quantum information."
For more details on the issue, head over to the press release.
|
Not much is known about the native life of the world; the most noteworthy species is the Yautja, but there also seems to be a plethora of wildlife.
Little is known about the planet. In Alien vs Predator: Requiem (AVP: R) it is shown orbiting a trinary star system and possessing a ring. Gravity, day/night cycles, atmospheric composition, continental distribution, etc., are unknown.
Terrain and Climate
The climate is presumably hot and humid throughout most of the world. According to AVP:R, intense volcanic activity is still present. The volcanic regions are also known to contain areas of lethal radioactivity, as the dangerous Vy'drach dwells in these areas.
Little is known about the Yautja's history. The movie Predators suggests the Traditional Predators and the Super Predators had a clan war that resulted in the Super Predators being banished from the homeworld.
- It was previously thought to have been the setting planet in the film Predators, but it has been confirmed that the planet is a Super Predator hunting reserve and not the homeworld, allowing for the possibility that there are other planets inhabited by the Predators or Predator sub-species.
- It appeared in the fan film AVP: Redemption. A Weyland-Yutani video was received by a Yautja Council Member on Yautja Prime, who then showed the video via hologram to the King Predator. The King then sent a message to a presumably banished Predator Warrior, though why he was banished is unknown. He was ordered to eliminate the xenomorph infestation and destroy the Sulaco.
|
Unless you’re a robot sent back in time to eliminate John Connor (The Terminator, for those of y’all who have been living under a rock for the last half century), you’re well aware of the crucial part that sleep plays in all our lives. Getting eight hours of sleep, or anywhere near that, is vital in keeping your mind and body functioning at an optimum level.
Without the benefit of the proper number of hours sleeping, you run the risk of developing a number of health issues. Sleep is important for overall health, and inadequate sleep is associated with numerous health problems. Research from the United States Department of Health and Human Services shows that not getting enough sleep, or getting poor-quality sleep, increases the risk of high blood pressure, heart disease, obesity, and diabetes.
Sleep deprivation can also be very dangerous. Sleep-deprived people who were tested using a driving simulator or performing hand-eye coordination tasks did as badly as, or worse than, people who were intoxicated. Drowsy driving causes thousands of car crashes each year, some of them fatal. Additionally, sleep deprivation magnifies the effect of alcohol on the body. A fatigued person who drinks will be more impaired than a well-rested person.
Unfortunately, we all live in a world that offers plenty of distractions and disturbances to sleep. Worse, there’s just no avoiding the issues that prevent you from enjoying a good night’s sleep. And that’s why you need to find suitable ways to block out the noise, whether it’s the raucous upstairs neighbor, your boisterous children or the barking pet in your backyard.
One solution that I have had personal success with is BedPhones and SleepPhones. Both are great at supporting my quest for slumber. If you are not familiar with either, you are in the right place as I will take you through the finer points of these devices in the hope of helping you decide which one you should get.
Let’s first start by discussing how BedPhones and SleepPhones work.
Click here to jump straight to our recommendations!
Table of Contents
- How Do BedPhones and SleepPhones Work?
- What BedPhones and SleepPhones Can Offer You
- What are the Differences Between BedPhones and SleepPhones?
- BedPhone vs. SleepPhones: What are the Problems?
- Other Options That You Can Try
- What to Consider When Buying Either a BedPhone or a SleepPhone
- Best Headphones for Sleeping
How Do BedPhones and SleepPhones Work?
First of all, it’s imperative that you first understand how these devices work.
SleepPhones look like wide headbands. The earphone elements are located within the outer fabric, so when you want to use them, you just place them over your head so that they rest over your ears. And because of the width of the outer fabric, SleepPhones can also serve a second purpose as an eye mask, all whilst covering your ears and helping you to drift off to dreamland.
On the other hand, BedPhones look like your normal in-ear headphones, but they don’t go into your ears. These are easy to use; just hook them over your ear and make sure that the tiny flat speaker lies flush against your ear.
What BedPhones and SleepPhones Can Offer You
While BedPhones and SleepPhones have been created for similar purposes, each differ from the other and we’ll discuss how so later in this article. Before we get there, let’s first take a look at what they both can give you.
1) Prolonging your life
A good night’s rest actually helps keep your body in tiptop shape and functioning well, for a much longer period of time.
2) Avoiding stress and depression
Your brain can cope with a lot of issues if given consistent sleep, since it can slowly but surely rebuild and recover over the course of those nights.
3) Improving your memory and focus
Of all the benefits listed here, this is the one that is a no-brainer. But you’d be surprised at just how much more you can remember and how easily you can pick up on little details, even avoiding mishaps far more easily, with the help of proper and consistent sleep.
4) Avoiding unnecessary weight gain and inflammation
Sleep is when our body does a lot of significant processes. Some of them relate to digestion and even general maintenance of the body, preventing a lot of issues from popping up.
What are the Differences Between BedPhones and SleepPhones?
Now that you know why a lot of people are using these devices to help themselves get more sound sleep, let’s go about comparing them:
There’s a reason why I’m mentioning price first, and it is a fairly simple one: BedPhones are cheaper than SleepPhones by a wide margin. Just how far apart are the price points between these two? Consider this: even in the closest cases, BedPhones cost half as much as SleepPhones, and the difference can stretch all the way to a factor of six or even seven.
Considering how substantial this difference is, you might presume it’s because BedPhones are much worse than SleepPhones. Depending on your particular needs, that just might be true, but we’ll be diving deeper into that specific subject to make sure you are armed with the right kind of information before you make a purchase.
Effectiveness in Helping User Get Sleep
Both of these products function in an identical manner, providing you with a lengthy cord to run to your device, while also offering plenty of slack to work with. Most importantly, they block out the noise. There’s really not much difference when it comes to each device’s effectiveness in helping you get the kind of sleep you want.
I have also found that their general levels of noise reduction and cancellation are neck-and-neck, most probably because of the similarities in their base designs.
The Device’s Versatility
Before you think it odd to talk about versatility when it comes to products designed to ease your sleep, give me a chance to explain. One of them has other uses as well. The BedPhones can even be worn while:
Playing sports – Not recommended for contact sports, but feel free to use them for other athletic activities like jogging or biking.
Traveling – Whether it’s a long commute or just a leisurely drive to the grocery store, you can wear these either way.
Focusing on creative work – Your creative juices while writing, painting and so on can be hindered by outside noise, but you can wear these BedPhones even during such times, without any worry.
Don’t let the name fool you: BedPhones are not strictly for use when you’re in bed and want to drift off to sleep. Their versatility gives them a nice edge in this portion of the comparison.
This is because SleepPhones function exactly as the name says. They make it easier for you to sleep: nothing more, nothing less. You cannot use them for any purpose other than getting you some zzzsss. When it’s lights out and the time has come for you to sleep, SleepPhones are great for the job. In any other situation, the headband design makes them pretty much useless.
Comfort and Quality
When it comes to the range of use and the slack provided by the cord, both of these products stand on equal footing. The general standard for both BedPhones and SleepPhones is around 4 to 5 feet of cord, and this does not often change.
But BedPhones have several downsides, which I will list below:
1) The Design
While not always a bad thing, this design does mean they can get lost fairly easily or even get crushed if you place your head on them the wrong way. In addition, they are far less comfortable on the ear, even with the thick foam they possess.
2) It Limits Your Sleep
Another complaint often expressed about these devices is that you end up not being able to sleep on your side. You start off with some minor discomfort, which can later become an irritant all on its own that prevents you from getting zzzsss.
3) Loss of Quality
Sleeping on these the wrong way can even cause them to do their job poorly. The quality of the sound they deliver can be made worse by the pressure of your ear on them, making them far less effective in some situations.
4) Its Durability
They are not too durable. They can last as little as a year, or even less, before you need to buy a new one.
Meanwhile, SleepPhones are amazing when it comes to comfort. They are far more luxurious and cozy when compared to BedPhones.
First of all, the headband is made to be particularly comfortable and help in your sleep, alongside the headphone within it. Not only that, but it lets you sleep however you like, without suffering any adverse effects.
Side sleepers, those who like to sleep on their belly, and many more will find themselves hardly noticing the headband once they put it on. Not only that, but there is no loss of quality at all, no matter which way you use them.
Another great thing is that the headband is machine-washable, meaning that the material is, in fact, durable enough to withstand it. That’s a very good endorsement, especially when you consider most warranties run for at least a year, some even longer.
This is a definite good sign for their likely longevity, beyond the obvious proof that they are made well enough to wash so easily. This is why the SleepPhones are the hands-down winner of this round.
Want to skip straight to the conclusion? Click here!
BedPhone vs. SleepPhones: What are the Problems?
Both BedPhones and SleepPhones work great at helping you to sleep better, but like most things, they do come with their fair share of problems and it’s important for someone who is considering a purchase, to be aware of these.
- Lying on a speaker can be quite uncomfortable for some people. While it is possible to adjust the speaker and even somewhat mold it, this still does not work for everyone. This means that they never quite feel fully comfortable, which could get in the way of that perfect sleep.
- The actual fit of these headphones largely depends on the size and shape of your ear. Some people are unable to get the right fit.
- The speaker part of the BedPhones is actually very thin, which means that they can break very easily.
- When you buy these headphones, you will need to make sure that you get the right size. If you get one that is too small, it will feel really restrictive. Too big, and the headband will slip out of place as you move around while sleeping. Use a tape measure to measure around your head at ear level to help make sure you buy the right size.
- Some people have discovered that the speakers can move out of place inside the headband. Because the speakers aren’t built into the headband and are removable, you may find that they shift from their initial placement.
Other Options That You Can Try
Apart from bedphones and SleepPhones, I thought I would also give you a couple of other options that might help you to get to sleep a bit easier. Some of these have worked well for me in the past so I think they are well worth mentioning.
Not interested in the other options? Jump straight to find out what we recommend!
1) Foam Earplugs
No, this is not the set you get at a hotel or some other rubbish off-cut of foam. After trying a lot of different noise-blocking earplugs in the past, I have pretty much found the perfect ones: the Howard Leight Laser Lite Plugs.
Don’t be tempted by any other brand, or even by Howard Leight’s own higher-decibel-rated models; the ones you want are the yellow-and-pink Laser Lite model. These are by far the best and comfiest earplugs I have ever used for sleeping and I highly recommend them. They last upwards of 10 uses per set, and a box of 200 pairs should only cost you about $20, so you can buy in bulk and save big.
2) Custom Molded Earplugs
I have had some success with custom molded earplugs in the past, although I now prefer the disposable kind mentioned above. I recommend trying the Decibullz Custom Molded Earplugs. These are a set of DIY custom molds that you make from the comfort of home, and they fit exactly to your outer ear. The process is a little tricky, so get some help if you think you need it, but overall it takes around 10 minutes to set them into a reusable set of earplugs that block out a lot of external noise.
3) Pyle Bluetooth Pillow Speaker
Another device that I would suggest trying is the Pyle pillow speaker. For about a year, this was my solution to getting a good night’s rest. One of the problems with using a bedside speaker is that the sound is mono-directional. But the Pyle resolves this concern by placing a left and right speaker inside your pillow. I really began appreciating it when listening to ambient noise recordings, but obviously it isn’t a feasible solution if you share a bed with someone else, like your wife.
What to Consider When Buying Either a BedPhone or a SleepPhone
We are going to break down the buying process into different areas and product categories to help find the right set for you. Take a moment to think about how you sleep and what will match your needs best before making a purchase and if you have any questions feel free to leave a comment down below and we can do our best to advise you on the right choice.
1) The Quality of the Build
When you sleep, you will be making contact with the pillow as you toss and turn. This can cause substantial wear and strain on any headphones so you are going to need something that is either tough and durable enough to take the abuse or something that is specifically designed to be used in such a way.
2) Are They Comfortable to Wear All Night Long?
This is the most significant factor in my estimation. After all, if you aren’t comfortable, you are going to have problems falling asleep quickly and even more trouble trying to stay asleep. You want something that you would be happy to wear when you try to get your full 8 hours of sleep without experiencing pain or discomfort.
3) Is Sound Important When Choosing a Set Of Headphones For Sleeping?
Sound is immensely important when picking a set of headphones, but when you are choosing them to sleep in, your priorities should be somewhat different. In this situation, you shouldn’t worry about out-and-out fidelity; rather, you should be looking for something that doesn’t do anything jarring or annoying that might put you off your sleep. Examples of this could be a distracting background hiss or a sharp treble spike. Both of these can be off-putting enough that you start thinking about the sound rather than drifting off.
When it comes to sound, you want your headphones or earphones to be smooth and inoffensive. Neutral and balanced headphones with low impedance do well here, but I also wouldn’t rule out a set of headphones with toned-down high notes.
Best Headphones for Sleeping
The Cozyphones are one of the best-executed headphones for sleeping and the ones I am currently using every night. They have a soft headband that, unlike other models, allows your ears to breathe. Comfortable and lightweight, after a few weeks with them you will barely notice you have them on.
My second pick has to go to Maxrock. Normally, I would advise against sleeping with earphones in, because when you roll onto your side they usually push deeper into your ear canal and cause discomfort. The Maxrock, though, are so small and unobtrusive that you can comfortably sleep with them all night. However, if you lose them in the dark, best of luck finding them.
The Sleepace is an all-in-one system that blends an eye mask and headphones into one package. A lot of people will like this because it saves fumbling in the dark for two items instead of just one. It also has a dedicated app that monitors your sleep and adjusts the music playback according to your preferences.
Agptek makes a lot of situation-specific headphones, and their take on SleepPhones is a very good one. I found it to be one of the most breathable models in this test, and I really liked the inline volume control, which let me tune the sound without reaching for my phone.
I wanted to throw in the e3000 as some kind of all-rounder. This is something you can sleep in, but also something that sounds great all through the day. I managed to test the e3000 this year and was blown away by the sound quality; thanks to its diminutive size, it naturally makes a perfect earbud for sleeping. Think of it as a premium upgrade to the Maxrock version I listed above.
These are very similar in design to the Cozyphones, but make sure you get the size right, as they aren’t a one-size-fits-all kind of device. A lot of people swear by them, but I found the speaker part to be a bit intrusive, and there were other solutions that had better breathability.
|
The impact of early marriage on the reproductive health of women has been well documented, but the effect on mental health often gets overlooked. Child brides often find themselves struggling to cope with anxiety and depression and find little sympathy or support in their marital home.
A sociological study done by the University of Calicut among 600 women who had married before the legal age found that most of them were in conflict with their husbands and other members of the marital home. They were under pressure to take over the household chores and produce a child early.
Any assertion of right or voicing an opinion was treated as a challenge and often met with ridicule, even physical abuse.
A new India-wide study by the Delhi-based SAMA Resource Group for Women and Health is also examining the wider impact of early marriage on a woman’s health. Early findings of the report say that when girls are forced to leave school and marry, they experience a loss of mobility. The immediate result is a loss of companionship, as they are no longer free to meet their friends. This is a major cause of distress.
Every aspect of their lives comes under close watch – from what they wear to whom they speak to – so there is a constant feeling of apprehension that they might break the rules.
Any sign of sadness or unduly quiet behaviour is regarded as proper and hence gets ignored. It is only when the signs of poor mental health become very obvious that outside help is sought, and even then it is usually not professional help but that of traditional faith healers.
“Whenever there is physical violence, it shows up in scars”, says Praful Kamble, Program Officer of SNEHA’s Little Sisters program, which has been working to address domestic violence issues in Mumbai’s Dharavi area. “But the impact on the mind is 25% more. There is depression and a sense of shock. And when there is negative support from the family, the woman feels even more isolated.”
Geeta (name changed) experienced verbal violence from her in-laws and husband, as her son was constantly ill. Even her sisters-in-law did not support her. One day she poured kerosene on herself and set herself on fire.
“I did it out of despair”, she says. “Caring for a sick child was stressful as it is and then to be constantly blamed for it was a miserable feeling. I was worried for my child and had no idea where to seek help.”
There are multiple linkages between early marriage and health. Mental health is a key one, and needs greater focus in India’s programs and policies.
|
Human ingestion of microplastics (MPs) is inevitable due to the ubiquity of MPs in various foods and drinking water. Whether the ingestion of MPs poses a substantial risk to human health is far from understood.
Microplastics are fragments of any type of plastic less than 5 mm in length, according to the U.S. National Oceanic and Atmospheric Administration (NOAA), and the European Chemicals Agency. They cause pollution by entering natural ecosystems from a variety of sources including cosmetics, clothing, and industrial processes.
What is IBD?
Inflammatory bowel disease (IBD) is a term for two conditions (Crohn’s disease and ulcerative colitis) that are characterized by chronic inflammation of the gastrointestinal (GI) tract. Prolonged inflammation results in damage to the GI tract. Some of the differences between Crohn’s disease and ulcerative colitis:
Crohn’s disease:
- It can affect any part of the GI tract (from the mouth to the anus). Most often it affects the portion of the small intestine before the large intestine/colon.
- Damaged areas appear in patches that are next to areas of healthy tissue.
- Inflammation may reach through multiple layers of the walls of the GI tract.
Ulcerative colitis:
- Occurs only in the large intestine (colon) and rectum.
- Damaged areas are continuous (not patchy), usually starting at the rectum and spreading further into the colon.
- Inflammation is present only in the innermost layer of the lining of the colon.
New Study Findings
A new study published in the journal Environmental Science & Technology evaluated the association between inflammatory bowel disease and greater amounts of microplastics in the stool.
Dr. Maria Neira, director of Public Health, Environment, and Social Determinants of Health at the World Health Organization (WHO), says that “We urgently need to know more about the health impact of microplastics because they are everywhere — including in our drinking water.”
“Based on the limited information we have, microplastics in drinking water don’t appear to pose a health risk at current levels. But we need to find out more. We also need to stop the rise in plastic pollution worldwide.”
Dr. Yan Zhang, the corresponding author of the study, previously found in an animal model that microplastics can accumulate in the liver, kidney, and gut, and that the accumulation was strongly dependent on the size of the microplastics.
The research team studied fecal samples from 52 participants with IBD and 50 otherwise healthy people without IBD. Participants completed a questionnaire about the foods and drinks they consumed and their working and living conditions over the previous year.
They found that participants with IBD had significantly more microplastics in their feces than the healthy group. They also found that the severity of ulcerative colitis and Crohn’s disease correlated with the amounts of microplastics.
The team noted that these patients with more microplastics tended to drink more bottled water, eat more takeaway food, or have greater exposure to dust where they lived or worked.
The researchers pointed out that the study cannot demonstrate that microplastics cause IBD; rather, they conclude that people with IBD are more likely to retain microplastics.
Timothy Huzar. (2020, Jan 8). IBD and microplastics: Is there a link? Medical News Today. Retrieved from:
Blair Crawford, Christopher; Quinn, Brian (2016). Microplastic Pollutants (1st ed.). Elsevier Science. ISBN 9780128094068
Arthur, Courtney; Baker, Joel; Bamford, Holly (2009). “Proceedings of the International Research Workshop on the Occurrence, Effects, and Fate of Microplastic Marine Debris” (PDF). NOAA Technical Memorandum.
Collignon, Amandine; Hecq, Jean-Henri; Galgani, François; Collard, France; Goffart, Anne (2014). “Annual variation in neustonic micro- and meso-plastic particles and zooplankton in the Bay of Calvi (Mediterranean–Corsica)” (PDF). Marine Pollution Bulletin. 79 (1–2): 293–8. doi:10.1016/j.marpolbul.2013.11.023. PMID 24360334.
|
First name origin & meaning:
First name variations: Karney
Last name origins & meanings:
- Irish: Anglicized form of Gaelic Ó Catharnaigh ‘descendant of Catharnach’, a byname meaning ‘warlike’.
- Irish: reduced form of McCarney.
- Irish: variant of
- Irish: Anglicized form of Gaelic Ó Cearnaigh ‘descendant of Cearnach’. Compare Kearney.
Comments for Carney
|
SSG612: AP US Gov+ (2013-2014)
CURRICULUM PROGRAM: Advanced Placement
COURSE TITLE: AP US Govt-Politics+
CALENDAR YEAR: 2013-2014
GRADE LEVEL: 12
COURSE LENGTH: 36 weeks
Major Concepts/Content: A well-designed AP course in United States Government and Politics will give students an analytical perspective on government and politics in the United States. This course includes both the study of general concepts used to interpret U.S. government and politics and the analysis of specific examples. It also requires familiarity with the various institutions, groups, beliefs, and ideas that constitute U.S. government and politics. While there is no single approach that an AP United States Government and Politics course must follow, students should become acquainted with the variety of theoretical perspectives and explanations for various behaviors and outcomes. Certain topics are usually covered in all college courses.
Major Instructional Activities: Instructional activities will be provided relative to the content standards of the AP US Government, and use chronological and spatial thinking, historical research, and interpretation to demonstrate intellectual reasoning, reflection and research skills.
Major Evaluative Techniques: Evaluation will be comprised of assessments for/of learning in content standards knowledge, historical analysis, making historical connections and social studies research skills utilizing primary source documents.
Course Objectives: Upon completion of the AP Government course of study, students should be able to:
- know important facts, concepts, and theories pertaining to U.S. government and politics
- understand typical patterns of political processes and behavior and their consequences (including the components of political behavior, the principles used to explain or justify various government structures and procedures, and the political effects of these structures and procedures)
- be able to analyze and interpret basic data relevant to U.S. government and politics (including data presented in charts, tables, and other formats)
- be able to critically analyze relevant theories and concepts, apply them appropriately, and develop their connections across the curriculum
Course Notes: Weighted - Must Take AP Exam (+ indicates Weighted). Taken in lieu of US Govt.
|
‘One Belt, One Road’. This is the slogan of Chinese President Xi Jinping’s landmark development strategy to create a new, twin-pronged ‘Silk Road’ between China and Europe.
It resurrects the halcyon early days of Eurasian integration when overland routes were established between the Spice Islands of present-day Indonesia and the capitals of Europe, passing through multiple cities whose fortunes prospered as trade flourished.
Bukhara, Samarkand, Tashkent, Kashgar, Kandahar, Tehran, Baghdad, Palmyra, Lanzhou. All these names once threw up images of medieval wealth, with their fabulous spires, learned universities and libraries, powerful overlords and multicultural marketplaces. Alas, most are now known for wholly different reasons.
The originator of the Silk Road of antiquity was the Han Dynasty, which traded the eponymous luxury (in addition to many other goods) across its vast empire and beyond from the 2nd century BC until its fall in the 3rd century AD. Whilst the road survived in various incarnations, the route best known to history was at its strongest during the so-called ‘Pax Mongolica’, and here it is worth quoting at length from the eminent J.H. Parry:
In the great days of the Mongol Khans much Chinese merchandise destined for Europe had travelled overland on the backs of camels and donkeys by many different caravan routes, to termini in the ports of the Levant and the Black Sea; and European merchants, not infrequently, had themselves travelled with their goods by these routes. Flourishing Italian merchant colonies had grown up at the principal termini, at Constantinople and Pera, its commercial suburb; at Tana (Azof); at Caffa in the Crimea and at other Black Sea ports. In the fourteenth century Pegolotti’s safe route to Peking became exceedingly unsafe and European travel to the east came to an end. The overland routes in general declined in importance, not only because of political disturbance, but from the same physical causes which kept the predatory nomads on the move. Progressive desiccation in the lands of central Asia made pasture unreliable. The flow of merchandise overland diminished, and the ancient towns through which the caravans passed became impoverished. (Parry, 1963, p.56)
The final nail in the coffin of the Silk Road was the fall of Constantinople to the Ottoman Turks in 1453, after which European states and merchants could no longer maintain a foothold in the Middle East, let alone a launchpad for Asian trade.
Now, the Barking Rail Freight Terminal in London is waiting to become the 15th destination on the ‘New Silk Route’, with a Chinese freight train expected in the coming days. Overland trade is being re-popularised as a cheaper alternative to air freight and a safer, quicker alternative to the sea. It forms one strand of the ‘One Belt, One Road’ initiative, the other being a ‘Maritime Silk Road’ between China, India, the Middle East and Africa.
For the countries of Central Asia, decimated first by the Russian Empire and then by the ravages of Soviet rule, it is an opportunity to reinvent themselves and potentially recapture some of their past glory. Simultaneously, it offers China a chance to increase both its economic and political influence in regions where the US footprint is light at best. What Russia thinks is another matter.
It is unlikely that China’s ‘One Belt, One Road’ will captivate the popular imagination in the same way that the Silk Road of old does, yet it is nevertheless a proactive step by the Chinese government to integrate a giant landmass in a way not seen for centuries.
What the geopolitical consequences of this bold venture will be cannot yet be known, but it certainly goes some way to undermining critics who view China as an insular power unwilling to responsibly use its ascending role on the global stage.
Parry, J. H. The Age of Reconnaissance (1963)
|
Dealing With Toothache
Toothache refers to the pain felt in and around the jaws and teeth of a person and is caused by several factors, the main one being tooth decay. Toothache can be felt in several different ways. A broken tooth or lost filling may sometimes start the pain. There are several ways to deal with a toothache: it can be dealt with through medical treatment or by soothing the pain quickly using simple methods. This paper seeks to explain these methods in detail.
Soothing the Pain Quickly
This can be done by taking a painkiller. Aspirin and ibuprofen (non-steroidal anti-inflammatory drugs) provide quick and effective relief, especially for minor toothaches. A throbbing tooth can impede someone’s ability to sleep, speak or eat. It is similarly harder to treat a toothache while in pain, so getting relief from painkillers will come in handy. However, one should only use the recommended dosage that is printed on the package or prescribed by a doctor. Another way to soothe the pain quickly is by applying a cold compress. A person can fill a food storage bag with ice, cover it using a thin paper towel, and apply it to the cheek area immediately outside the tooth. The cold temperature helps to ease the pain. The ice should not be applied to the tooth directly. Additionally, the area around the teeth can be numbed.
One can buy an over-the-counter gum- and tooth-numbing gel. This will help reduce the throbbing, at least for a few hours. The gels can be applied directly to the tooth and normally work for many hours. At times, toothaches can be triggered by small pieces of food lodged in the tooth. These will exacerbate the pain of gingivitis or cavities. In this case, the mouth can be thoroughly cleaned. The use of hydrogen peroxide can also be an alternative way to soothe the pain. This will not only clean the area but also ease the pain. One must be sure to rinse the mouth with a lot of water and be careful not to swallow any of the hydrogen peroxide. The peroxide can be applied by dipping a Q-tip in, ensuring saturation, then applying it liberally to the affected area. This procedure can be repeated a number of times.
Knowing when to visit a doctor is extremely important, because a toothache resulting from a major infection or decay will not go away on its own. One should see a dentist or doctor if, along with the toothache, they experience the following symptoms: fever and chills (a sign of a serious infection), pain which becomes worse and refuses to go away (one may have a cavity which becomes worse after each meal), pain in a wisdom tooth, or trouble breathing or swallowing.
In case one has a cavity which exposes the nerves of the teeth, thereby causing pain, a dentist may put in a filling which will protect the nerves from overstimulation. Lastly, if one has a dental abscess, which usually happens when the tooth pulp gets infected, the best option is a root canal. A dentist will have to clean the inside of the patient’s tooth to rid it of the infection. Because this procedure can be very painful, the dentist will first numb the mouth with local anesthesia.
|
What's the Latest Development?
MIT scientist John Romanishin has done what some said couldn't be done: He has created a mini-cube robot that has no external moving parts yet can move, climb, leap, and -- most importantly -- work together with its fellows to create larger shapes. The motion comes from an internal flywheel that can go as fast as 20,000 revolutions per minute and delivers angular momentum when stopped. Magnets on the cube's edge and faces allow it to connect to other cubes. Romanishin and his colleagues will discuss the invention at next month's IEEE/RSJ conference on intelligent robots and systems.
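To get a rough feel for the physics, here is a minimal sketch estimating the angular momentum stored in such a flywheel and the average torque released when it brakes. Only the 20,000 rpm figure comes from the text; the flywheel mass, radius, and braking time are illustrative assumptions, not published specifications of the cubes.

```python
import math

# Estimate of the momentum transfer behind the cubes' motion. Only the
# 20,000 rpm figure comes from the article; the mass, radius, and braking
# time below are illustrative guesses, not M-Blocks specifications.
rpm = 20_000
omega = rpm * 2 * math.pi / 60      # angular velocity, rad/s

m, r = 0.05, 0.02                   # assumed flywheel mass (kg) and radius (m)
I = 0.5 * m * r**2                  # moment of inertia of a solid disc

L = I * omega                       # stored angular momentum, kg*m^2/s
dt = 0.01                           # assumed braking time, s
torque = L / dt                     # average torque delivered while stopping, N*m

print(f"angular momentum: {L:.3e} kg*m^2/s")
print(f"average braking torque: {torque:.2f} N*m")
```

Under these assumptions the brief braking torque is on the order of a few newton-metres, which is what lets a small cube pivot over its own edge.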
What's the Big Idea?
In a video, MIT robotics professor Daniela Rus says that unlike fixed-architecture robots -- which are usually meant to perform a single task -- the cubes can be assembled and reassembled into different shapes that can perform a variety of tasks. Currently they receive commands from a computer via a radio, but eventually the team plans to build algorithms into the cubes themselves so that, according to post-doc Kyle Gilpin, a swarm of cubes can figure out on their own the best way to complete a task given to them. This could allow such swarms to temporarily repair large structures during an emergency, or enter dangerous environments to identify problems and help provide solutions.
Photo Credit: Shutterstock.com
|
Liver cancer is one of the top ten causes of cancer-related deaths worldwide and in the United States. Hepatocellular carcinoma (HCC) accounts for 75% of all liver cancer cases, most frequently occurring in patients with chronic liver diseases.
The liver is also a frequent site for metastases originating from colorectal cancer, pancreatic cancer, melanoma, lung cancer, and breast cancer.
Depending on the location, severity, and staging of liver cancer, multiple treatment options are currently available, including surgical resection, liver transplantation, ablation techniques (including radiofrequency ablation (RFA), microwave ablation (MWA), high-intensity focused ultrasound (HIFU), cryoablation), chemotherapy, radiation therapy, targeted drug therapy, and immunotherapies.
Even with all the current treatments available, the 5-year survival rate in the U.S. is only 20%, the second-lowest amongst all cancers.
One new noninvasive ultrasound technology, called histotripsy, has been developed at the University of Michigan; it can mechanically destroy target tissues through controlled acoustic cavitation.
For the study, researchers used a rat model in which they were able to break down liver tumors, kill cancer cells and spur the immune system to prevent further spread.
The therapy was able to destroy 50-70% of the liver tumor volume, and by doing so the rats’ immune systems were able to clear away the rest, with no evidence of recurrence or metastases in more than 80% of the animals.
The same treatment is being tested in human liver cancer trials in the United States and Europe.
Tejaswi Worlikar, et al. Impact of Histotripsy on Development of Intrahepatic Metastases in a Rodent Liver Tumor Model. 2022. Cancers. DOI: 10.3390/cancers14071612.
|
Put simply, a lot!
Next to the heart, the diaphragm is the most important muscle in the body. Recent research shows that dysfunction of the diaphragm is associated with low back pain. The diaphragm, a large muscle separating the chest from the abdomen, is the primary breathing muscle, as well as part of the core muscles.
The core muscles are a spherical boundary of muscles that surrounds the abdominal cavity. They include the: diaphragm, pelvic floor, lumbar musculature, abdominal muscles and rectus abdominis. These muscles work together to stabilize the spine, pelvis, hips, and support the function of chest and abdominal organs.
Gray’s picture of diaphragm. Note the circumferential attachments to back, ribs, and sternum.
Proper breathing requires the diaphragm to flatten and descend into the abdomen with inspiration, and doming upwards into the chest cavity with exhalation (there is much more, but to keep it simple we will stop there). Because the diaphragm connects from the back to the front of the body, it requires coordinated muscle activity from the anterior abdominal muscles and core muscles, in order to function efficiently.
Dysfunction from the diaphragm can occur from a problem with the muscle itself or from any of the components of the core musculature; examples include: smoking, prior abdominal or pelvic surgery, deconditioning, abdominal hernias, open heart surgery, etc. A less efficient diaphragm leads to overactivity of the secondary breathing muscles (intercostals, scalenes, and sternocleidomastoids), and back muscles. Dysfunction of the diaphragm can also lead to core dysfunction. This cascade of events can then lead to back pain, neck pain, pelvic floor problems, hip problems, etc.
Without addressing the function of the diaphragm, exercises to treat neck pain, back pain, pelvic floor problems, and hip problems can fail. Treatment should be aimed at restoring the function of the diaphragm through special breathing exercises that coordinate the activity of the diaphragm with the core musculature and restore balance to the secondary breathing muscles.
Valley Sports and Spine Clinic has trained alongside select physical therapists from Blacksburg, Christiansburg, and Radford to develop techniques, adapted from the Postural Restoration Institute, for evaluating and treating diaphragm problems. If you have neck pain, back pain, urinary incontinence, bowel difficulty, or hip problems, you may need to have your diaphragm function evaluated and treated. We can help!
Ethan Colliver, DO
Valley Sports & Spine Clinic Giving you Back your Life
|
Comparing the Glycemic Index to the Glycemic Load
6 of 9 in Series: The Essentials of Starting a Low-Glycemic Diet
The glycemic load is based on the idea that a high-glycemic food eaten in small quantities produces a blood sugar response similar to that of a low-glycemic food. It is a much more useful tool for your day-to-day use, because it allows you more food choices than the glycemic index does alone.
That's good news because no one wants to be too restricted in what he or she can eat. But to create the glycemic load, researchers first had to come up with the glycemic index.
The glycemic index concept was developed in 1981 by two University of Toronto researchers, Dr. Thomas Wolever and Dr. David Jenkins. Their research compared the effect of 25 grams of carbohydrates (just picture two slices of bread if you're not familiar with the metric system) to that of 50 grams of carbohydrates (picture four slices of bread) to see whether the smaller amount created a lower-glycemic response in the human body based on the lower quantity of carbohydrates.
However, with the amount of carbohydrates varying so much in different foods (for instance, some fruits and vegetables have only 5 grams of carbohydrates whereas starches have up to 15 grams), 50 grams of carbohydrates (the standard amount used for glycemic index testing) doesn't always depict the portion size a person may typically eat.
To account for this variation, in 1997, Harvard University's Dr. Walter Willett created the glycemic load, which captures both the quality and the quantity of carbohydrates at a meal. The fact that the glycemic load takes portion size into account is quite helpful because the average person is far less likely to eat 50 grams of a particular food in one sitting.
Looking at portion sizes and carbohydrate grams can give you a better understanding of the glycemic load. Although foods vary, the following table breaks down the average amount of carbohydrates in each carbohydrate-containing food group based on a particular portion size.
|Food Group||Carbohydrate Grams||Portion Size|
|Starches||15||1/2 cup pasta, 1 slice bread, 1/3 cup white rice|
|Fruits||15||1 small piece|
|Dairy products||12||1 cup milk, 1 cup light yogurt|
|Nonstarchy vegetables||5||1/2 cup cooked, 1 cup raw|
As you can see, the amount of carbohydrates in a serving of a particular food depends as much on the portion size as it does on the food itself. So consuming 50 grams of carbohydrates (which is definitely more than one serving) will have a dramatic impact on your blood sugar.
Take carrots, for example. Carrots have a high glycemic index when cooked (41 to be exact), yet they're considered a nonstarchy vegetable. To consume 50 grams of carbohydrates in carrots, you'd have to eat 5 cups! Because the amount of carbohydrates in carrots is so low compared to their average portion size, the glycemic load of carrots is low as well.
On the other hand, a serving of instant white rice, another high-glycemic food with a glycemic index of 72, has around 15 grams of carbohydrates per 1/3-cup serving. To eat 50 grams of carbohydrates in instant white rice, you'd have to eat slightly more than 1 cup of rice — a fairly typical portion size for most people. This portion size means the glycemic load for instant white rice doesn't change much from the food's glycemic index.
The glycemic index compares the potential of foods with equal amounts of carbohydrates to raise blood sugar. The purpose of the glycemic load is to have a usable indicator of the glycemic index that takes portion size into account.
Although adding glycemic load to the mix may cause the glycemic index of some foods, such as white rice, to remain the same, it opens up the door for enjoying more foods that may have a high glycemic index but a low glycemic load based on different portion sizes.
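To make the arithmetic concrete, here is a minimal sketch of the standard glycemic-load calculation (glycemic index times grams of carbohydrate per portion, divided by 100). The GI values and carbohydrate counts come from the examples above; the 45-gram figure for a one-cup portion of instant rice is an estimate based on three 15-gram (1/3-cup) servings, and the low/medium/high cutoffs are the commonly used ones, not definitions from this article.

```python
# Glycemic load (GL) = glycemic index (GI) x carbohydrate grams per portion / 100.
# GI values and carb counts come from the text above; the 45 g estimate for
# 1 cup of instant rice assumes three 15 g (1/3-cup) servings. The cutoffs
# (<= 10 low, 11-19 medium, >= 20 high) are the commonly used ones.
def glycemic_load(gi: float, carbs_g: float) -> float:
    """Return the glycemic load for one portion of a food."""
    return gi * carbs_g / 100.0

foods = [
    ("cooked carrots, 1/2 cup (GI 41, 5 g carbs)", 41, 5),
    ("instant white rice, 1 cup (GI 72, ~45 g carbs)", 72, 45),
]

for name, gi, carbs in foods:
    gl = glycemic_load(gi, carbs)
    band = "low" if gl <= 10 else ("medium" if gl < 20 else "high")
    print(f"{name}: GL = {gl:.1f} ({band})")
```

Running this reproduces the contrast drawn above: carrots come out with a glycemic load of about 2 (low) despite their higher index, while a typical portion of instant white rice lands above 30 (high).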
|
Modeling 2D Geometries
I want to model a 2D geometry in ANSYS (12.1) and I am not sure if my method is the right one. First I drew polygon lines with a closed end, and then I extruded this sketch. My problem is that the geometry becomes 3-dimensional with the extrusion. And it is impossible to set the depth (for the extrusion) to 0 mm or even 1e-1000 mm (i.e. as thin as possible). So my question is: is there another way to create simple faces in ANSYS, or does it only work by extrusion?
Thanks in advance for an answer!
P.S.: Sorry if this is a stupid question but I am new with ansys and I am desperate ;)
Surfaces from Sketches
Extrude is for 3D data… If you just want to create a surface from a sketch, then go to “Concept => Surfaces from Sketches”.
This will prompt you to select your sketch(es). Then click generate.
The SurfaceSketch will appear in the model tree with its associated Sketches…
Next you will need to select the edges to apply named selections for things like inlets and outlets…
It works! Thank you for your detailed reply!
|
Convention on Biological Diversity
Carbon Capture and Storage
CHCs (chlorinated hydrocarbons) is the collective term for organic compounds containing chlorine. CHCs are used as raw materials in the chemical industry.
Climate, as opposed to weather, means the state of the atmosphere and the land or water beneath it over long periods of time. Statements about climate are usually made on the basis of meteorological data. These include the temperature, air humidity and air pressure, wind conditions and water temperature in a particular region over a long period of time.
Climate change (or climate variability) is the change in the earth’s climate over a long period of time. The climate has changed for as long as the earth has existed. A large number of cyclical and non-cyclical processes and events (e.g. the intensity of solar radiation or volcanic activity) affect the earth’s natural climate and can fundamentally change it. Alongside natural climate change, it is also possible for humans to change the climate. It is thought highly probable that the rise in global warming observed since the start of industrialisation has largely anthropogenic causes.
Climate impact research deals with the effects of climate change and studies issues of scientific and social importance in the areas of global change, global warming and sustainable development.
Climate research (or climatology) is an interdisciplinary science that combines meteorology and geography. It investigates climate principles, in other words the average state of the atmosphere in a particular region. Palaeoclimatology is a sub-discipline that studies climate history, i.e. past climates.
Climate research uses long-term observation of radiation, temperature, air pressure, wind and wind systems and precipitation, as well as geographic factors, such as longitude and latitude, altitude, terrain, soil characteristics and vegetation, to draw conclusions about climatic conditions. In addition to this instrumental and historical data, palaeoclimate research uses climate proxies. A climate proxy is an indirect climate indicator recorded in natural climate archives, such as tree rings, corals, lake or ocean sediments, pollen or ice cores (cores drilled from the Greenland and Antarctic land ice sheets are some of the most important climate archives and are now providing information about global climate changes over the past 800,000 years or so, including changes of CO2 concentration in the atmosphere; ice cores from mountain glaciers in temperate zones are also analysed). Climate proxies can also be used to help reconstruct climates of the past for periods before instrumental records were kept. Climate proxies usually have to be calibrated against instrumental records to enable a quantitative picture of past climate conditions to be obtained.
Isotope ratios are an important climate proxy for palaeoclimatic conditions. For instance, the ratio of the oxygen isotopes 16O and 18O in calcitic fossils in marine sediments acts as a proxy for palaeotemperatures. The idea behind it is that the calcite found in the sediments of the ocean floor - e.g. in the skeletons of fossilized protozoans - stores a different amount of the two oxygen isotopes in warm and cold periods. This enables researchers to draw conclusions about past temperature changes based on 16O/18O ratios. The 16O/18O ratio is also a proxy for palaeo-salinity.
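For reference, palaeoclimate studies usually express this ratio in the standard delta notation, relative to an agreed reference standard (typically VSMOW for water or VPDB for carbonates); this convention is general background rather than a definition from this glossary:

$$\delta^{18}\mathrm{O} \,=\, \left(\frac{({}^{18}\mathrm{O}/{}^{16}\mathrm{O})_{\mathrm{sample}}}{({}^{18}\mathrm{O}/{}^{16}\mathrm{O})_{\mathrm{standard}}} - 1\right)\times 1000~\text{‰}$$

Positive values indicate a sample enriched in the heavier isotope relative to the standard, which for marine calcite generally points to colder conditions or greater global ice volume.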
A colloid (Gr. kolla "glue" and eidos "shape, appearance") is a particle with a typical radius between approx. 0.1 and 10 micrometers, finely distributed in another substance (the dispersant). Colloidal solutions may also be referred to as suspensions. In some cases they behave like genuine solutions. Wetting and dispersing agents as well as other additives are often included to maintain stability and to prevent precipitation in colloidal solutions.
Community ecology (synecology) is a subdiscipline of ecology which studies the distribution, abundance, demography, and interactions between coexisting populations.
Plant clarification facilities are water purification facilities using the self-purifying power of nature. The water to be purified passes vertically and horizontally through a soil bed composed of sandy/gravelly mineral materials and planted out with marsh vegetation (reeds, rushes, cattail and others). This process variant is known as an "overgrown soil filter". Where so-called technical wet areas and artificial floating islands are concerned, the water is fed for cleaning purposes into a body of water fully rooted by the occupying plants. Processes in the root region of the plants (interplay between plants, soil and microorganisms) filter out both organic and inorganic dissolved substances as well as suspended components in the wastewater by binding them to the body of the soil. During the passage of water through the fully rooted soil filter, fecal microorganisms present in domestic wastewater are also taken out of circulation. For this reason, water treated in plant clarification facilities is also considered suitable for reuse under increased hygienic requirements, e.g. for agricultural irrigation purposes. Process variants encountered in plant clarification facilities are also known as root-space procedures, reed/rush clarification facilities, overgrown/planted soil filters or hydrobotanic clarification facilities.
Cumulative indicates an accumulation of material. The material in question may consist of specific substances in an organism (pollutants, for example, but also fats etc.). It may, however, also occur as an enrichment of substances in groundwater, soil or other substrates.
|
Getting Started with the Maps App on Your iPad
You can find lots of great functions in the Maps app for iPad, including getting directions from one location to another by foot, car, or public transportation. You can bookmark locations to return to them again. And the Maps app makes it possible to get information about locations, such as the phone numbers and Web links to businesses.
Going to your current location
You must have an Internet connection; your location can be pinpointed more exactly if you have a 3G iPad, but even Wi-Fi models do a pretty good job.
To display your current location in Maps, follow these steps:
From the Home screen, tap the Maps icon. Tap the Current Location icon (the small circle to the left of the Search field).
Your current location is displayed as a blue pin with a blue circle around it. The circle indicates how accurate the location is; it could actually be anywhere within the area of the circle.
Double-tap the screen to zoom in on your location.
If you don't have a 3G iPad, your current location is a rough estimate based on a triangulation method. Only 3G-enabled iPads with GPS can really pinpoint your location. Still, if you type in a starting location and an ending location to get directions, you can get pretty accurate results even with a Wi-Fi-only iPad.
The Maps app offers four views: Classic, Satellite, Hybrid, and Terrain. You see the Classic view by default the first time you open Maps. To change views, swipe the bottom-right corner of the screen to turn the "page" and reveal the Maps menu. Tap the view you want and flick the corner of the page to fold it back. Here's what these views offer:
Classic is your basic street map you might find in any road atlas.
Satellite is an aerial photographic view.
Hybrid offers a satellite view with street names included.
Terrain is a topographical map showing mountains and other variations in the landscape.
You can also turn on a feature that displays an overlay on the Classic map to show current traffic conditions. This feature shows roads in red, yellow, or green to indicate any obstructions (red for serious, yellow for caution) or roads on which cars are moving right along (green).
Zooming in and out
You'll appreciate the feature in the Maps app that allows you to zoom in and out to see more or less detailed maps and to move around a displayed map.
With a map displayed, you can double-tap with a single finger to zoom in.
Double-tap with two fingers to zoom out, revealing less detail.
Place two fingers positioned together on the screen and move them apart to zoom in.
Place two fingers apart on the screen and then pinch them together to zoom out.
Press your finger to the screen, and drag the map in any direction to move to an adjacent area.
Going to another location
If you're at Point A and want to get to Point B, you need to know how to find any location other than your current location using Maps.
To find another location, open the Maps app and tap in the Search field. Then do either of the following:
Type a location, using a street address with city and state, or a destination, such as Empire State Building or Detroit Airport.
Maps may make suggestions as you type if it finds any logical matches. Tap the Search button, and the location appears with a red pin inserted in it and a label with the location, an Information icon, and in some cases, a Street view icon. If several locations match your search term, you may see several pins on the map.
Tap the Bookmark icon (the little book symbol to the left of the Search field), and then tap the Recent tab to reveal recently visited sites. Tap a bookmark to go there.
|
Today, Mrs. B gives me a lesson in good teaching.
The University of Houston's College of Engineering
presents this series about the machines that make
our civilization run, and the people whose
ingenuity created them.
Jane Haldimand was born in
1769, the daughter of a wealthy London merchant. We
know little of her early life. She married a Swiss
doctor, Alexander Marcet, when she was thirty. He
was a fairly distinguished professor of medicine --
on his way to becoming quite wealthy. But Jane
Marcet was not one for a gilded cage.
Soon after her marriage, she began writing
instructive books for young people.
The title of her first book was Conversations
on Chemistry, intended more especially for
the female sex. It came out in 1806. The style is
arresting. It's a running conversation between a
Mrs. B and two young ladies, Caroline and Emily.
Listen as they talk about heat radiation. Mrs. B
says, "Before I conclude the subject ... I must observe
that different surfaces [radiate heat] in different
degrees."
Emily asks, "These surfaces [are all] the same
temperature?" Mrs. B answers, "Undoubtedly. I will
show you [an] ingenious apparatus." She produces a
cubical tin. One side is sanded, one rusted, one
covered with soot, one polished.
She fills the tin with hot water. Then she uses a
focusing mirror to reflect the heat from each side
onto a thermometer. She gets four different
readings. I'm going to recommend that experiment
for our thermal lab at the university.
Within a gentle parlor propriety, Mrs. Marcet and
her alter ego, Mrs. B, boldly take on
any subject. They talk about Watt's new steam
engine -- its valving and power takeoff mechanism.
Of course, the appeal of that kind of material
wasn't limited to young women. My American edition
belonged to someone named Charles Smith, who lived
in Baltimore in 1835. He's made marginal notes
about lime water and about the solubility of tree
After this book, Marcet wrote on political economy,
geology, and much more. Her book on political
economy was very popular. It sold over 160,000
copies in America alone.
A crinoline wall separated women and men
intellectually in 1800. Jane Marcet lived behind
that wall. Yet her books marched out into the
middle of the 19th century and helped transform it.
The year she wrote her book on chemistry, a
15-year-old boy worked in a London bookbindery. He
was Michael Faraday. When Marcet's book passed
through, he read it. It transformed him. Faraday
went on to create our modern concepts of electricity and magnetism.
The crowning irony comes on page 105 of the 1833 edition of her chemistry book. There the
editor has added a version of the experiment in
which Faraday anticipated the electric motor.
Already the book bears the fruit of her first
edition. And Mrs. Marcet has given me -- a new role
model for my teaching.
I'm John Lienhard, at the University of Houston,
where we're interested in the way inventive minds work.
My biographical sources were the Dictionary of
National Biography articles on Jane and
Alexander Marcet and her uncle, Frederick Haldimand.
He was a hero of our French and Indian War.
Jones, T.P., New Conversations on
Chemistry... Philadelphia: John Grigg, 1833.
(This is an updating of Marcet's original book,
which was published in 1806. I couldn't lay my
hands on a copy of the original.)
Marcet, J. Conversations on Political
Economy ..., 5th ed. London: Longman etc.
Marcet, J., The History of Africa ...
London: H. Colburn and R. Bentley, 1830.
Williams, L.P., Faraday, Michael. Dictionary
of Scientific Biography (C.C. Gillispie,
ed.). Charles Scribner's Sons, 1970-1980.
In modern terms, Mrs. B's radiation experiment
showed how the radiant emittance of each side
of the tin is different. As that property varies,
so does the heat emitted from each side. The sooty
side emits about 95 percent of a theoretical
maximum. The polished side probably emits less than
5 percent. (See e.g., Lienhard, J.H., A Heat
Transfer Textbook, Englewood Cliffs, NJ:
Prentice-Hall, Inc., 1981 and 1987, Table 11.1.)
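In code, the comparison amounts to the Stefan-Boltzmann law with a different emissivity for each side. Here is a minimal sketch, assuming hot water at about 90 °C and 20 °C surroundings; the sooty and polished emissivities are the rough values quoted above, while the rusted and sanded values are only illustrative guesses:

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiant_flux(emissivity, t_surface_c, t_surroundings_c):
    """Net heat flux (W/m^2) radiated by a gray surface:
    q = emissivity * sigma * (Ts^4 - Tsur^4), temperatures in kelvin."""
    ts = t_surface_c + 273.15
    tsur = t_surroundings_c + 273.15
    return emissivity * SIGMA * (ts**4 - tsur**4)

# One entry per side of Mrs. B's tin; only soot and polish come from the text.
sides = {"sooty": 0.95, "rusted": 0.70, "sanded": 0.50, "polished": 0.05}
for name, eps in sides.items():
    print(f"{name:9s} {net_radiant_flux(eps, 90.0, 20.0):6.1f} W/m^2")
```

On these assumed numbers the sooty side radiates nearly twenty times as much as the polished side, which is why Mrs. B's thermometer reads four different values.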
Jane Marcet died in 1858 at the age of 89. By then,
she'd strongly influenced her good friend, the
famous English author Harriet Martineau. Martineau
began by writing her own version of Marcet's
Political Economy. She went on to
become a powerful advocate of religious liberalism
and the abolition of slavery.
For more on Marcet, see Episodes 741, 745,
828, 900, and 950.
The Engines of Our Ingenuity is
Copyright © 1988-1997 by John H. Lienhard.
|
In analyzing data from a spectroscopic experiment, the inverse of each experimentally determined wavelength of the Balmer series is plotted versus 1/ni², where ni is the initial energy level from which a transition to the n = 2 level takes place. The slope of the line is
(a) the shortest wavelength of the Balmer series.
(b) – h, where h is Planck’s constant.
(c) one divided by the longest wavelength in the Balmer series.
(d) –hc, where h is Planck’s constant.
(e) – R, where R is the Rydberg constant.
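For readers who want to check their reasoning, the slope follows from the standard Rydberg formula for the hydrogen Balmer series:

```latex
\frac{1}{\lambda} = R\left(\frac{1}{2^{2}} - \frac{1}{n_i^{2}}\right)
                  = \frac{R}{4} - R \cdot \frac{1}{n_i^{2}},
\qquad n_i = 3, 4, 5, \ldots
```

Plotted as 1/λ against 1/ni², this is a straight line with slope –R and intercept R/4, which singles out choice (e).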
|
What is Pascal's Wager?
Blaise Pascal and his "Pensees"
Blaise Pascal was born in Clermont-Ferrand, France, in 1623. A child prodigy, he has been classified as many things: a mathematician, a physicist, an inventor, a Christian philosopher. He made contributions to the study of fluids, probability theory, geometry, and economics, and even invented an early mechanical calculator.
But he is best remembered amongst philosophers of religion for the wager, or gambit, he posed in his Pensees (published in 1669, seven years after his death) to those who claim, "I am so made that I cannot believe..."
Pascal's Wager is a relatively simple bit of reasoning used to resolve what just might be the most dire of all decisions: Does god exist?
It is laid out classically as follows:
God either is or is not. You have no information that would influence your decision either way, so taking a stand on this question is essentially the equivalent of flipping a coin. You must wager; there is no option that allows you to abstain from choosing one of the two propositions.
Given these conditions, let us weigh the gains and losses associated with each option. If you wager that god is and you are correct, you stand to gain immortality in paradise; if you are incorrect, you lose nothing. If you wager that god is not, you likewise lose nothing if you are correct, but you risk eternal damnation if you are wrong. As Pascal put it, "If you gain, you gain all. If you lose, you lose nothing. Wager then, without hesitation, that He exists."
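In modern decision-theory terms, the wager is an expected-value argument. A minimal sketch (the infinite payoffs follow Pascal's framing; the 50/50 prior is his coin flip, and all the numeric stand-ins are illustrative only):

```python
# Payoffs for each combination of wager and reality, per Pascal's framing.
PARADISE = float("inf")    # wager for god, and god exists
DAMNATION = float("-inf")  # wager against god, and god exists
NOTHING = 0.0              # god does not exist: nothing gained or lost

def expected_value(payoff_if_god, payoff_if_no_god, p_god=0.5):
    """Expected value of a wager, given probability p_god that god exists."""
    return p_god * payoff_if_god + (1 - p_god) * payoff_if_no_god

print(expected_value(PARADISE, NOTHING))   # wager for god: inf
print(expected_value(DAMNATION, NOTHING))  # wager against god: -inf
```

On these assumptions, any nonzero probability of god's existence makes wagering for god the better bet; that is both the force of the argument and the target of the objections below.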
While this conclusion seems straightforward and logical enough, a number of objections have been raised regarding its simplicity and its apparently low regard for the attributes of god.
Firstly, the argument assumes that you are wagering on the correct god. Within the pantheon of deities among both current religions and long-dead mythologies, if the actual god is not the Christian god (the one Pascal is writing about), might you not earn even more of this actual god's ire by believing in a false god than by simply abstaining from any belief?
The second problem flows from this idea of belief and the assumption that you can will yourself into believing something simply as a matter of pragmatic utility. Pascal mistakenly assumes that belief bends to wishful thinking. A carrot or a stick may drive a beast of burden forward, but the beast does not move because it believes it is on the correct course to a predetermined destination. The point is that neither the supposed pleasantries of heaven nor the rumored anguish of hell is sufficient to create authentic belief where none stood before.
The final and most significant objection to Pascal's Wager is that it assumes god can be duped and manipulated by flattery. Surely an omniscient god would, by definition, know the falsity of your belief and the motives behind it.
Though an atheist, I would like to think that if there is a god I would garner more favor by not trying to deceive him. I would like to think god prizes intellectual honesty and a reasoned use of the mind's faculties over craven and insincere propitiations performed merely to avoid punishment or accrue reward.
And so I suppose it all comes down to your assessment of god. Is he indeed the fearful, insecure father who neurotically needs to be loved, as the first five commandments would have us believe? Or is he a benevolent force that prizes authenticity, intellectual honesty, and a life of good works over false conciliation and condescension?
The Wager is yours.
|
Important Sleeping Safety Tips For Babies That Parents Need To Know About | Phase Two
You are well aware of the safety standards that have been shared by the Juvenile Products Manufacturers Association (JPMA) for parents of babies. As it was discussed in the previous article, Important Sleeping Safety Tips For Babies That Parents Need To Know About | Phase One, SIDS is a danger when parents don't follow the updated safety guidelines.
It has been stated that parents must purchase a crib that meets the updated safety requirements which means used cribs are not safe for a baby. You are also aware that placing blankets, toys, and pillows into the crib is not safe for infants. Additionally, the mattress must fit in the crib properly. That means there cannot be any large gaps between the frame of the crib and the mattress.
Toys can be placed in the baby's nursery on the surfaces of dressers or tables, but they can't be anywhere near the crib.
Finally, the crib sheets must fit snugly so that they do not come off of the mattress corners. If a sheet comes off of the mattress, it can cover the infant, and the baby will not be able to breathe. Even parents who are aware of all of that still have questions about crib safety, in particular whether or not it is safe to use bumpers.
Mummo* has a grandparenting tip for new parents when it comes to placing bumpers in cribs, as this is an understandably confusing topic. There is a lot of mixed information about whether or not bumpers are safe to place inside a crib.
The Journal of Pediatrics stated in September 2007 that crib bumpers are unsafe and should not be used in cribs. However, in 2011, the American Academy of Pediatrics (AAP) updated those guidelines. Further updates to bumper safety guidance for cribs have been implemented since then. Let's go over those in the next three crib safety points in this second phase of Important Sleeping Safety Tips For Babies That Parents Need To Know About.
Tip 4: Never Use Bumper Pads That Are Pillow-Like
Pillow-like bumpers are not safe to place in cribs. Bumper pads must be flat and must fit properly along the sides of the crib where they are meant to be placed. In addition, they need to either tie or snap securely to the rails of the crib so they stay in the right position. If they are not properly secured, the bumper pads can easily fall away from the sides of the crib and go right onto the mattress. That is a safety hazard, as they can fall on or right next to the infant sleeping in the crib.
Another thing to keep in mind is that the bumper ties must not exceed nine inches. Otherwise, ties that are too long can become a strangulation hazard for the baby.
Tip 5: Only Use Bumper Pads Until The Child Can Pull Themselves Up In A Standing Position
Bumper pads can be used safely only until the baby can pull up into a standing position. Once the infant is able to stand up in the crib, the bumper pads must be removed, or else he or she can use them to climb out of the crib. Remember that as an infant develops, matures, and evolves, they become more inquisitive, and they are constantly making new discoveries.
That is something that parents must keep in mind when it comes to safety. Growing babies will experiment with the items that are accessible to them. They can easily untie or unsnap the bumper pads, and imagining that alone can be quite worrisome to parents - for a good reason. Once the infant begins to pull themselves across the floor, they will soon be crawling, and then they will be on their way to standing.
This phase can range anywhere from 5 to 9 months of age, and in rarer cases it can come sooner or even later. Parents who made the decision to place bumper pads in the crib must monitor their babies carefully as they grow and develop to determine the right time to remove the pads.
Tip 6: The Ties On All Four Bumper Segments Must Be Completely Functional To Be Used
There are many bumpers nowadays sold in four pieces. The ties on each of the four segments must be functional for the bumper to be used safely. That means they have to attach to the side of the crib correctly. If all the ties on a segment cannot be securely attached to the crib, do not use that segment. That particular pad will quickly come away from the crib and end up on the mattress.
Imagine that segment falling away from the crib rail and ending up on the mattress as the baby is sleeping. It can even end up on the baby, which would immediately become a suffocation risk. With that said, as soon as any parent purchases a new set of bumper-pad segments, it is important to test the ties to make sure they are secure. If any of them are not, the segment will need to be replaced by one that has functional ties.
Parents can make an educated decision about whether to purchase bumper pads based on what was noted in this article. They are encouraged to do further research and to reach out to their infant's pediatrician as well to discuss the safety of bumper pads.
In fact, parents need to do their research with regard to safety regulations in general as they design their baby's nursery, even before they give birth. An important point to make as well is that many parents decide not to get bumpers for their infants at all, and those babies turn out just fine. However, parents who feel safer having bumper pads placed on the sides of the crib can use them as long as they follow the guidelines.
*mummo means Grandma in Finnish.
|
Visual memory describes the relationship between perceptual processing and the encoding, storage and retrieval of the resulting neural representations. Visual memory operates over a broad time range, from the span of an eye movement to years of experience, allowing us, for example, to visually navigate to a previously visited location. Visual memory is a form of memory which preserves some characteristics of our senses pertaining to visual experience. We are able to place in memory visual information which resembles objects, places, animals or people as a mental image. The experience of visual memory is also referred to as the mind's eye, through which we can retrieve from our memory a mental image of original objects, places, animals or people. Visual memory is one of several cognitive systems, which are all interconnected parts that combine to form the human memory. Palinopsia, the persistence or recurrence of a visual image after the stimulus has been removed, is a dysfunction of visual memory.
In humans, areas specialized for visual object recognition in the ventral stream have a more inferior location in the temporal cortex, whereas areas specialized for the visual-spatial location of objects in the dorsal stream have a more superior location in the parietal cortex. However, this two-streams hypothesis, although useful, is a simplification of the visual system, because the two streams maintain intercommunication along their entire rostral course.
A majority of experiments highlight a role of the human posterior parietal cortex in visual working memory and attention. We therefore have to establish a clear separation of visual memory and attention from processes related to the planning of goal-directed motor behaviors.
We can only hold in mind a minute fraction of the visual scene. These mental representations are stored in visual short-term memory. Activity in the posterior parietal cortex is tightly correlated with the limited amount of scene information that can be stored in visual short-term memory. These results suggest that the posterior parietal cortex is a key neural locus of our impoverished mental representation of the visual world.
The posterior cortex might act as a capacity-limited store for the representation of the visual scene, while the frontal/prefrontal cortex might be necessary for the consolidation and/or maintenance of this store, especially during extended retention intervals.
There is a visual cortex in each hemisphere of the brain, much of which is located in the occipital lobe. The left hemisphere visual cortex receives signals mainly from the right visual field and the right visual cortex mainly from the left visual field, although each cortex receives a considerable amount of information from the ipsilateral visual field as well. The visual cortex also receives information from subcortical regions, such as the lateral geniculate body, located in the thalamus. However, ample evidence indicates that object identity and location are preferentially processed in the ventral (occipito-temporal) and dorsal (occipito-parietal) cortical visual streams, respectively. Comparison of regional cerebral blood flow (rCBF) during performance of identity and location tasks again revealed differences between the ventral and dorsal pathways.
The dorsal stream pathway is mainly involved in the visual-spatial location of objects in the external world, and it is also known colloquially as the 'where' pathway. The dorsal stream pathway is also involved in the guidance of movements (e.g., reaching for an object in space), and is therefore implicated in the analysis of the movement of objects in addition to their spatial locations.
The dorsal stream pathway begins with purely visual information in the occipital lobe, and then this information is transferred to the parietal lobe for spatial awareness functions. Specifically, the posterior parietal cortex is essential for "the perception and interpretation of spatial relationships, accurate body image, and the learning of tasks involving coordination of the body in space."
The ventral stream pathway is mainly involved in object recognition, and is known colloquially as the 'what' pathway. It has connections to the medial temporal lobe (which is involved in the storage of long-term memories), the limbic system (which regulates emotions), and the dorsal stream pathway (which is involved in the visual-spatial locations and motions of objects). Therefore, the ventral stream pathway not only deals with the recognition of objects in the external world, but also the emotional judgement and analysis of these objects.
The ventral stream pathway begins with purely visual information in the primary visual cortex (occipital lobe), and then this information is transferred to the temporal lobe.
Located at the back of the brain, the occipital lobes receive and process visual information. The occipital lobes also process colors and shapes. Whereas the right occipital lobe interprets images from the left visual space, the left occipital lobe interprets images from the right visual space. Damage to the occipital lobes can permanently damage visual perception.
Damage to the occipital lobe is characterized by loss of visual capability and the inability to identify colors, both important processes in visual memory.
Visual short term memory is the capacity for holding a small amount of visual information in mind in an active, readily available state for a short period of time (usually no more than 30 seconds). Although visual short term memory is essential for the execution of a wide array of perceptual and cognitive functions, and is supported by an extensive network of brain regions, its storage capacity is severely limited.
Visual short-term memory storage is mediated by distinctive posterior brain mechanisms, such that capacity is determined both by a fixed number of objects and by object complexity.
The Benton Visual Retention Test is an assessment of visual perception and visual memory abilities, backed by more than 50 years of clinical use. The test has proven its sensitivity to reading disabilities, nonverbal learning disabilities, traumatic brain injury, attention-deficit disorder, Alzheimer's disease, and other forms of dementia. During testing, participants are presented with 10 cards, each bearing a unique design, for 10 seconds apiece. After the time has passed, participants are asked to immediately reproduce the designs from each card using their visual memory. In the second stage, participants are asked to copy each of the 10 card designs while the cards are in view. The participants' results from each task are then assessed and placed into six categories: omissions, distortions, perseverations, rotations, misplacements, and sizing errors. The further a participant's scores vary from the averages provided in the Benton Visual Retention Test manual, the worse the participant is assessed to be on visual memory ability. The Benton Visual Retention Test has proved to be a generalizable test that can be accurately administered to participants aged 8 to adult, with no gender effect. Some studies have suggested a significant gender and education interaction, indicating that an age-associated decline in visual memory performance may be more prominent for individuals with a lower education level.
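As an illustration of the scoring step just described, here is a minimal sketch that tallies an examiner's recorded errors into the six categories named above (the category labels follow this article; the input format is hypothetical, and interpretation against the norms in the test manual is not reproduced here):

```python
from collections import Counter

CATEGORIES = ["omissions", "distortions", "perseverations",
              "rotations", "misplacements", "sizing errors"]

def tally_benton_errors(recorded_errors):
    """Count a participant's errors per category across the ten designs.

    `recorded_errors` is a list of category labels, one per error the
    examiner noted while comparing reproductions to the card designs.
    """
    counts = Counter(recorded_errors)
    unknown = set(counts) - set(CATEGORIES)
    if unknown:
        raise ValueError(f"unrecognized categories: {unknown}")
    return {category: counts.get(category, 0) for category in CATEGORIES}

print(tally_benton_errors(["rotations", "omissions", "rotations"]))
```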
Neuroimaging studies focus on the neural networks involved in visual memory using methods designed to activate brain areas involved in encoding, storage, and recall. These studies involve the use of one or multiple types of brain imaging techniques designed to measure timing or activation within the brain. The data collected from neuroimaging studies give researchers the ability to visualize which brain regions are activated in specific cognitive visual memory tasks. With the use of brain imaging devices, researchers are able to investigate memory performance above and beyond standard tests, based on exact response times and activation.
The subject's resting brain activation level is first determined in order to form a control or 'baseline' to measure from. Subjects are blindfolded and instructed to lie motionless while simultaneously eliminating any visual imagery present in their mind's eye. These instructions are intended to minimize the activation of brain regions involved in visual memory to form a true resting brain state. After the scan is complete, a control has been formed which can be compared with activated regions of the brain while performing visual memory tasks.
During encoding, participants are typically exposed to 1-10 visual patterns while connected to a brain imaging device. As the subject encodes the visual patterns, researchers are able to directly view the activation of areas involved in visual memory encoding. During recall, subjects again need to have all visual stimuli removed by means of a dark room or blindfolding to avoid interfering activation of other visual areas in the brain. Subjects are asked to recall each image clearly in their mind's eye. While they recall the images, researchers are able to view the areas activated by the visual memory task. Comparing the control 'baseline' state to the activated areas during the visual memory task allows researchers to see which areas are used during visual memory.
The visuo-spatial sketchpad is part of Baddeley and Hitch’s model of working memory. It is responsible for temporarily storing visual and spatial information, which is currently being used or encoded. It is thought of as a three-dimensional cognitive map, which contains spatial features about where the person is and visual images of the area, or an object being concentrated on. It is used in tasks such as mental image manipulation where a person imagines how a real object would look if it were changed in some way (rotated, flipped, moved, change of colour, etc.). It is also responsible for representing how vivid an image is. A vivid image is one which you have a high potential for retrieving its sensory details. The visuo-spatial sketchpad is responsible for holding onto the visual and spatial qualities of a vivid image in your working memory, and the degree of vividness is directly affected by the limits of the sketchpad.
Iconic memory is the visual part of the sensory memory system. Iconic memory is responsible for visual priming, because it works very quickly and unconsciously. Iconic memory decays very quickly, but contains a very vivid image of the surrounding stimuli.
Spatial memory is a person's knowledge of the space around them and their whereabouts in it. It also encompasses all memories of areas and places, and how to get to and from them. Spatial memory is distinct from object memory and involves different parts of the brain. Spatial memory involves the dorsal parts of the brain, and more specifically the hippocampus. However, many times both types of memory are used together, such as when trying to remember where you put a lost object. A classic test of spatial memory is the Corsi block-tapping task, where an instructor taps a series of blocks in a random order and the participant attempts to imitate them. The number of blocks they can tap before performance breaks down, on average, is called their Corsi span. Spatial memory is used whenever a person is moving any part of their body; therefore it is generally more vulnerable to decay than object memory is.
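To make the span idea concrete, here is a minimal sketch of a Corsi-style span estimate. The pass/fail rule (span = longest sequence reproduced exactly before the first failure) is a simplification, since clinical scoring conventions vary:

```python
import random

def corsi_span(respond, n_blocks=9, start_len=2, max_len=9):
    """Estimate a Corsi span.

    `respond` stands in for the participant: it takes the presented
    sequence of block indices and returns the reproduced sequence.
    """
    span = 0
    for length in range(start_len, max_len + 1):
        sequence = random.sample(range(n_blocks), length)  # random tap order
        if list(respond(sequence)) == sequence:
            span = length  # reproduced correctly; try a longer sequence
        else:
            break          # performance has broken down
    return span

# A perfect "participant", for demonstration only:
print(corsi_span(lambda seq: seq))  # prints 9
```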
Object memory involves processing features of an object or material such as texture, color, size, and orientation. It is processed mainly in the ventral regions of the brain. A few studies have shown that on average most people can recall up to four items each with a set of four different visual qualities. It is a separate system from spatial memory and is not affected by interference from spatial tasks.
Visual memory is not always accurate and can be misled by outside conditions. This can be seen in studies carried out by Elizabeth Loftus and Gary Wells. In one study by Wells, individuals were exposed to misleading information after witnessing an event; they were then tested on their ability to remember details from this event. Their findings included: when given misinformation that contradicts the witnessed event they were less able to recall those details; and whether misinformation was given before or after the witnessed event did not seem to matter. Furthermore, visual memory can be subjected to various memory errors which will affect accuracy.
Visual memory, in an academic environment, entails work with pictures, symbols, numbers, letters, and especially words. Students must be able to look at a word, form an image of that word in their minds and be able to recall the appearance of the word later. When teachers introduce a new vocabulary word, generally they write it on the chalkboard, have the children spell it, read it and then use it in a sentence. The word is then erased from the chalkboard. Students with good visual memory will recognize that same word later in their readers or other texts and will be able to recall the appearance of the word to spell it.
Children who have not developed their visual memory skills cannot readily reproduce a sequence of visual stimuli. They frequently experience difficulty in remembering the overall visual appearance of words or the letter sequence of words for reading and spelling.
Findings surrounding sleep and visual memory have been mixed. Studies have reported performance increases after a bout of sleep compared with the same period of waking. The implication is that a slow, offline process during sleep strengthens and enhances the memory trace. Further studies have found that quiet rest shows the same learning benefits as sleep. Replay has been found to occur during post-training quiet wakefulness as well as sleep. In a recent study in which a visual search task was administered, quiet rest or sleep was found to be necessary for increasing the number of associations between configurations and target locations that can be learned within a day. Reactivation in sleep was only observed after extensive training of rodents on familiar tasks; it rapidly dissipates and makes up a small proportion of total recorded activity in sleep. It has also been found that there are differences between males and females in regard to visual memory and sleep. In a study testing sleep and memory for pictures, it was found that daytime sleep contributed to retention of source memory rather than item memory in females, whose recollection and familiarity were not otherwise influenced by daytime sleep, whereas males undergoing daytime sleep showed a trend towards increased familiarity. The reasons for this may be linked to different memory traces resulting from different encoding strategies, as well as to different electrophysiological changes during daytime sleep.
Brain damage is another factor that has been found to have an effect on visual memory. Memory impairment affects both novel and familiar experiences. Poor memory after damage to the brain is usually considered to result from information being lost or rendered inaccessible, so that previously encountered information is incorrectly interpreted as novel. In experiments testing rats' object recognition memory, however, it was found that the impairment can run the other way: there was a tendency to treat novel experiences as familiar. A possible solution for this impairment could be the use of a visual-restriction procedure that reduces interference.
Studies have shown that with aging, in terms of short-term visual memory, viewing time and task complexity have an impact on performance. When there is a delay or when the task is complex, recall declines. In a study conducted to measure whether visual memory deficits in older adults with age-related visual decline were caused by memory performance or visual functioning, the relationships among age, visual acuity, and visual and verbal memory were examined in 89 community-dwelling volunteers aged 60–87 years. The finding was that the effect of vision was not specific to visual memory. Vision was therefore found to be correlated with general memory function in older adults and is not modality specific. As we age, performance on spatial configurations deteriorates. In a task requiring participants to store and combine two different spatial configurations to form a novel one, young people out-performed the elderly. Vision also has an effect on performance: sighted participants outperformed the visually impaired regardless of testing modality. This suggests that vision tends to shape the general supramodal mechanisms of memory.
Studies have shown that there is an effect of alcohol on visual memory. In a recent study, visual working memory and its neural correlates were assessed in university students who partake in binge drinking, the intermittent consumption of large amounts of alcohol. The findings revealed that there may be binge-drinking-related functional alterations in recognition working memory processes. This suggests that impaired prefrontal cortex function may occur at an early age in binge drinkers. Another study, conducted in 2004, examined the level of response to alcohol and brain response during visual working memory. This study looked at the neural correlates of the low level of response to alcohol using functional magnetic resonance imaging during a challenging visual memory task. The results were that young people who reported having needed more alcohol to feel its effects showed higher levels of brain response during visual working memory. This suggests that their capacity to adjust cognitive processing to contextual demands is decreased.
Hallucinatory palinopsia, which is a dysfunction of visual memory, is caused by posterior visual pathway cortical lesions and seizures, most commonly in the non-dominant parietal lobe. Focal hyperactivity causes persistent activation of a visual cortex-hippocampal neuronal circuit which encodes an object or scene that is already in visual memory. "All of the hallucinatory palinopsia symptoms occur concomitantly in a patient with one lesion, which supports current evidence that objects, features, and scenes are all units of visual memory, perhaps at different levels of processing. This alludes to neuroanatomical integration in visual memory creation and storage." Studying the excitability alterations associated with palinopsia in migraineurs could provide insight on mechanisms of encoding visual memory.
One common group of people that have visual memory problems are children with reading disabilities. It was often thought that these disabilities are caused by a failure to perceive the letters of a written word in the right order. However, studies show it is more likely that they are caused by a failure to encode and process the correct order of letters within the word. This means that the child perceives the word just as anyone else would, but their brain does not appear to hold onto the visual characteristics of the word. Although initially it was found that children with reading disabilities had visual memory comparable to those without difficulty, a more specific part of the visual memory system has since been found to contribute to reading disabilities.
These parts are the sustained and transient visual processing systems. The sustained system is responsible for fine detail such as word and letter recognition and is very important in encoding words in their correct order. The transient system is responsible for controlling eye movements and processing the larger visual environment around us. When these two processes do not work in synchronization, reading disabilities can result. This has been tested by having children with and without reading disabilities perform tasks related to the transient systems, on which the children with reading disabilities did very poorly. It has also been found in postmortem examinations of the brains of people with reading disabilities that they have fewer neurons and connections in the areas representing the transient visual systems. However, there is debate over whether this is the only cause of reading disabilities; scotopic sensitivity syndrome, deficits in verbal memory, and orthographic knowledge are other proposed factors.
Deficits in visual memory can also be caused by disease and/or trauma to the brain. These can lead to the patient losing their spatial memory and/or their visual memory for specific things. For example, a patient "L.E." suffered brain damage, and her ability to draw from memory was severely diminished, whilst her spatial memory remained normal. Other patients represent the opposite, where memory for colors and shapes is unaffected but spatial memory for previously known places is greatly impaired. These case studies show that these two types of visual memory are located in different parts of the brain and are somewhat unrelated in terms of functioning in daily life.
|
Fact: Approximately 250,000 youths are tried in the adult criminal justice system annually.
Fact: On any given night, approximately 60,000 youth are incarcerated in a correctional facility or out-of-home placement.
Fact: Approximately 10,000 are incarcerated in an adult jail or prison.
Fact: The overwhelming majority are accused of minor and nonviolent offenses.
Source: Juvenile Justice, Public Welfare Foundation
|
Let’s get started by answering what a press release is:
A press release is simply a statement prepared for distribution to the news media announcing something claimed as having news value with the intent of gaining media coverage.
A press release consists of the following elements:
- Headline
The headline is the first single line of text in the press release and tells what the press release is about. It can be a very effective tool to grab the attention of journalists, so writing it from a journalist's perspective is very important. Think about what headlines catch your eye in the newspaper. The headline should be descriptive but not too long; less than 100 characters is recommended. The headline should be formatted in title case, that is, each word in the sentence should have its first letter capitalized and the rest of the letters in lowercase. Acronyms can be in uppercase.
- Summary
The summary lets you build up your chance to sell your press release to the journalist. It is generally a requirement of online press release services. Identify a unique feature about your book and then write how it is going to revolutionize the world. The summary should be a single paragraph of about three to five lines; 250 characters is recommended. All sentences in the summary should be in sentence case, that is, only the first letter of a sentence should be capitalized, and all others should be lowercase. Again, acronyms can be all capital letters.
- Body
The body should be at least 3000 characters or 500 words and no more than 8000 characters. The body should have a minimum of two paragraphs. All paragraphs should ideally be between 5 and 8 lines each. There should be a blank line after each paragraph for good visibility. (A small checker sketch based on these limits appears at the end of this section.)
- Dateline
The dateline contains the release date of the press release and usually also the originating city of the press release. For online press release services like PRLog, the date stamp is automatic and should not be entered.
- Introduction
The introduction is where the press release body starts. It is the first paragraph in a press release, and it generally gives basic answers to the questions of who, what, when, where and why.
- Details
The details come after the introduction. This section gives further explanation, statistics, background, or other details relevant to the news, and it also serves to back up whatever claims were made in the introductory paragraph.
- About
The about section is also called the "boilerplate" as it is used over and over again. It is generally a short section providing background information on the company or organization issuing the press release.
- Media Contact Information
This section contains contact information such as name, phone number, email address, and mailing address for the media relations contact person. For good credibility, the email address should be on the same domain as the organization the press release is about. For example, if the press release is about an organization with a website called abcd.com, then the email address should be something like name@abcd.com.
- Check out some current Press Releases for ideas.
To distribute your press release, use a free service such as PRlog.org and let us know when your FastPencil Book has been featured in the news: firstname.lastname@example.org We promise to help spread the word!
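As promised above, here is a minimal sketch of a checker built on the limits quoted in this article (under 100 characters for the headline, roughly 250 for the summary, 3000-8000 for the body); real press-release services may enforce different rules:

```python
def check_press_release(headline, summary, body):
    """Return a list of warnings based on the guidelines in this article."""
    warnings = []

    # Headline: descriptive but short, formatted in title case.
    if len(headline) >= 100:
        warnings.append("Headline should be under 100 characters.")
    if headline != headline.title():  # crude check; acronyms will trip it
        warnings.append("Headline should be in title case.")

    # Summary: a single paragraph of roughly 250 characters.
    if "\n" in summary.strip():
        warnings.append("Summary should be a single paragraph.")
    if len(summary) > 250:
        warnings.append("Summary should be about 250 characters or fewer.")

    # Body: 3000-8000 characters, at least two paragraphs.
    if not 3000 <= len(body) <= 8000:
        warnings.append("Body should be 3000-8000 characters.")
    paragraphs = [p for p in body.split("\n\n") if p.strip()]
    if len(paragraphs) < 2:
        warnings.append("Body should have at least two paragraphs.")

    return warnings

print(check_press_release("New Novel Takes Flight", "A short summary.", "Too short."))
```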
|
January is something of a bumper month for notable birthdays, with two great English physicists born this month – Isaac Newton and Stephen Hawking. The former climbed to the zenith of classical physics while the latter achieved fame investigating the origins of the cosmos and extending Roger Penrose’s theorem of a space-time singularity in the centre of black holes to the entire universe. (See our separate blog about Hawking for further information.)
It seems, however, that even in Newton's day time could be confusing. Depending on which calendar you adhered to – either the "old style" Julian calendar in Protestant and Orthodox regions, including Britain, or the Gregorian "new style" calendar in Roman Catholic Europe, which was ten days ahead – he was born either on Christmas Day 1642 or on 4 January 1643!
Originally undistinguished in his studies (there’s hope for us all!) he went on “to distinctly advance every branch of mathematics then studied” and to develop theories of calculus. He is generally credited with the generalised binomial theorem and numerous other mathematical discoveries. He also studied optics – constructing a reflecting telescope – and was perhaps ahead of his time by suggesting that light is composed of particles or “corpuscles”, which were refracted by accelerating into a denser medium. That said, today’s quantum mechanics, photons and the idea of wave-particle duality bear only a minor resemblance to Newton’s understanding of light.
He was also interested in alchemy: after his death, examination of the great scientist’s hair showed it to contain mercury – probably the result of his alchemical pursuits. Well known for causing bizarre behaviour – and captured in the phrase “mad as a hatter” because milliners often succumbed – mercury poisoning could be an explanation for Newton’s eccentricity during his later years.
Nonetheless, Newton is probably best known for his advances in the study of celestial mechanics and gravitation. Like many of us today, he was fascinated by the appearance of a comet (one was visible over the winter of 1680-81); he established a proof that the elliptical form of planetary orbits would result from a centripetal force inversely proportional to the square of the radius vector. Among the many famous scientists with whom he communicated was the renowned astronomer Edmond Halley.
Newton expanded his thoughts in his most famous work, the Philosophiae Naturalis Principia Mathematica, which was backed by Halley and published on 5 July 1687. In it, he stated the three universal laws of motion that were to stand for over 200 years and still underpin much of physics at this scale. He used the Latin word for weight – gravitas – for the effect that would become known as gravity, and defined the law of universal gravitation.
There’s not enough room here to describe the full scale of this work but, suffice to say, that it brought unprecedented accuracy to the calculation of planetary orbits and motion of celestial bodies such as comets. Most importantly, the Principia contains Newton’s three famous laws of motion:
1) The law of inertia;
2) The second law, which states that an applied force on an object equals the rate of change
of its momentum with time, most often expressed in the well-known form F = ma (force equals mass times acceleration); and
3) The third law which is often expressed as “for every action there is an equal and opposite reaction”. The SI unit for force is named the “newton” in honour of his work.
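As a quick numerical illustration of the inverse-square law (a minimal sketch; the apple's 0.1 kg mass is an arbitrary choice, while G and the Earth's mass and radius are standard reference values):

```python
G = 6.674e-11       # gravitational constant, N m^2 / kg^2
M_EARTH = 5.972e24  # mass of the Earth, kg
R_EARTH = 6.371e6   # mean radius of the Earth, m

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: F = G * m1 * m2 / r**2,
    masses in kg, separation in metres, force in newtons."""
    return G * m1 * m2 / r**2

apple = 0.1  # kg
force = gravitational_force(apple, M_EARTH, R_EARTH)
print(f"Force on the apple: {force:.2f} N")                # about 0.98 N
print(f"Implied acceleration: {force / apple:.2f} m/s^2")  # about 9.82 m/s^2
```

By the third law, the apple pulls on the Earth with the same 0.98 N, which is exactly the point Newton makes in the Stukeley anecdote below.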
One story that is inextricably linked with Newton is the story of an apple falling from a tree in his garden as the genesis of his theory of gravity. Famous French author Voltaire was taken with the story of Newton and the apple and wrote in his 1727 Essay on Epic Poetry: “Sir Isaac Newton walking in his gardens, had the first thought of his system of gravitation, upon seeing an apple falling from a tree.”
And it seems that the story has some basis in fact, albeit that the fruit didn’t actually land on Newton’s head! William Stukeley recalls in his Memoirs of Sir Isaac Newton’s Life a conversation he had in Kensington, London on 15 April 1726. Newton pondered on “why should that apple always descend perpendicularly to the ground”, asking “why should it not go sideways, or upwards, but constantly to the earth’s centre?” Assuredly, for Newton, the reason was: “That the earth draws it – there must be a drawing power in matter and the sum of the drawing power in the matter of the earth must be in the earth’s centre.” Therefore, the apple falls towards the centre of the earth. Furthermore, Newton suggested: “If matter thus draws matter, it must be in proportion of its quantity. Therefore the apple draws the earth, as well as the earth draws the apple.”
|
Do you try to be eco-friendly? Did you know that eco-friendly means anything that conserves energy and helps prevent air, water, and noise pollution? There are lots of things that contribute to protecting the environment, such as solar panels, walking/biking or using public transport instead of cars, electric cars, using reusable products, buying fair trade food and gifts, using environmentally friendly cleaning products, recycling - the list goes on…
Many of our attractions here at Hampshire’s Top Attractions do their bit to ensure they are taking steps to become eco-friendly and look at ways to ensure they are maintaining and increasing how eco-friendly they are as an attraction! Keep reading to find out what they have been up to!
Exbury Gardens goes full circle where their garden waste recycling is concerned. Rather than purchase compost or woodchip externally, they make their own from their garden waste! The trees and shrubs in this 200-acre woodland garden are cut back regularly. The removed branches and shrubbery are broken down in a woodchipper and added to various piles of compost heaps located onsite. This compost is then used as nutrients for new plantings and the woodchip is used as mulch to protect roots. Pictured here is Exbury’s Head Gardener Tom Clarke atop one of these impressive compost mounds.
Gilbert White's house and gardens were once home to Britain's first ecologist. Learn more about Gilbert White and his legacy in the house, then visit his garden, which is managed to promote maximum biodiversity. Visit the café for a locally-sourced meal, which strives to be zero waste.
Thruxton race circuit has electric car charging points on-site, where you can charge your eco-friendly electric vehicle. They also have an E-Voucher option available for all their driving experiences so that you’ll instantly have something to give to the lucky recipient as well as reducing your use of paper and opting for the more eco-friendly option! There is a mainline train station in Andover (10 minutes away), which runs from Waterloo, why not use public transport to get to the race circuit instead of driving?
A trip to Marwell Zoo is the chance to support global wildlife conservation! Learn about endangered animals, pop into their gift shop to purchase ethical toys and gifts, or marvel at their innovative and sustainable Energy for Life: Tropical House which will soon be powered by poo from the zoo!
Butser Ancient Farm, near Petersfield, boasts an eco-friendly visitor centre with solar panels and natural water filtration system. The natural ancient building techniques and traditional skills practiced at the experimental archaeology centre promote the close relationship our ancestors had with the environment around them. Explore the lessons from the past that can show us how to seek better balance with the natural world today.
As a conservation charity, the Hawk Conservancy Trust is always seeking new ways to reduce their impact on the environment and to develop new initiatives to benefit wildlife and habitats wherever they can. As a visitor, you can enjoy a range of sustainably sourced goods in the gift shop, and you’re encouraged to take along refillable water bottles and hot drinks containers.
Managed and run by a team of volunteers, Southampton’s heritage Steamship Shieldhall burns fuels with ultra-low sulphur content, with all its wastewater and sewage passing through an anaerobic treatment plant. All onboard waste is sorted into various categories for disposal by Veolia.
The Brickworks Museum has to work hard to be eco-friendly – the site is old, very cold, and very damp. In 2011 an array of 16 PV solar panels was purchased and these continue to provide them with power throughout the year and reduce the costs of their electricity. Very old and inefficient storage heaters have now all been replaced with modern convector heaters that are only used when needed. The old fluorescent lights that were all over the building have now made way for more energy-saving ones. The volunteers have always been excellent at recycling. All kinds of things from unwanted steel frames that were once farm buildings to motorway footbridges get repurposed into external display areas or shelters for trains. Their most recent recycling project was working with Southampton Wood Recycling who created a new reception desk out of old scaffold planks.
Milestones Museum has 433 solar photovoltaic panels installed on the roof, which will provide the equivalent of half of its annual electricity use from a renewable, zero-carbon energy source.
In 2011, when Peppa Pig World was built at Paultons Park, a grass roof was included above the gift shop and George's Spaceship Plazone. The roof is now home to wild flowers and thousands of bugs, including bees.
Simple top tips to help visitors be eco friendly at attractions:
- Transport – bike, public transport or car shares are great options to visit attractions instead of driving.
- If having a picnic, opt for fair trade ingredients and use the Tupperware boxes you already have instead of cling film and foil to wrap your sarnies, and prepare your own drinks in reusable bottles. Some attractions will even refill these for you.
- Use recycling points or take recyclable rubbish/compost rubbish home.
- Turn off light switches, taps and hand dryers as soon as you have finished.
- Use reusable masks. If they are not reusable, be mindful not to drop them, as they are harmful to wildlife and the environment; keep them safe with you and dispose of them properly.
- Whilst you are at attractions and have an upcoming friend or family birthday, why not purchase a fair trade gift from the gift shop!
|
JAKARTA, Indonesia — A persistent divide in policy governing agriculture and forest management must be bridged, according to experts at a recent conference who called to “change the relationship” between the two sectors.
A panel discussion at the recent Forests Asia Summit in Jakarta provided examples of successes and failures in policy integration across Asia, where governments face the competing challenges of economic growth, food security for a growing population and protection of the region’s rapidly vanishing forests.
“There has been a separation of farms and forests in policy, while these landscapes are actually merged together,” said panelist Kanchi Kohli, an independent researcher from India.
This sectoral divide can be detrimental to biodiversity, food security and the ability of landscapes to provide ecosystem services, panelists concurred, exploring examples of public policy effects on both sectors.
Xie Chen of the Forest Economics and Development Research Center of China’s State Forestry Administration discussed some of the successes of integrating agricultural and forestry perspectives in policy under China’s Conversion of Cropland to Forest Program (CCFP; also known as ‘Grain for Green’ or the ‘Sloping Land Conversion Program’). Through CCFP, the government subsidizes 32 million rural households to plant trees in place of cropland.
Launched in 1999 in response to significant flooding and erosion due to highly degraded land, the project initially focused only on afforestation, but recent changes show significant impacts on food security and other ecosystem services.
“Gradually, we recognized the farmers’ livelihoods … at the very beginning we had forbidden the intercropping in CCFP land, but after that we encouraged intercropping, and also we allowed economic tree plantations,” Chen explained.
The payoff has been significant. By allowing rural households to view their agricultural and forested land in a holistic way, the program has increased production of many forest foods, including fruits, edible oils, medicinal plants and more.
For example, household fruit consumption increased: “The land conversion policy fruits contribute 30 percent of fruit supply,” Chen said. The productivity of agricultural land has also increased, with grain output per hectare rising among participating households. The increase in forested land also showed a reduction in vulnerability to natural disasters. These outcomes are thanks to what Chen called “trying to balance our agriculture and forest policy.”
Kohli provided an alternative scenario from her native India, where “the separation of farms and forests began in colonial times” and continues. In parts of India’s Karnataka state, forests and farms exist in an interconnected mosaic landscape; farmers traditionally use forests for collecting important foods as well as leaf litter. Forests are integral to the food security and food production of the area, but new government policy aimed at protecting forests has resulted in communities losing the right to use those forest products, she said.
“Forests were separated from food, while as we know forests also provide food,” Kohli said. “What is really affected by these hard boundaries is the freedom to choose what is food and how it should be produced.”
In India, agroforestry programs exist under both the Indian Ministry of Agriculture and the Ministry of Forestry — but these programs don’t talk to each other, she said. She argued that politicians and researchers need to focus on repairing what she called “the fracture between forests and farms.”
Dietrich Schmidt-Vogt, a senior scientist with the World Agroforestry Centre (ICRAF) in China, provided examples from around the Mekong Basin of how land-use change affects forests and agriculture. As Schmidt-Vogt explained, prior to these land use changes, up until the 1950s and ’60s, “the dominant type of land use in this area was shifting cultivation … which maintains a forest landscape in a kind of dynamic equilibrium.”
He pointed out that landscapes with shifting cultivation can be quite diverse: After a period of cultivation, the forest grows back over a fallow period of 10 years or more, thus the landscape includes crop lands and forest at various stages of regrowth, providing high levels of biodiversity and food security.
Schmidt-Vogt's examples from three countries show three very different changes in land use according to political and economic context: field crops mixed with forests now dominate in Thailand; maize crops have been adopted in Vietnam; and rubber plantation forestry has taken hold in China, where policies promote tree planting.
‘CHANGE THE RELATIONSHIP’
Food security has declined in the Chinese context, with nearly exclusive rubber plantation forestry replacing shifting cultivation, Schmidt-Vogt explained. The monocrop has also created increased vulnerability to changes in the price of rubber. In the Chinese case, Schmidt-Vogt showed that increased tree cover comes at the expense of natural forest. Biodiversity is being lost, further jeopardizing food security as secondary forests are taken over by monocrop rubber plantations.
Overall, the panelists called for an integrated, landscapes approach to agricultural and forest policy to account for the interconnectedness of both types of land use.
“There is an assumption that food production and biodiversity conservation are linked in an inverse relationship, and these presentations challenge that assumption,” argued Kiran Asher, Senior Scientist at the Center for International Forestry Research (CIFOR), who led the discussion session. Instead, she said, the unintended consequences of some attempts at green growth — such as rubber plantations in the Mekong region — are in fact detrimental to both biodiversity and food security. As a result, she said, “we have to change the relationship between food and forests.”
|
(CNN) -- "Lift Every Voice and Sing" is an uplifting spiritual, one that's often heard in churches and popularly recognized as the black national anthem. Timothy Askew grew up with its rhythms, but now the song holds a contentious place in his mind.
"I love the song," said Askew, an associate professor of English at Clark Atlanta University, a historically black college. "But it's not the song that is the problem. It's the label of the song as a 'black national anthem' that creates a lot of confusion and tension."
The song and its message of struggle and hope have long been attached to the African-American community. It lives on as a religious hymn for several Protestant and African-American denominations and was quoted by the Rev. Joseph E. Lowery at Barack Obama's presidential inauguration.
After studying the music and lyrics of the song and its history for more than two decades, Askew decided the song was intentionally written with no specific reference to any race or ethnicity.
Askew explains his position in the new book, "Cultural Hegemony and African American Patriotism: An Analysis of the Song, 'Lift Every Voice and Sing,'" which was released by Linus Publications in June. The book explores the literary and musical traditions of the song, but also says that a national anthem for African-Americans can be construed as racially separatist and divisive.
"To sing the 'black national anthem' suggests that black people are separatist and want to have their own nation," Askew said. "This means that everything Martin Luther King Jr. believed about being one nation gets thrown out the window."
Askew first became intrigued with "Lift Every Voice and Sing" while working on his master's degree at Yale University. He was a Morehouse College music graduate, young, passionate and hungry for knowledge about African-American culture. A fellow classmate suggested Askew explore Yale's collection on James Weldon Johnson, an early civil rights activist who wrote the song decades earlier.
Johnson first wrote "Lift Every Voice and Sing" as a poem in 1900. Hundreds of African-American students performed it at a celebration of Abraham Lincoln's birthday at Jacksonville, Florida's Stanton School, where Johnson was principal. Johnson's brother, John Rosamond Johnson, later set the poem to music. By 1920, the NAACP had proclaimed the song the "Negro National Anthem."
"I remember methodically going into the Yale library every day and sitting there on the floor, rummaging through 700 boxes of James Johnson's work," Askew said. "I became so fascinated in his life and letters, that I wanted to know more about the creation of the song and how it related to our modern understanding of it."
He found letters of appreciation to Johnson from individuals of all different ethnic backgrounds. At that moment, Askew had a revelation: The song he'd known as the "black national anthem" was for everybody.
Some will call his perspective on the song a contradiction, Askew said, especially because he works at a historically black college. But he argues that universities like Clark Atlanta accept students of many races and ethnicities; a national anthem for one race excludes others, and ignores an existing national anthem, "The Star-Spangled Banner" by Francis Scott Key.
"Some people argue lines like 'We have come, treading our path through the blood of the slaughtered,' signify a tie to slavery and the black power struggle," Askew said. "But in all essence there is no specific reference to black people in this song. It lends itself to any people who have struggled."
He's not the only one who sees fault in a national anthem just for African-Americans.
Kenneth Durden, an African-American conservative blogger, responded to Askew's claims on his blog, "A Free Man, Thinking Freely." He said in an interview that Askew is right to make connections to King's view of one America.
"King always appealed to the American dream for all," Durden said. "He was a patriot and he never wanted blacks to deny or separate themselves from being American. I think claiming an anthem for ourselves as black people is doing just that."
What troubles Askew more is that the song became an identity marker for African-Americans.
"Who has the right to decide for all black people what racial symbol they should have?" Askew said. "Identity should be developed by the individual himself, not a group of people who think they know what is best for you."
Hilary O. Shelton, senior vice president for advocacy and policy for the NAACP, said Askew's ideas might be far-fetched.
"I don't see anything that is racially exclusive or discriminatory about the song," Shelton said. "The negro national anthem was adopted and welcomed by a very interracial group, and it speaks of hope in being full first-class citizens in our society."
"Lift Every Voice and Sing" isn't meant to cloud national identity or persuade African-Americans to be separatists, Shelton said. It's often sung in conjunction with "The Star-Spangled Banner," or with the reciting of the Pledge of Allegiance at NAACP events.
"His presumption is that this song is sung instead of our national anthem -- that we are less American and we are not as committed to America because we take pride in the Negro national anthem," Shelton said. "It is evident in our actions as an organization and here in America that we are about inclusion, not exclusion. To claim that we as African-Americans want to form a confederation or separate ourselves from white people because of one song is baffling to me."
This isn't the first time "Lift Every Voice and Sing" has sparked debate. In 2008, jazz singer Rene Marie substituted the words of "The Star-Spangled Banner" with the words of "Lift Every Voice and Sing" at Mayor John Hickenlooper's State of the City address in Denver, Colorado. Marie said it was a matter of artistic expression, but critics viewed the lyrical switch as disrespect toward the national anthem, a lack of patriotism and an insinuation of racial division.
"I think that we often try to separate the black experience from the American experience," said Marc Lamont Hill, an associate professor of education at Columbia University who studies hip-hop culture. "It's a black national anthem, but it's also a quintessential American song because of its message of fighting for freedom. It's not 'lift the black voices,' it's 'lift every voice.'"
Askew, though, maintains there's only one national anthem, "The Star-Spangled Banner," and that "Lift Every Voice and Sing" could take on a new role: a message of victory for all ethnic groups in the United States.
"We need to consider eliminating this alternative label of 'black national anthem' in order to promote unity," Askew said. "I know people will probably think that I'm a sellout, but I think it is important that African-Americans nationally understand that we should be moving towards racial cohesiveness."
|Find this article at:|
|
How Many Chemical Engineers Are There?
Chemical engineers work in the oil and gas, manufacturing, design, pulp and paper and petrochemicals industries. They also develop and direct facility operations.
They help to create greener, low-carbon industrial processes, upcycling and more at a scale that will change our world.
Five-year veterans often get their first taste of managerial work and more responsibility in their jobs. Many also start to focus more on people skills.
1. About 1.6 million
Chemical engineers are employed in the oil and gas, manufacturing, pharmaceuticals, paper, petrochemicals and food industries. Demand for these engineers largely mirrors the demand for the products produced by manufacturing industries. This field also offers opportunities in alternative energy research, nanotechnology and biotechnology.
Chemical engineering jobs involve designing, testing and supervising industrial processes and production. Other duties include calculating costs and schedules for projects. They may also test product quality and conduct forensic analysis of equipment problems.
Those with five years of experience can specialize in design, production or technical sales. They may become supervisors or take on leadership positions. Those with ten or more years in the field have gained considerable experience and are capable of managing projects from start to finish. They can also help train and supervise new engineers.
2. About 31,700
Chemical engineers conceive and design large-scale manufacturing processes for creating chemicals, fuels, pharmaceuticals, food, paper, clothing and more. Many of these mass-produced items that ordinary people use daily – from electronics and fuel to plastics, medicine and processed foods – wouldn’t exist without their efforts.
Entry-level chemical engineers work under more experienced engineers and must complete on-the-job training as they build their skills and experience. Over time they can advance to supervisory and technical sales positions.
Those who’ve been in the field for five years have become “senior” engineers and are likely involved in research, production, or development. They have gained considerable managerial experience and are largely in charge of operations, management, and personnel. They also have a significant input in product development and engineering decisions.
3. About ten-year veterans
Chemical engineers earn a very competitive salary. Their salaries far exceed the national average for all occupations, according to 2020 data from the Bureau of Labor Statistics. Those who choose to move into management will find themselves with even higher salaries.
A comparatively small group of chemical engineers make up the CEOs of major companies, including 3M, Du Pont, Union Carbide, Exxon, Dow Chemical and BF Goodrich. They also have made the news as military leaders, political figures and sportspeople.
These skills are requested frequently by employers in job postings for Chemical Engineer positions. They are considered Hot Technologies. These are the skills that distinguish a professional from their peers. Click on a skill to see how it rates against other skills. The percentage of jobs that request a skill is shown as well.
4. Five-year veterans
Chemical engineers can work in many different industries, including pharmaceuticals and medicine, oil & gas, design and construction, manufacturing, pulp and paper and the chemicals, petrochemicals, plastics and synthetic rubber subsectors. They can also find positions in the military.
They have a reputation for being able to take on any project and make it work. They often have to be creative, however.
A career in chemical engineering requires a high level of technical understanding. A bachelor’s degree is usually sufficient, though a graduate program can be beneficial. If you want to be licensed as a professional engineer, you need to complete four years of work experience and pass an exam. A great way to get first-hand knowledge of a specific industry is through an industrial placement that is available as part of your studies or as an internship.
5. Ten-year veterans
Chemical engineers say that the profession’s job satisfaction ratings are high, partly because they are able to solve big problems. It also requires creativity and the ability to think outside of the box, as the field is highly technical.
By five years into their careers, many chemical engineers have specialized in research, production or development and gained significant management responsibility. They enjoy good salaries, but often wish they could spend more time with family and friends.
Some leave the profession altogether, choosing tangential jobs in fields like finance, data or software development. Others opt for postgraduate study in the areas of science or design, and even medicine. But most continue to thrive in the largely male-dominated field.
|
Learn to teach music to children with MUSIC IN CHILDHOOD: FROM PRESCHOOL THROUGH THE ELEMENTARY GRADES, an inspirational and informative text that features practical strategies, imaginative scenarios, and comprehensive examples to help students prepare for their careers in music education. Available with InfoTrac Student Collections http://gocengage.com/infotrac.
Back to top
Rent Music in Childhood 4th edition today, or search our site for other textbooks by Patricia Shehan Campbell. Every textbook comes with a 21-day "Any Reason" guarantee. Published by CENGAGE Learning.
Need help ASAP? We have you covered with 24/7 instant online tutoring. Connect with one of our tutors now.
|
Have you ever heard of the list of most needed inventions?
These are the sorts of inventions that, if realized, would overcome technological hurdles that are preventing mankind from reaching our most cherished dreams. Room temperature super conductors, advanced nanotechnology and practical fusion power are just a few. There are a number of inventions like this that are needed to make information security a reliable, efficient and low cost process. And chief among them is the Holy Grail of information security: an un-spoofable identity authentication mechanism.
Just think of it! A way for people and machines to know with a certainty that it is you and only you that they are communicating with. No more worries that someone will steal your identity and empty your bank accounts. No problems with cyber criminals impersonating IT personnel and stealing information or crashing systems. Think of the money and time you could save on complex intrusion detection and prevention systems and complicated processes. It is fun to contemplate. But, unfortunately, it is all just wishful thinking. Despite years of concentrated thought and effort, nobody has a clue how to make it work!
There are just three ways known to authenticate identity:
- Using something you know
- Using something you have or
- Using something you are
When talking about authenticating yourself to a computer system, something you know is typically a user name, a password or an encryption key. I think all of us know that despite all efforts to keep these mechanisms secret and secure, it doesn’t prevent intruders from getting them. The problem is that people have to know them, they need to store them and they need to use them, and that makes them vulnerable. So something you know isn’t the answer.
Let’s go to the second mechanism: something you have. In the computer world this is usually a smart card, token or the like. Combined with a user name and password, this mechanism provides another layer of security that can be very effective. But it is far from perfect. Smart cards and tokens can be stolen or misplaced. Perhaps a certificate authority or token provider’s servers are compromised. Some mechanisms can be reverse engineered. So, the upshot is, you can add something you have, to something you know and get better, albeit far from perfect, identity authentication. But the cost you pay in dollars and personnel hours has just gone way up.
So let’s go to the final possible authentication mechanism: something you are. For computer systems this is presently typically finger prints or retinal scans, although other possible mechanisms include facial recognition, voice recognition, heuristics (behavior matching) and DNA matching. This mechanism, once again, provides added security to the identity authentication process, but still is not perfect. For one thing, this kind of authentication mechanism works best in person. If a fingerprint, for example, is transmitted it really travels as a series of electromagnetic signals and these can be spoofed. But even in person, this type of mechanism can possibly be spoofed. So adding something you are to something you have and something you know once again makes it much more difficult to spoof identity, but still doesn’t render it impossible. And imagine the added burden in money and inconvenience using all three mechanisms would mean to your organization! Seems like way too much just to protect some financial data or health information, huh?
So, please, let’s all of us spend some thought trying to find the perfect identity authentication mechanism. It may be like trying to come up with perpetual motion, but if you do manage it, I guarantee you the rewards will keep you and yours in clover for the rest of your lives!
|
What is residence time and why is it important in flow reactors?
We discuss the tangible benefits that understanding and controlling residence time can bring.
What is residence time distribution?
The residence time distribution of a reactor is an abstract concept to most chemists. In batch chemistry, residence time is simple – you add reactants and remove products. The time chemicals spent in the reactor is the residence time.
In a continuous flow system, however, there is rarely a single “time” the molecules spend in a reactor. Each molecule spends a slightly different time flowing through the reactor and this leads to a distribution of residence times.
Case Study # 01, rev 13,
2 Aug 2021, By Dr Samuel Adams
The mean residence time is the reactor volume divided by the volumetric flow rate. We can change the residence time by either changing the volume of the reactor or the flow rate.
Molecules do not spend this exact time in the reactor.
In a tube, for example, fluid close to the walls moves slower due to friction, and diffusion in the liquid can both cause the molecules to jump forwards or backwards within the flow. Therefore, chemicals that enter the reactor at one moment spend various amounts of time inside.
One way to think of this is as many parallel batch reactors that work for various amounts of time. The number of reactors (Y axis) and time on stream (X axis) is, in effect, the residence time distribution curve.
Batch reactors also suffer from variability in “reaction times” for other reasons. Particularly at large scale, heating, charging, quenching, cooling down, and emptying processes also determine the amount of time that reactants and products spend in the reactor.
Why residence time is important: Product quality
Residence time is vital for product quality. If you have a selective process that must be stopped at a precise moment, then the residence time distribution is critical.
If chemicals spend less time reacting, this has the effect of decreased conversion and a significant amount of reactants at the outlet. If the chemicals spend more time, over-reaction and decreased selectivity result in an increase in by-products at the outlet.
Even in seemingly non-selective reactions, a substantial impurity formation could be observed; ppm-level quantities of impurities may quickly rise to unacceptable levels. Hence, knowing residence time distribution is crucial for impurity control.
Why residence time is important: Reactor throughput
The residence time distribution also defines throughput (how many kilos of product per hour are obtained). Imagine a process where the reaction proceeds to completion in 2 hours. In a continuous reactor, all of the molecules need to spend at least 2 hours in the reactor for the outlet stream to give complete conversion.
In a batch reactor, we often stop the reaction well after 2 hours to obtain the products; then we spend hours on cleaning, validating, re-charging, and heating.
We could use the same reactor as a continuously stirred tank reactor (CSTR, also called semi-batch) constantly adding reactants and withdrawing products. No more wasted efforts in charging! But it does not work because of residence time distribution.
In a CSTR, you can see many molecules spend less than 2 hours in the reactor. These molecules are not reacted and you see low conversion at the outlet.
On the other hand, many reactants spend much longer in the reactor, without any benefit in product formation but a possibly of side-reactions.
With a broad residence time distribution, you are therefore fighting a losing battle for high conversion and are forced to use much longer mean residence times to achieve acceptable conversion.
In the SABRe reactor, we use a series of 10 CSTRs with a narrow residence time distribution. Because of this, we can use the 10-fold lower mean residence time compared with a single CSTR of the same volume. For example, a 1 kilo a day production in a single CSTR could be intensified to 10 kilo a day under the same conditions in SABRe. This means that SABRe shows a 10-fold higher throughput (kg/h) compared to a CSTR of the same volume.
Residence time distribution is vital for product quality and throughput. Combined with the intrinsic chemistry, heat and mass transfer, the residence time distribution defines the reactor performance.
The SABRe system (available in steel, Hastelloy or glass) is suitable for a wide range of chemical applications. Combining simplicity with superb reaction control, SABRe is the best choice for simple, safe and cost effective chemistry.
What can the SABRe do for you today? Get in touch and arrange a trial.
Other SABRe case studies:
Continuous flow (such as micro-reactors) are superior for exothermic reactions. How do you compute the thermal performance of a reactor?
How the SABRE system provides large gas-liquid area to maximise the reaction throughput and selectivity.
We showed superior performance of SABRe in the enzymatic (liquid-liquid) esterification.
The gas-liquid mass transfer coefficient is a key parameter in multiphase reactions. We studied it for the SABRe.
The SABRe provides precise residnece time control.
|
UNDP helps Mexican villagers face disaster and climate risks
Visitors to Yum Balam look at beauty products that use native medicinal plants.
Cancun, Mexico - Forty participants at international climate talks last week in Cancun, Mexico, traveled to the site of a project supported by the United Nations Development Programme (UNDP) that has helped as many as 900,000 people prepare to face the impact of disasters.
Representatives from governments, non-governmental organizations, academic institutes and the media taking part in the 16th Conference of Parties to the UN Framework Convention on Climate Change toured the Yum Balam protected area, north of Cancun, where community-level projects have helped prepare the predominantly indigenous Mayan population for the impact of hurricanes, floods and post-disaster fires.
The Yum Balam Wild Flora and Fauna Protected Area, along the Atlantic Hurricane Belt in the state of Quintana Roo, is an area of high fire incidence with 989 fires recorded to 2010. In 2005, two category five hurricanes (Emily and Wilma) destroyed thousands of trees causing a fire emergency in 2006, which affected 50,000 hectares.
Since 2005, communities in this high-risk zone have received loans through the Global Environment Facility’s Small Grants Programme and Mexico’s National Commission of Protected Areas to fortify their livelihoods and plan in advance for future disasters.
Some 85 grants helped residents of Yum Balam to set up eco-tourism and diving tours, launch a line of beauty products made from local forestland plants, and create tools for forest fire prevention, firewall construction, and heightened community awareness and preparedness.
Among the preparedness measures taken was a climate risk analysis and the creation of standard procedures for evacuation. As a result, the percentage of the population evacuated before hurricanes increased from below 50 percent to 97 percent between 2005 and 2007.
The approach has been replicated in other communities across the Yucatán peninsula and in the other Mexican states, including Chiapas, Oaxaca, and Puebla. Six other Latin American countries are also planning to launch similar projects in vulnerable areas.
|
The revolution in Iran happened in 1979 and paved the ground for the collapse of a monarchy system but many young people were killed for the revolution. One year later, 1980, a big war imposed by Iraq started against Iran which was named as the bloodiest classic battle of the century.
The war took 8 years and ended in 1988 by Iran's acceptance of the UN resolution 598.
During the long war, over 220,000 Iranian soldiers were killed. The occurrence of two important events of the Islamic Revolution and the Iraqi-imposed war against Iran as well as the existence of traditional and religious beliefs and important historical figures are among the effective elements in the Iranians' culture which make the issue of martyrdom bolder and turn the issue into a deep culture among the Iranian families.
The Iranians believe that anyone who resists on the right way and defends people and sacrifices himself is a martyr and remains alive. A martyr has saved many other lives by his death and therefore, the martyrs enjoy a great position in the Iranian society.
While they don’t have any bodies, their spirits are alive and always live with their families and people. Therefore, many Iranian people believe that martyrs are sacred and live with them during their daily life.
Behesht-e Zahra in Tehran is the biggest grave of the Iranian martyrs which hosts the bodies of over 33,000 Iranian nationals who have been killed in war. The people in Tehran go to the tombs of their martyrs every week and live with them. They even hold the most important events of their life beside the tombs of martyrs.
The Iranian people visit the tombs of martyrs when marrying, celebrating their birthday and new year and holding mourning ceremonies- anniversary of the martyrs' death- or when they have some problems they ask the martyrs to pray for them and help them in their life and intercede for them before God after death.
These are the most important events which have turned the martyrs' tombs into a little paradise… a paradise in which the martyrs are alive and host a large number of people every week.
|
How to Keep Kosher with Dairy Products
To keep kosher, you must follow Jewish dietary laws, which are a basic precept of Judaism. The rules of keeping kosher break foods into three main categories — dairy, meat, and pareve. Keeping kosher means never consuming meat and dairy products in the same meal. You must make sure that your dairy foods don’t contain any meat products.
Most dairy foods are kosher, unless they contain meat. Watch out for these non-kosher dairy items:
Hard cheese: A frequent step in the process of making hard cheeses, such as Parmesan, involves using a product called rennet to help coagulate the milk. Traditionally, rennet is an animal product and is considered meat, and therefore its presence makes cheese not kosher.
Gelatin: Gelatin is made from the bones of animals. Dairy products made with other forms of gelatin list kosher gelatin in their ingredient list, like in this figure.
|
Over 130 Hueneme Elementary School District teachers are 'coasting through their classrooms' and learning more about technology and teaching in COAST! Located in Port Hueneme, CA, the district includes nine elementary schools and two middle schools -- over 8,000 students. HESD has made it part of their mission to 'empower students by teaching them critical thinking skills through a rigorous academic experience in a digitally-rich environment.'
Behind that digitally-rich environment are innovative educators and TOSAs who are using technology to support and elevate learning opportunities. Since 2017, these educators have been playing COAST to explore and reflect on new technology teaching. COAST stands for Collaborative, Online, Activity-based, Self-paced Training. The 'Surfs-up' themed game is the brainchild of HESD's Technology Resource Specialist Liz Hoppe, who has developed missions and activities that align with the districts goals for learning, including: Digital Tools, Professional Learning, Pedagogy, Collaboration and SAMR.
Learn How Alludo Can Help You Deliver Effective PD
There are two versions of the COAST Game: Coast Through My Classroom for teachers, and Coast Like a Boss for administrators. Hoppe encourages players with fun incentives, like coffee and donuts for top players and schools, as well as level-based incentives like tee-shirts and upgraded staff iPads.
Check out some of the fun activities from the COAST game, and visit the HESD COAST Twitter page.
Have a Worksheetless Day
Collaborate with another school
Create and scan a QR code
TPACK vs. SAMR
And many, many more!
|
Carrots contain many nutrients, including beta carotene and antioxidants, that may support your overall health as part of a nutrient-rich diet.
The carrot (Daucus carota) is a root vegetable often claimed to be the perfect health food.
It is crunchy, tasty, and highly nutritious. Carrots are a particularly good source of beta carotene, fiber, vitamin K1, potassium, and antioxidants (
They also have a number of health benefits. They’re a weight-loss-friendly food and have been linked to lower cholesterol levels and improved eye health.
What’s more, their carotene antioxidants have been linked to a reduced risk of cancer.
Carrots are found in many colors, including yellow, white, orange, red, and purple.
Orange carrots get their bright color from beta carotene, an antioxidant that your body converts into vitamin A.
This article tells you everything you need to know about carrots.
The nutrition facts for two small-to-medium raw carrots (100 grams) are:
- Calories: 41
- Water: 88%
- Protein: 0.9 grams
- Carbs: 9.6 grams
- Sugar: 4.7 grams
- Fiber: 2.8 grams
- Fat: 0.2 grams
Carrots are mainly composed of water and carbs.
The carbs consist of starch and sugars, such as sucrose and glucose (
They are also a relatively good source of fiber, with one medium-sized carrot (61 grams) providing 2 grams.
Carrots often rank low on the glycemic index (GI), which is a measure of how quickly foods raise blood sugar after a meal.
Pectin is the main form of soluble fiber in carrots (8).
Soluble fibers can lower blood sugar levels by slowing down your digestion of sugar and starch.
Carrots are about 10% carbs, consisting of starch, fiber, and simple sugars. They are extremely low in fat and protein.
Carrots are a good source of several vitamins and minerals, especially biotin, potassium, and vitamins A (from beta carotene), K1 (phylloquinone), and B6.
- Vitamin A: Carrots are rich in beta carotene, which your body converts into vitamin A. This nutrient promotes good vision and is important for growth, development, and immune function (
- Biotin: A B vitamin formerly known as vitamin H, biotin plays an important role in fat and protein metabolism (
- Vitamin K1: Also known as phylloquinone, vitamin K1 is important for blood coagulation and can promote bone health (
- Potassium: An essential mineral, potassium is important for blood pressure control.
- Vitamin B6: A group of related vitamins, B6 is involved in the conversion of food into energy.
Carrots are an excellent source of vitamin A in the form of beta carotene. They are also a good source of several B vitamins, as well as vitamin K and potassium.
Carrots offer many plant compounds, including carotenoids.
These are substances with powerful antioxidant activity that have been linked to improved immune function and reduced risk of many illnesses, including heart disease, various degenerative ailments, and certain types of cancer (
Beta carotene, the main carotene in carrots, can be converted into vitamin A in your body.
However, this conversion process may vary by individual. Eating fat with carrots can help you absorb more of the beta carotene (
The main plant compounds in carrots are:
- Beta carotene: Orange carrots are very high in beta carotene. The absorption is better (up to 6.5-fold) if the carrots are cooked (
20, 21, 22).
- Alpha-carotene: An antioxidant that, like beta carotene, is partly converted into vitamin A in your body.
- Lutein: One of the most common antioxidants in carrots, lutein is predominantly found in yellow and orange carrots and is important for eye health (
- Lycopene: A bright red antioxidant found in many red fruits and vegetables, including red and purple carrots, lycopene may decrease your risk of cancer and heart disease (
- Polyacetylenes: Recent research has identified bioactive compounds in carrots that may help protect against leukemia and other cancers (
1, 25, 26).
- Anthocyanins: These are powerful antioxidants found in dark-colored carrots.
Carrots are a great source of many plant compounds, especially carotenoids, such as beta carotene and lutein.
Much of the research on carrots has focused on carotenoids.
Reduced risk of cancer
Diets rich in carotenoids may help protect against several types of cancer.
Women with high circulating levels of carotenoids may also have a reduced risk of breast cancer (
Lower blood cholesterol
High blood cholesterol is a well-known risk factor for heart disease.
For this reason, they may be a useful addition to an effective weight loss diet.
Eating carrots is linked to a reduced risk of cancer and heart disease, as well as improved eye health. Additionally, this vegetable may be a valuable component of an effective weight loss diet.
Organic farming uses natural methods for growing the crop.
However, conventionally grown carrots contain pesticide residues. The long-term health effects of low-grade pesticide intake are unclear, but some scientists have voiced concerns (
While no evidence suggests that organic carrots are more nutritious than conventionally grown ones, organic varieties are less likely to harbor pesticides.
Baby carrots are an increasingly popular snack food.
Two kinds of carrots are called baby carrots, which can be misleading.
One the one hand, there are whole carrots harvested while still small.
On the other hand, there are baby-cut carrots, which are pieces from larger carrots that have been machine-cut into the preferred size, then peeled, polished, and sometimes washed in small amounts of chlorine before packing.
There’s very little difference in nutrients between regular and baby carrots, and they should have the same health effects.
Baby carrots are whole carrots harvested before they grow large, while baby-cut carrots are pieces from larger carrots that have been machine-cut, peeled, polished, and washed before packing.
Carrots are generally considered safe to eat but may have adverse effects in some people.
Additionally, eating too much carotene can cause your skin to become a little yellow or orange, but this is harmless.
Carrot allergy is an example of cross-reactivity in which the proteins in certain fruits or vegetables cause an allergic reaction because of their similarity to the proteins found in certain types of pollen.
If you are sensitive to birch pollen or mugwort pollen, you might react to carrots.
Carrots grown in contaminated soil or exposed to contaminated water may harbor larger amounts of heavy metals, which can affect their safety and quality (
Carrots may cause reactions in people allergic to pollen. Additionally, carrots grown in contaminated soils may contain higher amounts of heavy metals, affecting their safety and quality.
Carrots are the perfect snack — crunchy, full of nutrients, low in calories, and sweet.
They’re associated with heart and eye health, improved digestion, and even weight loss.
This root vegetable comes in several colors, sizes, and shapes, all of which are great additions to a healthy diet.
|
Many different traditional universities are offering classes online now through Massively Open Online Courses (MOOCs). It’s a very noble quest – these universities are globally providing free access to their classes, professors and course content. Quite simply, this is really a great deal for certain students. Some very prestigious universities, such as Harvard, Stanford, Vanderbilt and MIT have chartered the pathway for an open learning environment, for certain subjects.
These courses are a great opportunity to learn about a subject that interests you – thought-provoking subjects like ancient Greek mythology, game theory or the business of sports. Top notch professors – some of the best in their fields – provide videos for you to learn for free. Generally, these also are excellent for those that would like a refresher in a certain subject, or simply would like to learn more about an unknown area.
But, there’s a catch – unfortunately, these courses do not currently provide credit and will not count toward a degree. Accreditation is critical to proving that you’ve attained a certain degree, which these courses do not provide. Also, many of these courses are abridged in nature, and do not provide full access to an entire curriculum – often they’re shortened to a 6-8 week period, rather than a full semester course.
Also, while there are significant collaboration tools available, lower accountability leads to significant classroom retention issues for students. Quite simply for some classes, many students do not finish. Many find that they would like to know more about certain subjects, but don’t have the time or energy to complete the course. Others feel that MOOCs are chaotic and demands significant time and effort from participants.
So, online, non-credit classes are a resource for some to learn about many different subjects. Great companies, such as Udacity, Coursera and edX are available to coordinate courses from top universities.
|
“Put a pillow under your knees when lying in bed.”
“You should ask your doctor to give you a blood thinner.”
Number 1 is correct.
Crossing the legs when sitting decreases blood flow, which may lead to clot formation. If the client is bed-bound, the nurse should tell the client to avoid crossing one leg over another while lying in bed. Telling the client to avoid car rides longer than two hours may not be practical; longer car rides can be done safely if regular breaks are taken to walk around for a few minutes. Putting a pillow under the knees restricts blood flow to the lower extremities and increases the risk of a clot. Not all clients are candidates for blood thinners; the nurse should focus on actions that the client can do to prevent clots. This engages the client in managing her own care.
|
How to Lead Your Competitors: The Stackelberg Model of Duopolies in Managerial Economics
Changing the assumptions of how firms react to one another changes the decision-making process. In the Stackelberg model of duopoly, one firm serves as the industry leader. As the industry leader, the firm is able to implement its decision before its rivals.
Thus, if firm A makes its decision first, firm A is the industry leader and firm B reacts to or follows firm A’s decision. However, in making its decision, firm A must anticipate how firm B reacts to that decision.
An example of such leadership may be Microsoft’s dominance in software markets. Although Microsoft can make decisions first, other smaller companies react to Microsoft’s actions when making their own decisions. The actions of these followers, in turn, affect Microsoft.
The primary difference between the Cournot and Stackelberg duopoly models is that firms choose simultaneously in the Cournot model and sequentially in the Stackelberg model.
The market demand curve now faced by the Stackelberg duopolies is:
where QD is the market quantity demanded and P is the market price in dollars.
Assume that firm A has a constant marginal cost of $20 and firm B has a constant marginal cost of $34. Derive the Stackelberg solution with the following steps:
Firms A and B provide the entire market quantity demand, QD.
Substitute qA and qB for QD in the market demand curve to yield
Because firm B reacts to firm A’s output decision, begin by deriving firm B’s reaction function.
Start by noting that total revenue equals price multiplied by quantity. For price, substitute the equation from Step 2.
Firm B’s marginal revenue equals the derivative of total revenue, TRB, with respect to qB.
Treat qA as a constant because firm B can’t change the quantity of output produced by firm A.
Firm B maximizes profit by equating its marginal revenue and marginal cost.
Remember that firm B’s marginal cost equals $34.
Rearrange the equation in Step 5 to solve for qB and to get firm B’s reaction function.
For the next step, the demand curve faced by firm A is
At this point, substitute firm B’s reaction function into firm A’s demand curve.
This is the critical difference from the Cournot duopoly. By substituting firm B’s reaction function in its decision-making process, firm A is anticipating firm B’s reaction to its output decision.
Firm A’s total revenue, TRA, equals price times quantity.
Firm A’s marginal revenue is the derivative of total revenue taken with respect to qA.
Firm A determines the profit-maximizing quantity of output by setting marginal revenue equal to marginal cost and solving for qA.
Remember that firm A’s marginal cost is a constant $20.
Substitute qA into firm B’s reaction function from Step 6 to determine qB.
Thus, the profit-maximizing Stackelberg duopoly has firm A producing 114 units of output and firm B producing 29 units of output. The illustration shows the Stackelberg duopoly.
Note that firm B has exactly the same reaction function as existed in the Cournot duopoly. On the other hand, firm A doesn’t have a reaction function. Firm A sets it output first, and then firm B reacts to that output. Thus, the horizontal line for firm A at 114 units of output indicates it has set its output before firm B reacts.
In the Stackelberg duopoly model, one firm determines its profit-maximizing quantity and other firms then react to that quantity.
In the Cournot model, firm A simply notes that the market demand is satisfied by the output produced by it and firm B. The two firms make simultaneous decisions. In the Stackelberg model, firm A substitutes an equation to represent how firm B reacts to its production decision. The model reflects sequential decisions.
The simultaneous decision-making associated with the Cournot model leads to different outcomes from the outcomes associated with sequential decisions of the Stackelberg model. The Stackelberg leadership model results in a higher market quantity and lower price for the good as compared to the Cournot model.
|
History of Agarthiya Maharishi
Shri Agasthiyar Maharishi
SHRI AGASTHIYAR MAHARISHI is the ancient maharishi who was one of the senior and powerful among the 18 siddhas as like as Agasthiyar, Idaikadar, Sattai Nathar, Pathanjali, Konganavar, Kuthambai Siddar, Kamalamuni, Vanmikar, Pambatti Siddar, Sundharanandar, Sivavakkiyar, Karuvoorar, Thanvandhri, Machamuni, Korakkar, Bogar, Thirumoolar, Ramadevar as below in the photos.
Shri Agasthiyar Maharishi’s Avathar
Once upon a time in the ancient period Ashurargal like Tharagan destroyed the whole world. So Lord Indiran thought to control or kill them with the help of Agni Deva and Vayu Deva gods. Agni came to the world. By seeing the Devas, Ashuras went inside the sea, Agni Deva thought that they died. But Ashuras came back and again they continued to destroy the world. By seeing this, Lord Indiran was angry with Agni Deva and told him to suck the sea and kill them. But Agni Deva told , “if I do this, whole universe will be disturbed. That karmic debit will be on me” and he refused. So Lord Indiran gave him a curse. “you refused my words so you will have a birth in the world as Kumbam (Acquiries) and you will have to drink the sea water.”
In the other side, Lord Maha Vishnu had a birth as Dharuman and in certain age he started Chant. To spoil his chant Lord Indiran had sent Apsaras. Although Dharuman did not infatuate on them. So that he created a beauty queen, named as Urvasi from his thigh. Urvasi married a person named Mitthiran. In this situation, seeing the beauty of Urvasi, Varunan was in infatuation and he loved her, he expressed his love to Urvasi. But Urvasi had already married. So she refused the love of Varunan. But Varunan requested her that “if you don’t expect my love, just think about me, when you are having sexual interaction with Mitthiran. That will be enough to me” he said. But without knowing about future Urvasi accepted Varunan’s request.
According to this one day when she was with Mithiran she thought of Varunan, knowing this Mitthiran gave her a curse that “you have to birth in world as a girl and you will have to be the wife of Purooravan.” But the sexual relation with Mitthiran-Varunan expressed in a kumbam (Acquiries) from that a person named Nimi had a birth.
One day Nimi was playing with many girls in that time Vasishtar passed on the way. But Nimi did not respond him. So Vasishtar got angry on him and gave a curse that you should not have body for your soul, Nimi also cursed the same to Vasishtar, because of the blessings of Bhramma.
Without the body, Vasishtar went inside the Mitthira-Varuna kumbam. After some periods from that kumbam, Vasishtar came out of it. And then shri Agasthiyar came with four hand and kamandalam. Because of the curse of Indiran, Agni Deva had a avatar as Agasthiyar in the world. His powerful chant and yoga sidhi’s he got all the powers. According to the indiran’s curse Agasthiyar sucked the sea water and killed the Ashuras with his sidhi power and also he destroyed the poison of sea. By seeing this, all mummoorthi’s (Lord Brahmma , Vishu and shiva) gave blessings to shri Agasthiyar to be as a siddar for billion years in future in the south.
How He Came To South
Lord Shiva & Parvathi had a arguement about the other birth for the human beings according to the karmic debit & credit. For that they choosed Agasthiyar and asked him to have research about it and to write about the life and birth. He made a research about human life and wrote in the palm leaf about their birth according to their karmic debit. To identify one’s leaf he classified the thumb impression as 108 names and he separated it like top, middle and low by certain lines.
When he was in research of human life in south he had seen many Rishi they are in the tree upside down. Seeing this Agasthiyar questioned them , they said our family wise named person Agasthiyar had not entered into family life and he is having a karmic debit so that our soul hadn’t got Moksham(Heaven).
By thinking of the above, Agasthiyar went to Vidharpa country and married Uloba Muthirai, who she was borned by the Yagam (prayer for Lord Agni)of that king. By that he rectified his karmic debit and his ancistors got moksham.
After this Agasthiyar went to pothigai hill and with the blessing of Lord Subramanya (Karthikeyan) he learned Tamil language and developed.But when we came to know the history of Agasthiyar there we will be able to know that there were many histories & stories about Agasthiyar life. In the name of him many of them were lived like Vadhapi Agasthiyar, Ulbo Agasthiyar, Podhiya Agasthiyar like that. By seeing the names we have a doubt that all will be the one, who is the Agasthiyar.
But if all this is in other side also when we come to know about Nadi Astrology, siddas medicines, we will be remembering Agasthiyar maharishi.Regarding nadi astrology and medicines now a days also we have many questions and doubts but in that ancient period it self with his power he had explained all about those in his palm leafs by which we predict you.By knowing all these details the Nadi Astrologers, who are from the valluvar’s family predicting in this century also.
|
What Your Dog’s Gum Color Tells You
Look at your dog’s gums while she is at rest. Lift your dog’s upper lip and look at the color of the gums above an upper canine tooth — the gums should be pink.
Do a capillary refill test by pressing on the gums with your finger. When you remove your finger, the gums should briefly be white but should return to their pink color within two seconds.
The appearance of the gums is very informative. If the gums are blue, the dog lacks oxygen. If they are white, the dog has lost blood, either internally or externally. If the gums are purple or gray and there is a slow capillary refill, the dog is probably in shock. If they are bright red, she may be fighting a systemic infection or may have been exposed to a toxin.
Some dogs have black-pigmented gums, which can make assessment difficult. For these dogs, you need to examine the pink tissue on the inside of the lower eyelid by gently pulling the eyelid down. In this case, you can only observe the color of the tissue — you can’t perform the capillary refill test — but colors mean the same thing in gums and inner eyelids.
|
Porcelain veneers, also known as dental veneers or porcelain laminates, are extremely thin, custom-made shells that cover the teeth for cosmetic purposes. These shells are made to give the appearance of real teeth and are bonded to the front of the teeth to provide a new, improved look to the mouth.
As the name implies, these veneers are often made from porcelain. They can also be constructed of resin composite materials. Porcelain tends to be the preferred choice, as they appear more natural and resist stains better. Resin veneers are a bit thinner than porcelain, and they don’t require as much of the tooth surface to be removed in order to be put in place. Each has different benefits for individual patients.
Reasons for Porcelain Veneers
Patients may seek veneers for a number of reasons, including teeth that are discolored, worn down, uneven, misaligned, or irregularly shaped. Dental veneers also correct and close gaps between teeth. The goal of most patients who choose this treatment is to have a straighter, whiter smile. Veneers are an effective method for achieving this without the pain and maintenance of braces.
Procedures for Applying Dental Veneers
Three visits are usually necessary when receiving porcelain veneers. First, a consultation is needed to determine the patient’s needs. The second and third trips are for making and applying the veneers. Patients can have one veneer or multiple ones applied during the same visit.
At the consultation, the patient can explain to the dentist what he or she hopes to achieve with the veneers. The dentist will then do an examination to determine what is possible and if dental veneers are the right course of treatment. The dentist will also likely take x-rays and teeth impressions.
Approximately a half millimeter of tooth enamel is removed from each tooth to prepare for the adhesion of veneers. A local anesthetic is used at the patient’s request. A model or impression of the teeth is made and sent off to a dental laboratory for construction of the veneers. This should take about a week or two for completion.
Before applying the veneer permanently, it will be temporarily placed to check for fit and color. Trimming can be done as needed, and the color can be corrected with various shades of cement. Veneers are adhered to the natural teeth with bonding cement. This special cement is cured with a light beam for permanent placement. Finally, excess cement will be removed, and the bite will be examined. At least one more visit is necessary as a follow-up in order to ensure the veneers are placed correctly and that there are no problems.
Advantages and Disadvantages of Porcelain Veneers
There are both advantages and disadvantages to receiving porcelain veneers. Veneers are a popular choice because they look natural and are gentle to gums and stain resistant. They require less shaping than crowns and are stronger, but they may not be for everyone. It is not a reversible process, and teeth may be sensitive due to removal of enamel. They are not a good choice for those with unhealthy teeth or who grind their teeth. Porcelain veneers are a cosmetic dentistry option for those wishing to have straighter, whiter smiles.
Schedule Your Consultation
At Andover Smiles in North Andover, Dr. Steven Rinaldi offers all of his patients the VIP treatments to help them get the smile of their dreams. During our no-wait appointments, our team will work with you to determine the most appropriate treatment for the issues you’d to like to correct. Contact Andover Smiles today to schedule your consultation and start your journey to your million dollar smile.
|
Health authorities in Ontario have confirmed 13 cases of listeriosis - a food-borne illness caused by the bacterium Listeria monocytogenes - stemming from processed meats produced by a Maple Leaf plant in Toronto.
Maple Leaf Foods Canada and the Canadian Food Inspection Agency are voluntarily expanding a recall of certain ready-to-eat packaged meat products and are warning people not to serve or consume the products as they may contain the harmful bacterium.
Products included in the recall are: Schneider's Deli Shaved Corn Beef, Schneider's Deli Shaved Smoked Meat, Schneider's Deli Shaved Smoked Ham, and Schneider's Deli Shaved Smoked Turkey Breast.
Nursing homes, deli counters, and restaurants, including McDonald's and Mr. Sub, are among the establishments where these meat products were distributed.
According to Ontario's acting chief medical officer, the number of cases of listeriosis could grow as the disease has an incubation period of three to seven days and reports from health units across the province are just starting to be submitted.
Listeriosis causes fever, muscle aches, a stiff neck, and some gastrointestinal problems like nausea or diarrhea. A blood test or spinal fluid test is used to diagnosis the disease which can then be treated with antibiotics.
Food contaminated with Listeria monocytogenes may not look or smell spoiled. To reduce the risk of getting listeriosis, avoid eating hot dogs, luncheon meats, or deli meats, unless they are reheated until steaming hot, and consume perishable and ready-to-eat foods as soon as possible.
For a full list of the recalled items visit http://www.mapleleaf.com/.
All research on this web site is the property of Leslie Beck Nutrition Consulting Inc. and is protected by copyright. Keep in mind that research on these matters continues daily and is subject to change. The information presented is not intended as a substitute for medical treatment. It is intended to provide ongoing support of your healthy lifestyle practices.
|
Welcome to my little corner of our Weston A. Price Foundation, Victoria BC Chapter page where I publish articles pertaining to all things food and health.
Written with love and hope for healthy futures
on a nourishing Earth.
I think we all need a break from all the fear mongering and crazy media frenzy around vaccination. Let's instead just look calmly and clearly at a couple of key things that all parents need to know. Grab a cuppa and read on for some reassurance about how to
keep your children healthy.
Building Natural Immunity With Food
“The decision not to vaccinate does not mean that parents can be careless about protecting their children from disease. While some of the illnesses we vaccinate for are extremely rare (tetanus, diphtheria), unlikely to cause harm to children (chicken pox, mumps, rubella) or not a threat to children (hepatitis B), others like measles or polio can have serious consequences in poorly nourished youngsters.
It’s up to parents to provide the kind of diet that will give their child robust natural immunity— that’s the same kind of diet that will give a child good health overall. It’s also a diet that can help your child recover from vaccination injuries.
Here’s a list of recommendations to keep your children healthy and strong:
Foods Rich In Vitamin A
Vitamin A is our number one protection against disease. The immune system cannot function without vitamin A. Two important points about vitamin A:
In addition to these foods, cod liver oil can provide vitamins A and D on a daily basis.
Before the advent of vaccinations, the medical profession knew
that the vitamin A in cod liver oil would protect children
against all sorts of infections.
Moms are recommended to take cod liver oil while pregnant and nursing, and to begin giving it to their children around two or three months. Use only cod liver oil containing natural vitamins. (See westonaprice.org/cod-liver-oil/ for more information and product recommendations)
Raw milk is a complete, highly digestible food for growing children. It is also a powerful immune builder. A key component of our immune system is antibodies, such as immunoglobins, which are found in raw milk.
Vaccines are supposed to work by stimulating the production of antibodies, but babies cannot make antibodies, including vaccine-induced antibodies, until they are at least one year old. Yet babies today get over a dozen vaccines before the age of one.
Babies get antibodies from their mother’s milk, or from the milk of another species. In fact, raw milk—whether human, cow, goat, sheep, camel, reindeer or water buffalo—contains all the components of blood except for red blood cells.
Raw milk creates the immune system in the infant, and nourishes that immune system throughout the period of growth. All of these valuable immune components, however, are destroyed by the heat of pasteurization.
Studies from Europe indicate that children who drink raw milk have fewer respiratory infections and less asthma, allergies and skin rashes compared to children who do not consume raw milk. (For more information, and to find raw milk, visit realmilk.com.)
(My edit to add: In BC, join the Facebook Group, Raw Milk British Columbia)
Fermented foods like raw sauerkraut, homemade kefir and aged raw cheese contain beneficial protective bacteria. Eaten on a daily basis, the bacteria in these foods colonize the intestinal tract where they provide powerful protection against pathogens.
During the past twenty years, scientists have learned that gut bacteria are critical to health. In fact, about 80 percent of our immune system comes from beneficial gut flora. In addition, the biofilm of good bacteria in the gut provides a barrier to heavy metals like aluminum and mercury.
Avoid Processed Foods
In this age of industrial food production it’s difficult to avoid processed foods, but parents will confer the great blessing of good health on their children by keeping them away from processed food. Especially avoid refined sweeteners like sugar, high fructose corn syrup and agave. Sugar uses up nutrients that the body needs to support the immune system.
Vegetable oils are also known to depress the immune system, while natural saturated animal fats support the immune system. Cook in animal fats like butter and lard, and give your children butter instead of margarine and spreads. Make your own salad dressing using olive oil rather than purchase ready-made dressings, which are made with the cheapest oils and loaded with additives.
In short, the recipe for protecting your children from disease and ensuring they will grow up healthy and strong is an old-fashioned, home-prepared diet rich in butter, eggs, cheese and nutrient-dense animal foods like liver and red meat. Fruits and vegetables can serve as vehicles for butter and cream!
The addition of raw milk, fermented foods, bone broths and, above all, cod liver oil to your child’s diet will compensate for the occasional junk food that cannot be avoided. This is the Wise Traditions diet—vastly superior than vaccinations for protecting your children from disease throughout their growing years.”
For further information: westonaprice.org
When diet is wrong, Medicine is of no use.
When diet is correct, Medicine is of no need.
~ Ayurvedic Proverb
How To Minimize Vaccine Injury, by Ted Kuntz
If you must vaccinate, you can reduce the risk of vaccine injury in your children with the following guidelines:
1. Consider delaying as long as possible. Some medical doctors recommend waiting at least two years until the child’s immune system is more developed.
2. Breastfeed your infant. Breastfeeding provides your infant with a sophisticated living immune system, which responds to pathogens that your child may be exposed to.
3. Give large doses of Vitamin C to your child before any vaccination. This will help to reduce the negative side effects.
4. Never vaccinate your child when he or she is sick.
5. Refuse any vaccine that uses mercury as a preservative.
6. Request vaccines with the lowest aluminium content.
7. Refuse to allow your child to receive multiple vaccines at once. Request single dose vaccines.
8. Space out the shots to allow you to monitor your child’s response to each vaccine.
9. Don’t continue to vaccinate if your child has a reaction to a shot. Educate yourself on what vaccine adverse events may look like.
10. Request and read the product information insert for each vaccine. Don’t rely on your doctor to inform you of the risks and contraindications.
11. Find a doctor who respects your questions and your concerns.
12. Consult with a Naturopath. Investigate homeopathic remedies and other immune supports. Homeopathic remedies can reduce the injury caused by vaccination.
13. Trust your intuition. If it doesn’t feel right, don’t do it. No one knows your child better than you do.
14. Do not be pressured into vaccinating your child simply because it’s the “recommended schedule.”
15. The fewer vaccines, the better. Research shows the risk of injury increases with the number of vaccines given and the earlier they are given.
16. Know your rights. Become a member of a vaccine choice advocacy organization. Most jurisdictions have exemptions for medical, religious and personal-belief reasons.
(Image courtesy of Vaccine Choice Canada)
Alternative Boomer Legacy on Facebook
|
Myelodysplastic syndrome (MDS) is a disease of the stem cells in the bone marrow, which disturbs the maturing and differentiation of blood cells. Annually, some 200 Finns are diagnosed with MDS, which can develop into acute leukaemia. Globally, the incidence of MDS is 4 cases per 100,000 person years.
To diagnose MDS, a bone marrow sample is needed to also investigate genetic changes in bone marrow cells. The syndrome is classified into groups to determine the nature of the disorder in more detail.
In the study conducted at the University of Helsinki, microscopic images of MDS patients’ bone marrow samples were examined utilising an image analysis technique based on machine learning. The samples were stained with haematoxylin and eosin (H&E staining), a procedure that is part of the routine diagnostics for the disease. The slides were digitised and analysed with the help of computational deep learning models.
The study was published in Blood Cancer Discovery, a journal of the American Association for Cancer Research, and the results can also be explored with an interactive tool.
By employing machine learning, the digital image dataset could be analysed to accurately identify the most common genetic mutations affecting the progression of the syndrome, such as acquired mutations and chromosomal aberrations. The higher the number of aberrant cells in the samples, the higher the reliability of the results generated by the prognostic models.
Diagnosis supported by data analysis
One of the greatest challenges of utilising neural network models is understanding the criteria on which they base their conclusions drawn from data, such as information contained in images. The recently released study succeeded in determining what deep learning models see in tissue samples when they have been taught to look for, for example, genetic mutations related to MDS. The technique provides new information on the effects of complex diseases on bone marrow cells and the surrounding tissues.
“The study confirms that computational analysis helps to identify features that elude the human eye. Moreover, data analysis helps to collect quantitative data on cellular changes and their relevance to the patient’s prognosis,” says Professor Satu Mustjoki.
Part of the analytics carried out in the study was implemented using the Helsinki University Hospital (HUS) data lake environment, which enables the efficient collection and analysis of extensive clinical datasets.
“We’ve developed solutions to structure and analyse data stored in the HUS data lake. Image analysis helps us analyse large quantities of biopsies and rapidly produce diverse information on disease progression. The techniques developed in the project are suited to other projects as well, and they are perfect examples of the digitalization of medical science,” says doctoral student Oscar Brück.
“[This] study provides new insights into the pathobiology of MDS and paves the way for increased use of artificial intelligence for the assessment and diagnosis of hematological malignancies,” says PhD Olivier Elemento from the Caryl and Israel Englander Institute for Precision Medicine in his commentary to the article in Blood Cancer Discovery.
The study received funding from the Cancer Foundation and the Sigrid Jusélius Foundation, as well as state funding for university-level health research (VTR). The study was carried out under the iCAN Digital Precision Cancer Medicine Flagship funded by the Academy of Finland.
Oscar Brück, MD
Doctoral student, University of Helsinki, Hematology Research Unit Helsinki
Researcher, HUS Department of Hematology
Development manager, HUS IT
Satu Mustjoki, professor, University of Helsinki, Hematology Research Unit Helsinki
director, Translational Immunology Research Program
head of department, HUS Comprehensive Cancer Center
Phone: +358 40 552 1606
Original article: Machine Learning of Bone Marrow Histopathology Identifies Genetic and Clinical Determinants in MDS Patients. Oscar Brück, Susanna Lallukka-Brück, Helena Hohtari, Aleksandr Ianevski, Freja Ebeling, Panu E. Kovanen, Soili Kytölä, Tero Aittokallio, Pedro Marques Ramos, Kimmo Porkka and Satu Mustjoki. Blood Cancer Discov March 15 2021. DOI:10.1158/2643-3230.BCD-20-0162.
Read also comment to the article "Towards artificial intelligence-driven pathology assessment for hematological malignancies" by Olivier Elemento. DOI: 10.1158/2643-3230.BCD-21-0048
|
Cooper. (2007). About Face 3.
In Chapter 7 of his book, Cooper talks about taking the requirements from scenarios and using them to design. The designer needs to decide on what form the design will take, how it will be used, the input methods of the users, and elements and functions that are to be included. This is done by using information from previous stages and applying design principles to create low-fidelity models. It makes sense that detailed designs are to be avoided at this time, and I liked Cooper’s suggestion of using whiteboards to sketch and cameras to capture ideas for reference.
In general, the Framework phase is about defining the tone and types of interactions that will be in the design. The line between what you should focus on and the detail you should not include was different from what I had guessed, but Cooper does a decent job of defining it. I had thought something such as “visual language studies” would be saved for the refinement phase, but if this phase is focusing on the overall tone, then I suppose it would be included.
Sharp, Rogers, & Preece. (2007). Interaction Design.
Chapter 11: Design, prototyping, and construction
Other than the overall topic, this reading was similar to Cooper’s in various ways. They both discussed speaking with stakeholders about your ideas, understanding the interactions and functions you will include before designing, and considering interfaces to set the tone and suggest possible behaviors. One similarity that really stood out to me was the advantage of using low-fidelity prototypes – it is not only cost-efficient and quick, but causes the designer to focus more on functions and user goals than pixels and widget design.
The chapter described low-fidelity prototypes as representations that don’t use any of the actual materials that would be in the final product. This reminded me of the Art and Design course I took last year, where my partner and I made a prototype washing machine built from cardboard, Styrofoam, paper, tape, and a yoga ball. It was not at all what we intended the product to be, but it allowed us to test the dimensions of our design with actual people and target problems with it.
|
A parent guide to 21st Century learning
I've just been reading this new guide published by Edutopia, titled A parent's guide to 21st Century Learning. As with much of the material published on the Edutopia site, this is a really useful collection of tips, ideas and links for parents and educators alike (and I qualify on both fronts).
The ideas in the booklet are grouped according to the age of the students, and use the “4Cs” from the Partnership for 21st Century Skills as a framework for emphasising the educative value of the learning resources that are shared.
- Collaboration: Students are able to work effectively with diverse groups and exercise flexibility in making compromises to achieve common goals.
- Creativity: Students are able to generate and improve on original ideas and also work creatively with others.
- Communication: Students are able to communicate effectively across multiple media and for various purposes.
- Critical thinking: Students are able to analyze, evaluate, and understand complex systems and apply strategies to solve problems.
Each resource is briefly described, followed by a section on 'how to get involved', providing practical suggestions for how to engage with and use the resource with your children.
The resources provide a range of engagements, from projects that promote participation in social change and the development of digital citizenship, to using online games and social media to promote collaboration and support project-based learning – plus everything in between.
I like the section at the end titled Ten tips for bringing 21st century skills home, which provides some practical tips and links for parents wondering how to foster the 4Cs at home.
If you're not a member of the Edutopia site, here's a good reason to join – it costs nothing to sign up, and the resource is free for members to download.
|
Global Wind Patterns

Wind is movement of air due to __________ and _________. Air always moves from areas of _________ atmospheric pressure to areas of ________ pressure. Air moves in great spirals from high to low pressure, creating giant wind systems around the Earth. Air also moves horizontally across the Earth’s surfaces. The differences in surface can also affect air movement.

____________ - At the equator, there is little horizontal movement of air. In this windless zone, the air is moving upwards, leaving a seemingly windless area.

______ ________ - During the day, land absorbs heat from the Sun. The warm air rises and cool sea air blows onto the land.

______ ________ - At night, the land loses its heat, and the warmer air over the water rises. Cooler land air moves out to sea to fill in the space.

Wind and Wind Systems

_______ __________ - These dry, cold air currents move from northeast to southwest over higher latitudes (60° to 90°) of the Northern Hemisphere and from the southeast to northwest over the polar zones of the Southern Hemisphere (60° to 70° south latitude).

__________ - In the middle latitudes (30° to 60° north or south of the equator), the prevailing winds blow from ________ to ________.

_________ ___________ - These winds blow toward the equator from a latitude of about 30° north and south of the equator. They blow from northeast to southwest in the Northern Hemisphere and from southeast to northwest in the Southern Hemisphere.

______ _______ - These narrow belts of fast-moving air flow in a westerly direction at the higher levels of the troposphere.

The rotation of Earth causes air to move fastest at the equator. This creation of eastward winds just below the equator is called the ________ _____________. An instrument called an ____________ measures the speed, or velocity, of the wind. Velocity is measured in miles or kilometers per hour. A scale for measuring wind force was developed in 1806 by Sir Francis Beaufort. This is the scale, with an added description of what action might be seen for each level of wind.
|
Concentrations of metals in the upper Animas River and its main tributaries, Cement and Mineral creeks, pose problems for invertebrates, fish and the animals that prey on them, an Environmental Protection Agency study finds.
The study is a draft, and the conclusions are conservative, the report says. The results, released this week, are based on dozens of surface water samplings taken between May 2009 and May 2012.
Samples were taken before the spring runoff (February to April), during runoff (May and June) and post-runoff (July to November).
Samplings of dissolved metals varied from stream to stream, but the list included aluminum, cadmium, copper, iron, zinc, lead and manganese.
Peter Butler, a coordinator of the Animas River Stakeholders Group that is trying to mitigate toxic-metal discharge from abandoned hard-rock mines around Silverton that reach the same waterways, didn’t know Friday if the group would respond to the EPA study.
“I think some individual members may,” said Butler, who also is chairman of the state Water Quality Control Commission.
The stakeholders group has an email list of 50, including public and private agencies and individuals, all of whom were notified, Butler said.
“The goal of the Screening-Level Ecological Risk Assessment was to select contaminants of potential concern and assess the potential for risk to different types of organisms exposed to mining-contaminated surface water, sediment and food,” the EPA said in its introduction.
Under study were the effects of dissolved metals on macro- and microinvertebrates, fish, birds that eat fish, and insects and mammals that feed on riparian plants.
Among its conclusions, the study found that:
- Invertebrates would not survive in Cement Creek and would experience high stress in Mineral Creek and the Animas River in the vicinity of Silverton.
- Fish also couldn’t live in Cement Creek and would be under high stress in Mineral Creek and the Animas around Silverton.
- The level of dissolved metals has the potential to cause significant risk to animals such as the kingfisher, American dipper and the muskrat.
An estimated 800 gallons a minute of toxic metals flow into Cement Creek, a tributary to the Animas at Silverton. The runoff is from background sources and from mines that operated around Silverton from the late 1800s to 1991.
The Animas River Stakeholders Group formed in 1994 to reduce or eliminate toxic mine drainage. The group has secured or removed tons of mine tailings from harm’s way. But projects involving water present the risk of ongoing liability that no one is willing to chance.
|
Traumatic brain injury is a disruption of the normal functioning of the brain, and it can have many causes, such as a blow, bump, or jolt to the head in which something strikes the brain very strongly. If you suspect that you have this injury, you can get in touch with Dr. Amit Mittal, a renowned Neurologist in Ludhiana, at Neuro Life Brain and Spine Centre.
Moreover, if someone does not get proper treatment, it is very likely to cause further complications such as memory loss and decreased consciousness.
Symptoms of traumatic brain injury
Many symptoms can indicate that you have a traumatic brain injury:
- Continuous Headache
- Paralysis can occur
- Consciousness loss is also common
- Blurry vision or unable to tolerate bright light
- Difficulties with eye movement
- Blindness can also occur
- Dizziness and balance concerns
- Problems in breathing
- Slow pulse rate
- High blood pressure with a slow breathing rate
- Hearing can also get affected
- Inappropriate emotional responses
- Speech difficulties or unable to understand words
- Body numbness or tingling
- Facial weakness
- Loss of bladder control
Types of Injuries
- Hematoma
A hematoma is a type of blood clot in the brain, and it can occur anywhere in the brain. It can have many causes, but when something strikes the head with great force, the chances of a hematoma are higher.
- Intracerebral Haemorrhage
Intracerebral haemorrhage describes bleeding in any part of the brain, and it can accompany other injuries that occur when the head is hurt. The size and location of the haemorrhage play an essential role in treatment: if the haemorrhage is large or in a very complex location, it may not be possible to remove it surgically, and medication becomes the only remaining option.
- Subarachnoid Haemorrhage
This problem can occur because of extensive or minor bleeding in the subarachnoid space. It can cause several types of further complications, so do not neglect the symptoms; get treatment as soon as possible.
- Diffuse injuries
Traumatic brain injury can cause minor diffuse injuries that are often impossible to diagnose through a CT scan, so doctors try to detect them on the basis of symptoms alone. Sometimes, however, it is possible to see them on a scan, because the spots of these injuries do not look the same as normal brain tissue.
Neuro Life Brain And Spine Centre provides excellent treatment, and most of the time they solve their patients’ problems without any surgery.
|
The Evolution of the Biomagnetic Pair
What is The Biomagnetic Pair (BMP)?
Two magnets of different polarities (north and south) are placed on related points on the body for a designated period of time, balancing the body’s bioelectrics. This enables the body’s immune system to eliminate pathogens, treat dysfunctional glands and organs, and address complex and chronic diseases, including degenerative, autoimmune, metabolic, psycho-emotional, tumoral and genetic diseases, and poisonings.
Who Discovered the Biomagnetic Pair?
Dr. Isaac Goiz discovered this work in 1988, the same year his son, Dr. David Goiz, was born, who has now inherited the continued practice, research, and teachings.
In 1988, Dr. Isaac Goiz attended a training session given by a colleague, Dr. Richard Broeringmeyer, who demonstrated how he had discovered that placing a magnet at certain points on the body caused the right leg to shorten for about 20 seconds. It was Dr. Isaac Goiz who brilliantly realized that a magnet of one polarity was not enough to generate a reaction – that a magnet of the opposite polarity was also needed.
It was during this time that a neighbor of Dr. Isaac Goiz said he was diagnosed with AIDS and was told he was dying. He asked Dr. Goiz if he could help him. Dr. Goiz replied, “Well, let’s try.” Dr. Goiz removed a pair of very strong magnets from his son’s audio speakers to do the work. This man is still alive and well today, practicing the Biomagnetic Pair.
Dr. Goiz continues to honor Dr. Broeringmeyer’s work as the first generation of the Biomagnetic Pair.
Dr. Goiz teaches that one BMP session can cause a “domino effect” that can last up to a year after the treatment. Therefore, a person can feel better and better over time. The changes can occur immediately, during the treatment, days, weeks, or months later.
Dr. Goiz also predicts that the Biomagnetic Pair will be used as preventative medicine. Everyone should schedule appointments at least two or three times a year to clear the body and balance their bioelectrics. The Biomagnetic Pair is for everyone.
Rachel Shea, certified by the Biomagnetism Research Institute (BRI), studied under both Dr. Isaac Goiz and his son, Dr. David Goiz, attending several week-long, intense training sessions across the country.
Please see “Links” below to learn more about the Biomagnetic Pair.
The Biomagnetic Pair taught in graduate schools:
The Biomagnetic Pair technique is now being offered as a graduate study at universities, mostly in Spanish-speaking countries. The research is now being carried out by the graduate students and the Biomagnetism Research Institute (BRI).
Pathogens:
Pathogens include viruses, bacteria, fungi, and parasites. Dr. Goiz teaches that pathogens have a very distinct relationship to each other in the body. They create their own balanced ecosphere, supporting each other and equalizing the acidic and alkaline pathogen ratio.
The Herxheimer Reaction:
Most definitions of this reaction state that the body is detoxifying and that it is a sign that targeted pathogens have been eliminated, creating toxins in the body that need to be removed. In working with the medical intuitive, we have been shown that this reaction occurs any time there is an abrupt, significant chemical change in the body. It does not necessarily mean that the targeted pathogen(s) were the cause, or even that they were in the body. It could simply mean that the treatment changed the chemistry in the body severely enough to trigger the immune system to cause such a reaction.
The Evolution of the Biomagnetic Pair:
Rachel’s work with the Biomagnetic Pair has evolved fundamentally over the last three years. Since the summer of 2015, working with a medical intuitive has been instrumental, enabling a significant shift in the work, deepening on many levels, as new protocols are developed. The focus is no longer targeting pathogens. The entire body is addressed, creating balance and alignment of the emotional, psychological, spiritual, energetic and physical bodies. Occasionally, a pathogen is targeted depending on many variables. Borrelia, which causes Lyme, is one pathogen that is regularly targeted.
It has been discovered that we do not have to target pathogens for them to be eliminated from the body, which can avoid any noticeable Herxheimer reaction. If the body is in alignment and balanced, the pathogens cannot remain. The reason disease has been allowed to enter the body is this misalignment, this imbalance. Emotions play a major role in how disease is able to enter the body. For example, trauma that is not properly processed causes emotional blockages that create openings for pathogens to enter the body, and these obstructions interfere with the healing process. This work addresses those emotions.
Chronic Lyme Disease:
It has been shown to Rachel that Lyme comes to change a person’s life and to accelerate the evolution of the thymus gland, thus making our immune system stronger. Borrelia is the pathogen that causes Lyme disease. It is a very slow-growing bacterium; therefore, it does not cause an abrupt chemical change in the body that sets off an immune response. Borrelia is also able to hide its antigen from our immune system, enabling it to continue growing for a very long period of time without noticeable symptoms.
From Rachel’s experience, there is a point when this slow-growing Borrelia causes an opening for numerous co-infections to enter the body and create many different symptoms. These co-infections are not always detected in the body’s fluids. Rachel believes it is these co-infections that are responsible for many of the symptoms assumed to be caused by Borrelia.
There are several well-known co-infections of Borrelia and there are many more that are seen in this work. These co-infections are also addressed in the work of the Biomagnetic Pair.
The developed protocols for Lyme include special protocols for eliminating Borrelia in its active pairs and in its cyst form, causing few, if any, Herxheimer reactions. Some clients have reported that they do not even feel more fatigued after a session. The timing of results depends on many variables, usually correlating with how long the individual has been suffering from the disease and the individual’s frame of mind. Those who have been seriously affected by this disease, with significant pain and suffering, may initially experience short periods of relief and then the return of symptoms until the next treatment session. These short periods of relief usually lengthen with each treatment.
Treatment for Lyme with the Biomagnetic Pair should ideally begin with sessions as close to weekly as possible for approximately four appointments, with a follow-up at a fifth appointment soon afterward. All the damage from the pathogens compromises the body’s health. Therefore, it is important that the client return for monitoring sessions, increasing the span between appointments depending on symptoms and how the individual client is feeling. Please know that more appointments could be needed if the client has been ill for many years.
Lyme Versus Borrelia and the Emotional Aspects:
The term Lyme sometimes includes not only the pathogen that causes Lyme, Borrelia, but also all its co-infections.
It is important not to identify with this disease, or become this disease as some would believe. Please refrain from speaking of symptoms as “My Lyme symptoms.” The emotional aspects of this disease are far reaching and can interfere with the healing process. We must always be careful with our chosen words.
Scheduling an Appointment:
To schedule an appointment, please go to the website at biomagpairvt.com. Select the radio button, “Schedule an Appointment.” You will be forwarded to the Full Slate website: biomagpairvt.fullslate.com. You can also go directly to the Full Slate website. There, you can schedule and manage your appointments.
Select “Biomagnetic Pair Technique” where you will then be able to choose an available appointment. If, at any time, there are no available slots that you can schedule, please contact Rachel by email (firstname.lastname@example.org), text, or a phone call (802-552-7101) to schedule an appointment outside of appointed times. If you send a text message, please identify yourself first since only a phone number is displayed in the text.
For the session, especially during the warm months of the year, please bring a pair of socks with you if you are not wearing any.
Directions and Treatment at 207 Berlin Street in Montpelier:
Coming from downtown, the yellow building is located on the right, set back from the road and easy to miss. Especially during the winter months, please park in front of the first garage bay, if available. If not, park in the parking area closest to the road, pulling up to the grass to allow easier maneuvering in the parking area.
Enter the building at the entrance closest to the garage, where the Bright Gnome is situated. Here you will be able to leave muddy/snowy boots. If you like, you can bring indoor footwear to walk on the floors. Take your personal items with you. Knock on the door to the right where you will be greeted.
The treatment room is quiet and kept very warm during the cold Vermont months. The session itself can last from 60 to 90 minutes, or longer, depending on how many Biomagnetic Pairs are treated. You will be asked to remove any belts, large jewelry, battery operated watches, anything that will interfere with the magnetic work. Cell phones should be turned off.
During treatment, you will be asked to remain quiet and relatively still. From Rachel, you will receive complete focus, necessary for the work. Please ask questions before or after the treatment.
Payment can be cash or check made out to, “Rachel Shea.” The cost of a session is listed at https://biomagpairvt.fullslate.com/employees/2.
|
From ancient times, Ayurveda has been considered the Amrutam, or elixir of life. It is a therapeutic gift of nature to mankind, with a cure for anything and everything. Ayurveda is almost magical because of its remarkable results for health and wellness. Even the little changes in one’s life recommended by Ayurveda, a 5,000-year-old life science, can do wonders. Ayurveda is a holistic science of health, focusing on maintaining a physically and emotionally balanced state.
However, the truth of the matter is that Ayurveda is losing its true essence in the modern world, limiting itself to fancy soaps and facial washes. After years of research and a thorough reading of the age-old texts, we have found that in those days, juices of various herbal plants were processed in an intensively scientific manner, with proper regulation of the quantities of ingredients and the temperature at which they were heated. This process results in what is called काढ़ा (kadha) in a regular Indian household, known as a malt or decoction in modern language.
Malt is usually a black, concentrated potion resulting from heating or boiling many herbs and spices as part of a medicinal preparation. Malts are considered to be the most authentic Indian recipes for effective healing, eradicating the roots of an illness. In fact, preparation of plant-based remedies at the household level is often seen as a self-help measure.
However, a research study by Daswani, Ghadge, Brijesh, and Birdi (2011) examined a home-made remedy of a similar kind and revealed that preparation of a home-based remedy can be a limiting factor, as it only uses the local medicinal plants that are easily available.
Amrutam has aimed to bridge that gap between the real Ayurveda and the real medicines through the guidance of Bhav Prakash Nighantu and other books. We have gone out of our ways to collect and grow herbal plants and spices from all across the country to prepare this healing potion for you. There are different malts prepared by Amrutam for different illnesses and wellness in general, as each malt has a different composition, different ingredients that has cured people from time immemorial.
You may wonder why other companies do not prepare malts if they are claimed to be so effective. The reason is that the procedures to prepare malt are strict, laborious and extremely time-consuming. It takes roughly 3 months to prepare one type of malt. This does become challenging for us as a business, but our major aim has been clear from the start: no matter how tough this path is, we believe in it and we know it holds the capacity to cure.
The procedures that we have referred to have been verified and credited by the Department of Ayurveda, Yoga and Naturopathy, Unani, Siddha and Homoeopathy, abbreviated as AYUSH, a governmental body in India tasked with development, education and research in Ayurveda (Indian traditional medicine), yoga, naturopathy, Unani, Siddha, homoeopathy, Sowa-Rigpa (traditional Tibetan medicine), and other indigenous medicine systems.
|
Music notation or musical notation is any system used to visually represent aurally perceived music through the use of written symbols, including ancient or modern musical symbols. Types and methods of notation have varied between cultures and throughout history, and much information about ancient music notation is fragmentary.
Although many ancient cultures used symbols to represent melodies, none of them is nearly as comprehensive as written language, limiting our modern understanding. Comprehensive music notation began to be developed in Europe in the Middle Ages, and has been adapted to many kinds of music worldwide.
Musical notation has been invented and re-invented several times, and has since gone through a rapid and accelerating process of evolution. From basic indications of a simple song line going higher and lower, the complexity of musical notation has grown so that it can now specify in detail all the music for a 100-strong symphony orchestra and chorus. In this article we look at some of the key stages in that evolution, from hand-written notation, through printing processes and specialist types of notation, to the impact on music notation of electronic devices and computers. Using specialist score-editing software programmes, that same orchestral musical score can now be quickly changed, edited, reformatted, split into multiple parts and printed with relative ease.
We know that music has been part of human culture for many years, and was probably part of the cultural explosion which took place in Europe between 60,000 and 30,000 years ago, though early people had undoubtedly experimented with natural sounds prior to this. Although ancient wooden artifacts tend to rot and decay over time, instruments made of bone last longer. Two simple flutes dated to 42,000 to 43,000 years ago were discovered in Germany. One was made from a bird's bone and another from mammoth ivory. It is safe to assume that the techniques of making instruments and playing music were passed via an oral tradition for many thousands of years, by people copying and sharing musical ideas across the generations. However without recording techniques or any form of musical notation, we have no idea what the music of these early periods sounded like.
[Image: Ancient Egyptian musical instruments including a harp, a lyre and other stringed instruments]
Many artistic relics from the world's great civilisations include depictions of music making, and it is clear that music was a normal part of life for the ancient Egyptians, Greeks, Romans and other peoples. The Greek mathematician Pythagoras studied certain aspects of music theory, particularly the mathematical nature of harmony and musical scales. He knew, for example, that the pitch of a note from a vibrating string was related to its length, and that simple ratios of length gave rise to harmonious notes (e.g. if you halve the length of a string, its note sounds an octave higher). Early societies such as the Babylonians and Egyptians used various forms of musical notation, such as indications about using particular strings on a lyre and how the lyres were tuned.
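Pythagoras' ratios are easy to verify numerically, since the frequency of a vibrating string is inversely proportional to its length. The short C program below is a minimal illustration of that relationship; the 440 Hz reference pitch is an arbitrary modern choice, not anything the ancient Greeks used.

```c
/* Pythagorean string-length ratios: frequency is inversely
   proportional to length, so shortening the string raises the pitch. */
#include <stdio.h>

int main(void)
{
    double base = 440.0;  /* arbitrary modern reference pitch, in Hz */

    printf("full string: %6.1f Hz\n", base);
    printf("1/2 length : %6.1f Hz (octave)\n", base * 2.0);
    printf("2/3 length : %6.1f Hz (perfect fifth)\n", base * 3.0 / 2.0);
    printf("3/4 length : %6.1f Hz (perfect fourth)\n", base * 4.0 / 3.0);
    return 0;
}
```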
[Image: The Seikilos Epitaph - the song is written in the bottom half of the engraving]
However, our knowledge of these is based on surviving fragments and is therefore incomplete. The earliest known example of a complete notated musical composition (a song complete with lyrics) used a method of notation developed by the ancient Greeks. This piece of music is called the Seikilos Epitaph; it is carved on a tombstone in Turkey, and it most probably dates from the 1st century AD. The Byzantine Empire, which grew from the Roman Empire with a new base at Constantinople, developed the equivalent of the Western "sol-fa" scale and a form of notation based on pitches being higher or lower than the previous one. The alternative to the "sol-fa" method of indicating the notes of a scale is the letter system used today, with notes represented by the letters A to G. This means of representing notes seems to have had its origin in "Boethian notation", developed by the Roman philosopher Boethius in the 6th century AD.
|
Antarctica New Zealand is looking to space for help back on Earth! The Kiwi can-do attitude has teamed up with German space technology to successfully cross the world’s largest ice shelf.
This season, Antarctica New Zealand mounted its largest Antarctic traverse since the Commonwealth Trans-Antarctic Expedition in the 1950s. This time, it wasn’t Hillary leading the charge to the pole, but a small team of four, tasked with finding a way across an uncharted section of the Ross Ice Shelf, an area of ice the size of France.
Dan Price, University of Canterbury Glaciologist and part of the four-man team that proved the route, says the traverse took years to plan from a logistical perspective and its success was largely due to information provided by satellites.
“The main safety concern getting across the ice shelf was crevassing, so months were spent using satellites to plan a route that would avoid the worst areas,” he says. “This really paid off in the field as we were able to use the imagery to work out exactly where we were in a completely featureless environment. We only had to alter the route we had planned once in the field. It would be impossible, or very, very lucky, to avoid all the crevassing without the ability the satellites provided us with.”
Key information came from TerraSAR-X - a satellite mission operated by the German Aerospace Centre (DLR). This satellite takes an ‘x-ray’ of the ice shelf and exposes crevasses hidden beneath the snow.
“We can use this data to weave in and out of the hazards - without it we’d be navigating a minefield blind,” Dan says. “In the past, focus was placed on visual imagery which is essentially just photographs taken from space. This alternative technique, using the radar images, exposes crevasses that would otherwise be missed. That removes a lot of uncertainty when you’re driving into these regions.”
Dana Floricioiu of the German Aerospace Centre says many satellites are restricted in where they can observe, but TerraSAR-X allows a view much further south over the Antarctic.
“This allowed the Kiwi expedition to plan in detail to the required 84 degrees south, less than 700 km from the South Pole. TerraSAR-X sends out microwave energy that interacts with the surface. Some of this energy returns to the satellite and an image is constructed, allowing features like crevasses to be observed in detail.”
The traverse was in support of science goals of the Ross Ice Shelf project which is attempting to better understand how the ice shelf will respond to a warming world. Sites have been identified for drilling through the ice shelf to obtain information about the ice, ocean and sediment on the seabed.
“The data the science teams collect will give clues about how the ice shelf has responded to climate shifts in the past, and help predict how it may evolve in the future” concludes Dan.
Next season, the traverse will use the newly established route to haul more than 60 tonnes of gear to the Siple Coast – 1100km across the Ross Ice Shelf from Scott Base.
Antarctica New Zealand
027 2205 989
|
The Wrong House is the title of an exhibition devoted to the architecture of film director Alfred Hitchcock on show at deSingel in Antwerp until December 16. A book of the same name accompanies the exhibition.
Author Steven Jacobs had a mischievous twinkle in his eye when he presented ‘a monograph about a non-existing architect’ in his opening speech. The book is issued by 010 Publishers, which obviously saw some substance in his vision. It's a claim worth considering: can we think of Hitchcock as an architect?
Why choose Hitchcock as the subject of study if you're interested in both film and architecture? Jacobs offers five reasons. (1) Hitchcock began his career as a set designer. (2) Hitchcock devotes a striking amount of attention to architectural elements such as windows, doors and stairways. (3) Famous structures play a dramatic role in many of Hitchcock's films. (4) Hitchcock made four ‘single set’ films that take place in one enclosed space. (5) Confinement is a key theme in Hitchcock's films.
Jacobs studied the houses that feature in Hitchcock's films. He watched the films, conducted archival research in different places, and worked through the very extensive volume of literature already devoted to the subject. This resulted in case studies of no fewer than 26 houses from 22 different films. All these houses were actually sets constructed in the studio. He reconstructed the floor plans of 17 of these houses, many of them solely on the basis of what we see in the films. Such drawings obviously didn't exist, or Jacobs didn't succeed in finding them, or else they were lost over the years. The reconstructions are often ‘incomplete’ plans, since the fictitious houses were usually only built in part.
In his opening remarks Jacobs made a number of statements worthy of consideration. Making exhibitions about architecture, he said, is ‘ridiculous’. Making exhibitions about film, he added, was equally ‘ridiculous’. Those two statements paved the way for his assertion that an exhibition about a combination of both was amazingly successful. Alas, he was wrong. The Antwerp exhibition consists of a number of monitors showing loops of scenes from Hitchcock's films, while visitors can sit at tables to examine the reconstructed house plans. All you can really do is conclude that the reconstructions seem accurate enough at first sight. (deSingel also attempted to draw a parallel between the Austrian architecture firm Pauhof and Hitchcock, but that attempt is, to use Jacobs' own words, fairly ridiculous.)
As always when it comes to architecture exhibitions (and often films too, by the way), the book is better. It offers a wealth of information about how the different film sets were made. Moreover, relevant comments about the films are gathered together from various sources. Setting it all in a somewhat theoretical exposé with references to sources such as Edgar Allan Poe, Walter Benjamin, Jeremy Bentham, Hermann Muthesius and Beatriz Colomina is very tentative, however, and in places rather perfunctory.
So how should we interpret the ‘architecture’ of Hitchcock? What has been Hitchcock's contribution to architecture? Jacobs is at his best when commenting on the most architecturally explicit masterpieces (Rope, Rear Window and North by Northwest), but there's a lot of repetition when he discusses many other films: the house forms a trap for its occupant. Hmmm, we knew that already… You'd expect a monograph on the work of an architect to reveal more of the significance of the work and offer more insight into it. Jacobs may have reconstructed the floor plans, but he didn't do that much with them. The analysis he offers would have been possible without those drawings.
That said, the importance of the study by Jacobs lies in the newly produced material: the reconstructed plans. A new generation of Hitchcock researchers can use these for further study. They can assess the value of the information offered and feast on the many titbits tucked away in the 342-page book: no fewer than 25 sets were built in the studio of producer David Selznick for Rebecca; in The Lodger the home owners hear the footsteps of their tenant in the room upstairs, while the reconstruction of the plan shows that room to be on the other side of the house; the irregular stone walls of the Vandamm House in North by Northwest are not a reference to Frank Lloyd Wright but essential to the script: Cary Grant had to be able to clamber along the building; and so on. All this doesn't make the book an architecture monograph. Presenting The Wrong House as an architecture book is nice and surprising, but in the end it's just a book about film.
|
WHAT IS DATA DESTRUCTION?
Data Destruction is the process of removing data from an electronic storage device and making it unrecoverable. This vital process protects your private information from being reconstructed and used in a malicious way.
You may be surprised to learn that simply deleting a file does not necessarily remove it from your system. There are several methods to securely delete files and this article discusses the various steps that you can take to protect yourself, your company and your private information.
Simply deleting a file does not necessarily remove it.
WHY DO I NEED DATA DESTRUCTION?
Sensitive data may include bank account details, passwords, personal information, commercial and security information or even information relating to national defence. Whatever the nature of the data, it is important to ensure that it is secure; even after it has been deleted.
Most offices have a paper shredder, and it is now common practice to shred anything containing personal information; why, then, do we not take the same precautions when deleting files digitally? If your mobile, laptop or tablet computer is lost or stolen, it may be possible for someone to recover the deleted files using standard off-the-shelf file recovery software.
Simply deleting data does not permanently remove that information from the storage device. File information is kept in a directory on the hard disk; when a file is deleted, it is removed from the directory only and its space is marked as available, leaving the original data in place until it is overwritten.
WHAT HOLDS MY DATA?
Your computer’s hard drive isn’t the only device that can hold sensitive information. Other electronic equipment may hold sensitive data, and even though it has been deleted, it may be recoverable.
- USB storage devices
- Mobile phones
- Printers and plotters
- Media Tablets
- Local Servers
- Voice mail machines
Smart phones can hold as much important information as a computer. They often store bank account details, emails, contact information and social media applications.
Many smart phones have a reset function built into the system that will remove all data from the phone. However, it may not securely format the storage. Much the same as with a computer’s hard drive, the deleted files will be marked as available space but are still recoverable. The solution is to use the overwriting method by formatting the phone and repeatedly filling its storage with large files such as podcasts and movies. This has to be done manually and can be very time-consuming. The more times this is repeated, the harder data recovery will become.
HOW TO PROTECT YOUR PRIVATE INFORMATION
There are several ways you can permanently remove data depending on the recording medium used:
Overwriting works by replacing your data with random data, repeating the task many times. Each overwrite is known as a pass. It is a popular and relatively low-cost option; the more times the information is overwritten, the more secure the deletion, but also the more time-consuming the process. A very time-consuming technique is “The Gutmann Method”, which is widely considered to be the most secure method, overwriting the data thirty-five times with carefully selected data patterns. The United States Department of Defense, however, recommends that data be overwritten only seven times. This offers a decreased level of security but is much faster than the Gutmann Method and therefore more efficient.
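To make the overwriting idea concrete, here is a minimal C sketch that makes several passes over a file, replacing every byte. It is an illustration, not a secure-deletion tool: the file name is a placeholder, rand() is not a cryptographic source, and on journaling filesystems and wear-levelled SSDs copies of the data may survive elsewhere no matter how many passes are made.

```c
/* Minimal multi-pass overwrite sketch (POSIX). Illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>

static int overwrite_file(const char *path, int passes)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return -1;                        /* file not found or unreadable */

    for (int pass = 0; pass < passes; pass++) {
        FILE *f = fopen(path, "r+b");     /* open in place, no truncation */
        if (f == NULL)
            return -1;
        for (off_t i = 0; i < st.st_size; i++)
            fputc(rand() & 0xFF, f);      /* replace each byte with noise */
        fflush(f);                        /* push the pass to the OS */
        fclose(f);
    }
    return 0;
}

int main(void)
{
    /* seven passes, as in the DoD-style guidance mentioned above;
       "secret.dat" is a placeholder file name */
    return overwrite_file("secret.dat", 7) == 0 ? 0 : 1;
}
```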
Degaussing is a method of removing the magnetic fields from a hard disk or any other magnetic storage device. This method removes all data and often renders the hard disk inoperable. Replacing hard disks can become very costly, but degaussing is a good solution for out-of-date computers that are being discarded. Solid state drives and optical media such as CDs and DVDs do not rely on magnetic fields to store data, so degaussing will have no effect on these.
Physical destruction is simply destroying the device that holds information through force. This is the best method for low cost storage devices especially CDs, DVDs and USB memory sticks.
The Gutmann Method is widely considered to be the most secure method, overwriting the data thirty-five times.
Information stored and then deleted on an encrypted disk remains unreadable: the data is stored in a form that can only be read with the correct password to decrypt the hard drive. Without this password, deleted files remain as secure as the encryption used.
Physical destruction of storage devices is possibly the only way to completely remove information; however, this is often not a feasible solution. You should evaluate the sensitivity of the data and determine an appropriate destruction procedure dependent on the level of security, cost and time available. Whichever method you choose, please remember that deleted data may still be recoverable.
|
There are four long-term sources of financing for nonprofits – Fee for Service, Government Grants and Contracts, Donor Advised Funds, and Charitable Giving.
The 990 does not make this information easily available. On Page 1, charitable donations are blended with government contracts*, and Schedule B, the report of all donors over $5,000, is frequently submitted simply as ‘Restricted.’
It’s easy to miss the point on the 990 that the four funding sources are quite different from each other and virtually no agency in the study was skilled in attracting funds from all four sources.
Charitable Giving – Nonprofits began in the 1800s with charitable gifts. Often, wealthy people formed a group and funded it with gifts for orphans, the destitute, and so on. Charitable giving did not come close to matching the needs of the time. As ethnic groups got larger, smaller nonprofits served particular groups from a language, religious, or cultural background. Slowly, many of the oldest nonprofits (universities, for example) built endowments that were powerful and independent sources of funds. Investment money flowed from charitable gifts.
Fee For Service – Hospital fees, tuition for universities, and other fees (excluding Medicare and Medicaid) make up almost half of nonprofit income. Since hospitals and higher education nonprofits have little in common with funding sources for other nonprofits, it’s fair to say that about 10% of nonprofit income is from fee for service.
Government Grants and Contracts – States, localities, and the federal government increased funding in the 1960s. The first decades saw slow increases with few regulations. With budget cutting in the 1980s, governments started regime funding – close control of process, fewer volunteers, and more professionals. The administrative requirements of regime funding were not calculated into costing. The model returned to the 1800s idea that the social sector must be funded in part by charitable gifts.
Donor Advised Funds – The top 20% of the population is accumulating wealth and the top 1% even more so. This concentration is leading to Giving Clubs and Donor Advised Funds where gifts produce very specific purposes and outcomes. The benefit of these funds is that they empower agencies with clear agendas and the possibility of an independent voice. The benefit can also be a liability if agendas don’t uphold values such as equality and justice for all.
With that background, what does the study of 990s show?
- Healthy nonprofits augment government contracts with either charitable gifts or fee for service of at least 10% of total revenue. This additional financing can be used to pay for strategic investments and to fund payroll when government is slow to pay.
- Nonprofits that started in the 1980-2000 years of growth in government funding often pay little attention to other sources of money. They tend to have smaller boards whose members may not have an individual mandate to contribute. With regime requirements increasing, these government-funded nonprofits are close to merger, acquisition, or bankruptcy.
- Revenue is vanity. One nonprofit with revenue of $70 million and growing quickly is 1.5 payrolls behind. While they may use a line of credit to offset the immediate need, the growth and size do not give them protection for the long term. The funding mix is far more important than the size of revenue.
- Charitable gifts generally have a practical collection limit of $5 million in the nonprofits studied. Growth above $50 million in revenue requires a revenue stream from Fee for Service to keep government contracts revenue under 90% of total revenue.
- Two new nonprofits report charitable gifts of $11 and $14 million. These represent Donor Advised giving. Both nonprofits are growing above 20% per year and already have a major voice in education reform and biological diversity.
Government is a major force in financing the social sector. In most cases, the contract triggers agency wide changes to comply. Boards of directors become financial watchdogs instead of protectors of the vision. Ironically, the nonprofits which are failing are those who are the most compliant with government demands!
Healthy nonprofits have to overcome the barrier of multiple funding streams in order to thrive. 10% of total revenue from charitable gifts and fee for service almost guarantees that you won’t run out of cash. And cash is cash!
*Government contracts are considered donations because there is no exchange with the public. I would argue that improvement of a person and the taxes later received do create the exchange 😊
|
On February 11, 1936, H.D. King, Commissioner of Lighthouses, wrote the Secretary of Commerce:
The extremely critical conditions due to prolonged and severe cold and resulting ice conditions along the North Atlantic seaboard have placed in serious jeopardy many aids to navigation, both fixed and floating, particularly in Chesapeake Bay and its tributaries . . .
King goes on to mention that the Janes Island Lighthouse, near Crisfield, Maryland was destroyed; however, the keepers had previously abandoned the station for their safety. Personnel were evacuated from Tangier Island, Point No Point, Ragged Point, Tue Marshes, Love Point, and York Spit Lighthouses. Sixty-one minor lights had been destroyed before the end of January.
King noted below the article shown here that a plane had been in contact with Solomon’s Lump Station and arrangements made for a distress signal that the keeper could display in an emergency. Also that “attempts are being made to reach station, both from Bay & over ice from land to take off the keeper.”
An article in the Baltimore Evening Sun, also dated February 11, reported that “Five Eastern shoremen tied together with ropes, yesterday crossed the ice to the Love Point light to bring the keeper ashore.” The lighthouse tender Violet was able to reach Seven-Foot Knoll and remove its keeper but had to return to Baltimore before nightfall without visiting any other lights.
A press release dated February 12, 1936, reported that on February 9th, the War Department sent a plane to survey conditions and communicate with keepers still at their stations. At that time a supply of food had been dropped for Keeper H.C. Stirling at Solomon’s Lump Light. Conditions were described as the worst since 1918, when several stations were swept away.
Source: National Archives Record Group 26 Entry 50, File 3655.
|
March 28, 1818 – April 11, 1902
Wade Hampton was born in Charleston, South Carolina, on March 28, 1818. He grew up in a wealthy family, receiving private instruction and was known in his youth for being an avid bear hunter, killing as many as 80 bears. In 1836 he graduated from South Carolina College and was trained for the law, although he never practiced. Hampton's father died in 1858 and the son inherited a vast fortune, the plantations, and one of the largest collections of slaves in the South.
Hampton first enlisted as a private in the South Carolina Militia; however, the governor of South Carolina insisted that Hampton accept a colonel's commission, even though he had no military experience at all. Hampton organized and partially financed the unit known as "Hampton's Legion.” They first saw combat in July 1861, at the First Battle of Bull Run, where he deployed his Legion at a decisive moment, giving the brigade of Thomas J. "Stonewall" Jackson time to reach the field.
Hampton was promoted to brigadier general on May 23, 1862, while commanding a brigade in Stonewall Jackson's division. At the Battle of Seven Pines on May 31, 1862, he was severely wounded in the foot, but remained on his horse while it was being treated, still under fire. Hampton returned to duty in time to lead a brigade at the end of the Seven Days Battles. During the winter of 1862, around the Battle of Fredericksburg, Hampton led a series of cavalry raids behind enemy lines, earning a commendation from General Robert E. Lee.
In the Gettysburg Campaign, Hampton’s brigade participated in Stuart’s wild adventure to the northeast, swinging around the Union army and losing contact with Lee. Stuart and Hampton reached the vicinity of Gettysburg, Pennsylvania, late on July 2, 1863. In the remainder of the battle, Hampton was wounded three separate times, twice by saber and finally by a piece of shrapnel to the hip, which left him recuperating until November of that year.
During the Overland Campaign of 1864, Stuart was killed at the Battle of Yellow Tavern and Hampton was given command of the Cavalry Corps on August 11, 1864. He distinguished himself in his new role at the bloody Battle of Trevilian Station, defeating Philip Sheridan's cavalry, and in fact, lost no cavalry battles for the remainder of the war. In September, Hampton conducted what became known as the "Beefsteak Raid", in which his troopers captured over 2,400 head of cattle and over 300 prisoners behind enemy lines.
Hampton was promoted to lieutenant general on February 14, 1865, but eventually surrendered to the Union along with General Joseph E. Johnston's Army of Tennessee at Bennett Place in Durham, North Carolina. He returned to his estate to find it had been burned and ransacked in Sherman’s march, and his slaves freed.
After the war, Hampton served as the Governor of South Carolina and then as a two term Senator, before dying in April of 1902.
|
Embedded Systems Design
Here are some suggestions for projects. As well as providing ideas for projects, they also give guidance to the expected complexity level for a project.
The projects offer a range of challenges in terms of hardware and software, and in implementation some are more open ended than others. You should also consider the extent to which you will be able to work incrementally, or whether everything must work before you can get a useful outcome.
In some cases something similar has been been attempted before, but in others the idea is untried. In this case, there is no guarantee that the project will actually meet its specifications. You are expected to evaluate options and do calculations to check the limits of what is possible before starting implementation.
To make writing your report easier, take notes in a lab book as you progress, including feasibility calculations (for example how much storage is needed), design options, preliminary designs, tests of intermediate steps, and also what didn't work as expected, and why.
So far you have used prototyping boards, plus a Printed Circuit Board (PCB) to interface to the LCD. Many of the projects require the same set of basic parts, so we have made a PCB available with the following:
Owing to time constraints, not all of the features of the ATMEGA series of devices have been specifically covered in the workbooks, for example:
However they may prove invaluable either in the suggested projects or if you have an idea for a project you would like to try.
When people try to use a projector with a laptop, it sometimes fails to work, and it is never clear which part is not working. A battery powered device which outputs a simple chequerboard pattern to a VGA connector would be really useful in this instance.
Create a battery powered signal generator to generate a test signal on a VGA connector, meeting the timing requirements of the relevant standards. Although the microcontroller cannot keep up with the high bitrate needed for a full VGA signal, a low resolution chequerboard pattern is sufficient for testing. Meeting the VGA standards will require careful timing design, and precise use of timers, but it is possible if the microcontroller uses the highest possible clock rate. See: http://en.wikipedia.org/wiki/Video_Graphics_Array#Signal and http://www.tinyvga.com/vga-timing/1280x1024@60Hz
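As a feasibility sketch of the timing, most of the work can be pushed into Timer1 hardware. The fragment below is a minimal, untested outline assuming an ATMEGA644-class part at F_CPU = 20 MHz, with HSYNC on OC1A (PD5) and VSYNC on PD6; the pin choices, sync polarities and line counts are assumptions to verify against the timing tables linked above.

/* Sync-timing core for a low-resolution VGA test pattern (assumptions
   as above: 640x480@60, 31.469 kHz line rate, 525 lines per frame). */
#include <avr/io.h>
#include <avr/interrupt.h>

#define TICKS_PER_LINE 636   /* 20 MHz / 31.469 kHz, rounded */
#define SYNC_TICKS     76    /* ~3.8 us HSYNC pulse at 20 MHz */

static volatile uint16_t line;            /* current scan line, 0..524 */

ISR(TIMER1_OVF_vect)                      /* fires once per scan line */
{
    line = (line + 1) % 525;
    if (line == 490) PORTD &= ~(1 << PD6);  /* VSYNC low for 2 lines */
    if (line == 492) PORTD |=  (1 << PD6);
    /* Clock out one line of the chequerboard here, e.g. via the SPI. */
}

int main(void)
{
    DDRD  |= (1 << PD5) | (1 << PD6);
    PORTD |= (1 << PD6);                  /* VSYNC idle high */
    ICR1   = TICKS_PER_LINE - 1;          /* line period (mode 14 TOP) */
    OCR1A  = SYNC_TICKS;                  /* HSYNC pulse width */
    TCCR1A = (1 << COM1A1) | (1 << COM1A0) | (1 << WGM11); /* inverted PWM */
    TCCR1B = (1 << WGM13) | (1 << WGM12) | (1 << CS10);    /* mode 14, clk/1 */
    TIMSK1 = (1 << TOIE1);
    sei();
    for (;;) ;                            /* all work happens in the ISR */
}

Whether the ISR leaves enough spare cycles to clock out pixels is exactly the kind of limit calculation the brief asks you to do before committing to the project.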
RS232 communications can be difficult to debug, partly because of confusion over whether the Transmit and Receive signals need to be crossed or straight, and also because unless the baud rate is right, a terminal will probably display nothing.
Create a device to sample a bidirectional serial signal, deduce the baud rates from the pulse widths observed within the signal, decode the traffic and display it on a 2 line LCD, for example top line transmit, bottom line receive. There is a PCB available to do the voltage conversion part of the RS232 interfacing, and make swapping between straight and crossed connections easy. Microcontrollers with 2 serial ports are available, which will make the task easier, for example the ATMEGA644P.
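One workable approach to deducing the baud rate is to time the gaps between signal edges with Timer1's input capture unit: over enough traffic, the shortest gap is one bit time. A rough sketch, assuming a 20 MHz ATMEGA644/168-style part with the level-converted signal on ICP1:

/* Deduce the baud rate from the shortest observed pulse width on ICP1. */
#include <avr/io.h>
#include <avr/interrupt.h>

static volatile uint16_t min_width = 0xFFFF;   /* in timer ticks (clk/8) */

ISR(TIMER1_CAPT_vect)
{
    static uint16_t last;
    uint16_t now = ICR1;
    uint16_t width = now - last;               /* 16-bit wrap-around is fine */
    last = now;
    if (width > 10 && width < min_width)       /* ignore glitches */
        min_width = width;
    TCCR1B ^= (1 << ICES1);                    /* capture the opposite edge next */
}

static uint32_t nearest_standard_baud(void)
{
    static const uint32_t rates[] =
        { 1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200 };
    uint32_t measured = 20000000UL / 8 / min_width;   /* ticks -> baud */
    uint32_t best = rates[0];
    for (uint8_t i = 0; i < 8; i++) {
        uint32_t d = measured > rates[i] ? measured - rates[i] : rates[i] - measured;
        uint32_t e = measured > best ? measured - best : best - measured;
        if (d < e) best = rates[i];
    }
    return best;
}

int main(void)
{
    TCCR1B = (1 << ICES1) | (1 << CS11);   /* capture rising edges, clk/8 */
    TIMSK1 = (1 << ICIE1);
    sei();
    for (;;) ; /* after some traffic, call nearest_standard_baud() and set UBRR */
}

At clk/8, a 1200-baud bit is about 2,083 ticks and a 115,200-baud bit about 22, so the whole useful range fits comfortably in the 16-bit capture register.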
We have a sensor which consists of three different colour detectors (R, G, B) on a small chip, and which outputs a frequency dependent on the light intensity for the selected colour. Because the output is a frequency, it works over a huge range of input intensities, but it cannot be used to detect rapid changes in colour; it is better suited to making colour measurements.
Can you think of an interesting way of making use of this sensor, for example by comparing the rendering of a colour chart such as http://www.w3schools.com/Html/html_colors.asp across a range of LCD displays?
We use motorized lenses in some of our research projects. These are very good quality, and can often be found very cheaply on the secondhand market because they require a control board. We have made a lens controller PCB for controlling these lenses, with all the hardware in place to control the lens zoom, focus and iris. The PCB uses the ATMEGA168 as its controller.
Write control software for the microcontroller to receive commands via serial to control the lens functions. The lens outputs analogue voltages representing the current zoom and focus positions, and has digital inputs to control the zoom and focus motors.
You have already used the MCP9700 temperature sensor in workbook 2. Using this device it is possible to make a very repeatable and reasonably accurate temperature sensor. Between 4 and 6 sensors need to be deployed in a PC to find where the heat flows are within the case, and which parts get hot in use. The ATTINY45 microcontroller has all the functionality you will need for the temperature sensor part, and as it is in an 8 pin package, it would need less soldering.
Combine several microcontrollers to find the hot spot in a PC. One might be a master, and communicate with several others over a shared serial line, or they might each have a simple SPI type interface back to the master.
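The per-node sensing side is straightforward. A sketch of the reading and conversion, assuming an ATTINY45 with the MCP9700 output on ADC2 (PB4) and the internal 1.1 V reference selected; since the MCP9700 outputs 500 mV at 0 degrees C plus 10 mV per degree, a 1.1 V full scale covers roughly 0-60 degrees C, which is plenty for a PC case:

/* Read an MCP9700 on ADC2 of an ATTINY45; returns tenths of a degree C. */
#include <avr/io.h>

static int16_t read_temp_tenths(void)
{
    ADMUX  = (1 << REFS1) | 0x02;              /* 1.1 V reference, channel ADC2 */
    ADCSRA = (1 << ADEN) | (1 << ADSC) | 0x07; /* enable, start, clock/128 */
    while (ADCSRA & (1 << ADSC)) ;             /* wait for the conversion */
    uint32_t mv = (uint32_t)ADC * 1100 / 1024; /* ADC counts -> millivolts */
    return (int16_t)mv - 500;                  /* 10 mV/C, so mV - 500 = tenths */
}

Each node would then report this value to the master over the shared serial line or the SPI-style link.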
We have sensors which are very sensitive to movement. Make a data logger to detect human movement and log it to decide whether the wearer has an active lifestyle. See: http://www.cl.cam.ac.uk/teaching/1011/P31/docs/MS24.pdf
Consider whether it is more efficient to log start/stop times, or an active/inactive flag every second. Use sleep mode to extend the battery life. How long can the device work from a miniature coin cell, or even a large capacitor? Consider how the data might be stored and later offloaded, and how to tell the wearer whether they are fit or a couch potato. The project PCB is probably ideal for this logger. A power-saving skeleton is sketched below.
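A minimal power-down skeleton, assuming an ATMEGA168-class part (watchdog register names differ on older devices) woken roughly once a second by the watchdog timer in interrupt mode; the sensor pin and log format are placeholders:

/* Sleep in power-down; wake on the watchdog (~1 s) to sample the sensor. */
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

ISR(WDT_vect) { }                        /* wake-up only; work in main loop */

int main(void)
{
    cli();
    WDTCSR = (1 << WDCE) | (1 << WDE);   /* timed sequence to unlock the WDT */
    WDTCSR = (1 << WDIE) | (1 << WDP2) | (1 << WDP1);  /* interrupt mode, ~1 s */
    set_sleep_mode(SLEEP_MODE_PWR_DOWN);
    sei();
    for (;;) {
        sleep_mode();                    /* draws a few uA until the watchdog fires */
        uint8_t active = PINB & (1 << PB0);   /* placeholder movement-sensor pin */
        /* Log only start/stop transitions, stamped with a wake-up tick count. */
        (void)active;
    }
}

Logging transitions rather than one bit per second reduces the storage needed to a few bytes per activity period, which is the sort of feasibility calculation worth recording in the lab book.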
Use 2 microcontrollers and a radio transmit/receiver chip such as the ER900TRS to make a radio link, which would work as a serial extender. Using a real-time clock or the timers in the microcontroller, or more probably both, synchronise the transmit and receive so that a batch of data is sent every minute, and sleep the rest of the time. See: http://www.cl.cam.ac.uk/teaching/1011/P31/docs/ER900TS.pdf
We have a chess playing robot which uses two stepper motors and two arms to move the pieces using a magnet under the board. The current movement is a bit jagged, and could be improved by interleaving the pulses to the stepper motors. To make things more complex, the two arms have different step sizes. Ideally the movement should be as close to cartesian as possible. As an example of the movement, think of how you would draw a straight line if your wrist was in a plaster cast and you could only move your elbow and shoulder joints. The mechanical parts and stepper motor drive electronics are complete.
Program a microcontroller to execute smooth* linear movement for the arm and magnet assembly, by issuing appropriate stepper motor control signals.
This is a challenging project, and will need more time than just the remaining 4 sessions. You might want to look at the mechanism, and talk to Brian Jones about stepper motors before deciding to undertake this project.
*OK, maybe not that smooth when operating at the limit of reach.
Make a recorder for sound which adapts its sample rate to match the frequency of the sound being sampled. For example the flow of water in a pipe will produce a characteristic sound, and if it is assumed that the rate will not suddenly change, the recorder can adapt the sample rate to minimise the memory required to store the samples.
Make a recorder for sound which ignores quiet periods, but timestamps and logs louder sounds, including something like 1 second prior to the sound exceeding the threshold, and a few seconds after.
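The "1 second prior" requirement implies a circular pre-trigger buffer: always keep the most recent second of samples, and copy it out oldest-first when the threshold trips. A sketch assuming 8-bit samples at 1 kHz, chosen so the 1,000-byte buffer fits in the SRAM of a mid-range ATMEGA; a higher sample rate would need external memory:

/* Ring buffer holding the most recent second of 8-bit samples at 1 kHz. */
#include <stdint.h>

#define RATE 1000u                  /* samples per second (assumed) */
static uint8_t  ring[RATE];
static uint16_t head;

static void store_sample(uint8_t s)
{
    ring[head] = s;                 /* overwrite the oldest sample */
    head = (head + 1) % RATE;
}

/* On trigger: copy the buffer oldest-first into the log, then continue
   logging live samples for the few-second post-trigger window. */
static void dump_pretrigger(void (*log_byte)(uint8_t))
{
    for (uint16_t i = 0; i < RATE; i++)
        log_byte(ring[(head + i) % RATE]);
}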
Some of the roadside cabinets for telecommunications are fitted with shock sensors to detect attempts at forcible entry. To avoid the alarm going off during authorized entry, the engineer might be required to enter a characteristic sequence of taps to the cabinet to disable the alarm. See: http://www.cl.cam.ac.uk/teaching/1011/P31/docs/MS24.pdf for details of a suitable shock sensor.
Create the electronics for a home or hotel digital safe. The lock mechanism would be driven by a servo, and you would need to implement a 4 or 6 digit key code to gain entry. These devices are battery powered, so it is important to sleep for nearly all the time, and to energise the servo for only a very short period. With a home safe, you enter the code, the door unlocks, then stays unlocked until you close the door, as detected by a microswitch. For a hotel safe this almost guarantees that guests will leave the safe empty but locked with an unknown code when they leave, so the mechanism will need to be programmed differently.
Using a clamp meter which produces a voltage in proportion to the current in a wire, make a logger which calculates and logs power usage over time, with the ability to output logged data via serial, and the ability to show cumulative power used on an LCD.
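The core arithmetic is simple; the sketch below assumes the clamp meter delivers 1 V per 10 A of RMS-proportional current on a 230 V supply, sampled once per second. Both scale factors are assumptions to check against your meter and mains.

/* Accumulate energy from once-per-second clamp-meter samples. */
#include <stdint.h>

#define AMPS_PER_VOLT 10.0f    /* assumed clamp-meter scale factor */
#define MAINS_VOLTS   230.0f   /* assumed supply voltage */

static float energy_wh;        /* cumulative energy in watt-hours */

static void log_sample(float clamp_volts)
{
    float watts = clamp_volts * AMPS_PER_VOLT * MAINS_VOLTS;
    energy_wh += watts / 3600.0f;   /* one 1 s sample = 1/3600 of an hour */
    /* Append (timestamp, watts) to the serial log and update the LCD here. */
}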
Using the HMC**** chip which has an I2C interface, create a digital compass. You will need to devise a suitable display for the user to show them their heading. Then, make a logger which records the heading every few seconds, and which can calculate a very rough bearing back to the starting point, and display it to the user.
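With the sensor held level, the heading follows directly from the two horizontal field components, and the rough bearing home is the same arithmetic applied to a dead-reckoned displacement. A sketch (no tilt compensation; the raw X/Y values are assumed to have been read over I2C already):

/* Heading and home-bearing arithmetic for the compass logger. */
#include <math.h>
#include <stdint.h>

static uint16_t heading_degrees(int16_t mx, int16_t my)
{
    float h = atan2f((float)my, (float)mx) * 180.0f / (float)M_PI;
    if (h < 0.0f) h += 360.0f;          /* map -180..180 onto 0..360 */
    return (uint16_t)(h + 0.5f) % 360;
}

/* Very rough bearing from the current position back to the start,
   given an accumulated displacement (dx, dy) in arbitrary units. */
static uint16_t bearing_home(float dx, float dy)
{
    float b = atan2f(-dy, -dx) * 180.0f / (float)M_PI;
    if (b < 0.0f) b += 360.0f;
    return (uint16_t)(b + 0.5f) % 360;
}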
|
What is an endometrial biopsy?
Your healthcare provider can do an endometrial biopsy to take a small tissue sample from the lining of the uterus (endometrium) for study. The endometrial tissue is viewed under a microscope to look for abnormal cells. Your healthcare provider can also check the effects of hormones on the endometrium.
Why might I need an endometrial biopsy?
Your healthcare provider may suggest an endometrial biopsy if you have:
- Abnormal menstrual bleeding
- Bleeding after menopause
- Absence of uterine bleeding
Biopsy results may show cell changes linked to hormone levels, or abnormal tissues, such as fibroids or polyps. These can lead to abnormal bleeding. Your provider can also use endometrial biopsy to check for uterine infections, such as endometritis.
Your provider may also use an endometrial biopsy to check the effects of hormone therapy or to find abnormal cells or cancer. Endometrial cancer is the most common cancer of the female reproductive organs. Endometrial biopsy is no longer advised as a routine part of testing and treatment of infertility (not able to get pregnant).
Your healthcare provider may have other reasons to do an endometrial biopsy.
What are the risks of an endometrial biopsy?
Some possible complications may include:
- Pelvic infection
- Puncture of the uterine wall with the biopsy device, which is rare
If you are allergic or sensitive to medicines, iodine, or latex, tell your healthcare provider.
If you are pregnant or think you could be, tell your healthcare provider. Endometrial biopsy during pregnancy may lead to miscarriage.
There may be other risks based on your condition. Be sure to talk about any concerns with your healthcare provider before the procedure.
Certain things may interfere with an endometrial biopsy, including:
- Vaginal or cervical infections
- Pelvic inflammatory disease
- Cervical cancer
How do I get ready for an endometrial biopsy?
- Your healthcare provider will explain the procedure and you can ask questions.
- You will be asked to sign a consent form that gives your permission to do the procedure. Read the form carefully and ask questions if something is not clear.
- Generally, you won’t need to do any preparation before the procedure. However, your healthcare provider may advise you to take a pain reliever 30 minutes before the procedure.
- If you are pregnant or think you could be, tell your healthcare provider.
- Tell your healthcare provider if you are sensitive to or are allergic to any medicines, iodine, latex, tape, or anesthesia.
- Tell your healthcare provider of all medicines (prescription and over-the-counter) and herbal supplements that you are taking.
- Tell your healthcare provider if you have a history of bleeding disorders or if you are taking any blood-thinning medicines (anticoagulants), aspirin, or other medicines that affect blood clotting. You may be told to stop these medicines before the procedure.
- Your healthcare provider may ask you to keep a record of your menstrual cycles. You may need to schedule the procedure for a specific time of your cycle.
- If your provider gives you a sedative before the procedure, you will need someone to drive you home afterwards.
- You may want to bring a sanitary napkin to wear home after the procedure.
- Based on your condition, your healthcare provider may call for other preparation.
What happens during an endometrial biopsy?
An endometrial biopsy may be done in a healthcare provider's office, on an outpatient basis, or as part of your stay in a hospital. Procedures may vary based on your condition and your healthcare provider’s practices.
Generally, an endometrial biopsy follows this process:
- You will be asked to undress fully or from the waist down and put on a hospital gown.
- You will be told to empty your bladder before the procedure.
- You will lie on an exam table, with your feet and legs supported as for a pelvic exam.
- Your healthcare provider will insert an instrument called a speculum into your vagina to spread the walls of the vagina apart to view the cervix.
- Your provider will clean your cervix with an antiseptic solution.
- Your provider may numb the area using a small needle to inject medicine, or he or she may apply a numbing spray to your cervix.
- A type of forceps may be used to hold the cervix steady for the biopsy. You may feel some cramping when it is applied.
- Your provider may insert a thin, rod-like instrument, called a uterine sound, through the cervical opening to find the length of the uterus and location for biopsy. This may cause some cramping. The sound will then be removed.
- Your provider will insert a thin tube, called a catheter, through the cervical opening into the uterus. The catheter has a smaller tube inside it. The healthcare provider will withdraw the inner tube creating suction at the end of the catheter. The healthcare provider will then gently rotate and move the tip of the catheter in and out to collect small pieces of endometrial tissue. This may cause some cramping.
- The amount and location of tissue removed depends on the reason for the endometrial biopsy.
- Your provider will remove the catheter and speculum. He or she will place the tissue sample in a preservative and send it to a lab for study.
What happens after an endometrial biopsy?
After the procedure, you may rest for a few minutes before going home. If you had any type of sedative, you will need someone to drive you home.
You may want to wear a sanitary pad for bleeding. It is normal to have some mild cramping and spotting or vaginal bleeding for a few days after the procedure. Take a pain reliever as advised by your healthcare provider. Aspirin or certain other pain medicines may increase the chance of bleeding. Be sure to take only recommended medicines.
Don’t douche, use tampons, or have sex for 2 to 3 days after an endometrial biopsy, or for a time recommended by your healthcare provider.
You may also have other limits on your activity, including no strenuous activity or heavy lifting.
You may go back to your normal diet unless your healthcare provider tells you otherwise.
Your healthcare provider will tell you when to return for further treatment or care.
Tell your healthcare provider if you have any of the following:
- Excessive bleeding, or bleeding longer than 2 days after the procedure
- Foul-smelling drainage from your vagina
- Fever or chills
- Severe lower belly pain
Your healthcare provider may give you other instructions after the procedure, based on your situation.
Before you agree to the test or the procedure make sure you know:
- The name of the test or procedure
- The reason you are having the test or procedure
- What results to expect and what they mean
- The risks and benefits of the test or procedure
- What the possible side effects or complications are
- When and where you are to have the test or procedure
- Who will do the test or procedure and what that person’s qualifications are
- What would happen if you did not have the test or procedure
- Any alternative tests or procedures to think about
- When and how you will get the results
- Who to call after the test or procedure if you have questions or problems
- How much you will have to pay for the test or procedure
|
Inositol and polycystic ovary syndrome:
how much, how and why!
The use of inositols in clinical practice is closely tied to PCOS, or polycystic ovary syndrome, an endocrine-metabolic disorder widespread among women of childbearing age with serious consequences for women’s health. Suffice it to say that PCOS is the leading cause of anovulatory infertility. Despite being widespread, PCOS remains a challenge to treat.
Currently, among the most used therapies are the birth control pill and, although there is no specific indication by the Ministry of Health and AIFA, metformin.
However, the latter can produce significant side effects such as nausea, vomiting and gastrointestinal disturbances.
The poor compliance observed with metformin has prompted clinicians to seek new approaches for PCOS. The study of inositols in the context of PCOS stems precisely from the desire to find an alternative, effective and safe therapeutic response!
Let’s see together, starting from what is PCOS, what are the results achieved with inositol.
PCOS, what is it?
Polycystic ovary syndrome (PCOS) is a complex disorder with important effects on a woman’s fertility, her psychological health and metabolism.
It is a fairly common syndrome among women of childbearing age. According to the latest data reported by the ESHRE (European Society of Human Reproduction and Embryology), it affects between 8% and 13% of women, while up to 70% of cases remain undiagnosed.
Signs and symptoms of PCOS
PCOS is a heterogeneous syndrome. Women who suffer from it have different characteristics:
- psychological disorders (anxiety disorders, depression);
- infertility (irregular menstrual cycles);
- signs of hyperandrogenism (alopecia, hirsutism, visceral fat);
- metabolic disorders (insulin resistance, metabolic syndrome, prediabetes, type 2 diabetes and cardiovascular risk).
Diagnosis and treatment
The diagnosis and treatment of PCOS still represent a challenge today, precisely because of the complexity and heterogeneity with which it occurs.
This diversity has led academics and researchers to identify four types (or phenotypes) of women with PCOS. In this regard, the 2003 Rotterdam criteria laid the foundations and guidelines for the diagnosis of PCOS and for the distinction of the four types.
These establish that, to make a diagnosis of PCOS, a woman must have at least 2 of the following 3 criteria:
- multifollicular or polycystic ovary;
- menstrual cycle disorders;
- signs of hyperandrogenism.
Depending on the presence of these signs and symptoms, four types of PCOS can be identified:
- Phenotype A: hyperandrogenism + ovulatory dysfunction + polycystic ovary
- Phenotype B: hyperandrogenism + ovulatory dysfunction
- Phenotype C: hyperandrogenism + polycystic ovary
- Phenotype D: ovulatory dysfunction + polycystic ovary
In addition to the signs and symptoms listed above, it is worth remembering that women with PCOS can, and in most cases do, present insulin resistance and metabolic disorders, even though these were not included in the Rotterdam criteria.
In fact, about 30%-40% of normal-weight women with PCOS and about 80% of obese women with PCOS have insulin resistance.
What are inositols
Inositol is actually a group of 9 different stereoisomers, so it would be more correct to speak of “Inositols” in the plural.
Among these, myo-inositol and d-chiro inositol are the most important for the body’s physiological processes.
Myo-inositol is involved in the activation of transporters and the use of glucose, while D-chiro is mainly involved in the synthesis and storage of glycogen.
The differences between the two molecules were investigated and studied in depth in an attempt to find a non-pharmacological therapeutic response to PCOS.
Both are able to act as second messengers of insulin.
As noted above, insulin resistance is a recurring trait in women with PCOS, regardless of weight.
Inositol metabolism in women with the syndrome is impaired. More specifically, in PCOS women there is an imbalance between myo-inositol and D-chiro inositol, to the detriment of the former.
Myo-inositol is also the second messenger of FSH, the follicle-stimulating hormone. In fact, many studies have shown that myo-inositol supplementation can improve the metabolic and hormonal parameters of women with PCOS, with benefits for the menstrual cycle and oocyte quality.
The results obtained with supplementation of d-chiro-inositol alone in PCOS, at high dosages and for prolonged periods, remain controversial.
To learn more read the article: Differences between inositol, myo-inositol and D-chiro-inositol
The benefits of myo-inositol in PCOS
Several clinical studies have shown that the constant intake of myo-inositol can restore both ovarian and metabolic function.
A recent meta-analysis, which examined a series of studies involving a total of 496 women (247 with PCOS treated with myo-inositol and 249 controls), showed that treatment with myo-inositol significantly decreases the levels of insulin, HOMA index and androgens (free testosterone), while increasing those of sex hormone binding globulin (SHBG).
Previously, another Review had provided evidence that myo-inositol significantly improves the rate of ovulation and regulates the frequency of menstrual cycles.
To learn more read the article: Myo-inositol: the benefits for PCOS
Myo-inositol: when, how much and why
Studies have shown the efficacy of myo-inositol in treating PCOS at an intake of 2 g twice a day (4 g in total) for the powder formulations (sachets, tablets).
In addition to the importance of dosages, studies highlight two other key elements for the effectiveness of myo-inositol.
- taking powder formulations away from meals, so that food does not interfere with absorption;
- repeated administration throughout the day.
Kinetic studies have shown that inositol has a half-life of about 12 hours; twice-daily administration therefore gives better coverage of the insulin response.
However, research has raised some concerns about the use of D-chiro-inositol at high dosages and for prolonged periods, with a potential negative effect on the quality of blastocysts.
To learn more read the article: D-chiro inositol: what it is, use and benefits
The ovarian paradox in PCOS
For the purpose of treating PCOS, what is important to know is that in women suffering from this condition there is a deficiency of myo-inositol and an excess of D-chiro-inositol at the ovarian level.
This mismatch is due to the excessive activity of an enzyme, epimerase, which depends on insulin and regulates the conversion of myo-inositol to D-chiro-inositol.
The insulin resistance of PCOS women generates hyperinsulinemia. The body’s cells are practically “deaf” to the insulin signal and the body responds by producing more and more.
However, the ovary is never insulin resistant. In PCOS women, epimerase is overstimulated by the insulin signal, causing excessive conversion of myo-inositol to D-chiro-inositol. The ovary thus becomes rich in D-chiro-inositol and deficient in myo-inositol, with a negative effect on the FSH signal and on oocyte quality.
To learn more read the article: D-chiro-inositol: what it is, use and benefits
Myo and D-chiro 40:1
In a clinical study of 46 obese PCOS women, a combined therapy of myo- and D-chiro-inositol was administered in the 40:1 ratio (in other words, a treatment based mainly on myo-inositol with only a small proportion of D-chiro-inositol) for 6 months.
The study authors noted:
- improved insulin sensitivity;
- improved ovulation;
- decreased LH and free testosterone;
- a significant reduction in the HOMA index.
In short, the study showed an improvement in hormonal, metabolic and ovulation parameters without side effects.
To learn more about this topic go to the section: The true story of MYO and D-Chiro 40:1
Dennett CC, Simon J. The Role of Polycystic Ovary Syndrome in Reproductive and Metabolic Health: Overview and Approaches for Treatment. 2015.
Facchinetti F, et al. Short-term effects of metformin and myo-inositol in women with polycystic ovarian syndrome (PCOS): a meta-analysis of randomized clinical trials. Gynecological Endocrinology. 2019. https://doi.org/10.1080/09513590.2018.1540578.
European Society of Human Reproduction and Embryology (ESHRE). International evidence-based guideline for the assessment and management of polycystic ovary syndrome (PCOS). 2018.
Baillargeon JP, Iuorno MJ, Nestler JE. Insulin Sensitizers for Polycystic Ovary Syndrome. Clinical Obstetrics and Gynecology. 2003;46:325-340. doi:10.1097/00003081-200306000-00011.
Laganà AS. Inositol in Polycystic Ovary Syndrome: Restoring Fertility through a Pathophysiology-Based Approach. Cell Press. 2018.
Ravanos K, Monastra G, Pavlidou T, Goudakou M, Prapas N. Can high levels of D-chiro-inositol in follicular fluid exert detrimental effects on blastocyst quality? Eur Rev Med Pharmacol Sci. 2017.
Unfer V, et al. Myo-inositol effects in women with PCOS: a meta-analysis of randomized controlled trials. Endocr Connect. 2017;6(8):647-658.
Pundir et al. Inositol treatment of anovulation in women with polycystic ovary syndrome: a meta-analysis of randomised trials. BJOG. 2018;125(3):299-308.
Monastra et al. Alpha-lactalbumin effect on myo-inositol intestinal absorption: in vivo and in vitro. Curr Drug Deliv. 2018;15(9):1305-1311.
Unfer V, Carlomagno G, Papaleo E, Vailati S, Candiani M, Baillargeon JP. Hyperinsulinemia Alters Myoinositol to d-chiroinositol Ratio in the Follicular Fluid of Patients With PCOS. Reprod Sci. 2014;21(7):854-858.
Benelli E, Del Ghianda S, Di Cosmo C, Tonacchera M. A Combined Therapy with Myo-Inositol and D-Chiro-Inositol Improves Endocrine Parameters and Insulin Resistance in PCOS Young Overweight Women. Int J Endocrinol. 2016;2016:3204083.
Other topics that may interest you
|
What is a CASA?
In 1976, Superior Court Judge David Soukup of Seattle, WA., saw a recurring problem in his courtroom:
"In criminal and civil cases, even though there were always many different points of view, you walked out of the courthouse at the end of the day and you said, I've done my best; I can live with this decision," he explains.
"But when you're involved with a child and you're trying to decide what to do to facilitate that child's growth into a mature and happy adult, you don't feel like you have sufficient information to allow you to make the right decision. You can't walk away and leave them at the courthouse at 4 o'clock. You wonder: do I really know everything I should? Have I really been told all of the different things? Is this really right?"
To ensure he was getting all the facts and the long-term welfare of each child was being represented, the Seattle judge came up with an idea that would change America's judicial procedure and the lives of thousands of children: he obtained funding to recruit and train community volunteers to step into courtrooms on behalf of the children: the Court Appointed Special Advocate (CASA) volunteers.
This unique concept was implemented in Seattle as a pilot program in January 1977. During that first year, the program provided 110 trained CASA volunteers for 498 children in 376 dependency cases.
CASA Across the Country
In 1978 the National Center of State Courts selected the Seattle program as the "best national example of citizen participation in the juvenile justice system." This recognition, along with a grant from the Edna McConnell Clark Foundation of New York City (one of CASA's earliest and strongest supporters), resulted in the replication of the Seattle CASA program in courts across the country.
As CASA projects developed, each new local program director made an on-site visit to the original Seattle host program for observation and training.
By 1982 it was clear that a national association was needed to direct CASA's emerging national presence. The National Court Appointed Special Advocate Association was formed that year.
CASA/GAL programs now exist in all 50 states.
CASA or GAL?
A CASA is a Court Appointed Special Advocate. A GAL is a Guardian Ad Litem. GALs are appointed in certain types of cases. Often, the terms are used interchangeably.
CASA in Clallam County
In 1983 Clallam County started a CASA program. Merle Watson, a businessman from Beaver, WA, took the Seattle CASA training and traveled to a national meeting at his own expense to get our program up and running. When he left the program in 1987, it had won the respect of the local agencies and the court.
Current Numbers and Needs
Today, in 2014, 24 community volunteers are assigned to 207 youngsters who are under the protection of the Court due to alleged abuse or neglect. The community volunteers are of all ages and walks of life.
183 volunteers are currently needed to serve all of the dependent youth in the County. The only requirement to become a CASA/GAL is good moral character and common sense.
Training is available for those interested in volunteer service. Independent study options exist for those with time limitations.
Please contact the CASA Office to pursue volunteer opportunities in your community!
In 2013 the CASA Program and local volunteers held various events to gather support for youth in our community. 2013 highlights include:
- Volunteer Dinner and Auction
- "Rock 'n' Roll Bike Show" with Roughnecks Motorcycle Club Victim Support Group
- Sunland Golf Tournament
- "Kicks for Kids" Shoe Drive (gathered 207 pairs of shoes--one for each child in foster care in the County)
- Winter Coat Drive by Knights of Columbus
- 16 New Volunteers Trained
(Third party site not maintained by Clallam County)
|
From presidents to retirees, more than 17 million people over the age of 50 golf regularly. Knee osteoarthritis, which causes swelling, pain and difficulty moving the joint, is one of the leading causes of disability in this age group.
It may seem intuitive that golfers with knee osteoarthritis should stay off their feet and ride in a golf cart. But new research from the Shirley Ryan AbilityLab and Northwestern Medicine has found, for the first time, that walking the course provides significantly higher health benefits and is not associated with increased pain, cartilage breakdown or inflammation.
This study is the first comparing the health benefits of walking the golf course versus using a cart, as well as the first to use a blood-based biomarker analysis in knee osteoarthritis during a prolonged sporting event. The findings will be presented April 28 at the Osteoarthritis Research Society International Annual Meeting in Liverpool, England.
The health benefits of golf have decreased as the number of people who ride the course has increased over the past 20 years. In the late 1980s, 45 percent of all rounds of golf were played with a golf cart. By 2006, 69 percent of rounds were played with a cart. During this same time period, activity has decreased among Americans, while obesity has increased.
“Individuals with knee osteoarthritis are often concerned about pain and may be more likely to use a golf cart,” said lead study author Dr. Prakash Jayabalan, a physician scientist at the Shirley Ryan AbilityLab and an assistant professor of physical medicine and rehabilitation at Northwestern University Feinberg School of Medicine.
“However, through sophisticated blood-based biomarker analysis, this study has shown that golfers with knee osteoarthritis do not need to be concerned about worsening their disease through walking the course. In fact, walking provides the best health benefit,” Jayabalan said.
The study, completed in partnership with the Glenview Park District Golf Course in Glenview, Illinois, involved 15 participants — 10 who had knee osteoarthritis and five who were of similar age but did not have the disease. Participants played 18 holes (one round of golf) walking the course and, on a separate day, the same individuals played a round riding a golf cart. The research team compared their heart rates to determine the intensity of exercise performed and took blood samples during each round to measure markers of cartilage stress and inflammation.
The researchers found that, prior to starting either round, the golfers with knee osteoarthritis had an average pain score of 1.3 (on a scale of 0-10). When they played the round walking the course, they had an average 2.1-point increase in pain score. When they played the round using the golf cart, they experienced on average a 1.5-point increase, a difference that is not clinically significant.
The research team also measured blood-based biomarkers of cartilage stress and inflammation. Although both methods of transportation caused an increase in these markers (as would be expected with regular walking), there was no difference between the rounds.
When walking the course, golfers with knee osteoarthritis spent more than 60 percent of the round with heart rates in the moderate-intensity zone. When riding in a cart, golfers spent 30 percent of the round in this range. While this figure is lower, it still fulfills daily exercise recommendations.
Although walking the course offers the most significant health benefits, the study found that riding the course with a golf cart during a round — and the requisite moderate walking that comes with it — still offers cardiovascular benefits and helps fulfill daily exercise guidelines.
“Bottom line: walking the course is significantly better than using a golf cart, but using a golf cart is still better than not exercising at all,” said Jayabalan.
|
Beginning iOS 4 Application Development
Publication date: September 2010
Digital Book format: ePub (Adobe DRM)
The ideal starting point for creating apps using iOS 4
Written by an experienced Apple developer and trainer, this full-color reference serves as an ideal jumping-off point for creating applications for Apple’s iOS 4, which runs on the iPhone, iPod touch, and iPad. In addition to in-depth coverage of the iOS SDK, the book walks you through the core aspects of iPhone and iPad development. You’ll learn how to take advantage of the tools provided by Xcode, and you’ll benefit from a solid introduction to Objective-C that allows a smooth transition to iPhone development from another platform.
- Offers a solid foundation for creating applications for Apple’s iOS 4
- Covers all the new features of iOS 4 and provides a new applications template for developing iPad and iPhone apps
- Addresses the new PopoverView for iPad apps
- Shows how to develop background applications, which is one of the new features in iOS 4
Beginning iOS 4 Application Development is your ultimate resource for creating applications for Apple's iOS 4.
|
Plain Dealer article written by Bob Rich and published on September 17, 1995
WHEN CLEVELAND ALMOST WENT A BRIDGE TOO FAR
Author: Bob Rich
Like two Balkan nations, Cleveland and Ohio City existed in a state of uneasy truce in 1837; but there was big trouble brewing, and it was coming to a head over a bridge.
In 1822, when the Cuyahoga River could only be crossed by boat, the towns jointly built a float bridge from the foot of Detroit Ave. to the foot of Superior St. That was the end of their cooperation, however.
A few years later, the Ohio Canal opened and created a boom for both communities. The river banks were lined with forwarding and commission houses, ship chandlers, merchants and artisans. Hundreds of wagons of produce from the south and west would run along Pearl Rd. and pass through Ohio City before crossing over the jointly owned float bridge at the foot of Detroit to ship their goods out of the port of Cleveland.
West Side merchants and saloons prospered as much as their East Side counterparts when more than 1,900 sailing vessels and steamboats would weigh in at Cleveland Harbor in a year’s time.
Cleveland grew to a population of 6,000 by 1836, with little Ohio City at 2,000, but when both communities raced to become the first city incorporated in Cuyahoga County, the West Side won the title by a few days. All the old bitterness emerged.
There were other needles under East Siders’ skins: West Side developers were planning an 80-acre development in the Flats and were talking of digging another channel from the river so they could have their own harbor. They built a fine five-story hotel, the Ohio City Exchange, which came to dominate the whole area socially. The hotel’s dome lights were kept lighted all night, serving as a landmark and a guide for ships coming into Cleveland Harbor.
Some East Siders, with an appalling lack of civic loyalty, were scheduling banquets and balls in the great new edifice. New arrivals in the Western Reserve were bypassing the East Side and buying desirable West Side lots just like in the old pioneer days.
Then two buccaneering real estate speculators brought things to an explosive head. James Clark and his partner, Cleveland’s first city mayor, John Willey, bought up land ringing Ohio City to the south and west, built improvements on it, and extended Columbus St. from the West Side to the Cuyahoga River south of the Detroit Ave. float bridge.
There, for $15,000, they built a roofed, enclosed drawbridge. The city directory proclaimed, “This splendid bridge was presented to the corporation of Cleveland by the owners with the express stipulation that it should remain forever free for the accommodation of the public …”
Traffic from the south could now be led up to Ontario and Prospect streets, where the partners had built commercial properties called Cleveland Centre. This may have had something to do with their high-minded community spirit.
To encourage the traffic bypass even more, Cleveland City Council (remember, Willey was the mayor) directed the removal of the Cleveland half of the Detroit Ave. float bridge.
“This act was performed one night while the Ohio citizens lay dreaming of future municipal greatness,” historian James Kennedy wrote 100 years ago. “And when the morning mists arose from over the valley of the Cuyahoga, they saw their direct communication gone, and realized that to reach the courthouse and other points of interest in Cleveland, they would be compelled to travel southward, and make use of the hated Columbus St. bridge.”
At dawn the first morning the bridge section was gone, horse-drawn wagons from the West Side had to be desperately reined in before they plunged into the river.
Now the dogs of war were let loose. “Two bridges or none!” became the West Side war cry. The Ohio City marshal and his deputies tried to dynamite their end of the Columbus St. bridge; when that fizzled, 1,000 West Siders descended on it with picks, axes, clubs and muskets, and were busily ripping up planks when the Cleveland militia arrived to join the melee.
Shots were fired, heavy blows exchanged. Fortunately, the Cuyahoga County sheriff called a halt to the battle before anyone was killed.
The courts eventually settled the matter in favor of two bridges, and both towns have mixed freely ever since.
|
I have a 5th and 6th grader this year, who wanted to have a 100th Day of School Celebration. There are so many ideas out there--if you are a 1st grade teacher!!! I had to really dig to come up with some age appropriate ideas for my two upper elementary/middle schoolers. We had some friends over just to make it more fun!
At the beginning of the school year, we made a 100 poster and attempted to record one thing a day. This kinda fizzled out, but was a good idea anyway.
We started by seeing who could hold their breath for 100 seconds...well, or who got the closest.
Everyone had to close their eyes and raise their hands when they thought 100 seconds had passed. Daniel was spot on and Tera was one second later.
I found this download here on pinterest:
What can I do in 100 seconds?
How many times can you....clap your hands...say the alphabet....count to ten...hop....stand/sit...blink....write your name. Each student records their estimate before they stop, then the actual count afterwards.
|
Craft the Perfect Holiday MessageDecember 3, 2019
Martin Luther King Jr. Education PosterDecember 31, 2019
“Setting goals is the first step in turning the invisible into the visible.”
- Tony Robbins, Motivational Speaker
It’s that time where we reflect on the past twelve months and begin to look forward to the new year. If you already know where you’d like to improve, we hope that you can set and achieve your 2019 goals.
If you need a little inspiration, check out these five resolutions you can implement to improve your school.
School Administrator’s Resolutions for the New Year
1| Creating a Positive School Climate
There is strong evidence of a link between school climate and student achievement. Creating a safe and supportive learning environment begins in the classroom.
You can create a positive school climate by focusing on the overall needs of students and faculty. Students should feel safe everywhere on campus. It is a school-wide initiative to create a positive school climate and here are some things you can do to kickstart this goal:
- Establish a culture of inclusion and respect
- Take opportunities to model kindness in and outside the classroom
- Practice safety drills with students, faculty, and staff
- Promote the education of diversity and different cultures
- Encourage students to seek out help for emotional support from a trusted adult
Take a look at Edutopia's 20 Tips for Creating a Safe Learning Environment.
2| Supporting Teachers
Teachers are your number one asset in the classroom. Support teachers by providing the right resources and tools so they feel prepared throughout the year. Resources can be as simple as links to lesson plan ideas or valuable tips for classroom management. There isn’t a classroom that is not in need of tools or materials. Listen to your teachers and take stock of what they need to successfully do their job.
4| Promoting Innovation
Education is continuing to evolve, especially with the introduction of technology into the classroom. Make innovation a goal by promoting the use of makerspaces in classrooms. If you are tight on budget or resources, you can create a central makerspace in the school library. However you choose to bring makerspaces to your school, there are numerous benefits to implementing a maker culture at your school.
5| Listening with a Mindful Ear
Regardless of whether you know exactly what your school needs, as an administrator you should always be listening to the school community.
Respectfully listen to faculty, students, parents, and volunteers. Listening shows that you care about what the other person is saying. It also gives you, as the listener, an opportunity to understand where there are weaknesses and places for improvement.
Listening to teachers should be part of every administrator's New Year’s resolutions. One reason teacher turnover is high is that teachers do not feel their voices are valued. Show teachers you value their opinions by respectfully listening to their ideas and concerns.
Setting Goals for the Year
Setting goals lays the stepping stones to student achievement and faculty success. What resolutions are you making this year?
Sign up for the newsletter!
When you subscribe, you’ll be the first to see our:
School Safety Tips
Volunteer Management Resources
Inspirational Blog Posts
and much more!
|
What are Polar and Cartesian Coordinates?
Until this point, we've strictly been using Cartesian Coordinates where X, Y, and Z represent distances from part zero (absolute coordinates) or from the current position (relative coordinates). Most g-code programming is done using Cartesian coordinates, but for some problems a system called Polar Coordinates can make the problem much simpler to tackle. With polar coordinates, we use an angle and a distance relative to the origin. Depending on the control, we may have both absolute part zero and current position origins to choose from. This diagram shows us a comparison of the two coordinate systems:
Cartesian vs Polar Coordinates
When using Polar Coordinates, the angle is expressed in degrees counter-clockwise from the 3 o'clock position as shown in the diagram.
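The conversion between the two systems is just the familiar sine/cosine relationship: X = R * cos(A) and Y = R * sin(A). Here is that relationship as a small, control-independent C sketch (the function and example values are ours, not any control's built-in behavior):

/* Convert a polar point (radius, angle in degrees counter-clockwise
   from 3 o'clock) to Cartesian coordinates. */
#include <math.h>
#include <stdio.h>

static void polar_to_cartesian(double radius, double angle_deg,
                               double *x, double *y)
{
    double a = angle_deg * M_PI / 180.0;   /* degrees -> radians */
    *x = radius * cos(a);
    *y = radius * sin(a);
}

int main(void)
{
    double x, y;
    polar_to_cartesian(8.0, 60.0, &x, &y);
    printf("X%.4f Y%.4f\n", x, y);         /* prints X4.0000 Y6.9282 */
    return 0;
}

We'll meet exactly this radius-8, 60-degree point again in the bolt circle example below.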
Switching between Cartesian and Polar Coordinates
Switching between Cartesian and Polar Coordinates is very simple. G16 is used to switch to polar, and G15 is used to switch back to Cartesian coordinates.
Which Mode Does My Controller Use as the Default?
Interestingly, most controls will start up in relative/incremental mode (G91). This is done because it is thought to be safer if the mode is not what you expect. In other words, if you expected absolute, it is thought to be safer to start in incremental than if you expected incremental and started in absolute. The truth is, not being in the mode you expect is unsafe any way you look at it, because the machine will do something unexpected. Therefore, make sure one of the first things your program does is set either G90 or G91 so it does what you expect!
When Should We Use Polar Coordinates?
While we don't use them very often, polar coordinates can really simplify some problems. Suppose you want to make a bolt circle, a very common operation. You could pull out your calculator and use trigonometry to figure the coordinates of each bolt on the circle. Or, you could use G-Wizard's Bolt Circle Calculator to do the same thing for you. But if your control offers polar coordinates, you have a really easy way to program your bolt circle.
Consider the following example which creates a bolt circle of radius 8 having 6 holes spaced equally around the circle:
O2000 (G15-G16 Polar Coordinate Example)
( Safe Starting Conditions )
G0 G40 G49 G50 G80 G94 G90
N3 G00 X0 Y0 S900 M03 (center point)
N4 G43 Z1.0 H01 M08
N5 G16 (polar coordinates on)
N6 G99 G81 X8 Y0 R0.1 Z-0.163 F3.0
N7 X8 Y60.0
N8 X8 Y120.0
N9 X8 Y180.0
N10 X8 Y240.0
N11 X8 Y300.0
N12 G15 (polar coordinates off)
N13 G80 M09
N14 G91 G28 Z0 M05
N15 G28 X0 Y0
After establishing some safe starting conditions, the program uses a G00 move to the center point of the bolt circle. For simplicity, we've made that point be 0, 0.
On block N5, we turn on polar coordinates, and on the next line we start our G81 canned cycle. Note the coordinates given: X8 and Y0. We're in polar coordinate mode, so X is the distance from the origin (8 inches) and Y is the angle (0 degrees). That origin is at X0 Y0; it was established because, as part of the safe starting conditions, we set absolute coordinates using G90, so our origin is at 0, 0.
Now each successive hole is easy--we just keep giving the X8 radius and step around the circle by giving the degrees values using Y. We use 60, 120, 180, 240, and 300 degrees.
Here is our g-code program as simulated by G-Wizard Editor:
It's hard to imagine a simpler way to visualize or program a bolt circle and you didn't have to do a lick of trigonometry!
This is the sort of thing polar coordinates are good for.
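If your control doesn't offer G16, here is the trigonometry the polar approach spares you, as a short C sketch that prints the Cartesian X/Y for each hole of the same 6-hole, radius-8 bolt circle; the output values could be pasted straight into G81 blocks:

/* Print Cartesian hole positions for an N-hole bolt circle. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double radius = 8.0, cx = 0.0, cy = 0.0;   /* center at X0 Y0 */
    const int holes = 6;

    for (int i = 0; i < holes; i++) {
        double a = (360.0 / holes) * i * M_PI / 180.0;
        printf("X%.4f Y%.4f\n", cx + radius * cos(a),
                                cy + radius * sin(a));
    }
    return 0;
}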
Configuring G-Wizard Editor for Polar Coordinates
GWE uses the following post options to configure Polar Coordinates:
1. Write a g-code program that uses polar coordinates in some way.
2. Dig out the programming book for your controller and read the chapter on polar coordinates so you can see how they work.
3. Configure G-Wizard Editor properly for your controller's use of polar coordinates.
Try the Free Trial Version of G-Wizard G-Code Editor...
No credit card required--just your name and email.
Next Article: Canned Drilling Cycles
|
Depression is marked not by momentary feelings of sadness but by persistent spells of despair and hopelessness. Sufferers usually experience a lack of sleep and appetite, difficulty concentrating, loss of interest and libido and, in extreme cases, suicidal thoughts. If the condition is left untreated, it can put your health at risk of more serious illnesses that might not be reversible.
Over the years, very little research has assessed the importance of the role of support groups in treating depression. The main reason for their lack of popularity is an absence of knowledge of how powerful support groups can be in treating depression.
What is a depression support group?
A support group can be described as a small gathering of people who share a common objective, usually related to the treatment of a particular disorder or health issue.
Groups can be formed to focus on the feelings of the sufferers of various conditions such as post-natal depression, bi-polar depression, drug addiction or breast cancer. Members freely express their feelings and ideas about their conditions and are not judged by anyone.
Support groups are different from group therapy. In group therapy, several people are brought together under a healthcare professional who facilitates the interpersonal interactions between the members so they can learn about their blind spots and increase their awareness of their behaviours and relationships.
Support groups, on the other hand, can be formed by anyone, such as a family member seeking help for a loved one with depression. They can also be formed by a mental-health clinic or non-profit organisation.
Barriers to support groups
Let’s address some of the reasons why depression support groups have not been able to carve out a popular following:
- Hesitance in sharing personal feelings with a group of strangers.
- Groups comprise individuals from different ethnic backgrounds, which can lead minority members to suppress their feelings.
- Lack of support from loved ones.
- Lack of awareness of the success of support groups.
- Fear of deepening one’s own depression by listening to other people’s problems.
- Rural communities might feel reluctant due to geographical distance.
- Joining a support group is seen as a sign of weakness.
The benefits of joining a support group
1. Source of support and motivation: People suffering from depression often cut themselves off from friends and family. In such gloomy circumstances, finding helping hands going through a similar situation reassures sufferers that they are not alone in the world. Seeing how other people handle a similar situation with a positive frame of mind is motivating. Members also become role models for other members, which has a positive effect on the group as a whole.
2. Evading the clutches of isolation: Depression is a state of mind in which the sufferer tends to bottle up his feelings. What aggravates this state of low mood and energy is that the people around him are not aware of his condition and so fail to be of any assistance. The biggest hurdle in the treatment of depression is suppressing your feelings and hesitating to share them. Patients climb the first step on the ladder of depression treatment by sharing their true feelings with their support group’s members. With the passage of time the sufferer will feel much more comfortable accepting his condition and more confident expressing himself.
3. From a victim to an advocate: A sense of well-being and pride takes over the sufferer when he sees himself as a ray of hope for other patients. By listening to their worries one plays a significant role in consoling the depressed co-member of the group. This also engenders a sense of empowerment from making a difference through caring for other members of the group.
4. Feeling empowered: A major sign of positive outcome of your support group treatment is to find one’s feelings under control once again. You will have an optimistic approach towards life and negative thoughts of death and hopelessness will vanish.
How to find the right support group for yourself
It is perfectly normal to experience feelings of disinclination in sharing your personal feelings with a group of strangers. To make the support group for depression more effective it’s wise to look for the following key points in your choice of group:
- A hospitable, secure and safe ambience.
- Confidentiality of members is top priority.
- Have a strong code of ethics.
- Respect views of minority and discourage any disrespectful behaviour.
- Have a proper organisational framework.
- Meetings are held on a regular basis, free of charge.
- See patients as human beings and treat them with respect.
- Encourage active support and active participation of members.
- Have extensive links in the community.
- Invite reputable counsellors and health professionals as guest speakers.
- Stress emotional support and practical coping skills and strategies.
Joining a support group for depression can be a powerful way of helping you reduce the depressive symptoms, receive support, encouragement, advice and information that can help you begin to move out of depression.
If you or someone you know suffers from depression, Australia Counselling has counsellors who specialise in the treatment of depression and run depression support groups in locations such as Sydney, Canberra, Melbourne, Adelaide, Brisbane and regional areas of Australia. Visit our depression area of practice page to find a depression counsellor or psychologist near you.
|