id
string
concept_name
string
domain
string
content_type
string
text
string
quality_score
float64
information_density
string
complexity_level
int64
token_count
int64
prerequisites
list
builds_to
list
cross_domain_connections
list
quality_assessment
dict
1e6e90ac-d924-4002-9598-0f930a0d6aef
Rebecca Jackson Intranets2014: chance
interdisciplinary
practical_application
Rebecca Jackson Intranets2014: A chance to rock with the intranet community This year was my fourth year at Intranets2014 and I was pleased to be there in a somewhat official capacity, capturing the proceedings in tweets, blog posts and visually through sketchnotes. This year’s theme of rocking the intranet was carried from the opening keynote to the close. Ready to sketch As with previous years there was a great selection of local and international speakers and a balance between theoretical and practical presentations. But without fail, the best part is meeting and learning from other intranet managers. In our day-to-day work we are behind the firewall, so the chance to talk shop and share rockstar stories is invaluable. Enjoying Intranets2014 with Jo McBain and Josh Patel. For more on the conference check out the posts I wrote for the Step Two Designs blog, and see the Step Two Designs website for slides from this year’s conference. Intranets2015 is already scheduled for 20-22 May 2015, so get it in your diary now.
0.65
medium
3
394
[ "foundational knowledge" ]
[ "advanced concepts" ]
[ "science", "technology", "arts_and_creativity" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
6de27504-380e-49eb-be41-fa5a9ad6864d
English 正體中文 简体中文 Items
interdisciplinary
data_analysis
Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/63288 Title: 氣候變遷調適科技整合研究計畫-區域地質監測與分析 (Climate Change Adaptation Technology Integration Research Program: Regional Geological Monitoring and Analysis) Authors: 李錫堤;葉高次 Contributors: Graduate Institute of Applied Geology, National Central University (國立中央大學應用地質研究所) Keywords: atmospheric science; earth science (大氣科學;地球科學) Date: 2012-12-01 Issue Date: 2014-03-17 14:26:16 (UTC+8) Publisher: National Science Council, Executive Yuan (行政院國家科學委員會) Abstract: Research period: 10111~10210. Regional Geological Monitoring and Analysis. In the context of global warming, extreme climate and sea level rise may increase the size and frequency of geological hazards, and further accelerate the evolution of the land surface. This subproject intends to collect topographic, hydrologic, geologic, and land use data, as well as landslide and debris flow inventories, and to monitor drainage basins, hills, villages, and urban areas under the effect of extreme weather, especially landslides, debris flows, and terrain evolution. By analyzing their occurrence, causes, and characteristics, the project will construct prediction models for landslides and debris flows, and further evaluate the sediment yield of the whole drainage basin. The prediction models will be validated against actual cases before they become workable. Relation: Science and Technology Policy Research and Information Center, National Applied Research Laboratories (財團法人國家實驗研究院科技政策研究與資訊中心) Appears in Collections: [Graduate Institute of Applied Geology] Research Projects
0.8
high
6
876
[ "intermediate understanding" ]
[ "research" ]
[ "science", "technology" ]
{ "clarity": 0.7, "accuracy": 0.6, "pedagogy": 0.6, "engagement": 0.55, "depth": 0.45, "creativity": 0.45 }
f3e4cbe9-e7c3-4f70-a983-9e4f064668b7
WALLIS, TEXAS
social_studies
historical_context
WALLIS, TEXAS. Wallis is at the junction of the Southern Pacific and the Atchison, Topeka and Santa Fe railroads ten miles southeast of Sealy in extreme southeastern Austin County. Anglo-American settlement on the narrow strip of land west of the Brazos and east of the San Bernard River began in the late 1830s. The community was first known as Bovine Bend, and a post office by that name was established in 1873. After 1880, when the Gulf, Colorado and Santa Fe Railway constructed its Galveston-Brenham spur through the vicinity, the settlement became known as Wallis Station, in honor of J. E. Wallis, director of the Gulf, Colorado, and Santa Fe. The name of the post office was changed to Wallis Station in 1886 and to Wallis in 1911. The San Antonio and Aransas Pass Railway, building east from Kenedy toward Houston, reached Wallis Station in 1887, and beginning around 1890 a number of Czech immigrants took up residence in the area. In 1904 the population was an estimated 631. There were 100 pupils enrolled at the Wallis school by 1918. In 1925 the population was 800, and in 1943 the town had 900 residents and thirty-nine businesses. The population declined to an estimated 690 in 1949 but began to climb thereafter, reaching an estimated 1,075 in 1966. By 1975 the town had eight churches, two schools, a bank, a public library, and a weekly newspaper, the Wallis News Review. In 1991 Wallis had a population estimated at 1,411 and fifteen rated businesses. According to the U.S. Census, the population was 1,311 in 2000. National Alliance of Bohemian Catholics of America, History of the Czech-Moravian Catholic Communities of Texas (Waco: Texian Press, 1974; trans. by V. A. Svrcek of Nase Dejiny [Granger, Texas: Nasinec, 1939]). 
The following, adapted from the Chicago Manual of Style, 15th edition, is the preferred citation for this article. Charles Christopher Jackson, "WALLIS, TX," Handbook of Texas Online (http://www.tshaonline.org/handbook/online/articles/hjw02), accessed May 28, 2015. Uploaded on June 15, 2010. Published by the Texas State Historical Association.
0.65
medium
5
597
[ "domain basics" ]
[ "expert knowledge" ]
[ "technology" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.45, "creativity": 0.45 }
b4fcd7fc-eb01-4b98-b7fa-f5b17bffb802
time when all overwhelmed
technology
tutorial
At a time when we all are overwhelmed by tragic news. This campaign provides people with a way to take action and make a real difference in young people’s lives. Los Angeles, CA (Vocus) November 4, 2010 This week, the Gay, Lesbian and Straight Education Network (GLSEN) launches their Safe Space Campaign, designed to promote visible support for LGBT students in American middle and high schools. The Campaign aims to place a Safe Space Kit in every middle and high school in the United States – more than 100,000 schools. The kits contain Safe Space stickers and posters along with a guide for steps that individual school staff members can take to build support for vulnerable students and reduce anti-lesbian, gay, bisexual and transgender (LGBT) bullying and harassment in their school. “At a time when we all are overwhelmed by tragic news,” says country music star Chely Wright, who came out in May and is serving as a Safe Space Campaign spokesperson, “this campaign provides people with a way to take action and make a real difference in young people’s lives. GLSEN’s Safe Space Kits – and the all-important sticker – make it possible for teachers and school staff to let students know that they are not alone, and they have somewhere to turn.” Watch a PSA from Chely and other celebrities at http://www.youtube.com/SafeSpaceKit. Individuals can purchase Safe Space Kits at http://www.safespacekit.com and send them to middle or high schools of their choice. They can also view a PSA featuring Wright and GLSEN Student Ambassadors from around the country. So far, in addition to individuals’ engagement in the program, Safe Space Kits have already been successfully distributed in partnership with Los Angeles Unified School District (LAUSD) and New York City Department of Education (NYCDOE), as well as every Maine secondary school via GLSEN Downeast Maine and Southern Maine chapters. The Safe Space sticker at the heart of the Kit has long been one of GLSEN’s most popular resources. 
By placing a sticker in their classroom or office, members of a school’s faculty or staff can let students know that they have support and are in a safe space with respect to anti-LGBT bullying and harassment. Research demonstrates that LGBT students who can identify supportive adults in their school are less likely to feel unsafe at school, are more likely to plan to graduate and go on to college, and are more likely to feel connected to the school community. “Every student deserves a safe space in school,” GLSEN Executive Director Eliza Byard said. “The single most important line of defense for young people in crisis at school is a network of visibly supportive adults. I am so grateful to Chely Wright for helping us get the word out about the Safe Space Campaign and one way we can all make a difference in young people's lives.” Unfortunately, nearly nine out of 10 LGBT students experience some form of harassment in school each year because of their sexual orientation; nearly half report being physically harassed and almost a quarter report being physically assaulted. Three out of five LGBT students feel unsafe at school because of their sexual orientation. ABOUT ‘SAFE SPACE KIT’ The Safe Space Kit provides a program for action that school staff can take to create a positive learning environment for every child. Each kit contains 10 Safe Space Stickers, two posters and a 42-page Guide to Being an Ally to LGBT Students that gives concrete strategies for supporting LGBT students, including how to educate about anti-LGBT bias and teaching respect for all. To order a Safe Space Kit and for more information visit http://www.SafeSpaceKit.com. GLSEN, the Gay, Lesbian and Straight Education Network, is the leading national education organization focused on ensuring safe schools for all students. Established in 1990, GLSEN envisions a world in which every child learns to respect and accept all people, regardless of sexual orientation or gender identity/expression. 
GLSEN seeks to develop school climates where difference is valued for the positive contribution it makes to creating a more vibrant and diverse community. For information on GLSEN's research, educational resources, public policy advocacy, student organizing programs and educator training initiatives, visit http://www.GLSEN.org.
0.65
medium
6
906
[ "algorithms", "software design" ]
[ "distributed systems" ]
[ "social_studies", "arts_and_creativity" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.45, "creativity": 0.35 }
861055a6-8d3b-44d4-b4ac-3ae1679c8054
- What is number pattern
interdisciplinary
concept_introduction
- What is number pattern? - What is the rule for the pattern of numbers? - How do patterns help us in life? - What is the pattern? - What makes something a pattern? - What is a pattern for preschoolers? - How do you learn patterns? - How do you know if a pattern is good? - What comes next pattern? - What are the 7 different learning styles? - What are the types of patterns in math? - What are the 4 learning patterns? - What is an example of a pattern? - Where do we use patterns in real life? - What is called sequential learning? - What are three types of patterns? What is number pattern? A number pattern is a pattern or sequence in a series of numbers. This pattern generally establishes a common relationship between all numbers. For example: 0, 5, 10, 15, 20, 25, … To solve problems involving a number pattern, we first need to find the rule being followed in the pattern. What is the rule for the pattern of numbers? To establish a rule for a number pattern involving ordered pairs of x and y, we can find the difference between every two successive values of y. If the difference pattern is the same, then the coefficient of x in the algebraic rule (or formula) is the same as the difference pattern. How do patterns help us in life? Patterns provide a sense of order in what might otherwise appear chaotic. Researchers have found that understanding and being able to identify recurring patterns allows us to make educated guesses, assumptions, and hypotheses; it helps us develop important skills of critical thinking and logic. What is the pattern? The Pattern is a free mobile application that provides users with personalized astrological readings based on their natal chart. The app analyzes users’ “personal patterns” to help them gain insight into their personality traits, emotions, and life paths. What makes something a pattern? A pattern is a regularity in the world, in human-made design, or in abstract ideas. As such, the elements of a pattern repeat in a predictable manner.
A geometric pattern is a kind of pattern formed of geometric shapes and typically repeated like a wallpaper design. What is a pattern for preschoolers? Patterns are arrangements of things that repeat in a logical way. Those arrangements of colors, shapes, gestures, sounds, images, and numbers are a crucial concept for young kids and contribute heavily to their early math understanding. How do you learn patterns? To learn pattern printing easily: Step 1: analyse the pattern for any lines of symmetry. Step 2: associate each cell (i.e. element) with a row and column number. Step 3: try to find a relation between the value of C(i, j) and i and/or j. How do you know if a pattern is good? To recognize patterns: actively look for patterns, organize the pieces, question the data, visualize the data, and imagine new possibilities. What comes next pattern? In a recursive pattern, repetition of a rule or procedure can be used to extend the sequence or to find the values of any terms missing from the sequence. What are the 7 different learning styles? Visual (spatial), aural (auditory-musical), verbal (linguistic), physical (kinesthetic), logical (mathematical), social (interpersonal), and solitary (intrapersonal). What are the types of patterns in math? Different types of number patterns in mathematics include arithmetic sequences, geometric sequences, square numbers, cube numbers, triangular numbers, and Fibonacci numbers. What are the 4 learning patterns? The interaction of cognition, conation, and affectation forms four patterns of learning behavior: sequential, precise, technical, and confluent. What is an example of a pattern? The definition of a pattern is someone or something used as a model to make a copy, a design, or an expected action. An example of a pattern is the paper sections a seamstress uses to make a dress; a dress pattern. An example of a pattern is polka dots.
An example of a pattern is rush hour traffic; a traffic pattern. Where do we use patterns in real life? Here are some things you can point out: the brick pattern on a building or home, the pattern on the sidewalk or driveway, tree rings, the patterns on a leaf, the number of petals on flowers, neighborhood house colors, shapes and sizes, and the shadows of people, trees, and buildings. What is called sequential learning? Sequential learning is a type of learning in which one part of a task is learnt before the next. Serial organization is fundamental to human behaviour. What are three types of patterns? Design patterns are divided into three fundamental groups: Behavioral, Creational, and Structural.
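The rule-finding method in this entry (take the difference between every two successive y values; a constant difference gives the coefficient of x) can be sketched as a small helper. The function name and the assumption that x increases by 1 between pairs are mine, not the source's:

```python
def linear_rule(pairs):
    """Infer y = a*x + b from (x, y) pairs, assuming consecutive x values.

    Hypothetical helper for illustration; returns None when the
    difference pattern is not constant (i.e. the rule is not linear).
    """
    diffs = {y2 - y1 for (_, y1), (_, y2) in zip(pairs, pairs[1:])}
    if len(diffs) != 1:
        return None  # differences vary: no single linear rule
    a = diffs.pop()           # the common difference = coefficient of x
    x0, y0 = pairs[0]
    return a, y0 - a * x0     # (coefficient of x, constant term)

# The example sequence 0, 5, 10, 15, 20, 25 paired with x = 0..5:
print(linear_rule([(x, 5 * x) for x in range(6)]))  # (5, 0), i.e. y = 5x
```

A sequence like the square numbers fails the constant-difference check and returns None, which matches the entry's point that this rule only identifies linear patterns.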
0.6
medium
4
1075
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "mathematics", "science", "arts_and_creativity" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.35, "creativity": 0.3 }
47ebc3dd-237f-4872-a04a-1e4e6e6815c3
couple months ago, wrote
science
practical_application
A couple of months ago, we wrote about Abu Dhabi's Future Energy Company and their plans to build a huge solar power plant as part of the Masdar Initiative, a multi-part agenda for promoting and developing renewable energy and sustainability in the UAE. A few days ago they announced the next big thing to roll out of their master plan: a walled city in the Emirates desert which will purportedly be "the first zero carbon, zero-waste city in the world." Perhaps the only other sustainable urban projects of comparable scale and ambition are Dongtan and Huangbaiyu in China (by ARUP and William McDonough + Partners, respectively) which in some ways share a similar context to this project, in that they are each situated at the edge of a burgeoning 21st century metropolis, and at the crest of dramatic cultural transformation. The Abu Dhabi development -- called "Masdar" -- will be designed by the celebrated architecture firm, Foster + Partners, and will house the Future Energy Company's headquarters, as well as a new university. As Foster + Partners describes the project: The principle of the Masdar development is a dense walled city to be constructed in an energy efficient two-stage phasing that relies on the creation of a large photovoltaic power plant, which later becomes the site for the city’s second phase, allowing for urban growth yet avoiding low density sprawl. Strategically located for Abu Dhabi’s principal transport infrastructure, Masdar will be linked to surrounding communities, as well as the centre of Abu Dhabi and the international airport, by a network of existing road and new rail and public transport routes. Rooted in a zero carbon ambition, the city itself is car free. With a maximum distance of 200m to the nearest transport link and amenities, the compact network of streets encourages walking and is complemented by a personalised rapid transport system. 
The shaded walkways and narrow streets will create a pedestrian-friendly environment in the context of Abu Dhabi’s extreme climate. It also articulates the tightly planned, compact nature of traditional walled cities. With expansion carefully planned, the surrounding land will contain wind, photovoltaic farms, research fields and plantations, so that the city will be entirely self-sustaining. Foster + Partners isn't the only celebrity firm planning large-scale architectural installations in the desert. Rem Koolhaas' OMA has plans for a whole new city on the edge of the northern emirate, Ras Al Khaimah. It's not a stretch to suggest there's something to the opportunity uniquely offered by the UAE's combination of sprawling undeveloped space and overflowing wealth. For a starchitect, it's the next level of seduction -- why have a single building as your chef d'oeuvre when you can make a whole city? Ideally, the Foster + Partners city will be a model for sustainable development and thereby a valid, and maybe even bar-raising, use of space. But having recently been to the UAE (more on this still to come), I'd say it's absolutely clear that whether or not architects choose to build green there, they will build, and build fast. Skyscrapers sprout like mushrooms well before there are occupants sufficient to fill them, based on a Field of Dreams-style faith that once it's all built, the rest will follow. Looking at Dubai's booming tourism, it seems reasonable to expect an influx of residents and visitors for as long as there is new infrastructure to entice them. And while there's a very low murmur of concern about the environmental impact and lack of foresight involved in the building frenzy, the dominant tenor is one of excitement and anticipation about what feels like a theme park-in-progress. 
One hopes that projects like Masdar will be successful and attractive enough to spur greater support for a development ethic that considers the UAE's natural capital and prioritizes sustainability.
0.6
medium
6
948
[ "intermediate science", "statistics" ]
[ "specialized research" ]
[ "technology" ]
{ "clarity": 0.4, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.55, "creativity": 0.45 }
aa5e20d2-5b3d-4c64-bac7-b8b46289ace2
You’ve probably seen many
science
data_analysis
You’ve probably seen many images over the years that represent a black hole, but none of them are actually images of a real black hole (including the one above). They’re all artist’s renderings, or possibly a real image of the superheated gas around a black hole. Astronomers around the world have banded together and flipped the switch on a project called the Event Horizon Telescope. The international team hopes they’ll generate the first ever image of a black hole by linking up the data from radio telescopes all over the world. There are a number of problems that have prevented scientists from seeing a black hole. For one, there aren’t any close by, which is actually a good thing if you don’t like being torn apart by tidal forces and sucked into oblivion. Black holes are also physically smaller than you’d expect, despite their high mass. It’s the high density that gives a black hole such incredible gravitational pull. There’s also the matter of all the electromagnetic waves being pulled into a black hole instead of emitted where we can see them. Astronomers will use the Event Horizon Telescope to look at two different supermassive black holes. One is the black hole in the center of our own galaxy, which is known as Sagittarius A* (pronounced “Sagittarius A Star”). The other is at the center of a nearby galaxy called M87, also known as Virgo A. It’s one of the largest galaxies in the local universe, and is famous for having a gigantic jet of matter blasting out from the black hole at its center. The Event Horizon Telescope consists of eight radio telescopes around the world, all of which will cooperate by observing the same objects. Using radio frequencies will allow astronomers to peer through the shroud of dust and gas that usually obscures black holes. The target is a halo of superheated gas believed to circulate above the event horizon as it’s pulled in. 
Just one telescope wouldn’t be able to pull in enough clean data to produce an image of that halo, but a network of telescopes spanning the globe might. Observations for this project began on April 4th and will run through April 14th. The data acquired by each site will then be transported to labs at the Max Planck Institute in Germany and MIT’s Haystack Observatory. Combining the data should help cancel out the noise and reinforce the event horizon halo’s signal. That process is likely to take several months, though. So, in a few months, we could finally see what the event horizon of a black hole looks like. This may help answer a number of long-standing questions about physics and the nature of galactic evolution.
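The claim that combining data from many telescopes can "cancel out the noise and reinforce the signal" where one telescope cannot is, at heart, signal averaging. A toy statistical sketch (purely illustrative; the real Event Horizon Telescope uses VLBI correlation, not simple averaging):

```python
import random
import statistics

# Toy model: each "telescope" measures the same underlying signal plus
# independent Gaussian noise. Averaging N independent measurements
# shrinks the noise by roughly sqrt(N).
random.seed(0)
SIGNAL = 1.0

def measure():
    return SIGNAL + random.gauss(0, 0.5)  # one noisy sample

single = [measure() for _ in range(1000)]                       # 1 telescope
combined = [statistics.mean(measure() for _ in range(8))        # 8 telescopes
            for _ in range(1000)]

print(round(statistics.stdev(single), 2))    # noise of one telescope, ~0.5
print(round(statistics.stdev(combined), 2))  # ~0.5 / sqrt(8), roughly 0.18
```

The eight-station figure here just mirrors the article's count of radio telescopes; the factor-of-√N noise reduction is the general statistical point.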
0.6
medium
4
552
[ "scientific method", "basic math" ]
[ "advanced experiments" ]
[ "technology" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.3 }
7848d70c-15da-47ab-8282-f61c2b46d3b1
Aim aim guide provide
interdisciplinary
data_analysis
Aim The aim of this guide is to provide the practitioner with detailed examples of the various stages involved in the lardering process to enable efficient and simple carcass processing. The order in which the various stages are carried out will be a matter of preference; however, the outcome of the task should be to produce a carcass that is well presented and has been handled in such a way as to minimise the risk of contamination. The BP guide Carcass Inspection should be viewed as an essential accompaniment to this guide. Planning Ensure that any waste produced can be disposed of in accordance with BPG Larder Hygiene and Waste Disposal. Ensure there is capacity to larder, hang and store the number of carcasses expected, and make alternative arrangements if necessary.* Notify the venison dealer at the earliest opportunity of the number of carcasses likely to be collected. Facilities Larder Ensure that the premises used are of adequate specification to enable safe and efficient lardering, hanging and storage of the maximum number of carcasses normally handled.** ©DCS 2008 • download version SNH 14/03/2017 Equipment Ensure that the required equipment is ready for use prior to lardering. You will need some or all of the following: Health and safety: Suitable protective clothing (e.g. apron, chain-mail glove, disposable gloves) | First aid kit | Potable water (hot, to 82 °C, is also recommended). Cutting: Sharp knife, with a fixed blade of no less than 4" and a non-slip plastic handle | Plastic scabbard | Sharpening stone/steel | Butchering saw. Carcass handling: Scales of sufficient range to cover all species of deer culled | Bench/Table | Hoists/Pulleys | Stainless steel gambrels | Chest-spreaders | Stainless steel hooks. Record-keeping: Trained hunter declaration tag to be securely attached to carcass | A tag will be required if jaws are removed for ageing | Record sheets. Inspection and recording Be clear and consistent in gathering and recording weight data.
Determine whether hill (gralloched, with head, legs and pluck attached) or larder (gralloched, with head, legs and pluck removed) weight will be recorded. Inspect carcasses and follow the procedures detailed in BPG Carcass Inspection. Label carcasses using tags to ensure traceability. Tags should ideally be attached through the flank, as attaching them only into the skin means they may be accidentally removed further on in processing. Maintain records as per BPG Cull Records. Carcass handling and processing Pre-lardering Remove from vehicle. Weigh now if weighing for hill weight (i.e. gralloched, with head, legs and pluck attached). Use hoists or a winch to manoeuvre the carcass onto a clean bench or surface. Chest Cut the skin along the sternum by inserting the knife with the sharp side of the blade facing upwards. Cut through the remaining flesh by repeating the stroke with the sharp side of the knife blade facing downwards. Take care to avoid hair coming in contact with exposed meat. Saw along the cut on the sternum from bottom to top. To find the centre line insert your thumb into the bleed hole. With the knife, cut open the skin from the top of the sternum all the way up the neck (this may have already been partly done during gralloching). ***
0.65
medium
4
789
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "life_skills" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.4, "depth": 0.45, "creativity": 0.3 }
9e73b890-b962-431c-a9fc-97e087f110e7
Ben twice many books
technology
ethical_analysis
Ben had twice as many books as Eve. If Ben gave Eve 20 books, Ben would have 10 more books than Eve. How many books do they have altogether?
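The source poses the problem without giving a solution; the algebra and the answer below are added working, not from the page:

```python
# Let Eve start with e books and Ben with 2e (twice as many).
# After Ben gives Eve 20 books: Ben has 2e - 20, Eve has e + 20,
# and Ben then has 10 more than Eve:
#   2e - 20 = (e + 20) + 10  =>  e = 50
e = 50
ben, eve = 2 * e, e
assert ben - 20 == (eve + 20) + 10  # the transfer condition holds
print(ben + eve)  # 150 books altogether
```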
0.4
medium
4
131
[ "programming fundamentals", "logic" ]
[ "system design" ]
[]
{ "clarity": 0.4, "accuracy": 0.5, "pedagogy": 0.3, "engagement": 0.3, "depth": 0.35, "creativity": 0.4 }
b5b2a80c-e678-42bb-b48a-314537e4ed8a
Astronomers pinpointed source series
science
research_summary
Astronomers have pinpointed the source of a series of mysterious cosmic signals to a distant dwarf galaxy 3 billion light years away. This discovery marks the first time scientists have been able to trace the signals to a specific location in the sky and offers new ways to study what is causing them. Fast radio bursts (FRBs) have captivated the attention of researchers in the CIFAR Cosmology & Gravity program since their discovery in 2007. FRBs last only a few thousandths of a second but are far brighter and more powerful than any known short flashes, such as pulses from radio pulsars, a form of neutron star. FRBs’ brief nature combined with technological constraints have made them difficult to detect. Now, researchers have zeroed in on the location of one of 18 known FRBs using a network of telescopes and special imaging and timing technologies. Associate Fellow Scott Ransom (National Radio Astronomy Observatory) and R. Howard Webster Foundation Fellow Victoria Kaspi (McGill University) were part of the scientific team that published their findings in Nature with companion papers in The Astrophysical Journal Letters. “We finally know at least one of these is coming from another galaxy at a very far distance,” says Ransom from NRAO headquarters in Charlottesville, VA. A composite image of FRB 121102, which astronomers identified as the source of bursts of radio waves. (Credit: Gemini Observatory/AURA/NSF/NRC) Previously, FRBs could be traced to a region in the sky, but not to any of the hundreds or even thousands of galaxies within that region. In order to narrow this scope, scientists used the Very Large Array (VLA), a multi-antenna radio telescope system, to produce a high-definition image of the sky. They focused on a particular FRB that the more sensitive Arecibo telescope had detected earlier and which had been shown to be the only known repeater. “What really surprised us with the first burst we saw at the VLA is that it was whopping bright,” Ransom recalls.
“After watching it for several months, it started bursting brightly and often in the fall.” The FRB that was located has led to many astronomical firsts. FRB121102 was discovered on Nov. 2, 2012, in the Arecibo Pulsar ALFA survey. It was the first FRB that was not detected by the Parkes radio telescope in Australia, which addressed concerns that previous bursts may have been technological or environmental flukes. In 2016, the same team, including Ransom, Kaspi and Senior Fellow Ingrid Stairs (University of British Columbia), detected 10 additional bursts coming from FRB121102. This made it the first repeating fast radio burst and largely ruled out the possibility that it was caused by a cataclysmic event like the creation of a black hole. Now, scientists believe the source likely involves a young neutron star, possibly a highly magnetic magnetar. After years of research, Ransom is happy to see that FRB121102 is no longer being misidentified as a strange object in our galaxy. “This FRB is the one people thought was the oddball but it is now unambiguously at far distances, unambiguously coming from another galaxy,” says Ransom. “That makes you wonder that if this one has been promoted to the gold standard, have we missed other FRBs? Maybe they’re all repeating and we just haven’t been lucky.” That question is a point of active research for Ransom and many other astronomers. “There is still a lot of work to do to unravel the mystery surrounding FRBs,” Kaspi said. “But identifying the host galaxy for this repeating FRB marks a big step toward solving the puzzle.” The Canadian Hydrogen Intensity Mapping Experiment (CHIME) could help answer remaining questions, she notes. The all-Canadian initiative involves a number of CIFAR researchers and is based in B.C. Although the radio telescope was designed to study how the universe assembled itself, CHIME is also an ideal tool for detecting FRBs.
Kaspi is the principal investigator for the CHIME extension, which will be used to study transient radio signals. Once CHIME comes online in the spring, it will measure more than half of the sky each day as the Earth turns. CHIME could potentially detect as many FRBs in a day as previous telescopes have detected over the last decade. “Once we understand the origin of this phenomenon, it could provide us with a new and valuable probe of the universe,” Kaspi said. “A direct localization of a fast radio burst and its host” was published in Nature on Jan. 5, 2017.
0.6
medium
4
1,011
[ "scientific method", "basic math" ]
[ "advanced experiments" ]
[ "technology" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.45, "creativity": 0.4 }
2ccb6803-1a07-4bdd-b255-7f5b43a406f7
Intel XScale (PXA, IXC, IOP, IXP) processors
technology
historical_context
Intel XScale (PXA, IXC, IOP, IXP) processors Introduction: February 2002 The XScale, a microprocessor core, was Marvell's (formerly Intel's) implementation of the fifth generation of the ARM architecture, and consisted of several distinct families: IXP, IXC, IOP, PXA and CE. The PXA family was sold to Marvell Technology Group in June 2006. The XScale architecture was based on the ARMv5TE ISA without the floating point instructions. XScale used a seven-stage integer and an eight-stage memory superpipelined RISC architecture. It was the successor to the Intel StrongARM line of microprocessors and microcontrollers, which Intel acquired from DEC's Digital Semiconductor division as a side effect of a lawsuit between the two companies. Intel used the StrongARM to replace its ailing line of outdated RISC processors, the i860 and i960. All generations of XScale were 32-bit ARMv5TE processors manufactured with a 0.18 µm process and had a 32KB data cache and a 32KB instruction cache (this would be called a 64KB Level 1 cache on other processors). They also all had a 2KB mini-data cache. The PXA family The PXA210 was Intel's entry-level XScale targeted at mobile phone applications. It was released with the PXA250 in February 2002 and came clocked at 133MHz and 200MHz. The PXA25x family consisted of the PXA250 and PXA255. The PXA250 was Intel's first generation of XScale processors. There was a choice of three clock speeds: 200MHz, 300MHz and 400MHz. It came out in February 2002. In March 2003, revision C0 of the PXA250 was renamed the PXA255. The main differences were a doubled bus speed (100MHz to 200MHz) for faster data transfer, a lower core voltage (only 1.3V at 400MHz) for lower power consumption and writeback functionality for the data cache, the lack of which had severely impaired performance on the PXA250. The PXA26x family consisted of the PXA260 and PXA261-PXA263. 
The PXA260 was a stand-alone processor clocked at the same frequency as the PXA25x, but featured a TPBGA package about 53% smaller than the PXA25x's PBGA package. The PXA261-PXA263 were the same as the PXA260 but had Intel StrataFlash memory stacked on top of the processor in the same package: 16MB of 16-bit memory in the PXA261, 32MB of 16-bit memory in the PXA262 and 32MB of 32-bit memory in the PXA263. The PXA26x family was released in March 2003. The PXA27x family (code-named Bulverde) consisted of the PXA270 and PXA271-PXA272 processors. This revision was a huge update to the XScale family of processors. The PXA270 was clocked at four different speeds: 312MHz, 416MHz, 520MHz and 624MHz, and was a stand-alone processor with no packaged memory. The PXA271 was clocked at 312MHz or 416MHz and had 32MB of 16-bit stacked StrataFlash memory and 32MB of 16-bit SDRAM in the same package. The PXA272 was clocked at 312MHz, 416MHz or 520MHz and had 64MB of 32-bit stacked StrataFlash memory. The PXA27x family was released in April 2004. Along with the PXA27x family, Intel released the 2700G embedded graphics co-processor. In August 2005 Intel announced the successor to Bulverde, codenamed Monahans, and demonstrated it playing back high-definition encoded video on a PDA screen. The new processor was shown clocked at 1.25GHz, but Intel said it offered only a 25% increase in performance (800MIPS for the 624MHz PXA270 processor vs. 1000MIPS for the 1.25GHz Monahans). An announced successor to the 2700G graphics processor, code-named Stanwood, has since been canceled; some of the features of Stanwood were integrated into Monahans. For extra graphics capabilities, Intel recommended third-party chips like the NVIDIA GoForce chip family. In November 2006, Marvell Semiconductor officially introduced the Monahans family as the Marvell PXA320, PXA300 and PXA310. The PXA320 shipped in high volume and was scalable up to 806MHz. 
The PXA300 and PXA310 delivered performance "scalable to 624MHz" and were software-compatible with the PXA320. The PXA90x was released by Marvell and combined an XScale core with a GSM/CDMA communication module. The IXC family The IXC1100 processor featured clock speeds of 266, 400 and 533MHz, a 133MHz bus, 32KB of instruction cache, 32KB of data cache, and 2KB of mini-data cache. It was also designed for low power consumption, using 2.4W at 533MHz. The chip came in a 35mm PBGA package. The IOP family The IOP line of processors was designed to allow computers and storage devices to transfer data and increase performance by offloading I/O functionality from the main CPU of the device. The IOP3XX processors were based on the XScale architecture and designed to replace the older 80219 processor and i960 family of chips. There were ten different IOP processors available: IOP303, IOP310, IOP315, IOP321, IOP331, IOP332, IOP333, IOP341, IOP342 and IOP348. Clock speeds ranged from 100MHz to 1.2GHz. The processors also differed in PCI bus type, PCI bus speed, memory type, maximum memory allowable, and the number of processor cores. The IXP family The XScale core was utilized in the second generation of Intel's IXP network processor line, while the first generation used StrongARM cores. The IXP network processor family ranged from solutions aimed at small/medium office network applications (IXP4XX) to high-performance network processors such as the IXP2850, capable of sustaining up to OC-192 line rates. In IXP4XX devices the XScale core was used as both a control and data plane processor, providing both system control and data processing. The task of the XScale in the IXP2XXX devices was typically to provide control plane functionality only, with data processing performed by the microengines; examples of such control plane tasks included routing table updates, microengine control and memory management. 
XScale microprocessors could be found in products such as the popular RIM BlackBerry handheld, the Dell Axim family of Pocket PCs, most of the Zire, Treo and Tungsten handheld lines by Palm, later versions of the Sharp Zaurus, the Motorola A780, the Acer n50, the Compaq iPaq 3900 series and many other PDAs. It was used as the main CPU in the Iyonix PC desktop computer running RISC OS, and in the NSLU2 (Slug) running a form of Linux. The XScale was also used in devices such as portable video players (PVPs) and portable media centres (PMCs), including the Creative Zen Portable Media Player and the Amazon Kindle e-book reader, and in industrial embedded systems. Apple's iPhone and recent iPod models also used XScale processors. At the other end of the market, the XScale IOP33x storage I/O processors were used in some Intel Xeon-based server platforms. On June 27, 2006, the sale of Intel's XScale PXA mobile processor assets was announced: Intel agreed to sell the XScale PXA business to Marvell Technology Group for an estimated $600 million in cash and the assumption of unspecified liabilities. The move was intended to permit Intel to focus its resources on its core x86 and server businesses. The acquisition was completed on November 9, 2006. Intel was expected to continue manufacturing XScale processors until Marvell secured other manufacturing facilities, and it continued manufacturing and selling the IXP and IOP processors, as they were not part of the deal. Source: Wikipedia, the free encyclopedia.
0.65
medium
6
2,136
[ "algorithms", "software design" ]
[ "distributed systems" ]
[ "science" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.55, "creativity": 0.35 }
cdec721b-8080-4b9d-aa46-75c66e97fa2a
Sol 1544: Diagnostics and Remote Sensing. NASA science editorial team
interdisciplinary
concept_introduction
Sol 1544: Diagnostics and Remote Sensing. NASA science editorial team, December 8, 2016. Article. The plan for Sol 1544 includes remote sensing and additional diagnostics to address the drive-mechanism fault. The plan begins with Mastcam tau and crater-rim observations to monitor dust in the atmosphere. We will then acquire ChemCam observations of "Aunt Betty Pond" and "Kebo Mountain" to assess the composition of the Murray bedrock and veins. In the afternoon we will repeat the Mastcam tau and crater-rim observations. We will also take a few more rear Hazcam images to monitor the movement of fine-grained material at different times of day. For more information on the drive-mechanism anomaly and the steps we are taking to address it, see this recent press release. The science team is currently preparing for next week's Geological Society of America meeting, so we had some very good science discussion presentations that will be shared at the meeting next week! By Lauren Edgar. Lauren is a research geologist at the USGS Astrogeology Science Center and a member of the MSL science team. Question: Which of the following was NOT mentioned as part of the plan for Sol 1544? A. Mastcam tau and crater-rim observations B. Rear Hazcam imaging C. Manually repairing the drive-mechanism fault D. ChemCam observations Answer: C. Manually repairing the drive-mechanism fault Question: According to the text, what is the main purpose of the plan for Sol 1544? Answer: The main purpose is to address the drive-mechanism fault through remote sensing and additional diagnostics.
0.45
high
6
2,625
[ "intermediate understanding" ]
[ "research" ]
[]
{ "clarity": 0.4, "accuracy": 0.5, "pedagogy": 0.3, "engagement": 0.3, "depth": 0.35, "creativity": 0.4 }
52174fd0-5c71-48a8-9289-ff24bdbf4eb3
Reducing impact transboundary animal
social_studies
case_study
Reducing the impact of transboundary animal diseases and zoonoses on livelihoods and public health in Africa. The goal: to catalyse the management of TADs and zoonoses in Africa by facilitating the development and implementation of a continental agenda for improved governance of veterinary services. 1. The context Africa suffers a huge burden of endemic TADs and zoonoses, which represent a constant threat both for the continent and for the rest of the world. TADs are of significant economic, trade and/or food security importance for a considerable number of countries and can spread easily to reach epidemic or even pandemic proportions. Effective management of TADs and zoonoses requires cooperation among countries: veterinary services for these purposes are therefore an international public good. The economic, trade and food security importance of TADs and zoonoses relates to mortality and morbidity of livestock, the costs of treatment and implementation of disease control measures, loss of market access, reduced product quality and shortage of valuable animal products. Furthermore, some TADs and zoonoses directly impact public health through infection of humans, and indirectly through the food supply chain. The intensification of food production and the increased volume and speed of international travel and transportation of people, animals and animal products favour the transmission of TADs. Changing land-use systems and climate change have produced conditions favourable to the emergence and transmission of infectious animal diseases, of which zoonoses represent a dominant part. Holistic approaches are therefore needed for effective prevention and progressive control of these diseases.
0.7
medium
6
304
[ "intermediate understanding" ]
[ "research" ]
[ "life_skills" ]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.45, "creativity": 0.35 }
da5bcb84-6b04-4c1d-b9cd-f9831ec46e59
Teen Pregnancy: How Prevent
life_skills
ethical_analysis
Teen Pregnancy: How to Prevent Our Babies from Having Babies Between 1991 and 2013, the teen birth rate declined 57% nationwide, falling in all 50 states and among all racial and ethnic groups. That is the good news, but teens are still getting pregnant and having to rely on public assistance resources at a substantial cost to the public. Teen pregnancy has a number of foundational causes, and it is closely linked to other social issues such as poverty and income, overall child well-being, out-of-wedlock births, irresponsible fatherhood, health issues, education, child welfare and various risky behaviors. Nearly all teen pregnancies are unplanned, and childbearing during adolescence does not come without consequences for everyone involved. It negatively affects the young parents and their families, their children and society as a whole. Teen girls who have babies are less likely to finish high school and more likely to have to rely on public assistance. Teen mothers are more likely to be poor as adults and to have poorer educational, behavioral and health outcomes over the course of their lives than women who wait to have children. Most parents of teens admit to uneasiness around the very idea of discussing sex, STDs, and pregnancy protection with their children, while some may deny altogether the possibility that their children will be sexually active. Parents of teens may also depend too much on the school system to provide their children with sex education, while in many school districts the sex education programs aren’t always up to date or complete. Parents who do choose to discuss protection with their teens may recommend abstinence, or they may be more proactive: getting birth control for their daughters or purchasing condoms for their sons. 
Some parents feel discouraged about competing with the vast amount of information their teens are exposed to in the media and online, from sexually charged television shows and movies to video games and readily available graphic sex and pornography on the Internet. With sex being more prominent in today’s society, many parents expect their children will learn about sex on their own and that their efforts will be in vain. But Bill Albert, chief program officer of The National Campaign To Prevent Teen and Unplanned Pregnancy, says one truth remains: “The research shows,” says Albert, “that parents who brave their own discomfort and talk with their children about relationships, love, sex, and contraception, who express honest caring and concern about these issues, and are clear about what they think and why, greatly reduce their children’s risk of teen parenthood.” So how can you educate your child about sex, pregnancy, and protection in a way that inspires them to be careful with their own bodies and their partners’ bodies, and to take responsibility for the very real potential consequences that come with having sex? You know your teen best, and you know your own communication style; the following are simply suggestions to help you get the conversation started and the lines of communication about sex open between you and your child. Use your own best judgment when discussing sex and pregnancy with your teenager. - Stay close. Studies show that teens who feel connected to their parents delay sexual intercourse longer than those who don’t. - Talk to your teen about dating and relationships. Talk to your child about what a healthy relationship looks like. Explain that respecting their date or partner as a person first is one of the most important ways they can build a good, solid, long-term relationship. Let them know that a loving relationship with another person doesn’t have to include sex right away, or even at all. - Be prepared. 
Decide what message you want to send to your child about sex and intimacy. If applicable, remind them of your religious beliefs. - Focus on education. Explain to your teen why you want them to finish their education before they have children of their own. Help them understand what life as a parent, and especially life as a young, uneducated parent, would really look like for them. - Know the facts. Make sure you offer facts and solutions, not judgments or opinions. Discuss contraception, STDs, HIV/AIDS and how to protect against pregnancy and sexually transmitted infections. If your teen is going to have sex anyway, help them understand how to have a healthy sexual relationship. If you need help locating facts and information about sexuality, intercourse, and teen pregnancy, Planned Parenthood websites offer plenty of straightforward resources and articles. - Provide the tools. If you are comfortable providing contraception to your teen, plan a visit to your child’s doctor for more information and the tools your teen will need to stay safe should they engage in sexual activity. - Stay vigilant. Talk to your teen about their friends, crushes, and any newly developing relationships. Pop into the room unexpectedly if your child has a friend over. Take snacks with you as a cover if you have to, but make it known you are there and aware. If there is a party at another person’s home, call and ask the parents how it will be monitored and if a parent will be around to supervise. - Encourage healthy passions. If your child has a talent, skill, or enjoys sports, encourage and get them involved with programs to further develop their skills and encourage their passions. By keeping them active in something they are passionate about, your teen will develop a stronger sense of self-worth, which will help them make better decisions, ultimately steering them away from risky behaviors. - Expose them to babies. Arrange for your teenage son or daughter to babysit. 
One-on-one time with an infant is often enough to help a youth realize the sheer amount of responsibility a child brings. Many schools even offer hands-on classes on parenthood using a computerized baby that is programmed to cry, need feeding and need diaper changing on a real infant’s schedule. Studies show that these types of programs increase awareness among teens and youth of just how demanding and relentless parenting is. - Offer potential responses to sexual situations. Tell your teen it is okay to say “No.” to sex. Help your teen find words and phrases that feel comfortable for them to say if they are ever pressured to have sex. Discussing various replies in a calm and sincere manner gives your teen more confidence should they find themselves pressured to have sex. And remember, it’s not just girls who will find themselves in these situations; help your teen boy find ways to say “No.” too. - Communicate constantly (and don’t give up!). Keep the conversation going with your teen and let them know you care and are available to answer any questions they have or discuss any situations that might come up for them. Building trust is your number one goal in the teen/parent dynamic, and keeping the lines of communication open will help facilitate that trust. The rates of teen pregnancy and abortion have dropped significantly over the past few years in all states. In a six-year effort, the Colorado Family Planning Program distributed free long-acting reversible contraceptives (intrauterine devices or hormonal implants that can prevent pregnancy for up to ten years) to 30,000 teens and young women, resulting in a 40% decline in birthrates among teen moms and a 35% decline in teen abortions from 2009 to 2015, as reported in an article by CBS News.
0.7
medium
6
1,507
[ "intermediate understanding" ]
[ "research" ]
[ "technology", "social_studies" ]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.65, "creativity": 0.35 }
cef8c9e2-1b81-4183-90bc-b39591631a70
McDonalds begins testing recyclable
science
worked_examples
McDonald's begins testing recyclable paper cups in the United States McDonald's, the land of burgers and fries often blamed for obesity, is now looking for ways to turn green. To start, the golden-arched fast food chain will begin using recyclable paper cups in the near future. 2,000 of the chain's restaurants, making up 15% of those in the United States, will begin a testing phase, ditching the previously used polystyrene cups for recyclable paper cups. That the cups are recyclable further ups the green quotient: composting paper cups leads to the emission of harmful greenhouse gases, while reusing the cups instead helps save on paper, resulting in fewer trees being cut down. We wish McDonald's well in this endeavor and hope to see polystyrene cups completely phased out in the future.
0.6
medium
5
176
[ "introductory science", "algebra" ]
[ "research methodology" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.3, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
fc385545-7b3a-447c-944e-b7ed00e63c74
Now you’re ready combine
technology
historical_context
Now you’re ready to combine your keywords. The words and symbols used to combine them are called Boolean Operators. There are only a handful that you need to learn. Using them in combination will enable you to get the most relevant results and filter out the irrelevant results.

| Operator | Example | How It Works |
| --- | --- | --- |
| " " | "siamese cats" | Searches a phrase. Tells the database to search for the keywords together, in that order. |
| AND | cats AND dogs | Tells the database to find search results that have both keywords. Fewer, more specific search results. |
| OR | cats OR dogs; cats OR felines | Tells the database to find search results that contain either keyword. Increases the number of search results. |
| NOT | cats NOT siamese | Used to exclude certain keywords. Fewer, more specific search results. |
| *, ! or ? | veterinar* | Wildcard symbol used to get different word endings. Different databases use different symbols. |

Explore the Boolean Machine by Rockwell Schrock. It explains AND, OR and NOT in a visual, intuitive way. You can use those few operators to combine multiple concepts, each with its own set of keywords. To do that, you need to use parentheses. Click through the mini-slideshow to learn how to combine multiple sets of keywords using parentheses. If the text is too small to read, click the menu icon in the lower left corner of the box and select “view full screen.” For a fun, visual explanation, see the Advanced Boolean Searching demonstration from Colorado State University Libraries. Practice these search techniques using the Search Query Magnets game from the University of Washington Libraries. Try several queries and compare your solution to the explanation they provide. One more thing – you can also use these techniques in Google and Google Scholar!
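If it helps to see the logic behind these operators, here is a minimal sketch in Python that runs the same four searches over a tiny in-memory list of document strings. This is an illustration only: the document titles and helper names are hypothetical, real databases do the matching (plus stemming and wildcards) internally, and naive substring matching is used for simplicity.

```python
# A tiny hypothetical "database" of documents.
docs = [
    "siamese cats are vocal",
    "dogs and cats can live together",
    "felines groom themselves",
    "veterinarian advice for dogs",
]

def search(documents, predicate):
    """Return every document for which the predicate holds."""
    return [d for d in documents if predicate(d)]

# AND: both keywords must appear -> fewer, more specific results.
both = search(docs, lambda d: "cats" in d and "dogs" in d)

# OR: either keyword may appear -> more results.
either = search(docs, lambda d: "cats" in d or "felines" in d)

# NOT: exclude a keyword -> fewer, more specific results.
no_siamese = search(docs, lambda d: "cats" in d and "siamese" not in d)

# Wildcard veterinar*: match any word that starts with the stem.
wildcard = search(docs, lambda d: any(w.startswith("veterinar") for w in d.split()))
```

Notice that `either` returns three of the four documents while `both` returns only one, which is exactly the broadening/narrowing behavior the table describes.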
0.6
medium
4
384
[ "programming fundamentals", "logic" ]
[ "system design" ]
[ "science" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
e15f30e5-e2c8-4315-ba6a-73019ef8ed3e
Synthesis: Continuum Mechanics within
interdisciplinary
practical_application
## Synthesis: Continuum Mechanics within the Knowledge Network Continuum Mechanics (CM) serves as a foundational, bridging discipline in the applied sciences. Its core function is to model the mechanics of materials that are assumed to be continuous, ignoring their discrete molecular structure. Strategically, CM translates the microscopic reality of matter into macroscopic, solvable mathematical frameworks. --- ### Knowledge Network Position Continuum Mechanics occupies a critical nexus between fundamental physics and engineering application. It acts as the necessary mathematical bridge that allows the abstract laws of conservation (derived from physics) to be applied to tangible, engineered systems. **Connections to Five Distinct Domains:** 1. **Domain: Classical Physics (Core Foundation)** * **Connection:** CM is the direct application of Newton's Laws of Motion (specifically the conservation of linear and angular momentum) and the First Law of Thermodynamics to a continuous medium. * **Nature of Link:** CM formalizes these laws into differential equations (e.g., Navier-Stokes equations for fluids, Cauchy's equations of motion for solids), providing the constitutive relationships (stress-strain, viscosity) that link the forces to the resulting deformation or flow. 2. **Domain: Mathematics (Toolbox)** * **Connection:** CM relies heavily on Tensor Analysis and Differential Geometry. * **Nature of Link:** Tensors are essential for describing quantities (like stress, strain rate, and material properties) that must be independent of the coordinate system chosen. The resulting governing equations are partial differential equations (PDEs) that require advanced mathematical techniques for analytical or numerical solution. 3. **Domain: Materials Science (Constitutive Modeling)** * **Connection:** CM provides the kinematic framework, while Materials Science provides the *constitutive laws* that define how a specific material responds. 
* **Nature of Link:** CM defines *what* stress and strain are; Materials Science defines the relationship $\sigma = f(\epsilon, T, \text{history})$. Without accurate constitutive models (e.g., plasticity, viscoelasticity), the CM equations remain unsolvable for real-world scenarios. 4. **Domain: Computational Science (Numerical Implementation)** * **Connection:** Modern CM is almost entirely solved using numerical methods, primarily the Finite Element Method (FEM) and Finite Volume Method (FVM). * **Nature of Link:** CM provides the continuous governing equations; Computational Science provides the discretization strategy to transform these complex PDEs into large, solvable algebraic systems that computers can process. 5. **Domain: Geophysics (Large-Scale Application)** * **Connection:** The mechanics of the Earth’s mantle, crustal deformation, and fluid flow in porous media (hydrogeology) are modeled using CM principles. * **Nature of Link:** CM principles (specifically elasticity and viscous flow) are scaled up to planetary dimensions, requiring specialized considerations like large strain, high pressure, and the Coriolis effect, demonstrating the scalability of the core framework. --- ### Real-World Applications Continuum Mechanics enables the quantitative prediction of physical behavior across numerous engineering disciplines: 1. **Aerospace Engineering (Fluid Dynamics):** Modeling airflow over wings and fuselage (Computational Fluid Dynamics - CFD) to optimize lift and minimize drag. *Example: Predicting shockwave formation on supersonic aircraft.* 2. **Civil/Structural Engineering (Solid Mechanics):** Analyzing the load-bearing capacity and stability of structures. *Example: Calculating the stress distribution in a reinforced concrete bridge under seismic loading.* 3. **Biomedical Engineering (Bio-fluid/Bio-solid Mechanics):** Modeling blood flow through arteries or the mechanical response of soft tissues. 
*Example: Designing artificial heart valves based on the viscoelastic properties of biological tissue.* 4. **Mechanical Engineering (Manufacturing/Tribology):** Simulating metal forming processes or wear between moving parts. *Example: Optimizing forging parameters to ensure uniform material density without inducing plastic failure.* --- ### Advanced Extensions Continuum Mechanics is not an endpoint but a foundational layer that supports sophisticated, specialized fields: 1. **Nonlinear Mechanics:** Deals with large deformations (large strain theory) or nonlinear material response (hyperelasticity, plasticity). This is crucial for modeling rubber, biological tissues, and extreme structural failure. 2. **Thermomechanics:** Integrates thermal effects (heat transfer) with mechanical deformation (Thermoelasticity, Thermo-plasticity). Essential for analyzing components subjected to rapid temperature changes (e.g., jet engine blades). 3. **Micromechanics/Multiscale Modeling:** Attempts to bridge the gap between the continuum assumption and the discrete nature of matter by incorporating information from molecular dynamics or homogenization techniques into the macroscopic constitutive laws. 4. **Computational Fluid Dynamics (CFD):** The high-fidelity numerical solution of the Navier-Stokes equations, enabling detailed simulation of turbulent flows, chemical reactions within flows, and multiphase systems. --- ### Limitations & Open Questions The primary limitation of CM stems directly from its core assumption: the continuum hypothesis. 1. **The Continuum Assumption Failure:** CM breaks down when the characteristic length scale of the problem approaches the mean free path of the molecules (e.g., rarefied gases in high vacuum, very small micro-electromechanical systems (MEMS), or the initial stages of fracture). 2. **Constitutive Law Incompleteness:** While the framework for describing motion is robust, the constitutive models remain empirical or semi-empirical. 
Open questions persist in accurately modeling complex phenomena such as: * **Damage and Fracture Initiation:** Predicting exactly *when* a material will fail under complex, time-varying loads remains an area of active research (e.g., characterizing fatigue crack propagation). * **Complex Viscoelasticity:** Developing universal, predictive models for materials that exhibit both viscous and elastic behavior across vast time scales (e.g., polymers under creep). --- ### Historical Context The development of Continuum Mechanics was a systematic progression, solving one layer of complexity at a time: * **Foundational Era (17th–18th Century):** The initial concepts emerged from the study of fluids and elastic solids. **Leonhard Euler** established the basic equations of fluid motion. **Daniel Bernoulli** advanced the understanding of fluid pressure. * **Formalization of Elasticity (19th Century):** **Augustin-Louis Cauchy** formalized the mathematical description of stress and strain using tensors, establishing the rigorous mathematical foundation for solid mechanics. **George Green** developed the theory of linear elasticity. * **Fluid Mechanics Maturity (19th–Early 20th Century):** **Claude-Louis Navier** and **George Gabriel Stokes** independently derived the governing equations for viscous fluid flow (the Navier-Stokes equations), completing the classical description of Newtonian fluids. --- ### Systems Integration Continuum Mechanics functions as a **Level 2 Abstraction Layer** in the hierarchy of physical modeling: $$\text{Quantum Mechanics} \rightarrow \text{Statistical Mechanics} \rightarrow \mathbf{\text{Continuum Mechanics}} \rightarrow \text{Engineering Design}$$ * **Relationship to Lower Levels:** CM *averages* the results of Statistical Mechanics (which deals with molecular collisions and ensembles) over a volume large enough to ignore individual particles. It abstracts away the microscopic noise to provide macroscopic governing equations. 
* **Relationship to Higher Levels:** CM provides the input variables (forces, pressures, displacements) necessary for **System Dynamics** or **Control Theory**. For instance, in modeling a robotic arm, CM calculates the stiffness and inertia of the links, which are then used as parameters in the higher-level control algorithms that dictate motor torque. --- ### Future Directions The evolution of Continuum Mechanics is intrinsically linked to advances in computation and material discovery: 1. **Integration with Machine Learning (ML):** Using ML techniques to discover or refine complex, high-dimensional constitutive models directly from experimental data, bypassing the need for purely theoretical derivation. This addresses the "Constitutive Law Incompleteness" limitation. 2. **Adaptive Resolution Modeling:** Developing frameworks that automatically switch resolution—from molecular dynamics (MD) to CM—based on local gradients or areas of interest (e.g., near a crack tip or phase boundary), creating truly multiscale simulations without manual intervention. 3. **Smart and Active Materials:** Extending CM to incorporate internal material actuation (e.g., piezoelectricity, shape memory alloys), requiring the development of coupled electro-thermo-mechanical constitutive frameworks that allow materials to actively respond to external stimuli based on continuum principles.
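The pipeline described above, conservation law plus constitutive closure equals solvable field equation, can be made concrete with the standard textbook route from momentum balance to the Navier-Stokes equations (a generic illustration, not tied to any specific source cited here). Cauchy's equation of motion for any continuum is

$$\rho \frac{D\mathbf{v}}{Dt} = \nabla \cdot \boldsymbol{\sigma} + \rho \mathbf{b}$$

where $\rho$ is density, $\mathbf{v}$ velocity, $\boldsymbol{\sigma}$ the Cauchy stress tensor and $\mathbf{b}$ the body force per unit mass. Substituting the Newtonian constitutive law for an incompressible fluid, $\boldsymbol{\sigma} = -p\mathbf{I} + \mu\left(\nabla \mathbf{v} + \nabla \mathbf{v}^{T}\right)$, yields the incompressible Navier-Stokes equations:

$$\rho \left( \frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v} \right) = -\nabla p + \mu \nabla^{2} \mathbf{v} + \rho \mathbf{b}, \qquad \nabla \cdot \mathbf{v} = 0$$

Substituting Hooke's law for a linear elastic solid instead yields Navier's equations of elastodynamics; the kinematic framework is identical, and only the constitutive closure changes.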
0.7
high
6
1,802
[ "intermediate understanding" ]
[ "research" ]
[ "mathematics", "science", "technology" ]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.65, "creativity": 0.45 }
ae7b8d89-349e-4c86-bd96-87e2a1949144
Children enjoy opportunities creative
interdisciplinary
practical_application
Children enjoy opportunities to be creative and to express themselves all year round. Activities such as building cities with blocks or boxes, composing original lyrics to familiar tunes, creating masterpieces on paper, or telling imaginative stories are all examples of creative expression. As children work to develop these life skills, we can offer simple materials to aid in sharing their ideas, enhancing their experiences, and developing problem-solving aptitudes. Simple supply list: - Child-size scissors (for cutting paper and playdough) - Pencils (short, stubby pencils help children learn to grasp the pencil for more control) - Simple pencil sharpener (a hand-held one works) - Transparent tape - (extra fun: painters tape, masking tape, craft tape, etc.) - Craft glue and glue sticks - Watercolor paint and a variety of brushes/sponge tips - Washable markers - Paper (a variety: copy, tissue, construction, cardstock, wrapping, wax, tracing) - Paper plates (endless uses!) - Wooden sticks - Recycled materials (clean): plastic bottles, lids, newspaper, boxes, magazines This rather inexpensive list is a place to start. You can pick up items at a discount store, resale shop, or garage sale, dig through the drawers at your house, or ask friends or family to clear out theirs. Find a drawer, box, or closet to keep the supplies handy. Establish rules about when and where the supplies are used, especially the messier materials. As an early childhood educator for 27 years, let me add one more important piece to offering children opportunities to express themselves: let them do it. Provide assistance when asked. Process over product allows children to build confidence in themselves and their abilities. What other materials would you add to this list? Later this week, I'll share some of my favorite children's books related to creativity, and some books to help teach children about giving and perseverance. I hope you'll join me.
0.75
medium
6
420
[ "intermediate understanding" ]
[ "research" ]
[]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.45, "creativity": 0.45 }
37ecad30-37e4-457c-898d-1acc4f550cf0
EXCLUSIVE VIDEO: Kinney County
technology
practical_application
EXCLUSIVE VIDEO: Kinney County Texas Sheriff Coe Announces Historic Plan to Deputize Local Citizens to Fight Back Against Biden Border Invasion — LIVE ANNOUNCEMENT AT 8:30 Eastern! Kinney County Texas Sheriff Brad Coe will make a historic announcement tonight live on 100% Fed Up and The Gateway Pundit. Sheriff Coe is deputizing at least 10 local citizens to rescue the county from Joe Biden’s illegal border invasion. Kinney County will also build a fence around the county to protect the border county from the invasion. The people have had enough. You can watch live on YouTube.
0.65
medium
4
167
[ "programming fundamentals", "logic" ]
[ "system design" ]
[]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
59e40c40-a473-4099-b39a-dad6c492961b
Silent Spring at 50
interdisciplinary
historical_context
Silent Spring at 50 Sources and References The Fourth Report on Human Exposure to Environmental Chemicals has measured 212 chemicals in people's blood or urine, 75 of which have never before been measured in the U.S. population. The term environmental chemical refers to a chemical compound or chemical element present in air, water, food, soil, dust, or other environmental media (e.g., consumer products). Biomonitoring is the assessment of human exposure to chemicals by measuring the chemicals or their metabolites in such human specimens as blood or urine. A metabolite is a chemical alteration of the original compound produced by body tissues. Blood, serum, and urine levels reflect the amount of the chemical that actually gets into the body by all routes of exposure, including ingestion, inhalation, and dermal absorption. The measurement of an environmental chemical in a person's blood or urine is a measure of exposure; it does not by itself mean that the chemical causes disease or an adverse effect. Widespread Exposure to Some Industrial Chemicals Findings in the Fourth Report indicate widespread exposure to some commonly used industrial chemicals. - Polybrominated diphenyl ethers are fire retardants used in certain manufactured products. These accumulate in the environment and in human fat tissue. One type of polybrominated diphenyl ether, BDE-47, was found in the serum of nearly all of the NHANES participants. - Bisphenol A (BPA), a component of epoxy resins and polycarbonates, may have potential reproductive toxicity. General population exposure to BPA may occur through ingestion of foods in contact with BPA-containing materials. CDC scientists found bisphenol A in more than 90% of the urine samples representative of the U.S. population. - Another example of widespread human exposure included several of the perfluorinated chemicals.
One of these chemicals, perfluorooctanoic acid (PFOA), was a byproduct of the synthesis of other perfluorinated chemicals and was a synthesis aid in the manufacture of a commonly used polymer, polytetrafluoroethylene, which is used to create heat-resistant non-stick coatings in cookware. Most participants had measurable levels of this environmental contaminant. For most of the environmental chemicals included in the Fourth Report, more research is needed to determine whether exposure at the levels reported is a cause for health concern. The Fourth National Report on Human Exposure to Environmental Chemicals 2009 and the Updated Tables, February 2011. Centers for Disease Control and Prevention. www.cdc.gov Chlordane remains in the food supply because much of the farmland was treated with chlordane in the 1960s and 1970s, and it remains in some soil for over 20 years. However, since chlordane has been banned, the levels in soils would be expected to decrease with the passage of time. Chlordane may also be found in fish and shellfish caught in chlordane-contaminated waters. Agency for Toxic Substances and Disease Registry (ATSDR). 1994. Toxicological Profile for Chlordane. Atlanta, GA: U.S. Department of Health and Human Services, Public Health Service. Polybrominated diphenyl ethers (PBDEs) are man-made chemicals found in plastics used in a variety of consumer products to make them difficult to burn. Very little is known about the health effects of PBDEs in people, but effects have been reported in animals. …We do not know whether PBDEs can cause cancer in humans. Rats and mice that ate food with decabromodiphenyl ether (one type of PBDE) throughout their lives developed liver tumors. Based on this evidence, the EPA has classified decabromodiphenyl ether as a possible human carcinogen. PBDEs with fewer bromine atoms than decabromodiphenyl ether are listed by the EPA as not classifiable as to human carcinogenicity due to the lack of human and animal cancer studies.
Agency for Toxic Substances and Disease Registry (ATSDR). 2004. Toxicological Profile for Polybrominated Biphenyls and Polybrominated Diphenyl Ethers. Atlanta, GA: U.S. Department of Health and Human Services, Public Health Service. Levels of DDT and DDE in the U.S. Population In the Fourth National Report on Human Exposure to Environmental Chemicals (Fourth Report), CDC scientists measured DDT and its metabolite DDE in the serum (a clear part of blood) of at least 1,956 participants aged 12 years and older who took part in CDC's National Health and Nutrition Examination Survey (NHANES) during 2003–2004. Prior survey periods of 1999–2000 and 2001–2002 are also included in the Fourth Report. By measuring DDT and DDE in the serum, scientists can estimate the amounts of these chemicals that have entered people's bodies. - A small portion of the population had measurable DDT. Most of the population had detectable DDE. DDE stays in the body longer than DDT, and DDE is an indicator of past exposure. - Blood serum levels of DDT and DDE in the U.S. population appear to be five to ten times lower than levels found in smaller studies from the 1970s. Finding measurable amounts of DDT and DDE in serum does not mean that the levels of these chemicals cause an adverse health effect. Biomonitoring studies of serum DDT and DDE can provide physicians and public health officials with reference values so that they can determine whether people have been exposed to higher levels of DDT and DDE than are found in the general population. Biomonitoring data can also help scientists plan and conduct research on exposure and health effects. DDT, DDE, and DDD have been found in at least 442 of the 1,613 current or former NPL sites. However, the total number of NPL sites evaluated for these substances is not known. As more sites are evaluated, the sites at which DDT, DDE, and DDD are found may increase.
This information is important because exposure to these substances may harm you and because these sites may be sources of exposure. Based on all of the evidence available, the Department of Health and Human Services has determined that DDT is reasonably anticipated to be a human carcinogen. Similarly, the International Agency for Research on Cancer (IARC) has determined that DDT is possibly carcinogenic to humans. EPA has determined that DDT, DDE, and DDD are probable human carcinogens. Agency for Toxic Substances and Disease Registry (ATSDR). 2002. Toxicological Profile for DDT, DDE, and DDD. September 2002.
0.7
medium
6
1,424
[ "intermediate understanding" ]
[ "research" ]
[ "science", "social_studies" ]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.55, "creativity": 0.35 }
fa72caa1-2b00-4c75-93a8-dafa5dca81bb
summary European public assessment
technology
technical_documentation
This is a summary of the European public assessment report (EPAR). Its purpose is to explain how the assessment done by the Committee for Medicinal Products for Veterinary Use (CVMP) on the basis of the documentation provided, led to the recommendations on the conditions of use. This summary cannot replace a face-to-face discussion with your veterinarian. If you need more information about your animal’s medical condition or treatment, contact your veterinarian. If you want more information on the basis of the CVMP recommendations, read the scientific discussion (also part of the EPAR). - What is CaniLeish? CaniLeish is a vaccine. It is available as a powder and solvent that is made up into suspension for injection. It contains Leishmania infantum excreted secreted proteins. - What is CaniLeish used for? CaniLeish is used to vaccinate dogs from six months of age to reduce the risk of developing an active infection and clinical disease after contact with Leishmania infantum. Leishmania infantum is a parasite that causes leishmaniosis. It is widespread in countries bordering the Mediterranean Sea. The parasite is transmitted from an infected dog to a non-infected dog by the bites of sand flies. Dogs that have been infected may show no signs of infection, but some do (fever, hair and weight loss, skin sores) and in the latter case, the outcome of active infection can be fatal. Infected dogs play a central role in the accidental transmission of parasites to humans. CaniLeish is to be used only in ‘Leishmania-negative’ dogs. The detection of Leishmania infection using a rapid diagnostic test is recommended before vaccination. The vaccine is given to dogs as three injections, three weeks apart, under the skin. The first injection can be given from six months of age, the second injection is given three weeks later and the third is given three weeks from the second one. Afterwards, a single ‘booster’ should be given every year to maintain the vaccine’s effect. 
Veterinarians should assess the benefit-risk balance before vaccinating dogs in areas with little or no Leishmania infantum. - How does CaniLeish work? CaniLeish is a vaccine that contains a number of proteins that are released from the Leishmania infantum parasite during its growth. Vaccines work by ‘teaching’ the immune system (the body’s natural defences) how to defend itself against a disease. When CaniLeish is given to dogs, the immune system recognises the proteins as ‘foreign’ and makes defences against them. In the future, if the animals are exposed to the Leishmania infantum parasite, the immune system will be able to respond more quickly. This will help to protect against the disease. CaniLeish contains an ‘adjuvant’ (a highly purified fraction of Quillaja saponaria) to enhance the immune response. - How has CaniLeish been studied? The safety of the vaccine was studied in two main laboratory safety studies carried out in Leishmania-free dogs (overdose and single and repeated administration) and one field trial. The vaccine was generally well tolerated, as shown by the absence of major adverse reactions. The efficacy of the vaccine was studied in one main field trial that lasted for two years, involving vaccinated and control dogs submitted to natural exposure to infection in zones where there is a high risk of infection. A number of laboratory trials where dogs were submitted to experimental infection were also presented. - What benefit has CaniLeish shown during the studies? The studies showed that the vaccine is safe for both Leishmania-negative and Leishmania-infected dogs. The benefit of the vaccination was assessed in zones with a high risk of infection, where it has been shown in Leishmania-free dogs to decrease the risk of developing an active infection and a symptomatic disease after contact with the parasite. The number of dogs developing an active infection and a symptomatic disease was significantly reduced in the vaccinated group.
The efficacy of vaccination in dogs already infected was not investigated, and vaccination of such dogs therefore cannot be recommended. In dogs developing leishmaniosis (active infection or disease) despite vaccination, continuing with vaccine injections showed no benefit. The risk of vaccine-induced infection can be excluded, since the vaccine does not contain parasites. - What is the risk associated with CaniLeish? After injection, some dogs can have moderate and temporary local reactions, such as swelling, nodule (hardening), pain on palpation or erythema (reddening). These reactions resolve spontaneously within two days to two weeks. Other temporary signs commonly seen following vaccination can also occur, such as hyperthermia (raised body temperature), apathy (lack of vitality) and digestive disorders lasting one to six days. Allergic-type reactions are uncommon, and if a dog shows signs of an allergic reaction, it should be given appropriate symptomatic treatment. After vaccination, transient antibodies against Leishmania, detected by an immunofluorescence antibody test, may appear, but these do not reflect an active infection. - What are the precautions for the person who gives the medicine or comes into contact with the animal? In case of accidental self-injection, the advice of a doctor should be sought immediately. - Why has CaniLeish been approved? The CVMP concluded that the benefits of CaniLeish outweigh the risks for the active immunisation of Leishmania-negative dogs from six months of age to reduce the risk of developing an active infection and clinical disease after contact with Leishmania infantum, and recommended that CaniLeish be given a marketing authorisation. The benefit-risk balance may be found in the scientific discussion module of this EPAR. - Other information about CaniLeish The European Commission granted a marketing authorisation valid throughout the European Union for CaniLeish to Virbac S.A. on 14 March 2011.
Information on the prescription status of this product may be found on the label/outer package. This EPAR was last updated on 14/01/2016. 07/01/2016 CaniLeish - EMEA/V/C/002232 - R/0004 - Annex I - Summary of product characteristics - Annex IIA - Manufacturing-authorisation holder responsible for batch release - Annex IIB - Conditions of the marketing authorisation - Annex IIIA - Labelling - Annex IIIB - Package leaflet Please note that the size of the above document can exceed 50 pages; you are therefore advised to be selective about which sections or pages you wish to print. For the active immunisation of Leishmania-negative dogs from six months of age, to reduce the risk of developing an active infection and clinical disease after contact with Leishmania infantum. The efficacy of the vaccine has been demonstrated in dogs submitted to multiple natural parasite exposures in zones with high infection pressure. Onset of immunity: 4 weeks after the primary vaccination course. Duration of immunity: 1 year after the last re-vaccination. Changes since initial authorisation of medicine: CaniLeish: EPAR - Procedural steps taken and scientific information after authorisation (language: SV = svenska; first published 2013-01-21; last updated 2016-01-14). Initial marketing-authorisation documents This medicine is approved for use in the European Union
0.7
medium
6
1,594
[ "algorithms", "software design" ]
[ "distributed systems" ]
[ "science", "life_skills" ]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.55, "creativity": 0.35 }
65d1f1dc-1ee5-42cb-acfb-bd45db427b69
Close Home » Romance » V
language_arts
tutorial
Cade by V.A. Dold (Le Beau, #1), December 27, 2007. Overview NOTE: Complete novel. No cliffhanger. Dual POV. New Orleans billionaire wolf shifters with plus-sized BBW mates. This is an adult paranormal romance with erotic content. It also includes vampires, a voodoo priestess, and magic. These are stand-alone books that create a series. They do not need to be read in order; of course, the story is richer if the books are read in order.  #1 Cade  #2 Simon  #3 Stefan  #4 Thomas  #4.5 Cade & Anna  #5 Lucas  Anna James is single again, finally. In her opinion, men are self-centered and will never love her for who she is: a beautiful, plus-sized woman. All except the fantasy man she’s been meeting in her dreams for five years.  She just never expected her fantasy to be a real live alpha shifter...  Cade Le Beau isn't what he seems. He’s a billionaire wolf. A shifter. He laments his missed chance six months ago to meet his fantasy woman in the flesh. Just as his second chance presents itself, his fantasy woman, his mate, is threatened by the local mob boss and her ex-husband. Now he has forty-eight hours to deal with this threat once and for all or chance losing her again.  Is it Anna who’s in danger, or the humans who unwittingly threaten her?  The heat is on the moment they lay eyes on each other. Neither age, children, horrid ex-husbands, nor mob bosses will stop this love affair.
0.7
low
3
646
[ "foundational knowledge" ]
[ "advanced concepts" ]
[]
{ "clarity": 0.7, "accuracy": 0.5, "pedagogy": 0.6, "engagement": 0.5, "depth": 0.35, "creativity": 0.4 }
af4337e8-3651-4c95-b4d6-b3d134a2a6e1
cornea lens eye focus
technology
technical_documentation
The cornea and lens of the eye focus rays of light by bending them in a process called refraction. The light rays form an image on the retina in the back of the eye, much as a camera lens focuses images onto film. The figure above shows an ideal eye with perfect focus. All the rays of light traveling through the eye focus as a single image on the retina. When the path through an eye contains imperfections, light is refracted onto the retina abnormally, and the resulting image is distorted or blurred. This condition is known as a refractive error. Many people with refractive error need corrective lenses or laser vision correction to help them see more clearly. The three most common refractive errors are: • Myopia (nearsighted) • Hyperopia (farsighted) • Astigmatism Nearsightedness occurs when the cornea is too steep or the eye is too long. Light passes through the eye but focuses before it reaches the retina, so it is out of focus on the retina, causing blurred distance vision. Patients are usually able to see objects at near but not at distance. Glasses, contact lenses, or excimer laser reshaping of the cornea diverge light rays to focus on the retina, bringing distant objects into focus. Astigmatism occurs when the cornea is more curved in one direction than the other, shaped like a football rather than a basketball. Light passes through the eye and focuses in more than one plane, causing distorted vision. In one plane, light is focused on the retina but not in another. Patients with significant astigmatism often have blurred vision at both distance and near. Most patients have some astigmatism associated with their myopia or hyperopia. Glasses, contact lenses or the excimer laser correct astigmatism by bringing the astigmatic plane into focus on the retina. Farsightedness occurs when the cornea is too flat in relation to the length of the eye. Light passes through the eye and comes into focus past the retina and is out of focus on the retina, causing blurred distance vision.
Patients with mild hyperopia are able to see both distance and near when young. As they age, they first lose near vision, then distance vision. Glasses, contact lenses, or excimer laser reshaping of the cornea converge light rays to focus on the retina, bringing distant objects into focus.
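The corrective principle described above can be made quantitative with the standard thin-lens (vergence) relation from Gaussian optics; the far-point distance in the example is hypothetical and not taken from this text.

```latex
% Lens power (in dioptres, D) is the reciprocal of focal length (in metres):
P = \frac{1}{f}
% Example: a myopic eye whose far point lies at 0.5 m needs a diverging
% spectacle lens that images distant objects (incoming vergence 0) at that
% far point:
P_{\text{lens}} = \frac{1}{-0.5\,\text{m}} = -2.0\,\text{D}
```

The negative sign is what "diverge light rays" means in the myopia paragraph: the corrective lens weakens the eye's excess focusing power so the image lands on the retina rather than in front of it.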
0.6
low
3
472
[ "programming basics" ]
[ "software development" ]
[]
{ "clarity": 0.5, "accuracy": 0.4, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.25, "creativity": 0.3 }
21040e66-e384-4785-b04a-520b6f6d7058
Our preschool class taken
technology
creative_writing
Our preschool class has taken quite a liking to the “5 Little Ducks” finger play. They have consistently asked to sing this song throughout the whole school year. Recently, in one of our centers, we gave the children a chance to retell the story in a little more of a hands-on way. There are quite a few different ways to sing this song, and in our class we sing it like this… “5 little ducks went out one day, over the hill and far away. Mother duck said ‘Quack, quack, quack,’ but only 4 little ducks came back!” We continue to repeat the song as the number goes down each time, and by the end, there are “zero little ducks” to sing about anymore! Throughout the morning, the children would stop by the table to retell the story using the illustrations in Penny Dann’s adaptation of “Five Little Ducks”. The students found it helpful to have one person flip through the pages as the other person told the story using our rubber duck family… And when someone else wanted to tell the story, we could open the book right up and help each other along as we sang the song together!
0.6
low
5
250
[ "data structures", "algorithms basics" ]
[ "architecture patterns" ]
[]
{ "clarity": 0.4, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.45, "creativity": 0.45 }
7070dfa8-c29e-4c98-94fc-e7b7934c6c13
BIM-BASED DATA MINING APPROACH
interdisciplinary
practical_application
BIM-BASED DATA MINING APPROACH TO ESTIMATING JOB MAN-HOUR REQUIREMENTS IN STRUCTURAL STEEL FABRICATION Xiaolin Hu Ming Lu Simaan AbouRizk Department of Civil & Environmental Engineering University of Alberta 9105-116 Street Edmonton, AB T6G 2W2, CANADA ABSTRACT In a steel fabrication shop, jobs from different clients and projects are generally processed simultaneously in order to streamline processes, improve resource utilization, and achieve cost-effectiveness in serving multiple concurrent steel-erection sites. Reliable quantity takeoff on each job and accurate estimates of shop fabrication man-hour requirements are crucial to plan and control fabrication operations and resource allocation on the shop floor. Building information modeling (BIM) is intended to integrate multifaceted characteristics of a building facility, but finds its application in structural steel fabrication largely limited to design and drafting. This research focuses on extending BIM's usage further to the planning and control phases in steel fabrication. Using data extracted from BIM-based models, a linear regression model is developed to provide the man-hour requirement estimate for a particular job. Actual data collected from a steel fabrication company was used to train and validate the model. 1 INTRODUCTION Steel has long been the most important component of the construction sector for its strength, durability, flexibility, efficiency, sustainability, and versatility (SteelConstruction.info 2014). The production of steel pieces, which includes a variety of operations of detailing, fitting, welding, and surface processing, is a complex and critical process for a typical steel construction project. Most steel construction projects use off-site structural steel fabrication shops to support the erection sites in order to increase productivity, gain better control over quality, and reduce the total cost of the projects (Eastman and Sacks 2008).
A steel fabrication shop usually makes use of shift work and serves multiple steel erection sites at the same time to keep the business economical. Efficient planning is essential in steel fabrication to ensure a streamlined and delay-free production process. Figure 1 shows the structure of a typical construction project (Dozzi and AbouRizk 1993). Personnel, materials, equipment, and management are consumed by the system as resources to produce the construction units. As the foundation of further planning and scheduling, estimating plays a critical role in every construction project. Quantity takeoff is the most time-consuming yet extremely important task in estimating. Subsequent project scheduling and control would benefit a great deal if quantity takeoff could be done accurately and in a timely manner. For example, it can be used to foresee and plan the construction activities during the pre-construction stage; in the process of construction, quantity takeoff can be used as a measurement of the project progress or for economic control of the project (Monteiro and Poças Martins 2013). The measurement unit for workload for steel fabrication projects can be the number of steel pieces, weight of the final product, project duration, or monetary value. With the nature of steel fabrication being labor-intensive, man-hours are normally used as the major input for the steel fabrication processes (Dozzi and AbouRizk 1993). The other resources, such as labor, equipment, and overhead costs, are also closely correlated with man-hours. Therefore, it is most suitable to set the output of quantity takeoff as the man-hours needed to complete the project. In addition, the ratio of man-hours to the overall steel weight could be an excellent measure of production efficiency, i.e. productivity.
The next section provides a literature review of construction estimating, the application of regression analysis in construction, and the current adoption of BIM in the steel industry, along with background information. The research methodology is then presented, which is followed by model validation and evaluation. As defined by the National Building Information Model Standard Project Committee (2014), BIM is "a shared knowledge resource for information about a facility forming a reliable basis for decisions during its life-cycle." The concept of BIM has been rapidly gaining popularity and acceptance since Autodesk released the BIM white paper (Autodesk 2003). Ideally, the usefulness of a BIM-based model spans the entire life-cycle of a project, from earliest conception to completion, supporting processes like planning, design, cost control, construction management, etc. This relatively new technology has also been adopted by the steel fabrication industry, but only to find its use limited mostly to design and drafting (Sattineni and Bradford 2011). Most of the advantages that BIM offers, such as increased coordination of documents and effective information communication, are not exploited. BIM-based models are utilized solely for 3D visualization in most cases. The collaborating steel fabrication company for this research uses the BIM software Tekla to build 3D models for structural visualization and generate 2D drawings for the fabrication shop. 2 BACKGROUND To perform quantity takeoff, several methods are available in the construction industry. Traditional estimators do their takeoffs manually with printed drawings. They would use colorful markers to keep track of different materials and enter relevant information onto ledger sheets or spreadsheets for calculation. Some estimators adopt simple annotation software to view electronic drawings, do color-coding, etc., but the process is still manual in essence (Vertigraph, Inc. 2004).
Special estimating software is another approach, but its input still relies heavily on human interpretation. As stated by Tiwari et al. (2009), "Model-based cost estimating is the process of integrating the object attributes from the 3D model of the designer with the cost information from database of the estimator." Adopting BIM for managing the design and construction process of projects has become widely accepted (Aranda-Mena et al. 2009). According to Monteiro and Poças Martins (2013), BIM-based quantity takeoff is "one of the potentially most important and profitable applications for BIM." Yet, it is still generally underdeveloped. A series of interviews with estimators and project managers in the steel fabrication industry revealed that the current estimating practice followed by most steel fabricators is a manual process using spreadsheets and 2D drawings generated by computer aided design (CAD) software or exported from BIM-based models. Even with the availability of BIM, estimators use it as a visualization tool to help them with reading the 2D drawings. Estimators use their experience to evaluate the project complexity and estimate the workload. This reliance on human interpretation makes the process error-prone. Artificial intelligence has long been adopted by researchers for modeling and solving problems in the construction industry. Modeling techniques such as artificial neural networks (ANN), regression models, and decision trees have been introduced to study the relationships between all kinds of factors in construction processes using historical data. Song and AbouRizk (2008) used ANN to model the relationship between influencing factors and steel drafting and fabrication productivities. Portas (1996) developed a neural network system to support labor productivity estimation for concrete formwork.
ANN has also been used to model the relationship between influencing factors and construction productivity in trades like earthmoving equipment productivity (Karshenas and Feng 1992), concrete construction productivity (Sonmez and Rowings 1998), and pipe spool fabrication and installation productivity (Lu 2001). Hu and Mohamed (2012) explored artificial intelligence planning and dynamic programming to solve the automation problem in sequencing decision making in construction. Fayek and Oduba (2005) used fuzzy logic expert systems to predict productivity of pipe rigging and welding. Smith (1999) applied regression-based models to study earthmoving productivity. Lee et al. (2013) used regression analysis to develop a quantity prediction model for reinforced concrete and bricks in education facilities. Linear regression is also used to develop condition prediction models of oil and gas pipelines in order to provide decision support to practitioners in planning for pipeline maintenance (ElAbbasy et al. 2014). The collaborating company is a leader in the steel fabrication and construction services industries, offering services of procurement, engineering, 3D modeling, fabrication, coating, module assembly, erection, etc. They use Tekla software (Tekla 2014) to create 3D models from a customer's drawings, and produce erection and fabrication drawings. As shown in Figure 2, a job is divided into one or more divisions, each of a proper size to manage and to be processed in different shops. Shops are equipped with different equipment and labor settings. For example, shop "A" is equipped with a 40-ton overhead crane, making it suitable to handle super assembly structures; shop "B" is set up to handle frames. A division is normally about 20–50 tons, consisting of multiple pieces. It is the basic unit for the estimators and project managers to do their jobs.
The estimators or fabrication shop managers use their experience to evaluate the division complexity and come up with a labour productivity value measured in man-hours per tonne, which is multiplied by the overall weight of steel to get the man-hour budget needed to complete the work. The effectiveness of this practice depends to a great extent on personal experience and knowledge, and its results may not always be consistent and reliable. The abundant information contained in BIM, such as predefined or user-defined material properties, is not exploited properly. Furthermore, job compositions of steel fabrication projects can vary greatly from one another. Even within the same job or division, the labour requirements per unit weight of different material types are generally different. For example, a piece demanding extensive welding obviously requires more man-hours than a super-assembly connected by bolts. This paper presents an approach to predicting fabrication man-hour requirements for structural steel projects by analyzing and learning from the historical schedule and cost information stored in the company's central database, for the benefit of detailed estimating. 3 RESEARCH METHODOLOGY BIM software has the functionality to create various reports of the information included in the models. Tekla Structures, used by the collaborating company, creates reports in the format of "*.xsr" files. The reports include lists of drawings, bolts, parts, etc. (Tekla 2014). Since the reports come directly from the model, the information is always accurate and reliable. This study makes use of the material parts report generated from Tekla. The essential attributes at the level of materials, as well as at the summary level of divisions, are collected and analyzed for 298 jobs and 1605 divisions completed by the collaborating steel fabricator from 2009 to 2013. 
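The contrast between the two estimating styles can be sketched in a few lines: current practice multiplies the division weight by a single experience-based man-hours-per-tonne factor, while a material-aware estimate sums per-material contributions. All names and numbers below are illustrative placeholders, not the company's data or the paper's fitted model.

```python
# Hypothetical sketch: experience-based vs. material-aware man-hour budgets.
def budget_by_experience(division_weight_t, mh_per_tonne):
    """Current practice: one judgment-based productivity factor times total weight."""
    return division_weight_t * mh_per_tonne

def budget_by_material_mix(material_terms, coefficients, intercept=0.0):
    """Material-aware estimate: sum of per-material contributions."""
    return intercept + sum(coefficients[m] * q for m, q in material_terms.items())

# Assumed feature names and coefficient values, for illustration only.
division = {"beam_length_m": 320.0, "cp_weld_m": 12.0, "bolts_qty": 450}
coeffs = {"beam_length_m": 0.2, "cp_weld_m": 10.8, "bolts_qty": 0.05}

print(budget_by_experience(42.0, 18.0))
print(round(budget_by_material_mix(division, coeffs), 1))
```

The second estimate responds to the division's material mix (e.g., heavy welding content), which a single tonnage factor cannot capture.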
Only jobs that include supply work are considered, because erection is a process almost completely separate from shop fabrication. The first stage of this research is to design a meaningful data structure to sort and organize the data at different levels, and to collect the necessary information from the large database. After historical data are collected, a regression model is developed. The basic attributes of different material types are defined as independent input variables. The man-hours needed to fabricate a division are defined as the output variable. The open-source software WEKA (The University of Waikato 2014) is chosen to complete the data mining task because of its wide collection of machine learning algorithms and various regression functions. The selection of contributing factors and the optimization of the variables through iterative experiments are all done in WEKA. At the third stage, the developed model is verified on an independent dataset and the prediction results are compared with the forecasts made by personal judgment. 4 CASE STUDY 4.1 Data Preparation A customized report template (*.rpt) is used in Tekla to create reports containing the necessary information from the BIM models. A full report contains too much information to reproduce here, so only part of its structure is shown in Figure 3 as an example. The whole process of preparing the dataset for machine learning is summarized in Figure 4. Figure 4: Data preparation framework. [Flowchart: export division reports (*.xsr) from Tekla; parse reports into the central database; collect material data; check whether fabrication is in scope; aggregate on material types; summarize at the division level; aggregate and format the data to feed into WEKA.] In the central database, the production-related data is scattered over several tables. A general illustration of the object relations is given in Figure 5. The physical steel materials are not directly associated with each division, but rather are parts of pieces and fabrication drawings. 
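The parsing step of the framework can be sketched as follows. The column layout below is purely illustrative; the real Tekla "*.xsr" report format differs, and the division and profile names are made up.

```python
import csv
import io

# Hypothetical report layout mimicking the idea of parsing an exported division
# report into typed material records for loading into a central database.
raw = io.StringIO(
    "division,material_type,profile,length_m,weight_kg,qty\n"
    "D-101,W,W310x39,12.2,475.8,1\n"
    "D-101,M-BOLT,M20x60,0.06,0.2,24\n"
)
records = []
for row in csv.DictReader(raw):
    # Convert numeric fields so later aggregation can sum them.
    row["length_m"] = float(row["length_m"])
    row["weight_kg"] = float(row["weight_kg"])
    row["qty"] = int(row["qty"])
    records.append(row)

print(len(records), records[0]["material_type"])
```

Once parsed, each record carries the division it belongs to, so the later aggregation step can group materials by division and material type.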
In order to study the productivity and schedule data at the division level, the detailed data of all the materials within the same division need to be collected and then aggregated to the division level based on material types. Divisions are assigned to different shops to be processed according to the characteristics of the division and the shops' capacities. Therefore, the shop name is included as a nominal input of the model. The unit weight per unit length (for example, "kg/m") and quantity are the two most basic attributes of steel materials. For major materials such as beams, columns, and channels, the fabrication man-hours required are positively correlated with the material length and weight, but for the various kinds of bolts and nuts used in the shop, quantity is a much more meaningful factor to consider. The length of a bolt plays no role in determining the handling time of the piece it is attached to; whether the bolt is long or short, it is the quantity that truly matters. Table 1 lists part of the 45 material types examined in this study, according to the collaborating company's information library. Materials such as miscellaneous assemblies are excluded since their amounts and fabrication requirements are too small to make a difference. Table 1: Part of material types and key attributes.

| Material Type | Key Attribute |
|---|---|
| W – Wide Flange Beams | Length |
| L – Equal or Unequal Legs | Length |
| C – Channels | Length |
| HS – Hollow Steel Sections | Length |
| STD.PIPE – Standard Pipe | Length |
| M-BOLT – M Type Bolts | Quantity |
| H-NUT – Hex Nuts | Quantity |

The basic attributes were collected at the level of each material type. Then the total quantity or length, and the weight, are summarized at the division level. The characteristics of one of the divisions to be fed into WEKA are shown in Table 2. Table 2: Sample data of a division. 4.2 Model Selection It is generally believed that the more materials a job requires, the more man-hours it will cost. 
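The length-versus-quantity distinction drawn above can be sketched as a small aggregation routine. The material classes follow Table 1's pattern; the sample records are invented for illustration.

```python
# Division-level aggregation sketch: linear materials contribute total length,
# fasteners contribute counts, and total weight is always summed.
LENGTH_TYPES = {"W", "L", "C", "HS", "STD.PIPE"}
QUANTITY_TYPES = {"M-BOLT", "H-NUT"}

def summarize_division(materials):
    features = {}
    for m in materials:
        t = m["type"]
        if t in LENGTH_TYPES:
            features[t] = features.get(t, 0.0) + m["length_m"]
        elif t in QUANTITY_TYPES:
            features[t] = features.get(t, 0) + m["qty"]
    features["div_weight_kg"] = sum(m["weight_kg"] for m in materials)
    return features

mats = [
    {"type": "W", "length_m": 12.2, "weight_kg": 475.8, "qty": 1},
    {"type": "W", "length_m": 9.0, "weight_kg": 351.0, "qty": 1},
    {"type": "M-BOLT", "length_m": 0.06, "weight_kg": 0.2, "qty": 24},
]
print(summarize_division(mats))
```

Note how the two beams collapse into one total length feature while the bolts contribute only their count, matching the key-attribute column of Table 1.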
Accordingly, linear regression could be a suitable technique for quantitative man-hour prediction. Various types of models were investigated during the model building and training stage. To get statistically meaningful results, 10 runs of 10-fold stratified cross-validation were performed on the training dataset (production data from 2009 to 2012) using different schemes. Ten iterations of 10-fold cross-validation mean 100 calls of each scheme with the same dataset (Bouckaert et al. 2014). An evaluation summary of the different models is shown in Table 3. As expected, the RBF neural network model produces unsatisfactory results given the current problem definition and the dataset available. SMO regression's attempt to exclude outliers leads to a lower relative absolute error, and its performance can be considered statistically as good as linear regression. However, it is considerably more complex than linear regression, which would likely lead to higher implementation cost and lower user acceptance. Therefore, linear regression is selected as the solution to the quantity take-off problem. The linear regression implementation in WEKA uses the Akaike criterion (Burnham and Anderson 2002) for model selection. A statistical selection procedure is also incorporated in WEKA to determine the best combination of independent input variables. The selected factors and the regression parameters are given in Equation (1) in Section 4.3. Although ANN models are generally popular in the construction industry, they are more suitable for non-linear problems. An RBF neural network was also tested in this study for comparison, with the number of clusters set to the number of shops. Moreover, a Support Vector Machine with the Sequential Minimal Optimization (SMO) algorithm (Platt 1998; Shevade et al. 2000) was tested. 
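The validation protocol described above can be sketched in code. Only the fold bookkeeping is shown; the WEKA schemes themselves are not modeled here, and the fold assignment is a plain random split rather than WEKA's stratified variant.

```python
import random

# Protocol sketch: 10 runs of 10-fold cross-validation means 100 train/evaluate
# calls per scheme on the same dataset, each with a fresh shuffle per run.
def repeated_kfold(n_samples, k=10, runs=10, seed=0):
    rng = random.Random(seed)
    for _ in range(runs):
        idx = list(range(n_samples))
        rng.shuffle(idx)
        folds = [idx[i::k] for i in range(k)]
        for i in range(k):
            test = folds[i]
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            yield train, test

splits = list(repeated_kfold(1343))  # the 1343 training divisions from 2009-2012
print(len(splits))  # → 100
```

Each scheme under comparison is trained and scored once per split, and the 100 resulting error values support a statistical comparison between schemes.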
The Support Vector Regression (SVR) method defines an objective function on the training set with a constraint threshold, and the optimization objective is to find the best-fit function while excluding the least outlying training data (Smola and Schölkopf 2004). Table 3: Evaluation comparison of various models. 4.3 Model Validation and Evaluation The dataset from 2009 to 2012, which accounts for 248 jobs and 1343 divisions out of the total 298 jobs and 1605 divisions, is used to train the model. Data from 2013 is reserved for testing. To validate the model, 10-fold cross-validation was performed on the 2009-2012 dataset. Figure 6 shows the visualization of classifier errors of the cross-validation results. The horizontal axis represents actual fabrication man-hours; the vertical axis represents the model-predicted man-hours. The closer the data points lie to the 45-degree line, the closer the forecasts are to the actual values. The method of tracking and recording actual hours on the floor is continually improving, and the historical data can contain errors due to inaccurate records, for instance, working hours assigned to the wrong division number. The overall convergence in Figure 6 demonstrates the validity of the trained model. The developed best-fit model is shown in Equation (1) below.

divAct = 0.015×divWt + 0.2036×W − 0.9271×WT + 0.1708×C + 6.7115×MC − 0.2687×L + 0.7095×HS + 0.8187×RD.HSS − 2.4378×STD.PIPE + 50.4317×XS.PIPE + 11.5089×XXS.PIPE + 0.1645×PLT + 0.1334×HTB + 2.1164×CHECK.PL − 0.29×FL.WASHER − 0.6684×BV.WASHER − 1.7614×NS.STUD + 10.776×CP.WELD + 1.5694×PP.WELD + 0.4813×MBOLT.HEX + 340.4215×FabA + 15.054 (1)

Note: divAct = division actual MHrs; divWt = division weight; W = length of wide flange beams; WT = length of structural tees from W shapes; C = length of channels; MC = length of miscellaneous channels; L = length of equal or unequal legs; HS = length of square or rectangular hollow steel sections; RD.HSS = length of round hollow steel sections; STD.PIPE = length of standard pipes; XS.PIPE = length of extra strong pipes; XXS.PIPE = length of extra extra strong pipes; PLT = length of plates; CHECK.PL = length of checker plates; NS.STUD = length of Nelson S3L shear connectors and H4L headed concrete anchors; CP.WELD = length of complete penetration weld; PP.WELD = length of partial penetration weld; FL.WASHER = quantity of flat washers; BV.WASHER = quantity of beveled washers; MBOLT.HEX = quantity of hex head machine bolts; FabA = fabricated in shop "A". Next, the 2013 data was used as a test set to evaluate the model. Figure 7 shows the correlation between the actual fabrication man-hours and the values predicted by the model. The better convergence of the data points in 2013 compared to the historical data can be attributed to the improved tracking and recording of actual hours. Limitations in the data are also suggested in the figure. For some divisions, the model tends to predict more work than the actual man-hours recorded. One reason may be that a worker working on multiple divisions in one day is likely to fail to precisely track the number of hours spent on each division. Nevertheless, the figure clearly demonstrates that the trained model can be considered satisfactorily accurate in predicting the fabrication man-hours. 
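Applying an Equation (1)-style model at estimating time reduces to a dot product of division-level features with the fitted coefficients plus an intercept. The coefficient and feature values below are placeholders for illustration, not the paper's fitted parameters.

```python
# Sketch of scoring a division with a fitted linear model of the Equation (1) form.
# Features absent from the division simply contribute zero.
def predict_division_mh(features, coefficients, intercept):
    return intercept + sum(coefficients.get(name, 0.0) * value
                           for name, value in features.items())

# Hypothetical fitted parameters and one division's aggregated features.
coeffs = {"divWt": 0.015, "W": 0.20, "CP.WELD": 10.8, "MBOLT.HEX": 0.48}
features = {"divWt": 42000.0, "W": 310.0, "CP.WELD": 6.5, "MBOLT.HEX": 120}

print(round(predict_division_mh(features, coeffs, intercept=15.0), 1))
```

Because the model is linear, each term can be read as that material's contribution to the division's man-hour budget, which helps explain a prediction to an estimator.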
The shop-budgeted man-hours are also compared with the actual fabrication man-hours. Shop budgets are the numbers produced by estimators following the current practice, which relies on the overall steel weight and a man-hours-per-ton factor from experience. The evaluation results can be found in Table 4. Table 4: Evaluation summary of linear regression vs. experience. The forecast results are closer to the actual values than the estimates based on professional judgment and experience. The increased accuracy in quantity takeoff will help optimize the company's resource allocation and reduce the risk of cost and schedule overruns. More importantly, the model can serve as decision support or guidance for someone with little or no experience, especially when no detailed estimating handbook or manual, except a procedure guideline, is available. The estimating process can be accelerated, and managers can be better assured. 5 CONCLUSIONS Structural steel fabrication is an industry with characteristics that make it different from traditional construction and manufacturing. The use of BIM is on the rise not only in general construction but also in structural steel fabrication. However, the functions and advantages of BIM-based models are limited to design and drafting in most cases. This research aims to develop an approach that extends BIM's usage further into estimating and planning. The performance information recorded alongside historical BIM data is important and useful for the company's future projects. This study develops a linear regression model to predict man-hour quantities for steel fabrication projects in the planning phase. The proposed methodology is implemented and validated, showing the approach to be feasible and suitable for supporting project estimating and planning. The results of this study show much promise for advancing BIM in steel fabrication planning and control. 
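The comparison in Table 4 rests on error metrics such as the relative absolute error reported by WEKA, which can be computed directly. The division values below are hypothetical, used only to show the calculation.

```python
# Relative absolute error: total absolute error of the predictor, normalized by
# the total absolute error of always predicting the mean of the actuals.
def relative_absolute_error(predicted, actual):
    mean_actual = sum(actual) / len(actual)
    num = sum(abs(p - a) for p, a in zip(predicted, actual))
    den = sum(abs(mean_actual - a) for a in actual)
    return num / den

actual = [820.0, 455.0, 1210.0, 640.0]   # hypothetical actual man-hours
model = [790.0, 470.0, 1150.0, 665.0]    # hypothetical model predictions
budget = [900.0, 400.0, 1000.0, 700.0]   # hypothetical experience-based budgets

print(relative_absolute_error(model, actual) < relative_absolute_error(budget, actual))
```

A value below 1.0 means the predictor beats the naive mean baseline; comparing the two scores ranks the model against the experience-based budgets on the same footing.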
The combination of BIM with the current scheduling and fabrication process can be investigated in future studies. A target in the next phase of this research is to develop an integrated system of estimating and scheduling, pushing the application of BIM further into the planning stage in steel fabrication shops. Related problems include how to determine the priority of various steel elements in scheduling, the complexity of the specific fabrication, and the optimal allocation of the various resources on the shop floor. The models were developed using production data from the collaborating company, and are therefore customized to the company's information management system (IMS). Another steel fabrication company may have different ways of tracking data and implementing an IMS, but the methodology and framework of this study can still be used to develop quantity take-off prediction models. ACKNOWLEDGMENTS This project was funded by the Natural Sciences and Engineering Research Council of Canada under the Collaborative Research and Development (CRD) Grants program. The authors wish to thank Darrell Mykitiuk and Jim Kanerva of Waiward Steel Fabricators Ltd. for their support. REFERENCES Aranda-Mena, G., J. Crawford, A. Chevez, and T. Froese. 2009. "Building Information Modelling Demystified: Does It Make Business Sense to Adopt BIM?" International Journal of Managing Projects in Business 2 (3): 419–34. doi:10.1108/17538370910971063. Autodesk. 2003. Building Information Modeling. San Rafael, CA: Autodesk, Inc. http://www.laiserin.com/features/bim/autodesk_bim.pdf. Bouckaert, R. R., E. Frank, M. Hall, R. Kirkby, P. Reutemann, A. Seewald, and D. Scuse. 2014. "WEKA Manual for Version 3-6-11". The University of Waikato. Burnham, K. P., and D. R. Anderson. 2002. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. Springer. Dozzi, S. P., and S. M. AbouRizk. 1993. Productivity in Construction. National Research Council Canada. 
http://archive.nrc-cnrc.gc.ca/obj/irc/doc/pubs/nrcc37001.pdf. Eastman, C., and R. Sacks. 2008. "Relative Productivity in the AEC Industries in the United States for On-Site and Off-Site Activities." Journal of Construction Engineering and Management 134 (7): 517–26. doi:10.1061/(ASCE)0733-9364(2008)134:7(517). El-Abbasy, M., A. Senouci, T. Zayed, F. Mirahadi, and L. Parvizsedghy. 2014. "Condition Prediction Models for Oil and Gas Pipelines Using Regression Analysis." Journal of Construction Engineering and Management 0 (0): 04014013. doi:10.1061/(ASCE)CO.1943-7862.0000838. Fayek, A., and A. Oduba. 2005. "Predicting Industrial Construction Labor Productivity Using Fuzzy Expert Systems." Journal of Construction Engineering and Management 131 (8): 938–41. doi:10.1061/(ASCE)0733-9364(2005)131:8(938). Hu, D., and Y. Mohamed. 2012. "Automating Fabrication Sequencing for Industrial Construction." Gerontechnology 11 (2). doi:10.4017/gt.2012.11.02.318.749. Karshenas, S., and X. Feng. 1992. "Application of Neural Networks in Earthmoving Equipment Production Estimating." In Computing in Civil Engineering and Geographic Information Systems Symposium, 841–47. ASCE. http://cedb.asce.org/cgi/WWWdisplay.cgi?9201884. Lee, J. K., B. Y. Kim, J. Y. Kim, T. H. Kim, and K. Son. 2013. "A Quantity Prediction Model for Reinforced Concrete and Bricks in Education Facilities Using Regression Analysis." Journal of the Korea Institute of Building Construction 13 (5): 506–12. doi:10.5345/JKIBC.2013.13.5.506. Lu, M. 2001. "Productivity Studies Using Advanced ANN Models". Ph.D. thesis, Department of Civil and Environmental Engineering, University of Alberta, Edmonton, Alberta, Canada. http://www.collectionscanada.gc.ca/obj/s4/f2/dsk1/tape4/PQDD_0012/NQ60322.pdf. Monteiro, A., and J. P. Martins. 2013. "A Survey on Modeling Guidelines for Quantity Takeoff-Oriented BIM-Based Design." Automation in Construction 35 (November): 238–53. doi:10.1016/j.autcon.2013.05.005. 
National Building Information Model Standard Project Committee. 2014. "What Is a BIM?" National BIM Standard - United States. Accessed April 29. http://www.nationalbimstandard.org/faq.php#faq1. Platt, J. C. 1998. Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines. Technical Report MSR-TR-98-14. Microsoft Research. Portas, J. B. 1996. "Estimating Concrete Formwork Productivity". Master of Science thesis, Department of Civil and Environmental Engineering, University of Alberta, Edmonton, Alberta, Canada. Sattineni, A., and R. H. Bradford. 2011. "Estimating with BIM: A Survey of US Construction Companies." Proceedings of the 28th International Symposium on Automation and Robotics in Construction, 564–69. Shevade, S. K., S. S. Keerthi, C. Bhattacharyya, and K. Murthy. 2000. "Improvements to the SMO Algorithm for SVM Regression." IEEE Transactions on Neural Networks 11 (5): 1188–93. doi:10.1109/72.870050. Smith, S. 1999. "Earthmoving Productivity Estimation Using Linear Regression Techniques." Journal of Construction Engineering and Management 125 (3): 133–41. doi:10.1061/(ASCE)0733-9364(1999)125:3(133). Smola, A. J., and B. Schölkopf. 2004. "A Tutorial on Support Vector Regression." Statistics and Computing 14 (3): 199–222. doi:10.1023/B:STCO.0000035301.49549.88. Song, L., and S. M. AbouRizk. 2008. "Measuring and Modeling Labor Productivity Using Historical Data." Journal of Construction Engineering and Management 134 (10): 786–94. doi:10.1061/(ASCE)0733-9364(2008)134:10(786). Sonmez, R., and J. Rowings. 1998. "Construction Labor Productivity Modeling with Neural Networks." Journal of Construction Engineering and Management 124 (6): 498–504. doi:10.1061/(ASCE)0733-9364(1998)124:6(498). SteelConstruction.info. 2014. "The Case for Steel." Accessed April 27. http://www.steelconstruction.info/The_case_for_steel. Tekla. 2014. "Tekla." Accessed April 30. http://www.tekla.com/us. The University of Waikato. 2014. "Weka 3 - Data Mining with Open Source Machine Learning Software in Java." Accessed April 30. http://www.cs.waikato.ac.nz/ml/weka/. Tiwari, S., J. Odelson, A. Watt, and A. Khanzode. 2009. "Model Based Estimating to Inform Target Value Design." AECbytes "Building the Future" Article. http://www.aecbytes.com/buildingthefuture/2009/ModelBasedEstimating.html. Vertigraph, Inc. 2004. "Automating the Takeoff and Estimating Process." http://www.vertigraph.com. AUTHOR BIOGRAPHIES XIAOLIN HU is an M.Sc. student in the Department of Civil and Environmental Engineering at the University of Alberta. She holds a B.Eng. degree in Computer Science from Harbin Institute of Technology in Harbin, China, with a concentration in Information Security. Her email address is firstname.lastname@example.org. MING LU is an Associate Professor in the Department of Civil & Environmental Engineering at the University of Alberta, a position he assumed in 2010. He is committed to achieving excellence in research and teaching in the areas of construction engineering and project management, and is the recipient of the Fiatech 2013 Superior Technical Achievement Recognition (STAR) Award. His research interests are construction surveying and automation, and operations simulation and scheduling in construction. His email address is email@example.com. SIMAAN ABOURIZK is a Professor in the Department of Civil and Environmental Engineering at the University of Alberta and an Executive Board member of the Construction Research Institute of Canada. He received his Ph.D. in Construction Engineering and Management from Purdue University in 1990, and his M.Sc. in Construction Engineering and Management from Georgia Institute of Technology in 1985. The overall goal of his research is to develop a better framework for the planning and control of construction projects through advancements in simulation. 
His email address is firstname.lastname@example.org.
0.65
medium
6
7,388
[ "intermediate understanding" ]
[ "research" ]
[ "science", "technology", "arts_and_creativity" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.65, "creativity": 0.35 }
e7554b5d-c0e8-40a0-805f-ef20f85feaa4
Introduction: What is HTML
technology
historical_context
Introduction: What is HTML? HTML stands for HyperText Markup Language and it is the basic language of the web. It is made up of elements and attributes that allow you to create semantic and meaningful markup. Here we will see how it is used to make websites. HTML is a markup language that defines how content is structured on a webpage. We use HTML elements to give meaning to pieces of content, such as headings, paragraphs, lists, etc. And we use attributes such as “class” or “id” to label elements so the browser can target them and they can be styled with CSS later on. It has been around since the early 1990s and has seen incremental updates over the years. What are the Components of an HTML Document? HTML documents are the backbone of any modern website. They are one of the main ways that we express ourselves and transmit our ideas to the world. These HTML documents are constantly evolving into a more seamless experience for both viewers and creators. The HTML document breaks down into three main parts: head tags, body tags, and HTML elements. HTML Elements: These are the building blocks of any web page and include things like images, hyperlinks, tables, forms, and other types of content you can interact with on a webpage. Head Tags: Common tags found in the head section are TITLE, META, LINK, STYLE, SCRIPT and BASE. Head tags provide information about how your document should be interpreted by web browsers or other programs that understand them. Body Tags: The body contains everything that actually appears on the page – whether it’s text or an image or another element that you want to appear somewhere on your page. Today, HTML has evolved so much that it’s not just about providing the necessary markup for web pages. It has also become the basis for how we access and interact with the internet in general, both on desktops and mobile devices. HTML tags are used to communicate with the web browser. 
HTML tags are one of the most important aspects of web design. They’re used to communicate with the web browser and tell it how to display the content. This is done through different types of tags that indicate what kind of content they enclose (things like paragraphs, headings, lists, and images). There are tags that control what text looks like, where it is displayed, how it is formatted, what links to include, and what images or other multimedia elements like video or sound to embed. HTML tags are text characters that are meant to be interpreted by the browser; they mark up both the content and the document structure of a web page. Most HTML elements follow the same basic pattern: they require a start tag and an end tag (known as an opening and closing tag). The opening tag tells the browser what type of element you want to use, and it is also where you provide attributes for that element. The end tag tells the browser when it has reached the end of that section. Know more in this HTML for beginners blog. Tags that can be used to mark up different parts of a web page include headings, paragraphs, links, images, and ordered or unordered lists. Start Learning HTML Today to Build Amazing Websites Tomorrow Online tutorials are a good way to start learning HTML coding. They provide a step-by-step guide on how to use tags and codes and what they do in detail.
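The opening/closing pattern described above can be made concrete with a tiny fragment. The element and class names here are just common examples, not from any particular site:

```html
<!-- Each element has an opening tag, content, and a closing tag.
     Attributes such as class sit inside the opening tag. -->
<h1 class="page-title">My First Page</h1>
<p>This paragraph is wrapped in an opening &lt;p&gt; tag and a closing &lt;/p&gt; tag.</p>
<ul>
  <li>An unordered list item</li>
  <li>Another list item</li>
</ul>
```

Notice that the literal angle brackets shown inside the paragraph text are written as the entities &lt; and &gt;, so the browser displays them instead of interpreting them as new tags.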
0.6
medium
3
713
[ "programming basics" ]
[ "software development" ]
[ "science", "language_arts" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.3 }
5ba227aa-34a0-481c-8a2b-66d652e7feb7
German railway operator Deutsche
interdisciplinary
historical_context
German railway operator Deutsche Bahn (DB) stirred up a storm after deciding to name one of its new Inter City Express (ICE) trains after Anne Frank, ignoring the fact that she—like many other Jews at the time—perished after being hauled to a Nazi extermination camp via the same method of transportation. The name was chosen after DB, in mid-September, called on its customers to choose the names for its new generation of inter-city express trains. Within a month, 19,400 suggestions for more than 2,500 names were received—one of them being Anne Frank. A naming committee established by the company then selected names for the 25 new trains from a list of the most popular suggestions. DB explained that its goal was to further eternalize Frank's memory in German society. Antje Neubauer, DB head of public relations and member of the naming committee, was quoted by the British tabloid Daily Mail as saying that the name was picked for its representation of tolerance. "It stands for tolerance and for a peaceful co-existence of different cultures, which in times like these is more important than ever," she said. Other names chosen for the trains include famous Germans such as theoretical physicist Albert Einstein, philosopher Karl Marx and composer Ludwig van Beethoven. "As different as the selected personalities are, they have one thing in common: they were curious about the world," said naming committee member and gender history professor, Gisela Mettele. Many, though, took to social media to condemn the decision, with Iris Erberl, a German politician from the conservative Christian-Social Union party, calling the decision "disrespectful." One user wrote: "Am I actually the only one who finds it strange to call a train of the legal successor of the Reichsbahn Anne Frank?" The Deutsche Reichsbahn was Germany's national railway company that operated during the Third Reich, and the predecessor of DB. 
It gained infamy for transporting Jews and other victims of the Holocaust to Nazi concentration and extermination camps. "The legal successor to the Reichsbahn, which does not compensate forced laborers to this day, baptizes an ICE train Anne Frank. As a historian I unfortunately find this terribly wrong," wrote another scholar. The new trains, including the one named after Anne Frank, are set to make their first journeys in December. Over the past month, complaints have been raised in different parts of the world over the use of Anne Frank's name. An American website that sells Halloween costumes for children, for instance, came under fire for including an Anne Frank outfit in its range. Moreover, Italian police and soccer authorities this month opened investigations after Lazio fans posted anti-Semitic stickers of Anne Frank wearing the jersey of their top-flight city rivals AS Roma.
0.65
medium
6
583
[ "intermediate understanding" ]
[ "research" ]
[ "science", "technology", "social_studies" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.45, "creativity": 0.45 }
6d24a180-01c2-4bf8-9e13-abe646254f36
It's fascinating you're interested
technology
code_implementation
It's fascinating that you're interested in teaching web development concepts through the lens of advanced mathematics like Class Field Theory! While Class Field Theory itself is a highly abstract and complex area of number theory, translating its essence into a web development lesson requires a creative approach. Since Class Field Theory doesn't directly map to typical web development components (like buttons or forms), we'll use it as a *metaphor* or *inspiration* for teaching web development principles, particularly around structure, relationships, and building complex systems from simpler parts. Let's frame this as teaching **"The Architecture of Web Applications: Inspired by Class Field Theory."** This allows us to explore how seemingly disparate parts of the web (HTML, CSS, JavaScript) work together in a structured, almost algebraic way to create a cohesive and functional whole, much like how Class Field Theory relates different number fields. --- ## The Architecture of Web Applications: Inspired by Class Field Theory Welcome to a journey into the heart of web development, where we'll explore how the fundamental building blocks of the internet – HTML, CSS, and JavaScript – come together to create the dynamic experiences you interact with every day. Our guide on this journey? The profound and beautiful **Class Field Theory** from mathematics. ### A Glimpse into the Past: The Genesis of Structure Before we dive into the code, let's appreciate the historical context that shaped both mathematics and the web. **Mathematics: The Quest for Order** For centuries, mathematicians have sought to understand the underlying structure of numbers. Early number theory focused on properties of integers, like primality and divisibility. However, as mathematicians delved deeper, they encountered increasingly complex algebraic structures. One of the most significant breakthroughs came with the study of **algebraic number fields**. 
These are extensions of the familiar rational numbers ($\mathbb{Q}$) that include roots of polynomials with rational coefficients. Think of them as richer number systems with more intricate relationships between their elements. **The Problem:** While these new number fields offered exciting possibilities, understanding their arithmetic (like how prime numbers behave within them) proved incredibly challenging. It was like trying to navigate a new continent without a map. **The Solution: Class Field Theory** Enter **Class Field Theory**, a monumental achievement in number theory, largely developed in the late 19th and early 20th centuries by mathematicians like **David Hilbert**, **Philipp Furtwängler**, and **Emil Artin**. **Hilbert's Vision:** Hilbert, a towering figure in mathematics, posed a series of problems in 1900 that set the agenda for much of 20th-century mathematics. Among these were the ninth and twelfth problems, which called for a theory connecting algebraic number fields to their "class groups" – structures that capture information about ideal factorization. **The Core Idea:** Class Field Theory provides this crucial connection. It establishes a deep and surprising relationship between the arithmetic of an algebraic number field and the structure of its **abelian extensions**. An abelian extension is a specific type of field extension where the "Galois group" (which describes the symmetries of the extension) is abelian (meaning its elements commute). In essence, Class Field Theory states that every algebraic number field has a unique "maximal unramified abelian extension" that is entirely determined by the arithmetic of the base field itself, specifically by its class group. This is akin to saying that the fundamental properties of a number system dictate its most natural and extensive "abelian" extensions. It provides a way to understand the "geometry" of number fields through their arithmetic. 
**Why was it needed?** It was crucial for unifying different branches of number theory, solving long-standing problems (such as classifying all abelian extensions of $\mathbb{Q}$, the content of the Kronecker-Weber theorem), and providing a framework for understanding the distribution of prime numbers in these extended number systems.

---

### The Web: Building Complex Systems from Simple Parts

Now, let's draw a parallel to the world of web development. Just as mathematicians built complex theories from fundamental number systems, we build intricate web applications from simple, yet powerful, languages:

* **HTML (HyperText Markup Language):** The **structure** and **content** of our web pages. It's like the fundamental number field itself, the raw material.
* **CSS (Cascading Style Sheets):** The **presentation** and **layout**. This is where we start defining relationships and properties, much like how we study the arithmetic of a number field.
* **JavaScript:** The **interactivity** and **dynamic behavior**. This adds a layer of complexity and allows for sophisticated interactions, akin to exploring extensions and their properties.

Our goal is to see how these elements, when combined with intention and structure, create a cohesive and functional web application, mirroring the elegance of Class Field Theory's connections.

---

### Our Web Application: "The Number Field Explorer"

Let's imagine we're building a simplified web application that visually represents concepts related to number fields. We'll focus on the structure and how different parts relate, inspired by the fundamental theorems of Class Field Theory.

**The Analogy:**

* **Base Field ($\mathbb{Q}$):** Our core HTML structure.
* **Class Group:** The CSS rules that define relationships and properties.
* **Abelian Extensions:** JavaScript functionalities that interact with and extend the base structure based on these CSS-defined relationships.

---

### 1. HTML: The Foundation (The Base Field)

We start with semantic HTML to create a clear and accessible structure. This is our base field, the fundamental building block.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Number Field Explorer</title>
  <!-- Link to our stylesheet -->
  <link rel="stylesheet" href="styles.css">
</head>
<body>
  <header>
    <h1>Number Field Explorer</h1>
    <p>An educational tool inspired by Class Field Theory</p>
  </header>

  <main>
    <section id="field-info">
      <h2>The Base Field</h2>
      <article class="field-description">
        <h3>Rational Numbers ($\mathbb{Q}$)</h3>
        <p>The familiar set of numbers that can be expressed as a fraction p/q,
           where p and q are integers and q is not zero. This is our foundational
           number system.</p>
        <p><strong>Key Properties:</strong> Commutative, associative, distributive.
           It forms a field.</p>
      </article>
    </section>

    <section id="extensions-overview">
      <h2>Abelian Extensions</h2>
      <p>These are extensions of our base field that exhibit specific, well-behaved
         symmetries. Class Field Theory connects these to the arithmetic of the
         base field.</p>
      <div class="extension-list">
        <!-- Placeholder for dynamically added extensions -->
        <article class="extension-item" data-field-type="quadratic">
          <h3>Quadratic Field Extension</h3>
          <p>An extension of $\mathbb{Q}$ by adjoining the square root of a
             non-square integer.</p>
          <button class="explore-btn" data-target="quadratic-details">Explore Properties</button>
        </article>
        <article class="extension-item" data-field-type="cyclotomic">
          <h3>Cyclotomic Field Extension</h3>
          <p>An extension of $\mathbb{Q}$ by adjoining roots of unity.</p>
          <button class="explore-btn" data-target="cyclotomic-details">Explore Properties</button>
        </article>
      </div>
    </section>

    <section id="details-section" class="hidden">
      <h2>Extension Details</h2>
      <div id="quadratic-details" class="extension-detail-content">
        <h3>Quadratic Field Details</h3>
        <p>Example: $\mathbb{Q}(\sqrt{2})$. Numbers are of the form $a + b\sqrt{2}$,
           where $a, b \in \mathbb{Q}$.</p>
        <p><strong>Class Group Connection:</strong> The structure of the class group
           of $\mathbb{Q}(\sqrt{2})$ influences its arithmetic properties.</p>
      </div>
      <div id="cyclotomic-details" class="extension-detail-content">
        <h3>Cyclotomic Field Details</h3>
        <p>Example: $\mathbb{Q}(\zeta_5)$, where $\zeta_5$ is a primitive 5th root
           of unity.</p>
        <p><strong>Class Field Theory Insight:</strong> The structure of these fields
           is deeply tied to the class groups of simpler fields.</p>
      </div>
    </section>
  </main>

  <footer>
    <p>&copy; 2023 Number Theory Web Dev Project. Inspired by the elegance of Class Field Theory.</p>
  </footer>

  <!-- Link to our JavaScript file -->
  <script src="script.js"></script>
</body>
</html>
```

**Explanation:**

* **`<!DOCTYPE html>` and `<html>`:** Standard declarations for an HTML5 document.
* **`<head>`:** Contains meta-information.
* **`<meta charset="UTF-8">`:** Ensures proper character encoding.
* **`<meta name="viewport" ...>`:** Crucial for responsive design, telling the browser to set the page width to the device width and scale it appropriately.
* **`<title>`:** The text that appears in the browser tab.
* **`<link rel="stylesheet" href="styles.css">`:** Links our HTML to the CSS file, establishing the "properties" of our elements.
* **`<body>`:** Contains the visible content.
* **`<header>`, `<main>`, `<footer>`:** Semantic elements that define the main regions of the page, improving accessibility and SEO.
* **`<section>`:** Organizes content into logical groups.
* **`<article>`:** Represents a self-contained piece of content, like a specific field description.
* **`<h2>`, `<h3>`, `<p>`:** Heading and paragraph tags for content structure.
* **`<strong>`:** Used for semantically important text.
* **`<div class="extension-list">` and `<div class="extension-detail-content">`:** Container elements that we'll style and manipulate with CSS and JavaScript.
* **`<button class="explore-btn">`:** Interactive elements. The `data-target` attribute is a custom data attribute that will be used by JavaScript to link buttons to specific detail sections.
* **`class="hidden"`:** A utility class we'll define in CSS to initially hide detail sections.
* **`<script src="script.js">`:** Links our HTML to the JavaScript file, enabling dynamic behavior.

---

### 2. CSS: The Relationships and Properties (The Class Group)

Now, we define the "arithmetic" and "structure" of our page using CSS. This is where we establish the relationships between elements, their visual properties, and how they respond to different contexts.

```css
/* styles.css */

/* --- Core Variables (Inspired by fundamental constants) --- */
:root {
  --primary-color: #0056b3;    /* A deep, foundational blue */
  --secondary-color: #f8f9fa;  /* Light background for contrast */
  --accent-color: #28a745;     /* A green for interactive elements */
  --text-color: #333;
  --header-footer-bg: #e9ecef;
  --border-color: #dee2e6;
  --spacing-unit: 1rem;
}

/* --- Global Styles & Typography --- */
body {
  font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif; /* Modern, readable font stack */
  line-height: 1.6;
  color: var(--text-color);
  background-color: var(--secondary-color);
  margin: 0; /* Reset default margin */
  padding: 0;
  display: flex;
  flex-direction: column;
  min-height: 100vh; /* Ensure footer stays at bottom */
}

h1, h2, h3 {
  color: var(--primary-color);
  margin-bottom: var(--spacing-unit);
}

h1 { font-size: 2.5rem; }
h2 { font-size: 1.8rem; }
h3 { font-size: 1.3rem; }

p { margin-bottom: var(--spacing-unit); }

/* --- Semantic Layout Styling --- */
header, footer {
  background-color: var(--header-footer-bg);
  padding: calc(var(--spacing-unit) * 1.5) var(--spacing-unit);
  text-align: center;
  border-bottom: 1px solid var(--border-color);
}

footer {
  margin-top: auto; /* Pushes footer to the bottom in a flex column */
  border-top: 1px solid var(--border-color);
  border-bottom: none;
  font-size: 0.9rem;
  color: #6c757d;
}

main {
  flex-grow: 1; /* Allows main to take up available space */
  padding: calc(var(--spacing-unit) * 2) var(--spacing-unit);
  max-width: 1200px; /* Limit content width for readability */
  margin: 0 auto;    /* Center the main content */
  width: 100%;       /* Take full width up to max-width */
}

section {
  margin-bottom: calc(var(--spacing-unit) * 3);
  padding: calc(var(--spacing-unit) * 2); /* multiplication must be wrapped in calc() */
  background-color: #fff;
  border-radius: 8px;
  box-shadow: 0 2px 5px rgba(0,0,0,0.1); /* Subtle shadow for depth */
}

section h2 {
  border-bottom: 2px solid var(--primary-color);
  padding-bottom: calc(var(--spacing-unit) / 2);
}

/* --- Specific Element Styling (The "Arithmetic" of our page) --- */

/* Styling for individual field descriptions */
.field-description {
  border-left: 4px solid var(--primary-color);
  padding-left: var(--spacing-unit);
  margin-top: var(--spacing-unit);
  background-color: var(--secondary-color); /* Slightly different background */
  padding: var(--spacing-unit) calc(var(--spacing-unit) * 1.5);
  border-radius: 4px;
}

/* Styling for the list of extensions */
.extension-list {
  display: grid; /* Use CSS Grid for layout */
  grid-template-columns: repeat(auto-fit, minmax(280px, 1fr)); /* Responsive columns */
  gap: calc(var(--spacing-unit) * 2); /* Space between grid items */
  margin-top: calc(var(--spacing-unit) * 2);
}

/* Styling for each extension item */
.extension-item {
  background-color: #fdfdfd;
  border: 1px solid var(--border-color);
  border-radius: 6px;
  padding: calc(var(--spacing-unit) * 2);
  transition: transform 0.2s ease-in-out, box-shadow 0.2s ease-in-out; /* Smooth hover effects */
}

.extension-item:hover {
  transform: translateY(-5px); /* Lift effect */
  box-shadow: 0 8px 15px rgba(0,0,0,0.15);
}

.extension-item h3 {
  margin-top: 0;
  color: var(--primary-color);
}

/* Styling for the explore buttons */
.explore-btn {
  display: inline-block; /* Allows padding and margin */
  background-color: var(--accent-color);
  color: white;
  padding: calc(var(--spacing-unit) * 0.8) calc(var(--spacing-unit) * 1.5);
  border: none;
  border-radius: 5px;
  cursor: pointer;
  font-size: 1rem;
  transition: background-color 0.2s ease-in-out, transform 0.1s ease-in-out;
}

.explore-btn:hover {
  background-color: #218838; /* Darker green on hover */
  transform: translateY(-2px);
}

.explore-btn:active {
  transform: translateY(0); /* Press effect */
}

/* Styling for the detail content sections */
.extension-detail-content {
  background-color: var(--secondary-color);
  border: 1px dashed var(--primary-color); /* Dashed border to show it's a specific detail */
  padding: calc(var(--spacing-unit) * 2);
  border-radius: 6px;
  margin-top: calc(var(--spacing-unit) * 2);
}

/* Utility class to hide elements */
.hidden {
  display: none;
}

/* --- Responsive Design: Adapting the "Field Structure" --- */

/* For smaller screens (e.g., mobile phones) */
@media (max-width: 768px) {
  h1 { font-size: 2rem; }
  h2 { font-size: 1.5rem; }
  h3 { font-size: 1.1rem; }
  header, footer { padding: var(--spacing-unit); }
  main {
```
---
Fitness Professionals for Personal Training in Broadlands, IL

Results 1-10 of 209:

- About 31.4 miles from you. Verified Cert/Training
- About 14.2 miles from you. Verified Cert/Training
- About 33.1 miles from you. Verified Cert/Training
- About 39.9 miles from you. Verified Cert/Training. IDEA Member
- About 33.1 miles from you. Verified Cert/Training
- About 27.0 miles from you. Verified Cert/Training
- About 23.3 miles from you
- About 40.2 miles from you. Verified Cert/Training
- About 27.8 miles from you. Verified Cert/Training
- About 28.0 miles from you. Verified Cert/Training. IDEA Member

Tips for Finding a Personal Trainer in Broadlands, IL

- Look for a "verified" personal trainer who is certified by a nationally recognized organization.
- Make sure the personal trainer you choose has liability insurance and a CPR/AED certification.
- Be aware of how many years of experience the personal trainer has.

Already a Fitness Professional? Get Listed

Fill out your profile today on the first free, all-inclusive national directory of fitness professionals. Increase leads and client retention by taking advantage of amazing business tools such as client/lead management, customized client newsletters, blogs, and class/event listings.
---
Each student should save his or her questions until the end.

Having to say "his or her" every time this situation comes up really feels like a bit of a hassle.

Asked on 15 May '14, 13:49

Currently, however, many writers find the use of the generic he or his to rename indefinite antecedents limiting or offensive. Substituting he or she in its place is the logical thing to do if it works. But it often doesn't work, if only because repetition makes it sound boring or silly. Consider these strategies to avoid an awkward overuse of he or she or an unintentional emphasis on the masculine:

- Use the plural rather than the singular.
  - Original: The writer must address his readers' concerns.
  - Revised: Writers must address their readers' concerns.
- Eliminate the pronoun altogether.
  - Original: The writer must address his readers' concerns.
  - Revised: The writer must address readers' concerns.
- Substitute the second person for the third person.
  - Original: The writer must address his readers' concerns.
  - Revised: As a writer, you must address your readers' concerns.

No one need fear to use he if common sense supports it. If you think she is a handy substitute for he, try it and see what happens. Alternatively, put all controversial nouns in the plural and avoid the choice of sex altogether, although you may find your prose sounding general and diffuse as a result.
---
## Tutorial: Nigericin – A Step-by-Step Guide

**Overview:** This tutorial provides a detailed, step-by-step guide to understanding Nigericin, a polyether antibiotic, based on its provided specifications and information. It covers its origin, properties, uses, and relevant technical details, suitable for researchers or individuals needing to work with this compound.

**Prerequisites:** Basic understanding of antibiotic mechanisms, ionophores, and microbial biochemistry is helpful. Familiarity with chemical terminology (e.g., solubility, molecular formula) is beneficial.

**Step 1: Identify Nigericin's Origin and History**

- **Action:** Locate the source information regarding Nigericin's discovery.
- **Detailed Instructions:** The original content states that Nigericin was first isolated from *Streptomyces hygroscopicus* in the 1950s. Its structure was fully elucidated in 1968.
- **Expected Outcome:** You should be able to clearly state that Nigericin originates from *Streptomyces hygroscopicus* and was initially isolated in the mid-20th century.
- **Checkpoint:** Verify this by reviewing the "Source" section of the provided information. Confirm that *Streptomyces hygroscopicus* is listed as the source organism.

**Step 2: Understand Nigericin's Chemical Properties**

- **Action:** Analyze Nigericin's physical and chemical characteristics.
- **Detailed Instructions:** Examine the provided data on its appearance, solubility, and chemical formula. Nigericin appears as a white powder. It's soluble in DMSO (dimethyl sulfoxide) and partially soluble in methanol. Its water solubility is poor. The molecular formula is C40H68O11, and the molecular weight is 724.9.
- **Example/Concrete Application:** Consider how solubility impacts its use. DMSO solubility is crucial for preparing solutions for in vitro assays. Poor water solubility might require specialized techniques for aqueous applications.
- **Common Pitfalls to Avoid:** Misinterpreting solubility data. Ensure you understand the solvent in which it's soluble.

**Step 3: Recognize Nigericin's Mechanism of Action**

- **Action:** Detail how Nigericin interacts with biological systems.
- **Detailed Instructions:** Nigericin is an ionophore, meaning it selectively binds to and carries ions across membranes. Specifically, it has a high affinity for monovalent cations like Na+ and K+. It disrupts mitochondrial membrane potential and Golgi apparatus function.
- **Checkpoint:** Explain, in your own words, how an ionophore like Nigericin could disrupt cellular function.

**Step 4: Appreciate Nigericin's Biological Activity**

- **Action:** List the biological activities demonstrated by Nigericin.
- **Detailed Instructions:** The content indicates Nigericin exhibits broad biological activity, including activity against Gram-positive bacteria, fungi, tumor cell lines, and certain viruses (including HIV).
- **Example/Concrete Application:** Recognize the implications of this broad activity. It's a valuable tool for studying various cellular processes and potential antiviral therapies.
- **Common Pitfalls to Avoid:** Overlooking the specific types of organisms or cell lines where it demonstrates activity.

**Step 5: Recognize Nigericin's Role in Bioassay Screening**

- **Action:** Understand Nigericin's importance in bioassay development.
- **Detailed Instructions:** Nigericin is a common false positive in in vitro screening bioassays using crude microbial extracts. This means it can interfere with accurate detection of other compounds. Therefore, it's a crucial standard for "dereplication" – confirming the uniqueness of newly discovered compounds.
- **Checkpoint:** Explain why a false positive would be problematic in a bioassay.

**Step 6: Locate Relevant Catalog and Technical Data**

- **Action:** Find the specific catalog number and CAS number for Nigericin.
- **Detailed Instructions:** The catalog number is N2566-90. The CAS number is 28643-80-3. These numbers are used for ordering and identification.
- **Expected Outcome:** You can accurately locate Nigericin using its catalog and CAS numbers.

**Practice Exercise:** Imagine you are designing an experiment to test the efficacy of a new antifungal compound. You'll be using a crude extract from a fungal culture in an in vitro assay. Considering Nigericin's properties, what steps would you take to minimize the risk of a false positive result? (Hint: Think about controls and potential interference.)

**Key Takeaways:**

* Nigericin is derived from *Streptomyces hygroscopicus*.
* It's an ionophore affecting mitochondrial and Golgi function.
* It's a valuable tool in biological research but can cause false positives in bioassays.
* Catalog number N2566-90 and CAS number 28643-80-3 are important identifiers.

**Next Steps:**

* Research the specific mechanisms of action of Nigericin at the molecular level.
* Investigate the use of Nigericin in specific research areas (e.g., HIV drug development).
* Explore methods for minimizing interference from Nigericin in bioassay experiments (e.g., using Nigericin as a control).
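The specifications in Step 2 can be sanity-checked in a few lines of code. The sketch below (a minimal illustration, not part of the original tutorial; the atomic masses are standard average values) recomputes the molecular weight of C40H68O11 and compares it with the stated 724.9:

```python
# Recompute Nigericin's molecular weight from its formula C40H68O11
# and compare with the catalog value of 724.9.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}  # standard average atomic masses

def molecular_weight(formula):
    """Sum of atomic masses weighted by the atom counts in `formula`."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

nigericin = {"C": 40, "H": 68, "O": 11}
mw = molecular_weight(nigericin)
print(f"{mw:.1f}")  # agrees with the stated 724.9 to within rounding
```

This kind of quick cross-check is a cheap way to catch transcription errors in a formula or a molecular weight before relying on either value.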
---
Is it time to give non-stick pans the slip?

It turns out that non-stick cookware can have some serious sticking power. While you might enjoy that easy-to-clean slippery surface, the synthetic chemicals commonly used in non-stick pans can have lingering effects in our bodies and, with the short lifespan of non-stick pans leading to most ending up in landfill after a few years, our environment.

Teflon is probably the best-known non-stick brand name. It was invented by global chemical giant DuPont back in the 1930s and was hugely popular at the time, as it stopped food from sticking to cookware, removing the need to scrub pots and pans after cooking. But in 2005, the US Environmental Protection Agency found that a chemical used in its production, known as PFOA, was potentially cancer-causing. Because of this, the use of PFOA in non-stick pans was phased out by 2013. But the problems caused by PFOA continued to stick, with thousands of lawsuits against DuPont persisting for years. The chemical and the company even featured in the 2019 US film Dark Waters.

While PFOA is no longer used, non-stick pans can still contain similar chemicals, known as PFAS chemicals. Sometimes referred to as "forever chemicals" because of how long they can hang around, PFAS can accumulate over time, both in the environment and in the human body. PFAS are now found in our water supply and the bodies of almost all humans in developed countries. It takes up to five years for concentrations of some PFAS chemicals to drop by half in the body, according to the US Centers for Disease Control and Prevention.

A slippery slope

According to consumer watchdog Choice, PFAS have been linked to various health problems, including endocrine effects such as altered thyroid and sex hormone levels, low birth weights and reproductive disorders. In another Hollywood connection, Erin Brockovich, the young lawyer made famous in the eponymous film, has campaigned against PFAS in the US.
She was even involved in an Australian lawsuit in which tens of thousands of people sued the Australian government over PFAS contamination of their land. This came from the use of firefighting foam that contained PFAS chemicals.

Many companies are phasing out the use of PFAS (they can also be found in upholstery, carpets, pizza boxes and cosmetics). But you should always try to look for cookware that is PFAS-free. If you have older non-stick pans and you're not sure about the coating, only use them at low temperatures, use oils with lower smoke points and don't scratch or scrub at the surface. If the surface starts to crack or peel, you should toss it.

Buy, dispose, repeat… or stick to a different option

The problem with tossing those old non-stick pans is not just about potential health risks. Synthetic-coated non-stick cookware generally has a short lifespan. This means more than a million non-stick pans are ending up in our landfills every year. And those "forever chemicals" in the coatings never break down in the environment.

And although it might mean a little more washing up, a little sticking can do wonders for your cooking. Adherence of food to the pan will actually make the end result taste better. Those browned bits stuck to the bottom are known as "fond". Fond builds up on the bottom of a pan when you're roasting meats and vegetables, and it's used for deglazing to create flavourful sauces and gravies. It's virtually impossible to develop fond on non-stick cookware. However, it's easy on iron and steel cookware, which is the cookware of choice for chefs.

Luckily there are plenty of healthy, sustainable, non-toxic and natural cookware options available in Australia. The engineers at sustainable cookware company Solidteknics Australia have compiled a list of how different types of cookware stack up.
Solidteknics has developed two ranges of innovative, world-first cookware: Aus-ION wrought iron (formed low-carbon steel) and nöni ferritic wrought stainless cookware. Australian-made, they're non-toxic and can last for generations.

Season your pan like a pro

In this case, "seasoning" has nothing to do with salt and pepper. By seasoning pans like Solidteknics' Aus-ION range (the nöni range doesn't need seasoning) you can achieve a natural non-stick surface that's forever renewable. Seasoning is simply layers of oil that are baked onto the pan through a process called polymerisation. Over time, this builds to form a natural, healthy, easy-release cooking surface that only continues to get better with use. So, the more you cook with your pan, the more seasoned it becomes.

Cooks all over the world have been seasoning iron pans for centuries. It was the norm before the introduction of non-stick pans. A well-seasoned pan can give you a restaurant-quality sear on meat. It also means you need less oil, which equals healthier cooking.

It's up to you how much seasoning you choose to do. Some people like to spend a bit of time initially to create a non-stick surface to begin with. Others prefer the no-fuss approach of letting it build up naturally through cooking over time. Your seasoning will constantly change depending on what and how you're cooking. And while seasoned pans might look a little patchy, remember that quality cookware shouldn't be about looks. It should be about performance, durability and the end result: delicious food.

To check out the full Solidteknics range, head to solidteknics.com
---
The Arthurian Legend

The Barony of North Cadbury is deeply connected to the legendary King Arthur and his knights of the round table. North Cadbury (Cadeberie, "Cada's Fort") takes its name from Cadbury Castle in South Cadbury. Cadbury Castle, also known as Camelot Castle, is a Bronze and Iron Age hillfort in the civil parish of South Cadbury. The hill is the most probable site of King Arthur's principal court, famously called Camelot.

John Leland (1503-1552), an English poet, chaplain and librarian to King Henry VIII, was the earliest of a notable group of English antiquarians. He traveled through England and Wales between 1538 and 1543. On his journey through the county of Somerset he visited the historic places of North and South Cadbury. In Leland's itinerary of 1542, he was the first to record the tradition (possibly influenced by the proximity of the villages of Queen Camel and West Camel, which, like North and South Cadbury, lie on the River Cam) identifying the hillfort of Cadbury Castle in Somerset as King Arthur's Camelot:

"At the very south ende of the chirch of South-Cadbyri standeth Camallate, sumtyme a famose toun or castelle, apon a very torre or hille, wunderfully enstregnthenid of nature..... The people can telle nothing ther but that they have hard say that Arture much resortid to Camalat."

John Leland's material provides invaluable evidence for reconstructing the lost "tomb" of Arthur at Glastonbury Abbey. From the 12th century, Glastonbury has been associated with the legend of King Arthur. This connection was promoted by medieval monks who asserted that Glastonbury was Avalon. Arthur's burial place is said to be at Glastonbury Abbey, located not far from Cadbury Castle, King Arthur's Camelot.

Cadbury Castle, aerial view, 1967

The countryside is rich in Arthurian traditions

Cadbury Castle is a scheduled monument and associated with the legend of King Arthur.
Legend has it that on midsummer's eve (23rd June) the hill turns clear as glass, and inside can be seen King Arthur and his knights of the round table. On moonlit nights, King Arthur and his knights are said to gallop round the fortifications on steeds shod with silver shoes. A hardly traceable forest path runs at the base of the hill in the direction of Glastonbury. This is King Arthur's hunting track.

Cadbury Castle is also said by an ancient writer to have been one of the stations of the Round Table of King Arthur. The following account of this singular fraternity will be interesting to the reader: "This Round Table was kept at several places, especially at Caerleon in Monmouthshire, at Winchester, and at Camalet in Somersetshire."

The location of the battle is unknown, but there are several possibilities. One is Queen Camel in Somerset, close to the hillfort of Cadbury Castle near South Cadbury, identified by some, including Geoffrey Ashe, with King Arthur's Camelot, where the River Cam flows beneath Camel Hill and Annis Hill.

Glastonbury - King Arthur's Avalon and burial place

According to the Arthurian legend, Glastonbury Abbey is the ancient graveyard of King Arthur and his wife, Queen Guinevere.

Excavations at Cadbury Castle

In June 1913, trial excavations were held at Cadbury Castle. The excavations took place in the south-west corner of the hill, said by some to be the 'Camelot' of King Arthur. Six men were employed in the excavation work. Many pieces of pottery of the Romano-British era were found, as well as evidence of walls and ramparts and a small child's skeleton. The work was carried out on behalf of the Somerset Archaeological and Natural History Society.

The book 'By South Cadbury is that Camelot...' Excavations at Cadbury Castle 1966-70 is an excellent reference about the excavations at Cadbury Castle in the years 1966-1970. This book by Leslie Alcock, published by Thames and Hudson Ltd. in 1972, is certainly of great archaeological and historical significance and was also published in Germany by Gustav Lübbe Verlag, Bergisch Gladbach, in 1974. The excavations produced a vast number of unusual finds, especially from the assumed time of King Arthur around the fifth and sixth century. This indicates that a very rich and powerful personage then had his seat at Cadbury Castle. Finds include:

- Glass bead, pre-Roman Iron Age
- Silver coin, Roman, 109 BC
- Gilt bronze letter from a Roman temple
- Hinge from a Roman soldier's armour
- Bronze brooch, 1st century AD
- Neolithic flint arrowhead
- Late Bronze Age knife
- Bronze harness fitting and 'safety-pin' brooch, Iron Age
- Rim of imported dish, 5th-6th century AD
- Gilt-bronze, mid-6th century AD
- Late Saxon knife

A series of FDC Official Covers of 'The Arthurian Legend' - South Cadbury was stamped 3rd September 1985. These First Day Covers mark the connection of King Arthur to Cadbury Castle, which is the most probable site for Arthur's principal court, called Camelot.
---
Many anti-capitalists, especially those based in Detroit, America, will today be celebrating the 100th birthday of Grace Lee Boggs, a remarkable Chinese American activist and socialist humanist philosopher. During the 1940s, in collaboration with C.L.R. James and Raya Dunayevskaya, Grace helped make an important contribution to Marxist theory before becoming a grassroots organiser for the powerful Civil Rights and Black Power movement in the United States alongside her partner James Boggs, an organic intellectual of the organised black working class in Detroit. Grace was born Yuk Ping or Jade Peace, Americanised to Grace Chin Lee, on 27 June 1915 above her father’s Chinese restaurant in downtown Providence, Rhode Island, one of seven children. After moving to New York when her father’s business expanded in the ‘Roaring Twenties’, Grace experienced something of the widespread racism against Chinese Americans. Her intellect and talent for independent critical thinking saw her aged 16 decide to defy expectations and study philosophy at Barnard College, where she was one of only three students of colour. Amidst the Great Depression, Grace later recalled that ‘what troubled me as an undergraduate major in philosophy was that my professors were an elite who saw themselves as “pure reason” in search of Truth – having nothing to do with the world’. Winning a scholarship, Grace completed a PhD at Bryn Mawr College in 1940, on the American pragmatist philosopher George Herbert Mead, which was later published, though her emerging intellectual passion was for reading Hegel in the original German. For her, ‘Hegel was like music’, she later recalled. With no hopes of securing anything but the most menial teaching post as a result of her colour and gender, Grace moved to Chicago, where she politically radicalised towards socialism and came across the Trotskyist Workers’ Party agitating against poor housing conditions. 
In 1941, black Americans were also on the march, and there was a tremendous response to a call by black trade union leader A. Philip Randolph for a March on Washington, which successfully forced President Roosevelt to end racist discrimination in the booming war-time defence industry. As she recalled, 'I was attracted to the black movement because Jim Crow in 1940 was so barbaric and because I viewed black struggle as the catalyst for revolutionising this country'. Grace had seen something of the power and potential of a mass movement from below for the first time, and 'decided that what I wanted to do with the rest of my life was to become a movement activist in the black community'.

During the Second World War, Grace worked alongside mainly black women workers in a defence plant in Brooklyn, New York, was thrilled by the 'camaraderie' among her fellow workers, and helped organise black history classes in coffee breaks.

In 1941, through the Workers' Party, Grace had had the fortune to meet the black Trinidadian Marxist C.L.R. James, who stopped by Chicago on his way back from organising sharecroppers in south-east Missouri. 'When he found out I knew German and had studied Hegel, we sat down almost immediately on the red couch in my basement room and began comparing the texts of Marx's Capital and Hegel's [Science of] Logic. That was the beginning of our close collaboration of twenty years'.

Grace learnt her Marxism through joining James's small current within American Trotskyism, known as the Johnson-Forest Tendency (from the pseudonyms taken by its leaders: James, 'J.R. Johnson', and Dunayevskaya, 'Freddie Forest'). Under her own pseudonym, 'Ria Stone', Grace soon emerged as a leading theorist of this group in her own right, and would later co-author several insightful and pioneering works of Marxist theory with James and Dunayevskaya, including The Invading Socialist Society and State Capitalism and World Revolution.
Grace was particularly convinced by James and Dunayevskaya’s Marxist analysis of the Soviet Union under Stalinist dictatorship as a form of state capitalism rather than any form of ‘socialism’. As she recalled in an interview in 2000, ‘regardless of whether the state or private capitalists owned the means of production, Soviet workers in the process of production were still being exploited, dehumanised, and deprived of their “natural and acquired powers” in the way described by Marx in Capital. I was attracted to the State Capitalist position on the Soviet Union because it seemed much more dialectical. It embodied much more the way of thinking of Hegel, whom I had studied, because it recognised that new contradictions could arise out of great struggles for liberation and that progress did not take place in a straight line. I was also rather attracted to it because it emphasised the humanity of the workers rather than property relationships.’ When Dunayevskaya found a Russian translation of Marx’s 1844 Economic and Philosophical Manuscripts in the New York Public Library, Grace translated three of Marx’s essays from the original German into English, and in 1947 these were published by the Johnson-Forest Tendency, their first ever publication in English. That year she also wrote an afterword to The American Worker, a detailed analysis of class struggle at the point of production written by General Motors worker Phil Singer (under the pseudonym Paul Romano), which stressed the importance of Marx’s theory of alienation – developed in the 1844 Manuscripts – to understanding the dehumanisation which accompanies exploitation in the labour process. In 1950, the Johnson-Forest Tendency broke with the official Trotskyist movement to form their own independent Marxist current, and in 1953, Grace moved to Detroit, where she soon married James Boggs, a member of the group and a militant black trade unionist at the local Chrysler-Jefferson car plant. 
On their honeymoon in Alabama, the couple had to sleep in their own car because Southern motels refused to accept mixed-race couples. The post-war economic boom, the development of new technology in factory production (‘automation’), the conservatism of the trade union bureaucracy and the lack of response from most white workers in America to the growing Civil Rights movement led James and Grace Boggs to declare in 1961 that the American working class was ‘backward and bourgeoisified’, and so for them hope now lay in revolutions in the Third World and marginalised outsiders such as young black people. As Grace would later put it in her autobiography Living for Change, they felt ‘the labour movement had been superseded by the black movement’. This abandonment of the Marxist stress on the centrality of the working class as the agent of change in society, and the accompanying shift towards movementism, led the Boggses to politically split with C.L.R. James in 1962. Grace’s subsequent tireless and heroic work as a local anti-racist organiser in Detroit nonetheless won her the respect of not only iconic leading figures such as Martin Luther King, Malcolm X and Rosa Parks but also militants around the League of Revolutionary Black Workers in Detroit – and so confused the FBI that they thought she must be ‘Afro-Chinese’. By the 1970s and 1980s, the defeats of the liberation movements which had reached their height in the late 1960s led James and Grace Boggs to shift even further away from Marxism towards what they called ‘dialectical humanism’, which sought to ‘advance beyond the idea that all radicals have held – that in order to advance socialism, you must first smash capitalism. We have to advance towards the new society by projecting an entirely different way to live and by building new social ties’. 
In the face of the devastating de-industrialisation of Detroit, the one-time jewel of American industrial capitalism, James and Grace Boggs now saw the fight as not primarily one to save jobs and defend public services but to try and ‘re-civilise our society’ through fostering ‘collective self-reliance’, ‘creating support networks to look out for each other and moving onto community gardens and greenhouses, community recycling projects, community repair shops, community day care networks, community mediation centers’. Shortly after the passing of James Boggs in 1993, the James and Grace Lee Boggs Centre to Nurture Community Leadership was founded in Detroit. Overall, Grace Lee Boggs has long since shifted away from the classical Marxist ‘conception of the creative power of the proletariat in industry as a force for the social regeneration of society’ – a conception she did so much to develop and clarify during the 1940s, at a time when Stalinism meant that for so many people ‘socialism’ had come to be seen merely as state ownership without any accompanying revolutionary democracy or workers’ control – yet she has always remained an eloquent moral critic of capitalism. Moreover, her record of activism from the 1941 March on Washington against Jim Crow up to the Occupy movement in 2011 remains inspiring, and demands and deserves to be celebrated and honoured today. In the words of veteran American radical Dan Georgakas – co-author of the classic work on the League of Revolutionary Black Workers, Detroit: I do mind dying – Grace Lee Boggs is ‘a long-time legendary figure in Detroit and in the black liberation movement’ – and as that movement heroically erupts again to confront the systematic murderous racism of the US state and society under the banner #BlackLivesMatter – long may this legendary figure continue to inspire! Christian Høgsbjerg is the author of C.L.R. James in Imperial Britain and a member of the editorial board of International Socialism.
0.6
medium
6
2,017
[ "intermediate understanding" ]
[ "research" ]
[ "science", "technology", "social_studies" ]
{ "clarity": 0.4, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.65, "creativity": 0.45 }
9ab74e06-a908-4ab2-8cf6-90bb898c7512
Worked Examples: Ring Theory
mathematics
historical_context
## Worked Examples: Ring Theory Ring theory is a fundamental branch of abstract algebra that generalizes the familiar arithmetic of integers. It studies algebraic structures called rings, which possess two operations (typically addition and multiplication) that behave in many ways like those of integers. This includes properties like associativity, distributivity, and the existence of additive inverses. --- ### Example 1: Foundation - Identifying a Ring **Problem Statement:** Determine if the set of all $2 \times 2$ matrices with integer entries, denoted by $M_2(\mathbb{Z})$, forms a ring under standard matrix addition and multiplication. **Solution Steps:** 1. **Identify the Set and Operations:** * The set is $S = \{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \mid a, b, c, d \in \mathbb{Z} \}$. * The operations are matrix addition and matrix multiplication. 2. **Verify Closure under Addition:** * Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ and $B = \begin{pmatrix} e & f \\ g & h \end{pmatrix}$ be arbitrary elements of $M_2(\mathbb{Z})$. * $A + B = \begin{pmatrix} a+e & b+f \\ c+g & d+h \end{pmatrix}$. * Since $a, b, c, d, e, f, g, h \in \mathbb{Z}$, their sums $(a+e), (b+f), (c+g), (d+h)$ are also integers (due to the closure property of integers under addition). * Therefore, $A+B$ is a $2 \times 2$ matrix with integer entries, so $A+B \in M_2(\mathbb{Z})$. Closure under addition holds. 3. **Verify Closure under Multiplication:** * Using the same matrices $A$ and $B$ from step 2: * $A \cdot B = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} e & f \\ g & h \end{pmatrix} = \begin{pmatrix} ae+bg & af+bh \\ ce+dg & cf+dh \end{pmatrix}$. * Since $a, b, c, d, e, f, g, h \in \mathbb{Z}$, all products $(ae, bg, af, bh, ce, dg, cf, dh)$ and their sums are also integers (due to closure properties of integers under addition and multiplication). 
* Therefore, $A \cdot B$ is a $2 \times 2$ matrix with integer entries, so $A \cdot B \in M_2(\mathbb{Z})$. Closure under multiplication holds. 4. **Verify Ring Axioms:** * **Associativity of Addition:** Matrix addition is associative for any matrices, so it is associative for matrices in $M_2(\mathbb{Z})$. $(A+B)+C = A+(B+C)$. * **Commutativity of Addition:** Matrix addition is commutative for any matrices, so it is commutative for matrices in $M_2(\mathbb{Z})$. $A+B = B+A$. * **Existence of Additive Identity (Zero Element):** The zero matrix $0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$ is in $M_2(\mathbb{Z})$, and $A+0 = A$ for all $A \in M_2(\mathbb{Z})$. * **Existence of Additive Inverses:** For any $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in M_2(\mathbb{Z})$, its additive inverse is $-A = \begin{pmatrix} -a & -b \\ -c & -d \end{pmatrix}$. Since $a, b, c, d \in \mathbb{Z}$, their negatives are also in $\mathbb{Z}$. $A + (-A) = 0$. * **Associativity of Multiplication:** Matrix multiplication is associative for any matrices, so it is associative for matrices in $M_2(\mathbb{Z})$. $(A \cdot B) \cdot C = A \cdot (B \cdot C)$. * **Distributivity of Multiplication over Addition:** Matrix multiplication distributes over matrix addition for any matrices, so it does for matrices in $M_2(\mathbb{Z})$. $A \cdot (B+C) = A \cdot B + A \cdot C$ and $(A+B) \cdot C = A \cdot C + B \cdot C$. 5. **Conclusion:** Since all the required axioms (closure under both operations, associativity of both, commutativity of addition, existence of additive identity and inverses, associativity of multiplication, and distributivity) are satisfied, $M_2(\mathbb{Z})$ forms a ring. **Key Insight:** This example demonstrates that structures beyond familiar number systems (like integers or real numbers) can form rings. It also highlights the importance of verifying *all* ring axioms, including closure and the distributive property. 
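The axiom checks above can also be spot-checked numerically. The following Python sketch (not part of the original worked example, and a sanity check on samples rather than a proof) verifies closure, the additive axioms, associativity, and distributivity for a handful of $2 \times 2$ integer matrices; the helper names `m_add`, `m_mul`, and `m_neg` are our own.

```python
# Numerical sanity check of the ring axioms for 2x2 integer matrices,
# verified on a few sample elements. Matrices are nested tuples.

def m_add(A, B):
    """Entrywise matrix addition."""
    return tuple(tuple(a + b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def m_mul(A, B):
    """Standard 2x2 matrix multiplication."""
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def m_neg(A):
    """Additive inverse: negate every entry."""
    return tuple(tuple(-a for a in row) for row in A)

ZERO = ((0, 0), (0, 0))
samples = [((1, 2), (3, 4)), ((0, -1), (5, 2)), ((-3, 7), (1, 0))]

for A in samples:
    for B in samples:
        for C in samples:
            # closure: entries of products remain integers
            assert all(isinstance(x, int) for row in m_mul(A, B) for x in row)
            # additive axioms
            assert m_add(A, B) == m_add(B, A)          # commutativity
            assert m_add(A, ZERO) == A                 # additive identity
            assert m_add(A, m_neg(A)) == ZERO          # additive inverses
            # associativity of multiplication and distributivity
            assert m_mul(m_mul(A, B), C) == m_mul(A, m_mul(B, C))
            assert m_mul(A, m_add(B, C)) == m_add(m_mul(A, B), m_mul(A, C))
            assert m_mul(m_add(A, B), C) == m_add(m_mul(A, C), m_mul(B, C))

# Note: multiplication is NOT commutative in general, so M_2(Z) is a
# non-commutative ring.
A, B = samples[0], samples[1]
print(m_mul(A, B) == m_mul(B, A))  # False for these samples
```

A passing run illustrates why the conclusion holds on these samples, while the final check shows the one familiar axiom that fails: $AB \neq BA$ in general.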
--- ### Example 2: Application - Ideals in $\mathbb{Z}$ **Problem Statement:** Prove that the set $I = \{3k \mid k \in \mathbb{Z}\}$ is an ideal of the ring of integers $\mathbb{Z}$. **Solution Steps:** 1. **Identify the Set and Ring:** * The set is $I = \{ \dots, -6, -3, 0, 3, 6, \dots \}$, the set of all multiples of 3. * The ring is $\mathbb{Z}$ (integers) with standard addition and multiplication. 2. **Verify Subgroup under Addition:** To be an ideal, $I$ must first be a subgroup of $(\mathbb{Z}, +)$. * **Closure under Addition:** Let $x, y \in I$. Then $x = 3k$ and $y = 3m$ for some $k, m \in \mathbb{Z}$. * $x + y = 3k + 3m = 3(k+m)$. * Since $k, m \in \mathbb{Z}$, $k+m \in \mathbb{Z}$. Thus, $x+y$ is a multiple of 3, so $x+y \in I$. Closure holds. * **Existence of Additive Identity:** The additive identity in $\mathbb{Z}$ is 0. Since $0 = 3 \cdot 0$, $0 \in I$. * **Existence of Additive Inverses:** Let $x \in I$. Then $x = 3k$ for some $k \in \mathbb{Z}$. * The additive inverse is $-x = -3k = 3(-k)$. * Since $k \in \mathbb{Z}$, $-k \in \mathbb{Z}$. Thus, $-x$ is a multiple of 3, so $-x \in I$. * Since $I$ is closed under addition, contains the identity, and contains inverses, $I$ is a subgroup of $(\mathbb{Z}, +)$. 3. **Verify Ideal Absorption Properties:** Now, we check the two crucial properties for an ideal: * **Absorption from the left:** For any $r \in \mathbb{Z}$ and any $x \in I$, we need to show $r \cdot x \in I$. * Let $r \in \mathbb{Z}$ and $x = 3k \in I$ for some $k \in \mathbb{Z}$. * $r \cdot x = r \cdot (3k) = 3 \cdot (r \cdot k)$. * Since $r, k \in \mathbb{Z}$, the product $r \cdot k \in \mathbb{Z}$. * Therefore, $r \cdot x$ is a multiple of 3, which means $r \cdot x \in I$. This property holds. * **Absorption from the right:** For any $r \in \mathbb{Z}$ and any $x \in I$, we need to show $x \cdot r \in I$. * Let $r \in \mathbb{Z}$ and $x = 3k \in I$ for some $k \in \mathbb{Z}$. * $x \cdot r = (3k) \cdot r = 3 \cdot (k \cdot r)$. 
* Since $k, r \in \mathbb{Z}$, the product $k \cdot r \in \mathbb{Z}$. * Therefore, $x \cdot r$ is a multiple of 3, which means $x \cdot r \in I$. This property also holds. 4. **Conclusion:** Since $I$ is a subgroup of $(\mathbb{Z}, +)$ and absorbs multiplication from both the left and the right by elements of $\mathbb{Z}$, $I$ is an ideal of $\mathbb{Z}$. **Alternative Approach (Specific to Commutative Rings):** In a commutative ring $R$ (like $\mathbb{Z}$), an ideal $I$ is a subgroup of $(R, +)$ such that for all $r \in R$ and $x \in I$, $rx \in I$. Since $\mathbb{Z}$ is commutative, we only needed to check $r \cdot x \in I$. The check for $x \cdot r \in I$ is redundant because $r \cdot x = x \cdot r$ in this case. **Key Insight:** This example illustrates the definition of an ideal, a fundamental substructure in ring theory that generalizes the concept of divisibility. It shows that the set of multiples of any integer $n$ forms an ideal in $\mathbb{Z}$, known as a *principal ideal*. --- ### Example 3: Advanced/Edge Case - Zero Divisors and Subrings in $\mathbb{Z}_6$ **Problem Statement:** Consider the ring $R = \mathbb{Z}_6$ (integers modulo 6). Find two non-zero elements $a, b \in R$ such that $a \cdot b \equiv 0 \pmod{6}$. Also, determine if the set $S = \{0, 3\}$ forms a subring of $R$. (Note that $\mathbb{Z}_6$ is commutative; the edge case here is the presence of zero divisors.) **Solution Steps:** 1. **Identify Zero Divisors in $\mathbb{Z}_6$:** * The elements of $\mathbb{Z}_6$ are $\{0, 1, 2, 3, 4, 5\}$. * We are looking for $a, b \in \{1, 2, 3, 4, 5\}$ such that $a \cdot b \equiv 0 \pmod{6}$. * Let's test pairs: * $2 \cdot 3 = 6 \equiv 0 \pmod{6}$. So, $a=2, b=3$ is a solution. * $3 \cdot 4 = 12 \equiv 0 \pmod{6}$. So, $a=3, b=4$ is a solution. * $2 \cdot 0 = 0$, $3 \cdot 0 = 0$, etc., but we need non-zero elements. * Elements like 2, 3, and 4 in $\mathbb{Z}_6$ are called **zero divisors** because their product with another non-zero element is zero. This property is characteristic of rings that are *not* integral domains. 2. 
**Check for Subring Properties for $S = \{0, 3\}$:** * A subset $S$ of a ring $R$ is a subring if it is itself a ring under the same operations, which requires: * **Non-empty:** $S$ contains 0, so it's non-empty. * **Closure under Subtraction:** For any $x, y \in S$, $x-y \in S$. * $0 - 0 = 0 \in S$. * $0 - 3 = -3$. In $\mathbb{Z}_6$, $-3 \equiv 3 \pmod{6}$. So, $-3 \in S$. * $3 - 0 = 3 \in S$. * $3 - 3 = 0 \in S$. * Closure under subtraction holds. (This implies closure under addition and existence of inverses). * **Closure under Multiplication:** For any $x, y \in S$, $x \cdot y \in S$. * $0 \cdot 0 = 0 \in S$. * $0 \cdot 3 = 0 \in S$. * $3 \cdot 0 = 0 \in S$. * $3 \cdot 3 = 9$. In $\mathbb{Z}_6$, $9 \equiv 3 \pmod{6}$. So, $3 \cdot 3 \in S$. * Closure under multiplication holds. 3. **Conclusion:** Since $S = \{0, 3\}$ is non-empty, closed under subtraction, and closed under multiplication, it is a subring of $\mathbb{Z}_6$. **Common Pitfalls:** * Forgetting to check closure under subtraction (or closure under addition and inverses separately) when verifying a subring. * Confusing the definition of an ideal with a subring. Ideals require the extra absorption property, while subrings only need to be rings in their own right. * Assuming that if $ab=0$, then either $a=0$ or $b=0$ (this is only true in integral domains). **Key Insight:** This example highlights that rings can have zero divisors, meaning the product of two non-zero elements can be zero. This is a critical distinction from integral domains (like $\mathbb{Z}$ or fields). It also demonstrates how to check if a subset forms a subring, requiring it to satisfy the ring axioms itself. --- ## Pattern Recognition * **Hierarchy of Structures:** We see a progression from basic ring identification ($M_2(\mathbb{Z})$) to substructures like ideals ($\{3k\}$) and subrings ($\{0, 3\}$ in $\mathbb{Z}_6$). 
* **Axiom Verification:** All examples involve systematically checking the defining axioms of rings, ideals, or subrings. * **Role of Number Systems:** The properties of the underlying number system (integers $\mathbb{Z}$, integers modulo $n$, matrices) heavily influence the ring's behavior (e.g., commutativity, presence of zero divisors). * **Generalization:** Ring theory provides a framework to study algebraic properties that are common across diverse mathematical objects. ## When to Apply * **Ring Theory:** Use when analyzing algebraic structures with two operations (addition and multiplication) that share properties with integers, such as polynomial rings, matrix rings, or quotient rings. * **Ideals:** Apply when studying substructures that are "closed" under multiplication by any ring element, essential for constructing quotient rings and understanding ring homomorphisms. * **Subrings:** Use when identifying subsets within a larger ring that are themselves rings under the inherited operations. * **Zero Divisors:** Recognize their presence in rings that are not integral domains, impacting properties like cancellation laws.
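The facts from Examples 2 and 3 are small enough to verify exhaustively. The following Python sketch (our own addition, not part of the original worked examples) enumerates the zero divisors of $\mathbb{Z}_6$, checks the subring conditions for $S = \{0, 3\}$, and tests the absorption property of the ideal $3\mathbb{Z}$ on a finite window of integers; variable names such as `zero_divisor_pairs` are illustrative choices.

```python
# Exhaustive check of the facts from Examples 2 and 3: zero divisors
# in Z_6, the subring {0, 3} of Z_6, and ideal absorption for 3Z.

MOD = 6
Z6 = range(MOD)

# 1. Zero divisors in Z_6: non-zero a, b with a*b == 0 (mod 6).
zero_divisor_pairs = [(a, b) for a in Z6 for b in Z6
                      if a != 0 and b != 0 and (a * b) % MOD == 0]
print(zero_divisor_pairs)  # [(2, 3), (3, 2), (3, 4), (4, 3)]

# 2. S = {0, 3} is a subring of Z_6: non-empty, closed under
#    subtraction, and closed under multiplication.
S = {0, 3}
assert all((x - y) % MOD in S for x in S for y in S)
assert all((x * y) % MOD in S for x in S for y in S)

# 3. Ideal absorption for I = 3Z inside Z, checked on a finite sample:
#    for every r in Z and x in I, the product r*x lands back in I.
sample = range(-20, 21)
I = {x for x in sample if x % 3 == 0}
for r in sample:
    for x in I:
        assert (r * x) % 3 == 0  # r*x is again a multiple of 3
```

The enumeration in step 1 confirms that 2, 3, and 4 are exactly the zero divisors of $\mathbb{Z}_6$, matching the pairs found by hand in Example 3.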
0.65
low
4
3,458
[ "algebra", "geometry basics" ]
[ "calculus", "statistics" ]
[ "science" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.55, "creativity": 0.35 }
68ab0bf1-0cb6-4c2d-a89d-4f73aa1a781d
Previously undiscovered secrets how
science
research_summary
Previously undiscovered secrets of how human cells interact with a bacterium which causes a serious human disease have been revealed in new research by microbiologists at The University of Nottingham. The scientists at the University’s Centre for Biomolecular Sciences have shed new light on how two proteins found on many human cells are targeted by the human pathogen Neisseria meningitidis, which can cause life-threatening meningitis and septicaemia. The proteins, laminin receptor (LAMR1) and galectin-3 (Gal-3), are found in and on the surface of many human cells. Previous research has shown they play diverse roles in a variety of infectious and non-infectious diseases. For example, LAMR1 is a key receptor targeted by disease-causing pathogens and their toxins and is also implicated in the spread of cancer around the body and in the development of Alzheimer’s disease. Using the latest bimolecular fluorescence and confocal imaging techniques, the researchers have shown that these two separate proteins can form pairs made up of two similar molecules (homodimers) or one of each molecule (heterodimers) which are targeted by Neisseria meningitidis. They have also identified critical components which cause the formation of these pairs of molecules. These new mechanistic insights into the three-way relationship between proteins and bacterial pathogens could have significant implications in the fields of infection, vaccination and cancer biology. Associate Professor of Microbiology, Dr Karl Wooldridge, said: “We have shown evidence for the self and mutual association of these two important proteins and their distinctive surface distribution on the human cell. We’ve also demonstrated that they are targeted by the serious human pathogen Neisseria meningitidis. 
This is significant because these proteins could potentially be used to develop new vaccines and treatments which could sabotage the colonisation of these dangerous bacteria, and which could also protect the blood-brain barrier which is disrupted in cases of bacterial meningitis.” Co-investigator Dr Jafar Mahdavi added: “One of the problems of studying laminin receptors is that there are at least two forms of LAMR1 found on the cell surface, called 37LRP and 67LR, and many previous studies have not sufficiently distinguished between the two forms. There are antibodies available but the specificity for the different types of laminin receptor had not previously been adequately reported. In our research we were able to identify which antibody detects which specific type of receptor. The paper, published today in the Royal Society journal Open Biology, will inform the whole field of laminin receptor research, not just infection.” The team at Nottingham says that by examining bacteria as model organisms which have learned how to manipulate cell biology systems, they may gain insights into how medical science can manipulate cell systems for cancer treatment, Alzheimer’s disease and other diseases associated with the laminin receptor. ‘Deciphering the complex three-way interaction between the non-integrin laminin receptor, galectin-3 and Neisseria meningitidis’ is available online — Ends — Our academics can now be interviewed for broadcast via our Media Hub, which offers a Globelynx fixed camera and ISDN line facilities at University Park campus. For further information please contact a member of the Communications team on +44 (0)115 951 5798, email email@example.com or see the Globelynx website for how to register for this service. 
For up to the minute media alerts, follow us on Twitter Notes to editors: The University of Nottingham has 43,000 students and is ‘the nearest Britain has to a truly global university, with campuses in China and Malaysia modelled on a headquarters that is among the most attractive in Britain’ (Times Good University Guide 2014). It is also the most popular university in the UK among graduate employers, in the top 10 for student experience according to the Times Higher Education and one of the world’s greenest universities. It is ranked in the world’s top 1% of universities by the QS World University Rankings. Impact: The Nottingham Campaign, its biggest-ever fundraising campaign, is delivering the University’s vision to change lives, tackle global issues and shape the future. More news…
0.55
medium
5
884
[ "introductory science", "algebra" ]
[ "research methodology" ]
[]
{ "clarity": 0.4, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.3 }
4166b1e1-1315-467a-9477-af992f4d69e1
Okay, here’s step-by-step tutorial
science
historical_context
Okay, here’s a step-by-step tutorial transforming the provided content about the “As it is Written: The Genesis Account Literal or Literary?” book, aiming for clarity, actionability, and practical guidance. **Overview:** This tutorial will guide you through understanding the “framework hypothesis” as presented in the book “As it is Written,” focusing on how it reinterprets the Genesis creation account. We’ll break down the core concept, its historical roots, and its implications for understanding scripture. **Prerequisites:** * **Basic Biblical Literacy:** Familiarity with the narrative of Genesis 1-11 is helpful. You don’t need to be an expert, but understanding the core story is beneficial. * **Understanding of Interpretive Approaches:** A basic awareness of different ways to interpret scripture (literal, figurative, historical-grammatical) will aid comprehension. **Step 1: Identify the Framework Hypothesis** - **Detailed Instructions:** The book introduces the “framework hypothesis” as a growing perspective among some readers of Scripture. This view prioritizes “science” as the authority for interpreting God’s Word. Specifically, it suggests that Genesis isn’t meant to be read as a precise, chronological account of six literal days, but rather as a literary framework – a story with deeper theological meaning. - **Expected Outcome:** You should be able to define the framework hypothesis in your own words: “It’s a way of reading Genesis that sees the creation story as a symbolic representation of God’s creative power, rather than a literal historical record.” - **Checkpoint:** Write a one-sentence summary of the framework hypothesis. (Example: “The framework hypothesis interprets Genesis as a literary framework conveying theological truths about God’s creation, not a detailed historical timeline.”) **Step 2: Trace the Historical Origins of the Hypothesis** - **Detailed Instructions:** Kenneth Gentry’s book explores the historical development of this view. 
He argues that the framework hypothesis has roots in earlier theological debates about interpreting scripture. It emerged partly as a response to perceived conflicts between Genesis and scientific discoveries. Look for sections in the book where Gentry discusses the historical context – particularly the rise of liberal theology and its impact on biblical interpretation. Pay attention to the arguments used by early proponents. - **Example/Concrete Application:** Consider the debate surrounding the age of the Earth. Early scientific theories suggested an Earth much older than six thousand years. The framework hypothesis arose, in part, to reconcile Genesis with these scientific findings. - **Common Pitfalls to Avoid:** Don’t assume the framework hypothesis is a modern invention. Understanding its historical roots helps you appreciate the motivations behind it. Avoid simply dismissing it as a “new” idea; it has a history. **Step 3: Understand the Reinterpretation of Genesis** - **Detailed Instructions:** The core of the framework hypothesis is the reinterpretation of Genesis. Instead of a six-day creation, it proposes that the “days” represent periods of time – perhaps ages, epochs, or symbolic stages of creation. The book likely provides specific examples of how Gentry interprets each day (e.g., Day 1 – the separation of light, interpreted as the initial separation of God’s presence from the chaos). - **Checkpoint:** Choose one “day” from Genesis 1 and, based on the framework hypothesis, write a brief paragraph explaining what that day might represent symbolically. (Example: “Day 2 – the separation of waters above and below – could symbolize the division of God’s creative power from the formless void.”) **Step 4: Recognize the Theological Purpose** - **Detailed Instructions:** The framework hypothesis isn’t just about accommodating science; it’s about emphasizing theological truths. 
It shifts the focus from *how* God created to *what* God created and *why* it matters. The book likely argues that the primary purpose of Genesis is to reveal God’s character and his relationship with humanity. - **Troubleshooting Tip:** If you’re struggling to grasp the purpose, ask yourself: “What is the central message of Genesis that the framework hypothesis is trying to highlight?” **Practice Exercise:** Imagine you’re explaining the framework hypothesis to a friend who is skeptical of it. Write a short paragraph (approximately 100-150 words) outlining the key points, addressing potential concerns about abandoning a literal interpretation, and emphasizing the theological significance. **Key Takeaways:** * The framework hypothesis reinterprets Genesis as a literary framework, not a literal historical account. * It emerged from a historical context of theological debate and a desire to reconcile scripture with scientific findings. * Its primary purpose is to emphasize God’s character and his relationship with humanity. **Next Steps:** * Read Kenneth Gentry’s book, “As it is Written,” to gain a deeper understanding of his arguments and supporting evidence. * Explore other perspectives on Genesis interpretation (e.g., the “day-age” theory, the “progressive creation” view). * Consider how your own understanding of Genesis influences your faith and worldview. Would you like me to elaborate on any specific step, or perhaps create a practice exercise focused on a particular aspect of the framework hypothesis?
0.6
medium
4
1,069
[ "scientific method", "basic math" ]
[ "advanced experiments" ]
[ "language_arts", "philosophy_and_ethics" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.45, "creativity": 0.3 }
6749d827-762e-483a-b1b7-a75cd128c020
Think teaching your child
social_studies
problem_set
Think of it as teaching your child to read. You wouldn't punish your child for not knowing the alphabet if they have never seen it before, right? So we don't need to punish our puppies for things we haven't taught them. Here, we will talk about normal puppy behavior and ways to encourage them to try something new. Remember - behavior that is followed by something the dog likes will increase! Puppy training should be a lot of fun! Jumping: There are two ways to solve this - first, stand on their leash. Second, teach them to sit. By teaching sit we are removing the desire to jump. Mouthing: See this full article on puppy mouthing. Pulling: This one is easy. Just stop walking! If you never go anywhere when the dog pulls, he will quickly learn. Offer a small treat when he is by your side and talk in a happy voice to encourage him. The key to training any behavior is to be 100% consistent in your efforts. Oftentimes it will get worse before it gets better, but it will go away if you are consistent. Puppy dog training: Try enrolling in a puppy kindergarten near you. This will really help with socialization and early puppy dog training. Find a dog trainer near you. Puppies need to go to the bathroom after they play, chew, drink, eat or sleep. Start by putting them on a leash and tethering them to you. That way you can be aware of them when they start sniffing the ground (usually a cue that the puppy needs to go potty). Pick them up and carry them outside. When they go potty, tell them "good potty" and give a cookie as a reward. Do not let your puppy off leash until they have finished going so they get into the habit of going potty, then playing. Many puppies will want to go outside just to play if you let them off leash first. What if my puppy has an accident? He will. Do not punish your puppy though. If you catch your puppy in the act then clap your hands or pick them up - this will make your puppy stop going. Get him outside FAST and let him finish up out there. 
Reward him for going outside. Clean up the mess with Nature's Miracle, which will remove the odor completely. Do NOT rub their nose in it, swat them with a newspaper, or isolate your puppy. It will only teach them to not go in front of you. If your puppy is going frequently in the house, you are probably not supervising them enough. Watch and learn their cues. If you feel you are watching and your puppy isn't getting it, you can always have them checked for a urinary tract infection. If your puppy piddles when you greet them, this could be submissive urination. This is a problem that is usually cured by ignoring the behavior and letting your puppy calm down before touching them when you first see them. This article not only applies to puppies, but it is how you should train a dog of any age. If you can't supervise your dog, please put them in a crate to help curb accidents. Remember, dogs go where they go most! This can be a trying time, so have patience and you will get through it. About the Author Written by Amy Dunphy Amy Dunphy is the owner of The Dog Trainer Search. Amy is a professional dog trainer and offers articles, tips and advice through her website.
0.65
low
4
727
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.45, "creativity": 0.35 }
a517f680-dc9b-4d4a-9546-ab0a5d18abb9
Comments on: After punishment
technology
technical_documentation
Comments on: After punishment handed down, it’s time to move past Marcus Smart’s shove http://collegebasketball.nbcsports.com/2014/02/09/after-punishment-handed-down-its-time-to-move-past-marcus-smarts-shove/ College basketball news, features, opinion and everything else. Thu, 27 Oct 2016 20:06:58 +0000 hourly 1 http://wordpress.com/ By: vincentbojackson http://collegebasketball.nbcsports.com/2014/02/09/after-punishment-handed-down-its-time-to-move-past-marcus-smarts-shove/comment-page-1/#comment-16087 Mon, 10 Feb 2014 12:35:27 +0000 http://collegebasketballtalk.nbcsports.com/?p=484580#comment-16087 I think they’re both total a-holes that can’t control their actions. ]]> By: eagles512 http://collegebasketball.nbcsports.com/2014/02/09/after-punishment-handed-down-its-time-to-move-past-marcus-smarts-shove/comment-page-1/#comment-16084 Mon, 10 Feb 2014 04:47:09 +0000 http://collegebasketballtalk.nbcsports.com/?p=484580#comment-16084 If he only called him a piece of crap, he shouldn’t apologize. I hear a lot worse at games, especially if someone thinks it was a cheap foul. ]]> By: osuintx http://collegebasketball.nbcsports.com/2014/02/09/after-punishment-handed-down-its-time-to-move-past-marcus-smarts-shove/comment-page-1/#comment-16082 Mon, 10 Feb 2014 04:23:05 +0000 http://collegebasketballtalk.nbcsports.com/?p=484580#comment-16082 And you are an idiot. It is clear it was said but it happened while Smart was on the ground. Orr kept mouthing off after he got up and it was then that Smart shoved him. What was said then is NOT on the tape. This tape shows nothing and proves nothing except to someone who wants it to…without the slightest hint of validation. 
By: 1historian (Mon, 10 Feb 2014 04:00:19 +0000): Leave it to the media to make a big thing out of this. The guy (who should be fined for being a complete a-hole) said something that really pissed the guy off and the guy went after him. I don’t know what it was and I don’t care. The kid should have stayed on the court but he didn’t. NEWS FLASH – 20 YEAR OLD KID GETS PISSED AFTER BEING INSULTED BY A-HOLE IN THE CROWD!!! How long do you think this will go on, especially if Orr gets interviewed, which I bet will happen? By: jbaxt (Mon, 10 Feb 2014 02:42:37 +0000): Only Smart was in the wrong. He was called a piece of crap and only that. It’s clear on the tape. The media wanted this to be racial, Smart wanted it to be racial (because he acted like a child), and the evidence in FACT shows no race card. The media sucks and Smart is just the opposite. There are thousands of fans just like Orr, it’s just he was heard. Hurry media, make that fact you wanted a race story disappear. But blame both to help you race baiters sleep better. You’re disgusting!
0.55
medium
5
1167
[ "data structures", "algorithms basics" ]
[ "architecture patterns" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.4, "depth": 0.35, "creativity": 0.3 }
00949a47-5ace-4c23-966e-4e48ec2eaae6
age where fast fashion
interdisciplinary
historical_context
In an age where fast fashion dominates the industry, sustainability has become an essential aspect of the modern wardrobe. As consumers become more conscious of their purchasing habits and the environmental impacts of their choices, upcycling has emerged as an innovative and creative way to revolutionize the fashion landscape. This article explores the art of upcycling clothing, its potential to transform the apparel industry, and the significance of this sustainable trend for the future of fashion. From the growing popularity of upcycling to the environmental benefits it brings, we delve into the heart of this fashion movement and its potential to contribute to a greener, more sustainable world. Importance of sustainable fashion in today's world The fashion industry is one of the world's largest polluters, with textile waste, high carbon emissions, and the excessive use of natural resources being major concerns. As the harmful effects of fast fashion on the environment become more evident, sustainable fashion is increasingly seen as a necessary alternative. The sustainable fashion movement is not only about creating new garments from sustainable materials but also finding creative ways to breathe new life into old clothes, reduce waste, and adopt a circular fashion approach. Sustainable fashion encompasses a range of practices, including using organic fibers, reducing water consumption in the production process, and recycling textile waste. One particularly innovative aspect of sustainable fashion is upcycling, which focuses on giving a new purpose to old garments and waste materials. Upcycling fashion has the potential to significantly reduce the environmental impact of the fashion industry and promote a greener future. What Is Upcycled Clothing and Is It Sustainable? Upcycled clothing refers to garments that have been created by reusing and repurposing the same fabric materials, such as old clothes, textiles, and deadstock fabrics. 
This process involves transforming these materials in such a way that their value is enhanced, resulting in unique, one-of-a-kind fashion items. Upcycling clothes not only reduces the need for virgin materials in garment production but also helps minimize textile waste, which is a significant issue in the fashion industry. Upcycled fashion is a sustainable practice as it prolongs the life of materials, reduces waste material, and decreases the demand for new resources. By diverting textile waste from landfills and finding creative reuse for post-consumer waste, upcycle clothing contributes to a circular economy. In this model, waste materials are continuously repurposed, reducing the overall environmental impact of the fashion supply chain. Why Is the Popularity of Upcycling Growing? The popularity of upcycling in fashion is growing for several reasons. Firstly, it addresses the environmental issues associated with fast fashion by reducing waste and conserving resources. As consumers become more aware of the environmental impact of their choices, they are increasingly seeking out sustainable fashion options, including upcycled clothing. Secondly, upcycling offers a unique creative outlet for fashion designers and enthusiasts. Upcycled clothes often have a one-of-a-kind appeal, as they are made from repurposed materials and showcase the designers' artistic vision. This originality appeals to consumers who value individuality and personal expression in their wardrobe choices. Lastly, upcycling supports local communities and small businesses. Many upcycling brands are small enterprises that work closely with local artisans and craftspeople. By supporting these businesses, consumers can contribute to the growth of local economies and promote ethical production practices in the fashion industry. The potential impact of upcycling on the fashion industry Upcycling has the potential to transform the fashion industry in several ways. 
By reducing the demand for new materials and promoting the creative reuse of existing resources, it can help lower the carbon footprint and greenhouse gas emissions associated with garment production. Upcycling also fosters a more inclusive and diverse fashion landscape, as it encourages collaboration between luxury fashion houses, independent designers, and grassroots upcycling brands. This collaboration can drive innovation and inspire new collections that challenge traditional production methods and aesthetics. Moreover, upcycling can contribute to a shift in consumer attitudes toward fashion. As people embrace upcycling, they may become more conscious of the environmental impacts of their purchasing decisions and more inclined to choose sustainable, ethically-produced garments over fast fashion. The Upcycling Trend: Turning Old Into New The process of upcycling clothes How do you upcycle clothes? Upcycling clothes involves taking existing garments or textile waste and transforming them into new, wearable pieces. This can be done through various techniques, such as sewing patchwork onto old t-shirts or other garments, adding embroidery, dyeing, or even combining different fabrics and materials. It often requires a creative and resourceful mindset, as designers need to envision new possibilities for old clothes and find ways to overcome the limitations of the materials at hand. Examples of upcycled garments and accessories Upcycled garments and accessories come in many forms, from repurposed t-shirts and jeans to unique handbags made from reclaimed fabrics. Some common examples include turning an old t-shirt into a tote bag, repurposing denim jeans into stylish jackets, or creating new clothes from a combination of vintage fabrics. Creative techniques used in upcycling Various creative techniques are employed in upcycling fashion, such as patchwork, embroidery, screen printing, and tie-dye. 
These methods allow designers to personalize and embellish upcycled garments, giving them a unique, distinctive look. By experimenting with different techniques, an upcycled garment can push the boundaries of traditional fashion design and create truly original pieces. Sustainable Fashion: The Environmental Benefits of Upcycling Reducing waste and conserving resources One of the main environmental benefits of upcycling is the reduction of waste. By repurposing old clothes, textile waste, and deadstock fabrics, upcycling diverts these materials from landfills and gives them a new life. This not only helps reduce waste but also conserves valuable resources, such as water and energy, that would otherwise be used in the production of new garments. Lowering the carbon footprint of the fashion industry It also contributes to a lower carbon footprint in the fashion industry. By reusing existing materials, upcycling reduces the need for new raw materials and the associated greenhouse gas emissions from their extraction, transportation, and processing. Furthermore, it often involves local production, which minimizes the environmental impact of transportation compared to mass-produced garments made in distant factories. Promoting a circular economy It supports the concept of a circular economy, in which materials are continuously reused and repurposed, minimizing waste and resource consumption. By embracing upcycling, the fashion industry can transition from a linear, "take-make-waste" model to a more sustainable, circular approach. This shift has the potential to significantly reduce the environmental impact of the fashion supply chain and contribute to a greener future. The Future of Upcycling Fashion Emerging trends and innovations As upcycling gains momentum, new trends and innovations continue to emerge. 
For instance, some fashion designers in the textile industry are experimenting with textile recycling technologies, such as mechanical recycling, which breaks down textile waste into fibers that can be spun into new yarns. Others are incorporating sustainable materials, like natural fibers and eco-friendly dyes, into their upcycled creations. Collaboration between upcycling fashion and mainstream brands Collaborations between upcycling fashion brands and mainstream fashion labels are becoming more common, as the industry recognizes the potential of upcycling to contribute to sustainability efforts. High-profile partnerships, such as those between Urban Outfitters and local upcycling brands or luxury fashion houses and innovative upcycling designers, can raise awareness about the benefits of upcycling and inspire more consumers to embrace this trend. The role of consumers in driving the upcycling movement Consumers play a crucial role in driving the upcycling movement. By choosing upcycled garments over fast fashion and supporting brands that promote sustainable practices, consumers can help create a market demand for upcycled fashion. Additionally, individuals can learn to upcycle their own clothes, extending the life of their garments and minimizing their personal waste. The significance of upcycling in reshaping the fashion industry Upcycling has the potential to significantly reshape the fashion industry by: Encouraging a shift towards sustainable practices: As upcycling becomes more mainstream, it can inspire more brands to adopt sustainable production methods and promote a circular economy. Driving innovation: Upcycling challenges traditional fashion design, pushing designers to experiment with new materials, techniques, and aesthetics, thus fostering innovation and creativity within the industry. 
Building a more inclusive fashion landscape: Upcycling fosters collaboration between luxury fashion houses, independent designers, and grassroots upcycling brands, creating a more diverse and inclusive fashion ecosystem. Shifting consumer attitudes: By embracing upcycling, consumers can become more conscious of the environmental impacts of their purchasing decisions and contribute to the demand for sustainable fashion options. The potential of upcycling fashion to contribute to a more sustainable world Upcycling fashion has the potential to contribute to a more sustainable world by promoting resource conservation, reducing waste, and lowering the environmental impact of the fashion industry. By encouraging a circular economy and fostering a more environmentally responsible approach to fashion, upcycling can play a vital role in mitigating the negative effects of fast fashion on the planet. Additionally, by supporting upcycling initiatives and embracing sustainable fashion practices, consumers can help drive the shift towards a greener, more sustainable future for the fashion industry and the world as a whole. The Role of Technology in upcycling fashion Technology plays an essential role in the evolution and growth of upcycling fashion. From advances in textile recycling to the development of eco-friendly materials, technology enables the creation of innovative upcycled products while minimizing the environmental impact of the fashion industry. Some examples of technology's role in upcycling fashion include: Textile recycling technologies: Innovations in mechanical and chemical recycling processes allow for the efficient breakdown of textile waste into fibers, which can then be spun into new yarns and fabrics for upcycled garments. 3D printing and digital fabrication: These technologies enable designers to create unique upcycled garments and accessories by repurposing materials and transforming them into new forms with minimal waste. 
Eco-friendly dyes and finishes: The development of environmentally friendly dyes and finishes helps reduce the environmental impact of upcycling processes and ensures the sustainability of upcycled products. Digital platforms and marketplaces: Online platforms and marketplaces connect upcycling fashion designers, artisans, and consumers, facilitating the growth of the upcycling movement and promoting the exchange of ideas and resources within the sustainable fashion community. Challenges and limitations of upcycling fashion While upcycling fashion offers numerous benefits and holds great potential, it also faces several challenges and limitations: Scale: Upcycling is often a labor-intensive and time-consuming process, which can make it difficult to scale up production and meet the growing demand for sustainable fashion. Material limitations: The quality and availability of materials used for upcycling can vary, potentially affecting the durability and aesthetics of upcycled garments. Consumer perception: Some consumers may perceive upcycled fashion as less fashionable or of lower quality compared to traditionally produced garments, which can limit the growth and mainstream acceptance of upcycling. Economic viability: The cost of upcycled fashion can be higher than mass-produced clothing due to the labor-intensive nature of the process and the use of premium materials. This can make upcycled garments less accessible to a wider audience. Despite these challenges, the upcycling fashion movement continues to gain momentum, driven by growing consumer awareness and the increasing recognition of the need for sustainable practices within the fashion industry. By addressing these limitations and finding innovative solutions, upcycling fashion has the potential to become an essential part of the fashion landscape and contribute to a more sustainable and responsible future for the industry and the planet. What is fashion upcycling? 
Fashion Upcycling is the process of taking old or discarded garments and materials and transforming them into new, fashionable, and wearable items. It is a sustainable practice within the fashion industry that aims to reduce waste, conserve resources, and minimize the environmental impact of clothing production. By creatively repurposing existing materials, upcycling offers a unique and innovative approach to fashion design, while also promoting a more sustainable and circular economy. Is upcycling the future of fashion? As concerns about the environmental impact of the fashion industry grow, upcycling is emerging as a new trend and a promising solution to many of the issues associated with fast fashion. While it may not entirely replace traditional production methods, upcycling has the potential to become an integral part of the future of fashion. By encouraging the creative reuse of materials, reducing waste, and promoting a circular economy, upcycling can contribute to a more sustainable and environmentally responsible fashion landscape. As more designers, brands, and consumers embrace upcycling, its role in shaping the future of fashion is likely to expand. Why is upcycling important in fashion? Upcycling is important in fashion for several reasons: Environmental benefits: it helps reduce waste and conserve resources by repurposing old garments and materials. This reduces the demand for new raw materials, lowering the environmental impact of clothing production. Creativity and originality: it offers a unique creative outlet for designers and fashion enthusiasts, allowing them to create one-of-a-kind garments that reflect their individuality and artistic vision. Supporting local communities: Many upcycling brands and initiatives work closely with local artisans and craftspeople, fostering economic growth and promoting ethical production practices within the fashion industry. 
Consumer awareness: it raises awareness about the importance of sustainability and encourages consumers to make more environmentally responsible choices when purchasing clothing and other textile products. How does upcycling clothes help the environment? Upcycling clothes plays a crucial role in helping the environment by reducing the waste stream, challenging the practices of luxury houses, and mitigating the negative impacts of mass production. Firstly, upcycling clothes diverts textiles from the waste stream, significantly reducing the amount of waste that ends up in landfills or incinerators. By giving new life to used or discarded garments, it not only prevents pollution but also conserves natural resources, such as water and energy, that would otherwise be consumed in producing new textiles. Secondly, it poses a challenge to luxury houses and the broader fashion industry, which has historically been built on a foundation of exclusivity and high-end materials. By transforming discarded or pre-loved items into stylish, desirable pieces, upcycling demonstrates that fashion can be both sustainable and luxurious. This forces luxury houses to rethink their practices and embrace more eco-friendly alternatives, ultimately pushing the fashion industry towards a greener future. Lastly, mass production in the fashion industry is known to be resource-intensive and environmentally harmful. Upcycling clothes presents an alternative to this model, as it promotes the use of existing materials rather than relying on the constant production of new textiles. This approach not only reduces the environmental footprint of the fashion industry but also encourages consumers to appreciate the value of the clothes they already own, fostering a more sustainable consumption culture. What do you need to upcycle clothes? To upcycle clothes, you need a few basic tools, materials, and some creativity. 
Start by gathering old or damaged garments like jeans, t-shirts, or clothing from fast fashion brands that you no longer wear. Then, arm yourself with a sewing kit, which should include needles, threads, scissors, and pins. A sewing machine can be helpful but is not strictly necessary. Next, explore various upcycling techniques such as patchwork, embroidery, or dyeing. To find inspiration, research tutorials online, follow enthusiasts on social media, or attend local workshops. With a little patience and imagination, you can transform your old clothes into unique, fashionable, and sustainable pieces. The art of upcycling fashion represents a creative and sustainable approach to addressing the environmental challenges of the fashion industry. By repurposing old garments and waste materials, upcycling promotes a circular economy, creates less waste, and lowers the carbon footprint of fashion. As the popularity of upcycling continues to grow, it has the potential to reshape the industry, drive innovation, and contribute to a more sustainable world. By embracing and supporting sustainable fashion, both consumers and brands can play a part in building a greener future.
0.65
medium
6
3266
[ "intermediate understanding" ]
[ "research" ]
[ "technology" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.55, "creativity": 0.35 }
037251f8-745b-4efb-ab6c-2b955e091259
IELTS Cambridge series sample essay 1: Is change good or bad? Topic: Some people prefer spend
science
problem_set
IELTS Cambridge series sample essay 1: Is change good or bad? Topic: Some people prefer to spend their lives doing the same things and avoiding change. Others, however, think that change is always a good thing. I tend to agree that young children can be negatively affected by too much time spent on the computer every day. This is partly because sitting in front of a screen for too long can be damaging to both the eyes and the physical posture of a young child, regardless of what they are using the computer for. The opening paragraph consists of two sentences, 58 words in total. The first sentence directly states the writer's own opinion ("tend to agree"); the second briefly explains the reason for that opinion. IELTS sample essay [1] Topic: The older generations tend to have very traditional ideas about how people should live, think and behave. However. However, the main concern is about the type of computer activities that attract children. These electronic games tend to be very intense and rather violent. The player is usually the ‘hero’ of the game and too much exposure can encourage children to be self-centred and insensitive to others. 3. Prepare the necessary expressions. 4. Pay attention to the use of linking words. Only with a solid IELTS preparation strategy can you stand out in the exam. Linking words often serve to connect what comes before with what follows; adding them not only strengthens an essay's sense of logic but also makes the writing more rigorous. Imperative sentences should not be used in essay writing; a more idiomatic expression would be "we should avoid having food at night", so students need to keep accumulating such expressions as they study and gradually move away from Chinese-style thinking. Family is our safe harbor, and family-related topics are also favorites in the IELTS writing test; below is an IELTS essay on family, with analysis. IELTS Task 2 family and lifestyle topic sample essay: the influence of the family and the outside world on children. Task. IELTS Task 2 sample essay 1: Is it good to be distracted by other things while studying? While I prefer to put away my cellphone when doing homework, I can see how some students could benefit from having a cellphone around. Even when children use a computer for other purposes, such as getting information or emailing friends, it is no substitute for human interaction. Spending time with other children and sharing non-virtual experiences is an important part of a child's development that cannot be provided by a computer. In spite of this, the obvious benefits of computer skills for young children cannot be denied. Their adult world will be changing constantly in terms of technology and the Internet is the key to all the knowledge and information available in the world today. 
Therefore it is important that children learn at an early age to use the equipment enthusiastically and with confidence as they will need these skills throughout their studies and working lives. However, students should not overwork themselves and should spend the weekend on some leisure activities apart from study so as to refresh their minds and relax. A high-scoring IELTS sample essay on education. I think the main point is to make sure that young children do not overuse computers. Parents must ensure that their children learn to enjoy other kinds of activity and not simply sit at home, learning to live in a virtual world.
0.55
high
5
1064
[ "introductory science", "algebra" ]
[ "research methodology" ]
[ "technology" ]
{ "clarity": 0.4, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
14b1564b-35bb-4f33-b56a-834ffa8bec0b
5340-00-291-4211 Enlarge View
social_studies
review_summary
5340-00-291-4211 Master Lock NSN 5340-00-291-4211 Made to Meet Exacting U.S. Government Standards Master Lock's high quality padlocks protect assets at government facilities throughout the world, securing high-value military property, withstanding extensive use and harsh environments, and defeating attempts to breach security. Master Lock provides locks to every segment of the U.S. Government, including: - Department of Defense (DOD) - Military Services - Defense Agencies - Coast Guard - General Services Administration (GSA) Master Lock government padlocks can be ordered by NSN or by commercial equivalent part number (NSN 5340-00-291-4211 = Master Lock No. 6004 N MK W27 SBC USS 20). Where required, Master Lock government padlocks are manufactured to exacting U.S. Government standards CID-A-A-59486 and CID-A-A-59487. They feature a bump resistant, five-pin tumbler cylinder for more than 10,000 key changes and a dual locking mechanism for superior pry resistance. For added security, all models have non-removable keys. - Delivery: The Master NSN 5340-00-291-4211 Padlock is factory-ordered, please allow 8-10 working days' lead time. Total Price: $390.00 Master Lock 5340-00-291-4211 Padlock Features and Benefits: Equivalent to 6004 N MK W27 SBC USS 20. - 1-9/16" (40mm) wide laminated brass padlock, 9/32" (7mm) diameter dual lever locking brass shackle. - Non-removable key, 5-pin BumpStop cylinder, complies with ASTM F883 Performance Standard for bump resistance testing. - Stamped "US Set". - Spec Number (CID): AA 59486-1IB20. - Master keyed, 2 keys. - With chain. - Set size of 20. Bulk Discounts: For Bulk Discounts please call (1-800-676-7670) or e-mail (sales@taylorsecurity.com).
0.7
high
4
677
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.6, "engagement": 0.5, "depth": 0.35, "creativity": 0.4 }
dcf321c2-1114-4a5a-bd2b-f9501c69aab0
Hence, important you regular
life_skills
worked_examples
Hence, it is important for you to go for regular screening tests at the recommended frequency. For those under 30 years of age, health screening is recommended every two years. However, for those individuals who are 30 years or older, a yearly health screening is highly recommended. For those over 50 years of age, more age-related screening tests are conducted. One of the major risk factors for a variety of life-altering diseases is age. Core Aspects In Healthcare Examined Screenings are tests that look for diseases before you have symptoms. Screening tests can find diseases early, when they're easier to treat. These screenings can indicate the presence of cervical cancer and help assess risk. Your doctor will insert a sterile swab in your vagina and gently scrape your cervix to obtain a cell sample for analysis. While this test may be uncomfortable to some, it is critical for assessing cervical health. There are some diseases that are very insidious and their progression cannot be tracked very well unless the patient has regular health screenings. This makes it crucial to get regular health check-ups to ensure that you are not missing out on identifying any life-threatening diseases. Elevations in any of these levels are a sign that your body is not processing glucose properly, which can increase your risk for diabetes, heart disease, cancer, and Alzheimer’s disease. One study published in the Journal of Neurology showed that even if your HbA1c is considered in the “normal range”, every 0.1 increase raises the rate at which your brain shrinks in size per year. This is why being advised on how to reach “optimal range” is so much more important than simply saying you’re in “normal range”. Which tests you need depends on your age, your sex, your family history, and whether you have risk factors for certain diseases. After a screening test, ask when you will get the results and whom to talk to about them. 
Others need special equipment, so you may need to go to a different office or clinic. It is best to have a trusted health care provider you see regularly who has access to your health records. A one-off screening will only pick up health conditions that are present at the time of screening. Regular screening helps to detect conditions that may develop after the previous screening. - Clotting is a crucial process that helps your body stop bleeding after a cut or wound. - But a clot in a vein or artery can be deadly, blocking blood flow to your brain, heart, or lungs and causing heart attacks or strokes. - In men, DHEA helps develop traits like body hair growth, so low levels are considered abnormal. - In women, high levels can cause typically male traits, like excess body hair, to develop, so low levels are normal. Core Criteria Of Healthy Habits Considered However, with early detection and treatment, the body can be provided with the best defense against these diseases. This test, also known as a CBC, is the most common blood test performed. It measures the types and numbers of cells in the blood, including red and white blood cells and platelets. This test is used to determine general health status, screen for disorders and evaluate nutritional status. It can help evaluate symptoms such as weakness, fatigue and bruising, and can help diagnose conditions such as anemia, leukemia, malaria and infection. Lead-time bias refers to the fact that patients whose diseases are detected by screening before they experience symptoms have a longer survival time from diagnosis to death. The prevalence of the detectable preclinical phase of disease has to be high among the population screened. This relates to the relative costs of the screening program in relation to the number of cases detected and to positive predictive value. 
The expenditure of resources on screening must be justifiable in terms of eliminating or decreasing adverse health consequences. A screening program that finds diseases that occur less often would benefit only a few individuals. Blood tests can be used in a number of ways, such as helping to diagnose a condition, assessing the health of certain organs or screening for some genetic conditions. Comparing the outcomes of screened and unscreened groups can be challenging due to several biases. While preventing even one death is important, given limited resources, a more cost-effective program for diseases that are more common should be given a higher priority, because it will help more people. A lipoprotein panel measures the levels of LDL and HDL cholesterol and triglycerides in your blood. Abnormal cholesterol and triglyceride levels may be signs of increased risk for CHD. A lipoprotein panel is a blood test that can help show whether you're at risk for coronary heart disease. This test looks at substances in your blood that carry cholesterol.
0.65
medium
4
989
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "science" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.45, "creativity": 0.3 }
4949f067-eac7-4bef-b17c-01c5b5d70108
This Day in History: May 15th- Nothing Less
social_studies
historical_context
This Day in History: May 15th- Nothing Less This Day In History: May 15, 1869 On May 15, 1869, The National Woman Suffrage Association was formed in New York City. The group was the result of ideological and political disagreements between two factions of the suffrage movement, one difference being whether to support the 15th amendment, which prohibited the government from denying citizens the right to vote, regardless of their “race, color, or previous condition of servitude.” The first signs of dissension were evident as early as 1860, but for all intents and purposes the women’s suffrage movement took a hiatus during the Civil War. During the post-war era, the movement re-grouped as the American Equal Rights Association and set up a new platform. The Association was divided on how to proceed when faced with the Reconstruction amendments which entailed the inclusion of the word “male” in the United States Constitution for the first time, with Susan B. Anthony and Elizabeth Cady Stanton, in an open letter to the 1868 Democratic National Convention, stating, “While the dominant party has with one hand lifted up two million black men and crowned them with the honor and dignity of citizenship, with the other it has dethroned fifteen million white women—their own mothers and sisters, their own wives and daughters—and cast them under the heel of the lowest orders of manhood.” The American Equal Rights Association eventually split over this issue. One faction insisted that, as citizens, women already inherently had the right to vote if the 15th amendment were to pass and felt that close ties must be maintained to abolitionists and the Republican party. 
The other believed that it was imperative that a woman’s right to vote be established at the same time as a black man’s and that continuing strong ties with the abolitionist movement was unnecessary and potentially detrimental, given certain perceived previous betrayals, such as the aforementioned use of “male” in amendments. Supporters of the latter perspective included women such as Susan B. Anthony and Elizabeth Cady Stanton, who went on to form the National Woman Suffrage Association, also known as “the National,” with Stanton serving as the organization’s first president. Their weekly newsletter, The Revolution, although short-lived, packed quite a punch. Its purpose was to aid the working women of America, and reflected the agenda of the National. It was, however, not without controversy, even among other women’s suffragists. For instance, Stanton stated in The Revolution (in a not-so-subtle attempt to appeal to white Southerners and play on certain people’s fears that they’d soon be marginalized in government) “American women of wealth, education, virtue and refinement, if you do not wish the lower orders of Chinese, Africans, Germans and Irish, with their low ideas of womanhood to make laws for you and your daughters … demand that women too shall be represented in government.” (Women’s suffragist supporter and husband to American Woman Suffrage Association leader Lucy Stone, Henry Blackwell, also made the argument to Southern legislatures that if they granted women the right to vote at the same time as black people, “the political supremacy of your white race will remain unchanged.”) While these types of racist appeals may seem curious given that African Americans and women’s rights activists had previously often fought hand in hand for their rights, it should be noted that there were other reasons for making such statements, even beyond just common prejudices of the day. 
The National suffragists, particularly, were afraid of what would happen if black people gained the right to vote first. You see, at the time there was a strong sentiment that black men, on the whole, would vote against the right for women to vote. Whether they were correct in this theory is difficult to discern. The great Frederick Douglass, for one, strongly supported women’s suffrage and was a pivotal figure in convincing people at the first women’s rights convention in the U.S. (organized by Stanton and Lucretia Mott) that women must fight for the right to vote; this was something Stanton was heavily pushing for, but Mott thought “would make us ridiculous.” Douglass stated at that meeting that if women could not vote, he could not in good conscience himself accept the right to vote, and that in this denial of the right to participate in government, “not merely the degradation of woman and the perpetuation of a great injustice happens, but the maiming and repudiation of one-half of the moral and intellectual power of the government of the world.” However, though Douglass was a strong supporter of a woman’s right to vote, he did imply that he believed the National suffragists were correct in the idea that black men would largely vote against them when given the right to vote, stating in the May 15, 1868 edition of the New York Tribune, “The race to which I belong have not generally taken the right ground on this question.” Whatever the case, during the country’s Centennial celebration in Philadelphia in 1876, members of the National approached the stage after the Declaration of Independence was read. They presented the presiding officer with a document called the Declaration of Rights of the Women of the United States. The document listed the natural rights of women that the government was infringing upon and that it was the government’s duty to uphold as part of the social contract. 
It went on to state, It was the boast of the founders of the republic, that the rights for which they contended, were the rights of human nature. If these rights are ignored in the case of one half the people, the nation is surely preparing for its own downfall… Woman has not been a heedless spectator of the events of this century, nor a dull listener to the grand arguments for the equal rights of humanity. From the earliest history of our country, woman has shown equal devotion with man to the cause of freedom, and has stood firmly by his side in its defence. Together, they have made this country what it is. Woman’s wealth, thought and labor have cemented the stones of every monument man has reared to liberty… We ask of our rulers, at this hour, no special favors, no special privileges, no special legislation. We ask justice, we ask equality, we ask that all the civil and political rights that belong to citizens of the United States, be guaranteed to us and our daughters forever. One hundred years after Abigail Adams implored her husband John to “remember the ladies” (“If particular care and attention is not paid to the ladies, we are determined to foment a rebellion, and will not hold ourselves bound by any laws in which we have no voice or representation.”), women were indeed fighting tooth and nail for their rights. In the end, despite numerous federal amendments drafted after these events calling for women’s suffrage, it wasn’t until 1919 that Congress finally passed one. That year, President Woodrow Wilson called a special session of Congress for the purpose of passing the suffrage bill. It passed the House with 42 more votes than necessary, then passed the Senate 56 to 25. The states themselves then ratified it, with Illinois, Wisconsin, and Michigan being the first and Tennessee the last of the needed 36 states to ratify. 
Thus, in the summer of 1920, the 19th Amendment to the Constitution was put in place, (finally) guaranteeing women in all states the right to vote. - The First Woman to Cast a Vote in Chicago Did So With Her Feet - The Remarkable Emma Goldman - The Remarkable Nellie Bly and Her Adventure in a Mad-House - Why Elections Are Held on Tuesday in the United States - The Great Frederick Douglass
0.6
medium
4
1658
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.55, "creativity": 0.3 }
ca926605-fab0-416a-b5f9-9f45fb854d69
There four primary levels
science
historical_context
There are four primary levels of conflict that may be present in organizations: intrapersonal (within an individual), interpersonal (between individuals), intragroup (within a group), and intergroup (between groups). Has anyone seen any of these operating in the workplace? Please share your example and if and how it was addressed/resolved. Interpersonal conflicts are hugely common in any workplace, and the bigger the workplace, the more interpersonal conflicts there typically are, simply because there are more people in the workplace. This has become even more prevalent with the diversification that we have seen in recent years in the workplace. We now have people from various countries living and working in our country to an extent greater than we have ever before seen in history. As a result, we are also seeing a rise in workplace conflicts due to the diversification in the workforce. This is a situation that ... The solution provides a detailed discussion examining primary levels of conflict in organizations, and specifically answers the question, Has anyone seen any of these operating in the workplace? Please share your example and if and how it was addressed/resolved. This solution is written based on 25+ years of professional experience in management.
0.55
medium
4
241
[ "scientific method", "basic math" ]
[ "advanced experiments" ]
[ "social_studies", "life_skills" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.3, "engagement": 0.4, "depth": 0.35, "creativity": 0.3 }
fa53d6ff-e70b-434f-9a9f-5750b756329f
Collection: astronauts, 360 km (225 miles) above Earth
interdisciplinary
concept_introduction
Collection: astronaut photography. From a vantage point about 360 km (225 miles) above Earth, the crew of the International Space Station photographed the moon through the upper layers of Earth's atmosphere. The closed cloud cover near the bottom of the image probably lies at an altitude of about 6 km (3 miles). The shading from blue to black is produced by light scattering off gas molecules in the very low-density upper atmosphere. Chile's Biobío River sets out from the high Andes and flows to the Pacific Ocean near Concepción, about 450 km south of Santiago. The river is known worldwide for its whitewater rafting. This image shows a stretch of the Biobío flowing around the Callaqui volcano in the Andes, along with the Pangue Dam and the reservoir that fills a narrow, winding section of the river valley. Completed in 1996, the dam is the first of six hydroelectric stations planned by the Chilean power company ENDESA. The future development of the Biobío has become the subject of intense debate among Chileans and has even been called Chile's "defining environmental issue." This image of the El Paso–Juárez region on the US–Mexico border is the 100,000th photograph of Earth taken by astronauts aboard the International Space Station. It was taken on January 26, 2004, by the Expedition 8 crew. The Rio Grande flows through the region, forming the boundary between the sister cities of El Paso, Texas, and Juárez, Chihuahua. North is to the right in this image, and the setting sun casts the east faces of the Sierra Juárez and the Franklin Mountains into shadow. The Río de la Plata is a muddy estuary of the Paraná and Uruguay Rivers and forms part of the border between Argentina and Uruguay. The rich estuary supports two capital cities, Buenos Aires and Montevideo. This image shows the intricate mixing of river water and South Atlantic water in the Río de la Plata. On a fine October morning in 2003, the ISS-7 crew of the International Space Station looked down the length of New England's fall foliage. The autumn foliage of Baxter Woods Park in Portland, Maine, shows reds and browns from its mix of maples, old silver maples, and other trees; in nearby Evergreen Cemetery, the bright red and yellow leaves of the maples stand out. Surrounded by the Portland cityscape, this wooded cemetery is known for its historic graves and forest paths. On November 2, 2003, the residue of October's destructive fires still filled California's Central Valley. This "inverted" digital photograph was taken from the International Space Station from a position over the Pacific Northwest looking down toward central California. By the time the image was taken the fires had finally been contained, but dust and smoke remained trapped in the air within the rim of the mountains enclosing the valley: the Sierra Nevada (left) and the Coast Ranges (right). These two Pakistani cities sit side by side, yet their land-use patterns are completely different. Islamabad (1998 population 901,000) has a precisely planned rectangular street pattern adjoining the Margalla Hills (upper left). The larger Rawalpindi (1998 population 1,406,214) lies to the south on the Soan River. Islamabad has grown rapidly since construction began in 1961. The city was designed as a new administrative district for the government, the supreme court, and diplomats. The huge white complex of the Faisal Mosque sits at the city's northern edge. In contrast to Islamabad's rectangular grid, Rawalpindi shows a radial traffic pattern centered on the river running through the city's core; its blocks are small and its growth has been far less controlled. Fires in the San Bernardino Mountains, driven by Santa Ana winds, were burning out of control on the Sunday morning this photograph was taken from the International Space Station (about 11 a.m. PST). Thick yellow smoke blows southward, blanketing the valleys below. The photograph captures the smoke plume as the ISS approached and passed over the area. Lake Arrowhead is the reservoir near the left edge of the photograph. 
Question: How many photographs of Earth had been taken from the International Space Station by the time the El Paso–Juárez image was taken? Answer: 100,000 photographs. Question: According to the text, which of the following is NOT a feature of the Biobío River? A) It is famous for whitewater rafting B) It flows into the Pacific Ocean C) It is the longest river in Chile D) It has a dam called the Pangue Dam. Answer: C) It is the longest river in Chile. Question: What is the main reason for the debate over the future development of the Biobío River? Answer: The future development of the Biobío River has become the subject of intense debate among Chileans because of environmental concerns. Question: Does the Rio Grande form the boundary between El Paso and Juárez? Answer: Yes. Question: What caused the shading from blue to black in the photograph of Earth's atmosphere? Answer: The shading from blue to black is produced by light scattering off gas molecules in the very low-density upper atmosphere. Question: What was the population of Islamabad as of 1998? Answer: As of 1998, the population of Islamabad was 901,000. Question: What caused the smoke residue remaining over California's Central Valley in November 2003? Answer: It was residue from October's destructive fires; the fires had been contained, but smoke remained trapped in the air. Question: What was the approximate altitude of the International Space Station when the moon photograph was taken? Answer: The International Space Station was about 360 km (225 miles) above Earth.
0.65
low
4
5462
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.45, "creativity": 0.4 }
07e8e431-3873-4489-847a-336c9c679cae
Looking through book Jonah
philosophy_and_ethics
tutorial
Looking through the book of Jonah, an overarching theme is that Jonah grossly misunderstands the Lord. By examining three main failures in his grasp of God’s character and His steadfast, gracious, enduring love, we see that Jonah treats God as too small, God’s compassion as too limited, and God’s worship as too pagan. What about ours? Before we move on to Jonah and the fish, there’s an important lesson still to be learned. The sailors take up a significant portion of chapter one, and their response to the storm and to Jonah is very instructive. The lessons there will help us ask questions today like, “What do the people around me believe about God?” and “How do they respond to evil?” Jonah is a little book with a big lesson. It teaches us that God’s people aren’t just supposed to know the truth; they’re expected to share it with those who don’t. In this first chapter, we’re introduced to Jonah, a prophet who rejected God’s calling. His rejection was rooted not in fear, but in a disdain for the people he was called to preach to.
0.65
medium
4
255
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
22aee16a-5c68-4723-823c-cb441b9b7697
Digital Signal Processing: Weaving
technology
data_analysis
## Digital Signal Processing: Weaving the Fabric of the Digital World **Knowledge Network Position**: Digital Signal Processing (DSP) is a foundational pillar within the domain of **programming_systems**, acting as the crucial bridge between the analog, continuous world and the discrete, computational realm of digital computers. It doesn't just reside within programming systems; it *enables* them to interact meaningfully with physical phenomena. DSP translates the continuous vibrations of sound waves, the variations in light intensity, or the fluctuations in electrical signals into sequences of numbers that computers can understand, manipulate, and interpret. This fundamental transformation allows for the creation of intelligent systems that can perceive, analyze, and respond to their environment. Its connections radiate outwards to numerous other domains: 1. **Mathematics (specifically, Fourier Analysis, Linear Algebra, Probability & Statistics)**: DSP is deeply rooted in mathematical principles. The **Fourier Transform** is arguably the most critical tool, allowing us to decompose signals into their constituent frequencies, revealing hidden patterns and enabling frequency-domain analysis. **Linear Algebra** is essential for representing signals as vectors and operations on them as matrix transformations, particularly in areas like filtering and system modeling. **Probability and Statistics** are vital for understanding and modeling noisy signals, estimating parameters, and designing robust detection and estimation algorithms. Without these mathematical underpinnings, DSP would be impossible. 2. **Electrical Engineering (specifically, Communications Systems, Control Systems, Circuit Theory)**: DSP is the engine that drives modern communication systems. It's used for modulation and demodulation of signals (e.g., in Wi-Fi, cellular networks), error correction, and efficient data compression. 
In control systems, DSP algorithms analyze sensor data to make real-time adjustments to physical processes, from stabilizing aircraft to controlling robotic arms. Understanding **circuit theory** is crucial for the hardware implementation of DSP algorithms, as it dictates how signals are processed by electronic components. 3. **Computer Science (specifically, Algorithms, Data Structures, Embedded Systems)**: DSP algorithms are a specialized class of algorithms that operate on sequences of data. Efficient implementation requires careful consideration of **data structures** to manage these sequences. Furthermore, DSP is a cornerstone of **embedded systems**, where microcontrollers and specialized DSP processors are programmed to perform real-time signal analysis and control tasks in devices ranging from smart appliances to automotive systems. 4. **Physics (specifically, Acoustics, Optics, Electromagnetism)**: DSP provides the computational tools to analyze and manipulate signals originating from physical phenomena. In **acoustics**, it's used for audio processing, noise cancellation, and speech recognition. In **optics**, it enables image processing, pattern recognition in visual data, and the analysis of light signals. Understanding **electromagnetism** is key to interpreting and processing radio waves and other electromagnetic signals that are fundamental to wireless communication and sensing. 5. **Information Theory (specifically, Signal Compression, Error Correction)**: DSP plays a vital role in efficiently transmitting and storing information. Techniques like **lossy compression** (e.g., MP3 for audio, JPEG for images) rely on DSP to remove perceptually irrelevant information, while **error correction codes** (e.g., used in satellite communication and data storage) employ DSP to detect and correct errors introduced during transmission or storage. 
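The frequency-domain decomposition described above can be sketched in a few lines. The following is a minimal, self-contained illustration (not tied to any particular DSP library): a naive O(N²) discrete Fourier transform, which the FFT computes identically in O(N log N), recovering the two sine components hidden in a composite signal.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform: O(N^2), fine for small N."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# 64 samples of a 5-cycle sine plus a weaker 12-cycle sine.
n = 64
x = [math.sin(2 * math.pi * 5 * t / n) + 0.5 * math.sin(2 * math.pi * 12 * t / n)
     for t in range(n)]

# Spectral magnitudes reveal the constituent frequencies: the two largest
# peaks below the Nyquist bin (n // 2) sit at bins 5 and 12.
mags = [abs(c) for c in dft(x)]
peaks = sorted(range(1, n // 2), key=lambda k: mags[k])[-2:]
print(sorted(peaks))  # [5, 12]
```

In practice one would call an FFT routine (e.g., `numpy.fft.rfft`) rather than this quadratic loop, but the decomposition it performs is the same.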
**Real-World Applications**: DSP is ubiquitous in modern technology, often operating silently in the background: * **Audio Processing**: Noise cancellation in headphones (e.g., Bose QuietComfort), equalization and effects in music production, speech recognition in virtual assistants (e.g., Siri, Alexa), and audio codecs (MP3, AAC) for music streaming. * **Image and Video Processing**: Digital cameras use DSP for image enhancement, autofocus, and noise reduction. Video compression codecs (H.264, HEVC) rely heavily on DSP for efficient video streaming. Medical imaging (MRI, CT scans) uses DSP to reconstruct detailed images from raw sensor data. * **Telecommunications**: Modulation and demodulation in cellular phones and Wi-Fi routers, digital filtering to remove interference, and error correction to ensure reliable data transmission. * **Biomedical Engineering**: Analyzing electrocardiograms (ECGs) for heart health, electroencephalograms (EEGs) for brain activity, and processing ultrasound and X-ray data. * **Automotive Industry**: Anti-lock braking systems (ABS), electronic stability control (ESC), engine control units (ECUs), and advanced driver-assistance systems (ADAS) all utilize DSP for real-time sensor data analysis and control. * **Consumer Electronics**: Digital TVs, smartphones, gaming consoles, and even washing machines employ DSP for various functions like audio playback, image display, and motor control. **Advanced Extensions**: DSP is not just a set of techniques; it's a gateway to numerous advanced fields: * **Machine Learning & Artificial Intelligence**: DSP provides the fundamental signal preprocessing and feature extraction techniques that are essential for training machine learning models. For instance, Fourier transforms of audio signals are used as input for speech recognition models. Convolutional Neural Networks (CNNs) for image processing are deeply inspired by and often implement DSP-like operations (convolution). 
* **Computer Vision**: Advanced image and video analysis, object detection, facial recognition, and augmented reality systems all rely on sophisticated DSP algorithms for feature extraction, filtering, and transformation. * **Robotics and Autonomous Systems**: Real-time sensor fusion (combining data from multiple sensors like cameras, LiDAR, radar), path planning, and control systems in autonomous vehicles and robots are heavily dependent on DSP for processing environmental data. * **Computational Finance**: Analyzing financial time series data, detecting anomalies, and building predictive models often involve DSP techniques for signal decomposition and pattern recognition. * **Quantum Signal Processing**: An emerging field that applies DSP principles to quantum information, aiming to process and analyze quantum signals for applications in quantum computing and communication. **Limitations & Open Questions**: Despite its power, DSP has limitations and raises ongoing questions: * **Quantization Error**: The conversion of analog signals to digital involves approximation, leading to quantization errors that can introduce noise and distortion. Minimizing these errors while maintaining efficiency is a constant challenge. * **Sampling Rate Limitations**: According to the Nyquist-Shannon sampling theorem, a signal must be sampled at a rate at least twice its highest frequency component to be perfectly reconstructed. This implies that infinite bandwidth signals cannot be perfectly digitized, and very high-frequency signals require impractically high sampling rates. * **Computational Complexity**: Many advanced DSP algorithms are computationally intensive, requiring specialized hardware (DSPs, FPGAs) or significant processing power, which can be a bottleneck in real-time applications with strict latency requirements. 
* **Robustness to Non-Stationary Signals**: While DSP excels at analyzing stationary signals (whose statistical properties don't change over time), dealing with highly non-stationary or chaotic signals (e.g., complex biological signals, unpredictable environmental noise) remains an active area of research. * **Interpretability of Complex Transforms**: Understanding the meaning of transformed signals in higher dimensions or with complex mathematical bases can be challenging, leading to a need for more intuitive visualization and interpretation tools. **Historical Context**: The roots of DSP can be traced back to the early 20th century with the development of **Fourier analysis** by Jean-Baptiste Joseph Fourier. However, the practical implementation of DSP was severely limited by the computational power of the time. Key milestones include: * **1940s-1950s**: Early work on digital filtering and sampling theory by pioneers like Claude Shannon (information theory and sampling) and Harry Nyquist. * **1960s**: The development of the **Fast Fourier Transform (FFT)** algorithm by James Cooley and John Tukey revolutionized spectral analysis, making it computationally feasible. This is often considered the true birth of practical DSP. * **1970s-1980s**: The advent of affordable microprocessors and dedicated Digital Signal Processor (DSP) chips (e.g., by Texas Instruments and Analog Devices) democratized DSP, enabling its widespread adoption in commercial products. * **1990s-Present**: Continued advancements in algorithm design, hardware acceleration (e.g., GPUs), and the integration of DSP with other fields like machine learning have led to increasingly sophisticated applications. **Systems Integration**: DSP is rarely a standalone concept; it's a critical component within larger systems: * **Communication Systems**: DSP chips are integrated into modems, routers, and mobile phones to handle signal encoding, decoding, modulation, and demodulation. 
They work in conjunction with **transmission media** (cables, airwaves) and higher-level **protocol stacks** (TCP/IP) to ensure reliable data transfer. * **Embedded Systems**: DSP processors or DSP capabilities on general-purpose processors are integrated into microcontrollers to process sensor inputs (e.g., from microphones, accelerometers) and generate control outputs (e.g., motor commands, display updates). They operate within the context of an **operating system** or a **Real-Time Operating System (RTOS)**. * **Multimedia Frameworks**: Software frameworks like GStreamer or FFmpeg utilize DSP libraries for audio and video encoding, decoding, filtering, and effects, enabling rich media experiences on various platforms. * **Sensor Networks**: In distributed sensor systems, DSP algorithms are often implemented on individual sensor nodes or aggregation points to preprocess data, reduce bandwidth requirements, and extract meaningful information before it's sent to a central processing unit. **Future Directions**: The evolution of DSP is driven by the ever-increasing demand for real-time processing of complex data and the pursuit of greater intelligence: * **Edge AI and TinyML**: Pushing DSP capabilities to resource-constrained edge devices and microcontrollers to enable on-device AI inference and real-time signal analysis without relying on cloud connectivity. * **AI-Accelerated DSP**: Integrating machine learning models directly into DSP pipelines for adaptive filtering, intelligent noise reduction, and more sophisticated signal modeling. * **Neuromorphic Computing**: Exploring new computational architectures inspired by the human brain that could offer highly efficient and parallel processing for complex DSP tasks. * **Quantum DSP**: As quantum computing matures, developing quantum algorithms for signal processing tasks could lead to unprecedented speedups for certain problems, such as complex spectral analysis or solving large linear systems. 
* **Personalized and Adaptive Signal Processing**: Developing DSP systems that can learn and adapt to individual user preferences or changing environmental conditions, leading to more personalized audio experiences, adaptive medical monitoring, and context-aware systems. * **Beyond Traditional Signal Models**: Extending DSP to handle more complex and unstructured data, such as graph signals or spatio-temporal data, opening up new avenues for analysis in fields like social networks and complex physical systems. In essence, Digital Signal Processing is the invisible architect of our digital lives, transforming raw, continuous physical phenomena into the structured, manipulable data that powers our connected world. Its continued evolution promises ever more sophisticated and intelligent interactions with our environment.
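The sampling-rate limitation noted earlier can be made concrete with a hand-rolled sketch (assuming nothing beyond the standard library): a 9 Hz sine sampled at only 12 Hz, below the 18 Hz the Nyquist-Shannon theorem requires, yields samples indistinguishable from those of a phase-flipped 3 Hz sine. The high frequency "aliases" down to |12 − 9| = 3 Hz.

```python
import math

# Nyquist: a 9 Hz sine needs a sampling rate above 18 Hz. Sampling it at
# only 12 Hz folds it back to |12 - 9| = 3 Hz with inverted phase, so the
# digitized samples cannot be told apart from a phase-flipped 3 Hz sine.
fs = 12  # samples per second; one second of samples below
nine_hz = [math.sin(2 * math.pi * 9 * t / fs) for t in range(fs)]
three_hz = [-math.sin(2 * math.pi * 3 * t / fs) for t in range(fs)]

aliased = all(abs(a - b) < 1e-9 for a, b in zip(nine_hz, three_hz))
print(aliased)  # True
```

This is why practical analog-to-digital converters place an analog anti-aliasing low-pass filter before the sampler: frequencies above half the sampling rate must be removed before digitization, since no amount of processing afterward can distinguish them from their aliases.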
0.7
high
6
2346
[ "algorithms", "software design" ]
[ "distributed systems" ]
[ "mathematics", "science", "arts_and_creativity" ]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.65, "creativity": 0.45 }
5065fae4-32db-49bc-a3c9-0a2eff143ee6
DMM Debt Diary: On the Anniversary of the End of the War. Posted:
interdisciplinary
ethical_analysis
DMM FX / Debt / Debt Diary: On the Anniversary of the End of the War. Posted: Content like this doesn't usually get much of a reaction from readers, but today I felt I had to write it. Nominally, today, August 15, is the day the Pacific War ended. Fighting continued afterward in places like Okinawa, so it can't be stated so simply, but it has been 74 years since the end of the war. Television was full of footage and documentaries about the war. Those who died fighting for Japan, and those who died because of the war: roughly three million Japanese. Until a moment ago, I was reading the last letters of kamikaze pilots. Some of them lost their lives as special attack pilots at seventeen. I am ashamed of my own softness. What have I been saying? "I can't quit gambling." "I lied to my parents and borrowed money to gamble." "I spent my whole paycheck on gambling again." "I'm in debt." "I have no money." "Mom, help me." ... It is embarrassing beyond belief. "That's just the times we live in." "We're at peace." These are not problems that can be brushed aside like that. Like this, I cannot face the forebears who died for our country. Did that seventeen-year-old boy fly his plane into the enemy so that a coward like me could live? The anniversary of the end of the war is a day that makes me think again about what it means to be a Japanese man. I was soft. For 34 years. [Related articles] - Debt, Debt Diary. Copyright© 30歳無職の借金1000万返済ブログ, 2024 All Rights Reserved.
0.45
high
6
1512
[ "intermediate understanding" ]
[ "research" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.3, "depth": 0.35, "creativity": 0.4 }
da2ed7fc-4c62-4ae3-8dd4-b0feda95c80a
weeks gestation now officially
technology
tutorial
- At 32 weeks of gestation your baby is now officially entering the 8th month. Your baby measures about 42.4 cm and weighs around 1,700 grams. - At this stage of 32 weeks of pregnancy, the baby, who already has his eyes open and turns towards the light, blinks normally and can identify voices and sounds from outside the belly. - The vernix is hard at work covering the baby’s fine, delicate skin. This layer is a kind of natural fat that protects the baby’s skin from the amniotic fluid. - In this phase of 32 weeks of gestation, the pregnant woman tends to gain more weight, and it is very common to gain more than expected. - The days seem endless, and tiredness shows on her face, not least because of the difficulty finding a position to sleep. MORE INFORMATION ABOUT 32 WEEKS OF GESTATION And the old question: how many months is 32 weeks? At 32 weeks of gestation your baby is now officially entering the 8th month. Your baby measures about 42.4 cm and weighs around 1,700 grams, as his fetal development is fast! By ultrasound it is possible to see his hair, if it is long at this stage. The placenta may be at maturity grade 1 or 2, but it can still nourish the baby very well, without problems. Your baby kicks and throws the biggest party in mom’s belly, so enjoy it, because each moment is unique and goes by so fast… At this stage of 32 weeks of pregnancy, the baby, who already has his eyes open and turns towards the light, blinks his eyes normally and can identify voices and sounds from outside the belly, these being his direct contact with the world outside. His brain is visibly well formed and his bones are already as stiff as they should be. The amniotic fluid that the baby keeps swallowing continues to pass through the stomach, forming in the intestine the meconium that will be the baby’s first feces when he is born. 
With 32 weeks, his space inside the belly is getting tighter and his movements start to decrease, so don’t worry about that; it’s totally normal from this stage of pregnancy. At 32 weeks the vernix is hard at work covering the baby’s fine, delicate skin. This layer is a kind of natural fat that protects the baby’s skin from the amniotic fluid. It can be easily identified after birth, as it is the white coating the baby carries at birth. If your little baby is a boy, at 32 weeks his small testicles have already descended or are about to descend into the scrotum. There are also cases where the baby is born and one of the testicles has not positioned itself properly, but it is not a cause for concern; in a matter of time it will move to the right place! If the baby is born at 32 weeks, he has a good chance of surviving without the help of breathing apparatus, but life inside the womb is still very important so that he gains enough weight before birth and his respiratory system matures enough for him to breathe without difficulty. How is Mom Feeling … In this phase of 32 weeks of pregnancy, the pregnant woman tends to gain more weight, and it is very common to gain more than expected, often around 1 kg, because at this point fetal development is more about weight gain than about new structures. At 32 weeks, practice contractions are also more evident: the famous Braxton Hicks contractions, in which the belly hardens but there is no intense pain as in labor. It is the womb preparing for the delivery itself, and it is perfectly normal and much more common than you might think. The uterine height should be about 29 to 33 cm, depending on the size of the baby and the growth percentile. Your legs and feet are getting more and more swollen, and this is very common in these final weeks, especially now at 32 weeks. So drink plenty of fluids and avoid very salty foods and very tight clothing and shoes. 
Your belly button will be more and more prominent and can be noticed even through clothes. But don’t be alarmed, as it will return to its previous shape within days of the baby’s birth. In this final stretch, it is possible that you are gaining up to half a kilo per week, and this corresponds to the baby’s weight gain. Make sure your diet is adequate and healthy so that he receives the proper nutrients to strengthen and develop properly. Take the opportunity to wash all the baby clothes and fix the last details; now, at 32 weeks, there isn’t long left until you meet the greatest love in the world! When reaching 32 weeks the days seem endless, and fatigue shows on your face, not least from the lack of a comfortable position to sleep. Cramps that come in preparation for labor begin to become uncomfortable, and it seems that everything starts to irritate you at this stage. Know that walking helps in labor, and you can take this step both to reduce discomfort and to prepare for the long-awaited day. My name is Dr. Alexis Hart. I am 38 years old and the mother of 3 beautiful children of different ages and in different phases: 16 years, 12 years and 7 years. In love with motherhood since always, I found it difficult to make my dreams come true, including some after I was already a mother. When I imagined myself as a mother, everything seemed much easier and simpler than it really was; I expected to get pregnant as soon as I wished, but it wasn’t that simple. The first pregnancy was smooth, but my daughter’s birth was very troubled. Joana was born in 2002 weighing 2,930 g and measuring 45 cm, from a very calm cesarean delivery, but she was born with congenital pneumonia because ruptured membranes had not been treated with antibiotics before delivery.
0.65
low
4
1297
[ "programming fundamentals", "logic" ]
[ "system design" ]
[ "science" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.45, "creativity": 0.3 }
74d95fd8-169d-433f-8feb-c319792f3aea
Self Care Includes Vision Care
interdisciplinary
historical_context
Self Care Includes Vision Care More than 3 million Canadians live with diabetes. Experts predict that rates of diabetes are on the rise and set to double in the coming years. The disease and its resulting complications can affect many parts of the eye. Book an appointment with an optometrist today to discuss your diabetes-related vision concerns. How Does Diabetes Impact Vision? People with diabetes should get an eye exam every year to keep their vision crystal clear. Regular Eye Exams Prevent Diabetes-Related Vision Problems If the body’s blood sugar is elevated, the eyes might fill with extra fluid. This causes blurry vision and, while ordinarily temporary, can signal more serious issues. Visit your optometrist to rule out something more serious. Blurry vision can also be caused by starting insulin treatment or making changes to your medications. Damage to the eye’s blood vessels can occur if blood glucose levels are too high for too long. Tiny, delicate vessels in the eye provide blood to the retina. If they are damaged or break, your retina can grow new vessels to compensate for the burst ones. The new vessels are usually even more fragile, and if they burst, fluid can leak into the retina, damaging vision; when that fluid collects in the macula, the result is diabetic macular edema. Our office is conveniently located on the corner of St. Clair Ave West and Atlas Avenue in mid-town Toronto - 822 St Clair Ave W - Toronto, ON M6C 1C1 HOURS OF OPERATION - Monday: 9:30 AM – 6:30 PM - Tuesday: 9:30 AM – 6:00 PM - Wednesday: 9:30 AM – 6:30 PM - Thursday: 9:30 AM – 6:00 PM - Friday: 9:30 AM – 4:30 PM - Saturday: 9:00 AM – 3:00 PM - Sunday: Closed
0.65
medium
3
416
[ "foundational knowledge" ]
[ "advanced concepts" ]
[]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.35, "creativity": 0.4 }
669e4224-c727-4c1e-a8f1-6b36b8142e76
**Shared Pattern**: fundamental pattern
interdisciplinary
historical_context
**Shared Pattern**: The fundamental pattern connecting differential equations and classical mechanics is the **description of change over time and the prediction of future states based on current conditions and governing laws.** Both disciplines are concerned with how systems evolve. Differential equations provide the mathematical language and tools to express these rates of change, while classical mechanics applies this language to the physical world, specifically to the motion of objects under the influence of forces. **The Surprising Connection**: The surprising connection lies in the fact that **the abstract mathematical framework of differential equations is the *essential engine* that drives the predictive power of classical mechanics.** It's not immediately obvious because classical mechanics can be initially understood through intuitive concepts like "pushing" and "pulling" or inertia. However, to move beyond qualitative descriptions and achieve quantitative predictions – to know *exactly* where a projectile will land or *precisely* when a planet will next be at a certain point in its orbit – one *must* engage with differential equations. The elegance and power of Newton's Laws, for instance, are fully realized only when translated into the language of differential equations, revealing the deterministic nature of classical systems. **Illustrative Example**: **The Simple Harmonic Oscillator (SHO)** Let's consider a mass *m* attached to a spring with spring constant *k*, oscillating horizontally on a frictionless surface. * **Classical Mechanics Aspect**: We know from Hooke's Law that the force exerted by the spring is proportional to its displacement from equilibrium: $F = -kx$, where *x* is the displacement and the negative sign indicates the force is restorative, always pulling the mass back towards equilibrium. Newton's Second Law states that the net force on an object is equal to its mass times its acceleration: $F_{net} = ma$. 
* **Differential Equation Formulation**: 1. **Apply Newton's Second Law**: $ma = -kx$ 2. **Express acceleration as the second derivative of displacement**: $a = \frac{d^2x}{dt^2}$ (since velocity $v = \frac{dx}{dt}$ and acceleration $a = \frac{dv}{dt} = \frac{d^2x}{dt^2}$) 3. **Substitute into Newton's Law**: $m\frac{d^2x}{dt^2} = -kx$ 4. **Rearrange into standard differential equation form**: $\frac{d^2x}{dt^2} + \frac{k}{m}x = 0$ * **Solving the Differential Equation**: This is a second-order linear homogeneous differential equation with constant coefficients. The characteristic equation is $r^2 + \frac{k}{m} = 0$, which has roots $r = \pm i\sqrt{\frac{k}{m}}$. The general solution is of the form: $x(t) = A \cos\left(\sqrt{\frac{k}{m}}t\right) + B \sin\left(\sqrt{\frac{k}{m}}t\right)$ where $A$ and $B$ are constants determined by initial conditions (e.g., initial position and velocity). * **Interpretation**: This solution tells us that the displacement of the mass from equilibrium, $x(t)$, will vary sinusoidally with time. The term $\sqrt{\frac{k}{m}}$ is the angular frequency, often denoted as $\omega$. The solution precisely predicts the position of the mass at any future time, given its starting position and velocity. Without the differential equation, we would only know the *qualitative* behavior (it oscillates), but not the *quantitative* details of its motion. **Reciprocal Learning**: * **Differential Equations $\rightarrow$ Classical Mechanics**: Mastering differential equations provides the *tools* to analyze and predict the behavior of classical mechanical systems. Understanding concepts like derivatives, integrals, initial value problems, and methods for solving different types of ODEs directly translates into the ability to solve problems in mechanics, from projectile motion to planetary orbits. A student who understands how to solve $\frac{d^2x}{dt^2} = g$ (for free fall) can immediately predict the motion of an object under gravity. 
* **Classical Mechanics $\rightarrow$ Differential Equations**: Conversely, grappling with physical problems in classical mechanics provides concrete, intuitive *motivation* and *context* for learning differential equations. The physical meaning behind the terms in an equation (e.g., mass, acceleration, force, damping) makes the abstract mathematics more tangible. Understanding the *why* behind equations like $\frac{d^2x}{dt^2} + \frac{b}{m}\frac{dx}{dt} + \frac{k}{m}x = 0$ (damped harmonic oscillator) makes the study of second-order linear ODEs much more purposeful. **Mathematical Foundation**: The core relationship is Newton's Second Law of Motion, $F = ma$. When force $F$ is a function of position $x$, velocity $v$, or time $t$, and acceleration $a$ is the second time derivative of position $\frac{d^2x}{dt^2}$, this law becomes a **second-order ordinary differential equation**. The general form is: $m\frac{d^2x}{dt^2} = F(x, \frac{dx}{dt}, t)$ This equation encapsulates the entire predictive power of classical mechanics. Solving it, given initial conditions $x(0)$ and $\frac{dx}{dt}(0)$, yields the position $x(t)$ and velocity $\frac{dx}{dt}(t)$ of the object at any time $t$. **Universal Application and Implications**: This pattern of using differential equations to model the evolution of systems based on rates of change and governing laws is **ubiquitous**. * **Physics**: Beyond classical mechanics, it's fundamental in electromagnetism (Maxwell's Equations), quantum mechanics (Schrödinger Equation), thermodynamics, fluid dynamics, and even general relativity. * **Engineering**: Designing circuits, analyzing control systems, modeling heat transfer, predicting structural behavior, and simulating fluid flow all rely heavily on differential equations. * **Biology**: Population dynamics (e.g., predator-prey models), spread of diseases (epidemiology), and biochemical reaction rates are often described by differential equations. 
* **Economics**: Modeling market trends, economic growth, and financial instruments often involves differential equations. * **Everyday Life**: Even simple phenomena like the cooling of a cup of coffee (Newton's Law of Cooling), the rate of decay of a radioactive substance, or the way a pendulum swings are governed by differential equations. The connection highlights a profound truth: the universe, at many scales, operates according to rules that can be expressed as relationships between quantities and their rates of change. Mastering differential equations unlocks the ability to understand and predict the behavior of a vast array of dynamic systems, making it a cornerstone of scientific and technological progress.
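The closed-form SHO solution derived above can be cross-checked numerically. Below is a minimal pure-Python sketch (the mass, spring constant, initial conditions, and step size are illustrative choices, not values from the text) that integrates m*x'' = -k*x with a classic RK4 step and compares the result against x(t) = x0*cos(omega*t):

```python
import math

# Illustrative values: mass m (kg) and spring constant k (N/m), so omega = sqrt(k/m) = 2.
m, k = 2.0, 8.0
omega = math.sqrt(k / m)
x, v = 0.5, 0.0          # x(0) = 0.5 m, v(0) = 0  ->  analytic solution x(t) = 0.5 * cos(omega * t)
dt, T = 1e-3, 10.0

def accel(x):
    return -(k / m) * x  # from Newton's Second Law: m * x'' = -k * x

t, max_err = 0.0, 0.0
for _ in range(int(T / dt)):
    # Classic RK4 step on the first-order system (x' = v, v' = accel(x))
    k1x, k1v = v, accel(x)
    k2x, k2v = v + 0.5 * dt * k1v, accel(x + 0.5 * dt * k1x)
    k3x, k3v = v + 0.5 * dt * k2v, accel(x + 0.5 * dt * k2x)
    k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    t += dt
    max_err = max(max_err, abs(x - 0.5 * math.cos(omega * t)))

print(max_err)  # the integrator tracks the closed-form cosine solution closely
```

If the two curves diverged, it would signal a bug in either the derivation or the integrator; the agreement is exactly the quantitative predictive power that the differential equation adds over the qualitative statement "it oscillates".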
0.65
medium
8
1,462
[ "advanced knowledge" ]
[ "cutting-edge work" ]
[ "mathematics", "science", "technology" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.55, "creativity": 0.4 }
6ea112bb-0860-434d-b2c2-0102deca5354
Trusted data (or authentic
interdisciplinary
data_analysis
Trusted data (or authentic data) is information translated into a form usable by computers, whose source is verifiable — it can be checked through a standardised method to demonstrate accuracy. The trusted data economy takes trusted data one step further, encompassing the business models that can enable a fairer, more transparent and decentralised world. “Systems that expand the radius of trust change societies” (Werbach, 2016: 4) Trust is the underpinning of all human contact and institutional interactions; a crucial value in international affairs and a complex interpersonal and organisational construct, embedded in all areas of society, from individuals’ relationships with each other to the global political system. It is widely seen as one of the most important synthetic forces within society which encompasses values such as reciprocity, solidarity and cooperation, whilst within areas such as technology, law and governance, it is less of a synthetic value and more of an intrinsic and core construct within contracts, regulation and code. But what is trust? At its core, trust is centred on the reliability of an assertion about someone or something; an indisputable, verifiable claim (the operative word here being ‘verifiable’ — the ability to check or demonstrate accuracy). In a continuously digitised and globalised world, trust has been increasingly hard to nurture, and as events over the past decade have shown, it has fast become a threatened commodity the world over. The evolution of trusted data With the mass adoption of the internet, the world has witnessed a rapid acceleration of innovation, and as a result, a diverse range of positive outcomes. Access to the internet, for example, in a vast portion of the world is now considered a basic human right and an essential component of a functioning society. 
However, for the vast majority of people, reliance on, and even a mild addiction to, the efficiencies it brings to our lives towers over our curiosity about, and knowledge of, the very real risks it poses around what is being done backstage, behind the warm lure of the glossy frontends. As we go about our now phenomenally digitised day-to-day lives, the information about the things we write, say, do and read is packaged up into something we hear about a lot but rarely stop to question: data. Data, within the context of computing, is simply information, facts provided or learned about something or someone, translated into a form that is efficient for movement or processing. Put simply, everything we know about anything is information which can be packaged up and utilised as data. This page addresses the meeting of the two: trust and data. Trusted data (we also use the term authentic data interchangeably) is, therefore: information translated into a form usable by computers, whose source is verifiable, meaning it can be checked through a standardised method to demonstrate accuracy. The need for trusted data For much of history, we have found methods to demonstrate that information itself can be trusted. This is based on the way the issuer of information and the verifier of information agree on what makes something trustworthy. We trust paper money because we trust the fine details that are imprinted onto each note, which can be verified as being issued by the body that can reliably demonstrate it has the authority to do so. As a temporary holder of the paper money, one can also conduct one's own checks.
- The hologram image changes between ‘Twenty’ and ‘Pounds’ - The foil is gold and blue on the front of the note and silver on the back within the see-through windows - A portrait of the Queen is printed on the window with ‘£20 Bank of England’ printed twice around the edge - A round, purple foil patch contains the letter ‘T’ - Under a good-quality ultraviolet light, the number ‘20’ appears in bright red and green on the front of the note Similarly, if we look at an identity-related example, we trust the information on a passport, driver’s licence or birth certificate, because we note other fine details and unique characteristics which a verifier of this information can reliably identify. This model of having a common societal understanding of what is trustworthy and what is not extends across all aspects of information in the physical world, and now deep into the digital world. However, with the advancement of technology and ease of access to methods and tools to behave in a fraudulent manner, combined with a lack of transparency over where the information (stored as data) is being held, it is more difficult to actually verify a claim and ultimately be able to reliably state that data is trustworthy. A world of truly trusted data in Web 3.0 and Decentralised Identity Without going into too much technical detail of how this data is made trusted in Decentralised Identity, you can find out all you need to know on our learn site here, some of the underlying principles and basics do help illustrate what a world where trusted data is the norm would look like. In Web 2.0, much of the world’s data is held in huge data centres controlled by a small number of large players, acting as gatekeepers. The term ‘cloud’ has been used effectively to create the feeling that one’s data is just held in the air, the ether, for an individual to call on when they need it, yet in reality, our data is locked up and secured by these large gatekeepers. 
Our data is not held, controlled or owned by ourselves. As a result, our understanding of what goes in and what comes out is limited. An issuer may provide a trusted piece of data, but what happens before this arrives at an individual or a verifier is out of your control. Likewise, with more sophisticated means of cybercrime and hacking, uncovering whether some data has been tampered with is harder and harder to do. To get around this, a combination of technologies has come together at a poignant moment across different industries. Within the Decentralised Identity / SSI space, three technologies in particular are integral: - Decentralised Identifiers (DIDs) - Verifiable Credentials (VCs) - Blockchain technology (used with SSI as a Verifiable Data Registry) (note: for decentralised identity, blockchain is not strictly required, but it does offer significant advantages in further improving the level of trust, transparency and the overall efficiencies required for it to flourish) Decentralised Identifiers (DIDs) and Verifiable Credentials work in tandem as the foundations of Decentralised Identity to ensure data can be trusted. DIDs act as a form of digital stamp or hologram, making it possible to check the authenticity of the information, whilst VCs contain the very information itself that needs to be checked and verified — more on both DIDs and VCs here. Blockchain is often described as a “trustless” system, meaning that ultimately one does not have to place some indeterminate, synthetic level of “trust” in any party, as the rules and structures laid out in code do this. Although blockchains use complicated technology, which often deters people from further reading, their basic function is quite simple: to provide a distributed yet provably accurate record. In other words, everyone can maintain a copy of a dynamically-updated ledger, but all those copies remain the same, even without a central administrator or master version. This approach offers two basic benefits.
First, one can have confidence in transactions without trusting the integrity of any individuals, intermediaries, or governments. Data is therefore trustworthy because no party can tamper with it, so the data put in is what comes out. Second, the distributed ledger replaces many private databases that must be reconciled for consistency, thus reducing transaction costs. The trusted data economy The trusted data economy takes trusted data one step further. Over the past two decades, leveraging the buying and selling of data has become a powerful and incredibly profitable business model. It is what has led to the growth and dominance of the behemoths of the internet, Google and Facebook being the most prominent examples. By providing a service that is totally free for most uses, these companies have quietly deepened their drills into the gold mine of individuals’ data, whilst many have unknowingly compromised their privacy and freedoms. Yet, though this is increasingly being exposed and famous phrases are emerging, such as ‘if the product is free, you’re the product’, there has still been very little movement and change at a regulatory or social level. Enter the trusted data economy The trusted data economy flips this business model entirely on its head, shifting control away from these internet behemoths and over to the individuals. Through a range of payment models enabled by these technologies, the individual can now be the ultimate gatekeeper and vendor of their identity, able to choose by whom, and for what, their data is used and even sold! This new data economy of trusted data has been labelled ‘decentralised identity’ as well as ‘self-sovereign identity (SSI)’, since it directly empowers individuals to have control and engage in trusted interactions in both the physical and digital spheres. Find out more about what the economy of trusted data might look like in cheqd’s tokenomics for self-sovereign identity.
Transparency, freedom, self-determination, democratisation: these are all features of what Web 3 and the shift in power promise; however, none of them is truly possible without a new era of data management in which we can have faith in where data resides, who has access to it and who is making money with it. In revolutions past, a small number of people and organisations have benefited most, and power has concentrated in the hands of the few. Yet, as time progresses and access to the technologies that enabled a revolution widens, the more the masses can engage and challenge the status quo. Trusted data is a very real solution for many of the problems of today, and the trusted data economy that enables it to gain mass adoption can make it happen. Find out more about how we at cheqd are helping usher in the trusted data revolution.
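The tamper-evidence property described above can be illustrated with a toy hash chain. This is a minimal sketch only, not how cheqd, DIDs, or any real ledger is implemented; the record strings and field names are invented for illustration:

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents, which include the previous block's hash.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(records):
    # Each block stores its data plus the hash of the block before it.
    chain, prev = [], "0" * 64
    for r in records:
        block = {"data": r, "prev": prev}
        prev = block_hash(block)
        chain.append((block, prev))
    return chain

def verify(chain):
    # Recompute every hash; any edit anywhere breaks the chain.
    prev = "0" * 64
    for block, stored in chain:
        if block["prev"] != prev or block_hash(block) != stored:
            return False  # contents no longer match the recorded hashes
        prev = stored
    return True

chain = make_chain(["credential issued", "credential presented", "credential revoked"])
print(verify(chain))   # True: an untampered copy of the ledger verifies

chain[1][0]["data"] = "credential never presented"  # tamper with one record
print(verify(chain))   # False: any holder of a copy can detect the change
```

Because every copy of the ledger can run the same verification independently, no central administrator has to be trusted, which is the sense in which the text calls such a system "trustless".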
0.65
medium
6
2,086
[ "intermediate understanding" ]
[ "research" ]
[ "science", "technology", "social_studies" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.55, "creativity": 0.35 }
4f69de8b-ae02-47d4-9a88-703c5a136f91
Like0 Dislike0 krankzinnige orb ex Game
social_studies
ethical_analysis
Krankzinnige Orb Ex Game. Bounce and shoot until you hear the thing. Pong has never been this extreme, but this is, of course, pong. Instructions: UP & DOWN ARROWS to move paddle. SPACE to fire. krankzinnige orb ex All copyrights and trademarks of this game are held by their owners, and their use is allowed under the fair use clause of the Copyright Law. If you believe we are violating your copyrights, please advise us at [email protected] so that we can solve the problem.
0.55
medium
4
150
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.3, "depth": 0.35, "creativity": 0.4 }
265289ab-bfbb-4ee5-9dd7-94c55f00ea43
Let's build bridge your
interdisciplinary
practical_application
Let's build a bridge from your foundational computer graphics and linear algebra knowledge to the exciting world of advanced computer graphics. **Prerequisite Review** * **Key Concept 1: 3D Transformations (Translation, Rotation, Scaling)** Remember how we used matrices to move, rotate, and resize objects in 3D space? Think of a simple cube. Translating it means adding a vector to its vertex coordinates. Rotating it involves multiplying vertex coordinates by rotation matrices. Scaling uses a diagonal matrix. These operations are fundamental to placing and manipulating objects in a scene. * **Key Concept 2: Vector and Matrix Operations** You'll need to be comfortable with vector addition, subtraction, dot products, and cross products. Matrix multiplication is also crucial – it's how we chain transformations together (e.g., rotating an object and then translating it). * **Key Concept 3: Coordinate Systems (World, View, Projection)** We’ve worked with different coordinate systems: the object's local space, the world's global space, the camera's view space, and finally, the 2D screen space via projection. Understanding the transformations between these is key to rendering. **Knowledge Gap Identification** * **Common Gap 1: Understanding the "Why" Behind Matrix Stacking Order** Learners sometimes get confused about why multiplying matrices in a different order yields a different result. This isn't just an academic point; it directly impacts how objects behave in a game or simulation. * **Bridge:** Consider animating a robot arm. First, you rotate the shoulder, then the elbow, then the wrist. If you apply these rotations in the reverse order (wrist, elbow, shoulder), the arm will move completely differently. Matrix multiplication order dictates the sequence of transformations, mirroring real-world kinematic chains. 
* **Common Gap 2: The Transition from View Space to Clip Space** While you know about projection matrices (perspective or orthographic) that flatten 3D to 2D, the intermediate step of "clipping" is often a bit fuzzy. * **Bridge:** Imagine a camera looking through a window. Anything outside the window is irrelevant. The projection matrix, in conjunction with clipping planes, defines this "window" in 3D space. Vertices outside this view frustum are "clipped" (discarded or partially rendered), optimizing the rendering pipeline. Practicing transforming points through a perspective projection matrix and identifying which ones fall within the standard view frustum (typically x, y, z between -1 and 1) will solidify this. **Building the Bridge** * **Step 1:** From basic 3D transformations (translation, rotation, scaling) applied to individual objects using matrices. * **Step 2:** We extend this by understanding how to combine multiple transformations efficiently and how to transform entire scenes from object space, through camera space, to a normalized device coordinate space. This involves understanding the view matrix and projection matrix as composite transformations. * **Step 3:** Which leads us to advanced concepts like hierarchical transformations (e.g., scene graphs), efficient matrix manipulation for animation, and the foundational principles of rasterization and shading, all of which build directly upon these matrix and coordinate system concepts. **Conceptual Transformation** * **How:** Simple object transformations become complex scene manipulations. Instead of just moving a single cube, we're now orchestrating the movement and interaction of many objects within a dynamic environment. * **Why:** The need for efficiency and realism drives this transformation. Applying transformations individually to millions of vertices is slow.
We need techniques to group objects and apply transformations hierarchically (like a scene graph) and to optimize the projection process. * **Where:** The familiar pattern of matrix multiplication reappears in the construction of view and projection matrices, and in the hierarchical updates of animated characters or complex machinery. **Connection Reinforcement** * **Similarity:** The core mathematical operations (matrix multiplication, vector math) remain the same. The goal of representing and manipulating 3D geometry is constant. * **Difference:** The *scale* and *complexity* of the operations increase dramatically. We move from single object manipulation to scene-wide transformations and the introduction of new concepts like clipping and viewing frustums. * **Extension:** What's genuinely new is the systematic approach to managing entire scenes, understanding the rendering pipeline's stages, and the mathematical underpinnings of how a 3D world is projected onto a 2D screen. **Checkpoint Questions** 1. If you have a point `P` and apply transformation `T1` then `T2`, what is the resulting transformation matrix multiplication order? (e.g., `T1 * T2 * P` or `T2 * T1 * P`) 2. Describe in your own words why clipping is necessary in 3D graphics. 3. Can you explain how the concept of a "scene graph" leverages hierarchical transformations to simplify scene management?
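To make Checkpoint Question 1 concrete, here is a minimal 2D sketch in pure Python using homogeneous matrices (the angle, translation distance, and test point are arbitrary illustrative values):

```python
import math

def rot(theta):
    # 2D rotation as a 3x3 homogeneous matrix
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def trans(tx, ty):
    # 2D translation as a 3x3 homogeneous matrix
    return [[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def apply(M, p):
    # Apply matrix M to point p = (x, y) in homogeneous coordinates
    x, y = p
    v = [x, y, 1.0]
    r = [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]
    return (r[0], r[1])

R = rot(math.pi / 2)   # 90-degree rotation (arbitrary choice)
T = trans(5.0, 0.0)    # translate 5 units along x (arbitrary choice)
p = (1.0, 0.0)

# With column vectors, the matrix written nearest the point applies first.
print(apply(matmul(T, R), p))  # rotate then translate: approximately (5, 1)
print(apply(matmul(R, T), p))  # translate then rotate: approximately (0, 6)
```

The two results differ, which is the whole point: in `T * R * p` the rotation acts first and the translation second, matching the shoulder-elbow-wrist intuition from the robot-arm bridge above.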
0.6
medium
4
1,011
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "mathematics", "technology", "language_arts" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.45, "creativity": 0.3 }
34027936-84ea-4814-a665-2e911adb62d6
Python molurus is carnivorous
life_skills
practical_application
Python molurus is carnivorous. Its diet consists mostly of live prey. Its staples are rodents and other mammals. A small portion of its diet consists of birds, amphibians, and reptiles. When looking for food, P. molurus will either stalk prey, ambush, or scavenge for carrion. These snakes have very poor eyesight. To compensate, the species has a highly developed sense of smell, and heat pits within each scale along the upper lip, which sense the warmth of nearby prey. Indian pythons kill prey by biting and constricting until the prey suffocates. Prey items are then swallowed whole. To accomplish the feat of swallowing the prey, P. molurus molurus dislocates its jaw and stretches its highly elastic skin around the prey. This allows these snakes to swallow food items many times larger than their own heads. In cases of scavenging there is no constriction of the prey (Murphy and Henderson 1997, Woodland Park Zoo 2000). Animal Foods: birds; mammals; amphibians; reptiles; carrion Primary Diet: carnivore (Eats terrestrial vertebrates)
0.6
medium
4
234
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "technology" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.3, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
ea867fac-3063-4194-a79d-798b103793da
Crop-Protecting Fungicides May Be Hurting The Honey Bees
technology
research_summary
Crop-Protecting Fungicides May Be Hurting The Honey Bees You know those nasty brown spots that can ruin an otherwise perfectly delicious apple? Those spots and other problems — like blossom blight and yellow leaves — are often caused by fungi. Apple growers usually fight back with fungicides, but a new study has found that those fungicides could be hurting honey bees. "The long-standing assumption is that fungicides won't be toxic to insects," says May Berenbaum, an entomologist at the University of Illinois Urbana-Champaign. But Berenbaum and her colleagues found, in a study published Monday by the Proceedings of the National Academy of Sciences, that fungicides can harm bees by making it harder for them to metabolize their food. If bees can't get energy from their food, they can't fly. The study sheds new light on what appears to be the latest of many threats scientists have identified as they try to understand why honey bees and other pollinating insects have been dying off. Over the past decade, bees that farmers and gardeners rely on to pollinate plants have been dying in unprecedented numbers. Researchers have scrambled to figure out what's killing the bees, and they've identified some factors — including pesticides aimed at killing insects, reduced forage plants, and bee mites and other diseases. Now, researchers have implicated a whole new class of chemicals in recent bee die-offs: fungicides. Berenbaum says the takeaway is that "every kind of pest management approach can have unintended consequences." In Washington and many other states, fungicides are not addressed by pollinator protection guidelines, which focus on other types of pesticides. This story comes to us from member station KUOW and EarthFix, an environmental journalism collaboration led by Oregon Public Broadcasting in partnership with six other public media stations in Oregon, Washington and Idaho. Copyright 2021 KUOW. To see more, visit KUOW.
0.65
medium
6
399
[ "algorithms", "software design" ]
[ "distributed systems" ]
[ "mathematics" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.45, "creativity": 0.45 }
70364c92-41fe-4114-89c4-a404c789fd21
United States’ decision last
interdisciplinary
ethical_analysis
The United States’ decision last week to reverse its position and support a United Nations declaration defending the rights of indigenous people could have positive implications for the Wabanaki people in Maine. President Barack Obama made the announcement last week during the Tribal Nations Conference in Washington, D.C., which was attended by representatives of Maine’s Wabanaki tribes, including Chief Kirk Francis of the Penobscot Indian Nation. “On the international side, it shows a unified recognition of rights worldwide, which is extremely gratifying,” Francis said in an interview Sunday. “In Maine what it means is the opening of a new dialogue to start to talk about our federal relationship. This [declaration] provides for a great template on how native people are to be treated.” Although the U.N. declaration is not legally binding, the declaration “carries considerable moral and political force and complements the president’s ongoing efforts to address historical inequities faced by indigenous communities in the United States,” the U.S. State Department said in a statement. The idea of a declaration dates back to the 1970s, but it has taken decades to draft a resolution the U.N. could support. The declaration first was ratified in September 2007, but the U.S. remained the lone holdout until last week. John Dieffenbacher-Krall, executive director of the Maine Indian Tribal-State Commission, said that while he believed it was sad that the United States was the last country to sign the declaration, the news was historic. Francis said he was buoyed by the president’s commitment to tribal sovereignty. “If you look at some of the things that have happened recently — the Indian Health Care Improvement Act [signed into law in March], the Tribal Law and Order Act [signed in July] and the $3.5 billion in stimulus funds that went to tribes — it’s clear that the president supports us,” the Penobscot chief said.
Some tribal leaders in Maine long have believed that two pieces of state legislation passed in 1980, the Maine Indian Claims Settlement Act and the Maine Implementing Act, have harmed the tribes’ economic viability. Specifically, Francis has claimed that a stubborn provision of the implementing act — that no federal law passed after 1980 can supersede it without state approval — has threatened the tribes’ sovereignty and hampered their ability to move forward with certain economic initiatives, such as casinos. Whether the United States’ support of the U.N. declaration could have any impact on those laws remains to be seen. In the 30 years since the act was passed, it has seen only minor changes related to expansion of tribal authority over some criminal matters on tribal lands and additions to the land trusts. However, during his announcement last week, Obama said the declaration is a “powerful affirmation” of Indian rights that could guide legislation and government action accordingly. “From our point of view, the declaration is important in how the [settlement and implementing] acts get interpreted, and how we live under those laws,” Francis said. “We’ve got a long way to go, but I think this is a big step forward.” In addition to the Penobscots of Indian Island, Maine’s Wabanaki people are represented by the Passamaquoddys at Pleasant Point and Indian Township and the Maliseets and Micmacs of Aroostook County. The Associated Press contributed to this report.
0.6
medium
4
726
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "social_studies", "life_skills" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.45, "creativity": 0.3 }
960d40b5-4c5d-4da8-a9f3-5413fe18011c
Tutorial: Understanding Atomic Attrition
science
tutorial
## Tutorial: Understanding Atomic Attrition – A Step-by-Step Guide **Overview:** This tutorial breaks down the University of Pennsylvania’s research into the mechanism of atomic attrition – the transfer of atoms between surfaces at the nanoscale – providing a practical understanding of how it’s measured and predicted. We’ll focus on the methodology used in their TEM-based experiments. **Prerequisites:** Basic understanding of microscopy (specifically Transmission Electron Microscopy - TEM), atomic force microscopy (AFM), and the concept of force versus distance measurements. Familiarity with chemical kinetics is helpful but not strictly required. **Step 1: Understand the Problem of Nanoscale Wear** - **Action Verb:** Research - **Specific Task:** Define why nanoscale wear is a significant problem and how it differs from macroscale wear. - **Detailed Instructions:** Review the initial paragraphs of the article. Specifically, note the statement about nanotechnology requiring longer-lasting interfaces and the drastically reduced lifespan of nanoscale components compared to their macroscale counterparts. Contrast this with traditional wear mechanisms (fracture and plastic deformation) which operate on a much larger atomic scale. - **Expected Outcome:** You should be able to articulate that nanoscale wear is a critical challenge due to the small size of components and the need for significantly longer operational lifetimes. - **Checkpoint:** Write a brief paragraph explaining the difference in timescales between macroscale and nanoscale wear. **Step 2: Explore Atomic Force Microscopy (AFM)** - **Action Verb:** Investigate - **Specific Task:** Describe how an AFM works and its limitations in observing wear processes. - **Detailed Instructions:** Refer to the section on AFM. Understand that an AFM uses a sharp tip to drag across a surface, measuring the deflection of a cantilever. The laser provides feedback to maintain a constant deflection. 
While powerful, AFM typically provides an *after-the-fact* view of wear – it doesn't directly observe the process. Fracture and plastic deformation are often difficult to distinguish. - **Example/Concrete Application:** Imagine using an AFM to measure the wear on a car engine. You’d see the surface is worn, but you wouldn’t see *how* the wear is happening – is it chipping off pieces, or is the material just getting smoother? - **Common Pitfalls to Avoid:** Don’t confuse AFM with TEM. They are fundamentally different techniques. **Step 3: Learn About Transmission Electron Microscopy (TEM)** - **Action Verb:** Analyze - **Specific Task:** Explain how TEM differs from AFM and why it’s crucial for observing atomic attrition. - **Detailed Instructions:** Read the section on TEM. TEM uses a beam of electrons to image a sample. This provides a much higher magnification than AFM, allowing visualization at the atomic scale. Crucially, it’s a *real-time* imaging technique. - **Expected Outcome:** You should be able to explain that TEM’s high magnification and real-time imaging capabilities are essential for directly observing the process of atomic attrition. - **Checkpoint:** Create a table comparing and contrasting AFM and TEM based on magnification, imaging time, and the type of information they provide. **Step 4: Understand the Penn Team’s Breakthrough** - **Action Verb:** Identify - **Specific Task:** Describe how the researchers combined AFM and TEM to observe atomic attrition. - **Detailed Instructions:** Focus on the modification of the mechanical testing instrument. The researchers integrated the AFM probe and diamond surface within a TEM, enabling simultaneous measurement of sliding distance, force, and volume loss. - **Example/Concrete Application:** Think of it like a sophisticated microscope that’s also a force sensor. The TEM provides the visual data, while the AFM provides the force and distance measurements. 
- **Common Pitfalls to Avoid:** Don’t overlook the importance of the integration – it’s not just using both techniques separately. **Step 5: Trace the Experimental Process** - **Action Verb:** Visualize - **Specific Task:** Outline the steps involved in the experimental procedure. - **Detailed Instructions:** Follow the description of the experiment: 1. Slide a flat diamond surface against the silicon AFM tip. 2. Place the probe-cantilever assembly inside the TEM. 3. Run the wear experiment, simultaneously measuring distance, force, and volume loss. 4. After each pass, use the TEM to take a high-magnification image of the tip. 5. Trace the tip’s outline and calculate the volume lost. - **Expected Outcome:** You should be able to describe the entire experimental workflow, from sample preparation to data acquisition. **Step 6: Analyze the Data – Chemical Kinetics and Reaction Rate Theory** - **Action Verb:** Apply - **Specific Task:** Explain how the researchers used chemical kinetics to interpret their data. - **Detailed Instructions:** The researchers hypothesized that atomic attrition follows a reaction rate theory. They reasoned that increased force (stress) would lead to a faster rate of bond formation between the silicon and diamond atoms. They compared the measured rate of bond formation to predictions based on this theory. - **Example/Concrete Application:** Relate this to a chemistry experiment where you’re measuring the rate of a reaction. Increasing the temperature (analogous to increasing force) will generally increase the reaction rate. - **Common Pitfalls to Avoid:** Don't assume the theory is automatically correct. It's a hypothesis that needs to be tested against experimental data. **Step 7: Validate the Mechanism** - **Action Verb:** Evaluate - **Specific Task:** Describe how the researchers ruled out fracture and plastic deformation as the primary mechanism. 
- **Detailed Instructions:** The researchers used the TEM images and force measurements to determine that the volume loss was occurring through bond formation, not through breaking off large pieces (fracture) or permanent deformation (plastic deformation). The "smoking gun" was the exponential relationship between force and wear rate. - **Expected Outcome:** You should understand how the combined visual and force data allowed them to confidently identify atomic attrition as the dominant wear mechanism. **Practice Exercise:** Imagine you are designing a nanoscale component for a sensor. You know it will be subjected to significant stress. Based on the principles discussed, what materials would you choose for the surfaces in contact and how would you consider the contact period to maximize its lifespan? Justify your choices. **Key Takeaways:** * Atomic attrition is a critical wear mechanism at the nanoscale. * TEM provides the necessary real-time, high-magnification imaging to observe this process. * Combining visual data with force measurements allows for the application of chemical kinetics to predict wear rates. **Next Steps:** * Research specific materials used in nanoscale devices and their properties. * Explore advanced microscopy techniques beyond TEM, such as Scanning Tunneling Microscopy (STM). * Investigate how these principles can be applied to design more durable nanoscale components.
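The stress-assisted reaction-rate reasoning in Steps 6 and 7 can be sketched numerically. The snippet below is a generic Arrhenius-type transition-state model, a common way to express stress-assisted bond formation; all parameter values (`f0`, `EaJ`, `VactM3`, `T`) and the function shape are illustrative assumptions, not figures from the Penn study:

```javascript
// Hedged sketch: stress-assisted bond-formation rate in Arrhenius / transition-state form.
// All parameter values are hypothetical illustrations, not data from the study.
const kB = 1.380649e-23; // Boltzmann constant, J/K

// rate = f0 * exp(-(Ea - stress * Vact) / (kB * T))
// Higher contact stress lowers the effective energy barrier, so the bond-formation
// rate grows exponentially with stress - the relationship described in Step 7.
function attritionRate(stressPa, { f0 = 1e13, EaJ = 1e-19, VactM3 = 1e-29, T = 300 } = {}) {
  return f0 * Math.exp(-(EaJ - stressPa * VactM3) / (kB * T));
}

const low = attritionRate(1e9);  // ~1 GPa contact stress
const high = attritionRate(2e9); // ~2 GPa contact stress
console.log(high > low);         // true: rate rises exponentially with stress
```

Doubling the stress multiplies the rate rather than adding to it, which is why an exponential force-wear relationship points to a thermally activated, bond-by-bond mechanism rather than fracture.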
0.65
medium
4
1429
[ "scientific method", "basic math" ]
[ "advanced experiments" ]
[ "technology", "language_arts" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.55, "creativity": 0.3 }
76e21857-3f79-454a-8f6c-3711d2dc1925
by Sarah Tierney and Tracey Waters
interdisciplinary
tutorial
by Sarah Tierney and Tracey Waters We’ve all been there: Maybe it’s Sunday night; maybe you have a few precious minutes of planning time. You’re scrambling to prepare a lesson and you think, why reinvent the wheel? Let’s check the interwebs. You google your topic and…28,000,000 results pop up. How on Earth do you decide what might be worth using with your students? It’s not uncommon for teachers to huddle in front of computers, sifting through the wild west of free education resources available on the internet. A recent report from the RAND Corporation revealed 99 percent of teachers in this country use "materials I developed and/or selected myself" as part of their literacy instruction. And this is no small task. Almost half of teachers reported spending four or more hours a week searching for materials, with Google and Pinterest the most frequently consulted websites. Frustratingly, most of those hours are probably wasted trying to distinguish a few educationally sound resources amongst the clutter. And we know that in addition to teacher effectiveness, the quality of the instructional materials they employ is one of the most critical factors in student achievement. “We’re seeing a need for concrete, ready-to-use materials that teachers can take and run with, but that also have a long-lasting impact on student learning,” explain Joey Hawkins and Diana Leddy of the Vermont Writing Collaborative. “Open source materials run the range and it can be a real challenge for teachers to navigate everything that’s out there.” They’re right. There are a lot of education resources out there, free for the taking—the challenge is figuring out what’s worthy of precious instructional time. Clearly, there’s an urgent need to empower teachers and leaders to make good decisions about the instructional materials they choose. In response, we at ANet have developed a three-step system of guidance that will facilitate the selection process. 
We’ll explore these three steps, along with a case study, through a three-part blog series. 1. Select high-quality materials In this post we’ll focus on how to know if the open education resources you are using are rigorous and standards-aligned. The IMET rubric aims to help educators identify and evaluate the quality of an entire curriculum, while the EQUIP rubric was designed to help leaders and teachers evaluate the quality of lessons and units. The following markers of quality and alignment were developed with these two rubrics in mind. What should I look for to know that materials are high-quality? Texts are authentic and complex, and include reading, writing, and discussion questions that are based on the text and scaffold toward the key understanding(s) of the text. Materials include scaffolded, text-dependent questions and writing tasks that are aligned to grade-level standards, not anchor-level standards. Texts and questions push students to build world knowledge and don’t focus solely on discrete literacy skills in isolation from the text. Materials provide multiple opportunities for students to express their thinking through a variety of question types and tasks. Learning activities place the majority of “heavy lifting” on students, rather than teachers. Once you’ve found a high-quality resource, you’ll want to think about your plan for internalizing the instructional materials. We’ll focus on how to do this in the next post, including guidance around how to strategically adapt resources to fill gaps in curricula or support individual student needs. In our final post, we’ll share two teachers’ experiences using materials from the Vermont Writing Collaborative with their third graders. Below are several high-quality and CCSS-aligned free literacy education resources. Explore these materials with the markers of quality and the needs of your students in mind.
0.65
medium
4
787
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "technology", "language_arts" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.45, "creativity": 0.4 }
1626bdd0-5766-4f03-8287-06a9b3de8fd4
Read our affiliate disclosure here
technology
historical_context
One of the 5 strains of heirloom tomato seedlings has a leaf problem by Cindy Whiting (Quesnel, BC, Canada) Q. My tomatoes are about 8 weeks old. Just one variety has an issue. First, the veins in the leaves are drying up and then the leaf itself curls. These plants are clearly smaller and weaker looking than the other varieties. I start and grow my seeds in coir and feed with diluted Miracle Gro once a week. Soon these plants are being moved into my greenhouse. I wonder if I should separate these particular problematic tomatoes from the other healthy ones? A. Yes, separate them to be safe. You didn't mention what tomato variety is affected. As you may know, different varieties have different vulnerabilities. So it may be difficult to diagnose the specific issue. Having said that, there is some information that can be helpful. Dried leaves on tomato seedlings. Drying leaves on tomatoes, even seedlings, can indicate under-watering or over-watering. Specifically, tomatoes grown in coir need extra monitoring. Coir's naturally-aerated structure lets it hold up to 8 times its weight in water. Yet because the material is fibrous, moisture drains well to the bottom of the container. Plus, the affected variety may need a bit more (or less) water than the others. Tomato leaves are curling. Whiteflies (or other sucking insects) can remove nutrients and cause leaves to curl downward. Treat with an insecticidal soap, like Safer Insect Killing Soap. Curling can also be caused by a virus, but because seedlings are grown in a sterile medium, viruses are not the likely cause. Leaf roll or leaf curl can also develop in cool, wet conditions. Stunted growth on tomato seedlings. Three big culprits are most often responsible for this condition: cold temperatures, inconsistent temperatures, and nutrient deficiencies. Your affected tomatoes are in the same location as the others, meaning they are at the same temperature. 
They are also receiving diluted fertilizer. But double check to see if they're in a draft or by a window. Even a small difference in temperature can impact a plant's growth. Hope this helps. Good luck and happy gardening! Your friends at Tomato Dirt
0.65
medium
5
526
[ "data structures", "algorithms basics" ]
[ "architecture patterns" ]
[]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.45, "creativity": 0.45 }
d5e1bffc-8581-426a-a1ed-aed7fa5f0a57
Before her birth December
interdisciplinary
research_summary
Before her birth in December 1998, Hannah Strege was frozen for two years. Conceived in a petri dish, she was preserved as an embryo until she was adopted and given a chance at life. Hannah was one of 28 embryos created through the process of in vitro fertilization for a couple struggling with fertility. Four embryos were kept by the original couple, four were donated to another couple and 20, including Hannah, remained without a family. Facing infertility struggles herself, Marlene Strege asked her doctor a question he had not heard before: Would it be possible to adopt frozen embryos that remained after another couple’s infertility treatments? After consulting trusted pastors and other evangelical leaders, Marlene and her husband, John Strege, came to agree that this type of adoption, though previously unheard of, was something God would value. “I can just really see God really had a plan for me and my parents and for the embryos and it was a plan for life instead of destruction,” said Hannah, now a freshman nursing major at Biola. “If I wasn’t adopted I could have been used for research or just killed off or left in a tank of liquid nitrogen, so I’m very blessed to have been adopted and chosen.” The special circumstances of her birth — being the world’s first-known adopted embryo — have led to special opportunities in her life, including visiting the White House multiple times as a child as part of then-President George W. Bush’s efforts to support embryo adoption during national debates about stem-cell research. (Stem-cell research conducted on embryos destroys them, but embryo adoption is an alternative that allows for the embryo to live.) 
Thanks in part to that growing awareness, embryo adoption has grown significantly over the years; more than 550 babies have now been born through the Snowflakes Embryo Adoption Program at Nightlight Christian Adoptions, the organization that facilitated Hannah's adoption (and which also happens to be led by Biola alumnus Daniel Nehrbass (Th.M. '03)). Having grown up talking openly about her unique process of adoption, Hannah became comfortable early on with medical terms. This set of vocabulary will prove helpful in her intended career path of nursing. Specifically, Hannah is focusing on neonatal nursing to be present when new life enters the world. Though going into a competitive field, Hannah trusts that God has a plan for her, especially considering the events of her life so far. “I learned that God gives you a lot of trials but he also works a lot of miracles,” she said. “My entire story was a miracle.”
0.6
medium
4
545
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "science", "life_skills" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
46fb8a8c-8f73-40da-bfd2-da0b66970323
**Definition**: Art theory systematic
interdisciplinary
historical_context
**Definition**: Art theory is the systematic study of art's nature, meaning, and value, employing analytical frameworks to interpret and evaluate artistic phenomena. **Intuitive Explanation**: Imagine art theory as a set of lenses through which we view and understand art. Instead of just looking at a painting, art theory provides tools to ask *why* it looks that way, *what* it might mean, and *how* it affects us. It's about developing a reasoned approach to appreciating and critiquing art, moving beyond personal preference to objective analysis. **Purpose**: The fundamental purpose of art theory is to establish a coherent and critical understanding of art within the humanities. It provides the intellectual scaffolding for art criticism, enabling informed discussion, historical contextualization, and the development of aesthetic judgments. Without art theory, art appreciation would remain purely subjective and anecdotal. **Mechanism**: Art theory operates by proposing and refining conceptual models, principles, and methodologies. These frameworks analyze elements like form, content, context (historical, social, cultural), artist's intent, and audience reception. It involves critical thinking, logical argumentation, and the application of philosophical concepts to artworks. **Examples**: * **Example 1**: **Formalism and *Compositional Balance*** * **Artwork**: Piet Mondrian's *Composition with Red, Blue and Yellow*. * **Reasoning**: A formalist art theory approach would focus on the visual elements: lines, shapes, colors, and their arrangement. Mondrian's work exhibits strong **asymmetrical balance**. Though not mirror-image symmetrical, the visual weight of the red rectangle on the right is counterbalanced by the blue and yellow areas and the black lines on the left. This adherence to principles of visual equilibrium, a core tenet of formalist theory, contributes to the painting's perceived harmony and stability. 
* **Example 2**: **Iconography and Van Eyck's *Arnolfini Portrait*** * **Artwork**: Jan van Eyck's *The Arnolfini Portrait*. * **Reasoning**: An iconographic approach, a branch of art theory, interprets the symbols within the artwork. The single lit candle in the chandelier symbolizes the presence of God. The dog at the couple's feet represents fidelity. The discarded shoes suggest the sanctity of the event taking place. These symbols, understood through art-historical knowledge, reveal layers of meaning related to marriage, wealth, and religious devotion, moving beyond a literal depiction. **Common Misconceptions**: * **Art theory is elitist or overly academic**: While it uses specialized language, its goal is to democratize understanding by providing tools for analysis accessible to anyone willing to learn. * **Art theory dictates "correct" interpretations**: It offers frameworks for interpretation, but the ultimate meaning of art is often debated and multifaceted. Theory provides reasoned arguments, not absolute truths. **Connections**: * **Prerequisites**: Art theory builds directly upon **aesthetic theory** (which explores the nature of beauty and taste) and **art criticism** (which applies aesthetic principles to specific artworks). Understanding these allows one to grasp *what* constitutes art and *how* to evaluate it before delving into broader theoretical frameworks. * **Builds to**: Art theory is foundational for understanding subsequent movements like **postmodern art theory**, which questions grand narratives and universal truths in art, and **feminist art theory**, which analyzes art through the lens of gender and power dynamics. These later theories refine and challenge earlier theoretical assumptions.
0.6
high
6
708
[ "intermediate understanding" ]
[ "research" ]
[ "science", "language_arts", "arts_and_creativity" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.45, "creativity": 0.4 }
9ecc3a89-d04c-4ca4-a00a-e281f02c92e6
Fatherhood in Victorian Times
technology
historical_context
Fatherhood in Victorian Times A new study concludes that Victorian men were the original hands-on fathers, far from their image as distant and severe. Dr Julie-Marie Strange, a social historian based at Manchester University, has just written a book, “Fatherhood and the British Working Class, 1865-1914″, published by Cambridge University Press. She read 250 autobiographies from people who grew up in the period, and scoured music hall lyrics, advertising, and pamphlets from the cheap press containing stories. But she said much of the social history of the period carried the same assumptions, partly because it relied on contemporary commentaries written by middle-class social reformers with an agenda to “improve” the working class. More recently, the study of the history of the family has been written from the perspective of women’s history, in which men were viewed as the “negative embodiment of the patriarchy”, she explained. Negative stereotypes are not always accurate: she said she found almost no examples of children being severely beaten by fathers – a staple of the popular caricature of Victorian family life. There were, however, several examples of mothers effectively inventing the idea of their father’s discipline as a threat to keep them in line.
0.6
medium
6
258
[ "algorithms", "software design" ]
[ "distributed systems" ]
[ "social_studies", "arts_and_creativity" ]
{ "clarity": 0.4, "accuracy": 0.6, "pedagogy": 0.3, "engagement": 0.45, "depth": 0.45, "creativity": 0.45 }
18e56d59-2f45-4bc2-8ed5-69ac9f645682
Worked Examples: Policy Evaluation
science
worked_examples
## Worked Examples: Policy Evaluation Policy evaluation is the systematic process of assessing the merit, worth, or significance of a policy, program, or intervention. For these introductory examples, we will focus on evaluating outcomes based on simple metrics. --- ### Example 1: Foundation (Simple Metric Comparison) **Problem Statement:** A city implemented a new "Safe Streets Initiative" (Policy A) in District X for one year. Before the initiative, the average daily traffic accident rate in District X was 5.0 accidents/day. After one year of Policy A, the average daily accident rate dropped to 4.2 accidents/day. A baseline comparison (Policy B, no intervention) suggests that in similar districts without intervention, the rate would have likely dropped to 4.8 accidents/day due to general seasonal trends. Evaluate the effectiveness of Policy A using the change in outcome relative to the baseline. **Solution Steps:** 1. **Establish the Baseline Outcome ($\text{Outcome}_B$):** This represents what would have happened without the policy intervention, accounting for external factors (like seasonality). $$\text{Outcome}_B = 4.8 \text{ accidents/day}$$ 2. **Establish the Actual Outcome ($\text{Outcome}_A$):** This is the observed result after the policy was implemented. $$\text{Outcome}_A = 4.2 \text{ accidents/day}$$ 3. **Calculate the Raw Impact ($\text{Impact}_{\text{Raw}}$):** Determine the total observed change from the initial state (Pre-Policy). $$\text{Impact}_{\text{Raw}} = \text{Pre-Policy Rate} - \text{Outcome}_A = 5.0 - 4.2 = 0.8 \text{ accidents/day reduction}$$ 4. **Calculate the Net Causal Impact ($\text{Impact}_{\text{Net}}$):** This is the crucial step in evaluation—isolating the policy's effect by subtracting the baseline trend from the raw impact. 
$$\text{Impact}_{\text{Net}} = \text{Impact}_{\text{Raw}} - (\text{Pre-Policy Rate} - \text{Outcome}_B)$$ $$\text{Impact}_{\text{Net}} = 0.8 - (5.0 - 4.8) = 0.8 - 0.2 = 0.6 \text{ accidents/day reduction attributable to Policy A.}$$ **Key Insight:** Policy evaluation must move beyond simply observing change. The **Net Causal Impact** requires accounting for external trends (the baseline) to isolate the true effect of the intervention. --- ### Example 2: Application (Cost-Effectiveness Analysis) **Problem Statement:** Two competing policies, Policy C (Increased Police Patrols) and Policy D (Public Awareness Campaign), are being considered for reducing petty theft in a large metropolitan area. Both policies are intended to run for one year. * **Policy C:** Total cost: $\$500,000$. Observed reduction in theft incidents: 1,200 incidents. * **Policy D:** Total cost: $\$150,000$. Observed reduction in theft incidents: 400 incidents. Evaluate which policy is more **cost-effective** based on the cost per incident prevented. (Assume both policies have already been evaluated for causality and the stated reductions are net effects.) **Solution Steps:** 1. **Identify the Metric for Comparison:** Since the goal is to choose the *better* policy given limited resources, the appropriate evaluation metric is **Cost-Effectiveness Ratio (CER)**, defined as $\text{Cost} / \text{Outcome Achieved}$. 2. **Calculate CER for Policy C:** $$\text{CER}_C = \frac{\text{Total Cost}_C}{\text{Incidents Prevented}_C} = \frac{\$500,000}{1,200 \text{ incidents}}$$ $$\text{CER}_C \approx \$416.67 \text{ per incident prevented}$$ 3. **Calculate CER for Policy D:** $$\text{CER}_D = \frac{\text{Total Cost}_D}{\text{Incidents Prevented}_D} = \frac{\$150,000}{400 \text{ incidents}}$$ $$\text{CER}_D = \$375.00 \text{ per incident prevented}$$ 4. **Decision Point:** Compare the CERs. The lower the CER, the more cost-effective the policy is. 
Since $\text{CER}_D (\$375.00) < \text{CER}_C (\$416.67)$, Policy D is the more cost-effective option. **Alternative Approach (Benefit-Cost Ratio - BCR):** If the monetary value of preventing one theft incident was known (e.g., estimated at $\$500$ in lost goods/cleanup), we could use BCR. $$\text{BCR} = \frac{\text{Total Monetary Benefit}}{\text{Total Cost}}$$ For Policy D: $\text{BCR}_D = \frac{400 \times \$500}{\$150,000} = \frac{\$200,000}{\$150,000} \approx 1.33$. (A BCR $> 1$ indicates the benefits outweigh the costs.) **Key Insight:** Policy evaluation often requires selecting an appropriate performance metric. When comparing different scales of intervention, **Cost-Effectiveness** (or Benefit-Cost analysis) provides a standardized basis for decision-making that raw outcome numbers obscure. --- ### Example 3: Advanced/Edge Case (Addressing Spillover Effects) **Problem Statement:** A city implements Policy E, a highly effective job training program, targeting unemployed residents in Neighborhood A. The evaluation shows a significant 15% decrease in unemployment in Neighborhood A. However, local business owners in adjacent Neighborhood B report a sudden *increase* in job vacancies and difficulty hiring. Explain this outcome using evaluation concepts. **Solution Steps:** 1. **Initial Evaluation (Internal Validity):** The 15% reduction in Neighborhood A confirms that Policy E appears successful *within its target boundary*. The internal validity of the primary outcome (unemployment in A) seems strong. 2. **Identifying Externalities (Spillover/Contamination):** The observation in Neighborhood B points to an **externality**—an effect of the policy felt outside the intended scope. Specifically, this is a **positive spillover** for residents of B (more job opportunities) but a **negative externality** for businesses in B (labor shortage). 3. **Revisiting the Causal Model (General Equilibrium Effect):** Standard evaluation often assumes independent units. 
Here, the labor market acts as a connected system. Policy E effectively shifted the labor supply curve outward in Neighborhood A, drawing workers who might have otherwise sought jobs in B, or who were previously counted as unemployed in A but whose movement affected the availability pool for B. 4. **Revised Policy Assessment:** The overall societal benefit must now account for the trade-off. If the disruption in B leads to reduced business productivity that outweighs the employment gains in A, the *net* societal value of Policy E might be lower than initially calculated. **Common Pitfalls:** Assuming that positive results in the target area (Neighborhood A) automatically translate to positive overall societal results without checking for **spillover effects** (both positive and negative) in untreated areas. **Key Insight:** Robust policy evaluation requires considering the **system boundary**. Interventions in connected systems (like labor markets, environmental systems, or social networks) can produce unintended or diffused effects that require broader assessment metrics (e.g., system-wide employment rates rather than just target area rates). --- ### Pattern Recognition & Application Guidelines **Pattern Recognition:** 1. **Causality First:** Every good evaluation seeks to isolate the policy's effect ($\text{Impact}_{\text{Net}}$) from external changes (baseline/counterfactual). 2. **Metric Selection:** The choice of evaluation metric (raw change, CER, BCR) dictates the final conclusion and must align with the policy goal (e.g., maximizing impact vs. maximizing efficiency). 3. **Boundary Awareness:** The scope of the evaluation must match the scope of the policy's influence, including all relevant stakeholders and externalities. **When to Apply Policy Evaluation:** * **Ex-Ante (Before Implementation):** Used for justification, setting targets, and selecting between competing proposals (often using predictive modeling or pilot studies). 
* **Ex-Post (After Implementation):** Used for accountability, learning, and determining whether to scale, modify, or terminate a program.
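The arithmetic in Examples 1 and 2 can be reproduced in a few lines. This is a minimal sketch using the figures from the examples above; the variable names are ours, chosen for readability, not part of any standard evaluation toolkit.

```javascript
// Example 1: isolating the net causal impact of Policy A.
const prePolicyRate = 5.0; // accidents/day before the initiative
const outcomeA = 4.2;      // observed rate under Policy A
const outcomeB = 4.8;      // counterfactual baseline (no intervention)

const rawImpact = prePolicyRate - outcomeA;     // 0.8 total observed reduction
const baselineTrend = prePolicyRate - outcomeB; // 0.2 would have happened anyway
const netImpact = rawImpact - baselineTrend;    // 0.6 attributable to Policy A

// Example 2: cost-effectiveness ratio (lower is better).
const cer = (cost, incidentsPrevented) => cost / incidentsPrevented;
const cerC = cer(500000, 1200); // ~416.67 per incident prevented
const cerD = cer(150000, 400);  // 375.00 per incident prevented

console.log(netImpact.toFixed(1)); // "0.6"
console.log(cerD < cerC);          // true: Policy D is more cost-effective
```

Note that the comparison flips if raw outcomes are used instead: Policy C prevents three times as many incidents, but at a higher cost per incident, which is exactly the distinction the CER metric is designed to surface.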
0.7
medium
8
1899
[ "research methods", "statistics" ]
[ "doctoral research" ]
[ "technology" ]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.65, "creativity": 0.4 }
4740f6cb-8a4c-493f-a8b0-93254f21e08a
A Leonardo da Pisa Collection
interdisciplinary
experiment_design
A Leonardo da Pisa Collection del Duomo in Pisa, Italy is one of the most visited tourist sites in the world. Millions flock here annually to see the famous campanile, the LeaningTower, where Galileo experimented to Aristotle's dictum that the rate of a falling body toward the Earth's surface is proportional to its body weight. addition to experimenting with falling bodies, Galileo is reputed to have deduced the law of the pendulum from watching the oscillations of the great chandelier opposite the altar in the Duomo. A few historians dispute this saying the the lamp was not installed until 1587 and the observation of isochronism is supposed to have happened in 1583. Still visitors will enjoy seeing the chandelier as well as the Leaning Tower. pleasure awaits mathematicians. By tradition, there are three buildings in a cathedral complex - a large cathedral, a bell tower and a baptistery. The Piazza in Pisa has a fourth structure, a restored building honoring outstanding citizens of the area. In this open compound, the Camposanto Monumentale, is found the statue of Leonardo da Pisa, better know today as "Fibonacci" of the celebrated Fibonacci sequence 1, 1, 2, 3, 5, 8, 13, . . . . His statue is seen on the right. area lies just south of the Duomo. Tour books write that much of the exterior has remained unchanged since the time of Galileo 400 years earlier. But Fibonacci's life in the city is twice as long ago. Students still eat their lunch on the grass in Piazza when there are not too many tourists and souvenir stands. think that they join us in remembering two natives of Pisa who made lasting and significant contributions to civilization. A View of the Camposanto Monumentale housing the Fibonacci statue as seen on the left. A View of the dates of birth and death can only be The Camposanto Monumentale The Chandelier in the interior of the Duomo seen here just to the left of And your favorites? 
We invite you to contribute a favorite photograph to the National Curve Bank. Naturally we give you full credit. Image captions: Leonardo of Pisa (c. 1175 - 1250); from the neighboring area, another genius, Leonardo da Vinci, seen here in a reproduction.
0.65
medium
4
549
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "mathematics", "science", "technology" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.35, "creativity": 0.4 }
394f2d5c-332b-46f9-b2c1-3d09e0544fae
implementation focus challenging algorithmic
technology
algorithm_analysis
This implementation will focus on a challenging algorithmic problem common in robotics pathfinding: **finding the shortest path in a grid with obstacles and weighted nodes, using the A* Search Algorithm.** A* is chosen because it balances the thoroughness of Dijkstra's algorithm (guaranteeing optimality) with the efficiency of a greedy best-first search, making it highly suitable for robotics navigation.

---

## Algorithm Explanation

The chosen algorithm is **A\* Search**. A\* is an informed search algorithm that finds the shortest path between a starting node and a goal node in a graph. It does this by maintaining a priority queue of nodes to visit, prioritized by a cost function $f(n)$:

$$f(n) = g(n) + h(n)$$

Where:

1. **$g(n)$**: The actual cost (distance or time) from the start node to the current node $n$. This is the known path cost.
2. **$h(n)$ (Heuristic)**: The estimated cost from the current node $n$ to the goal node. For grid pathfinding, the **Manhattan Distance** or **Euclidean Distance** is typically used. We will use **Manhattan Distance**, as it represents movement restricted to cardinal directions (up, down, left, right), common in grid-based robotics.

The algorithm works by iteratively expanding the node with the lowest $f(n)$ value until the goal is reached.

## Implementation with Detailed Comments

```javascript
// robotics_advanced: A* Pathfinding Implementation
// Time Complexity: O(E log V) or O(V^2) depending on the priority queue
//   implementation. For a grid of size N*M, V = N*M.
// Space Complexity: O(V) to store openSet, closedSet, and parent pointers.

/**
 * Represents the state of a node in the grid.
 * @typedef {Object} Node
 * @property {number} r - Row index
 * @property {number} c - Column index
 * @property {number} g - Cost from start to this node
 * @property {number} h - Heuristic cost from this node to goal
 * @property {number} f - Total estimated cost (g + h)
 * @property {Node | null} parent - Pointer to the preceding node in the shortest path
 */

class PriorityQueue {
  // A simple array-based implementation for demonstration.
  // In production, a Binary Heap is preferred for O(log N) insertion/extraction.
  constructor() {
    this.queue = [];
  }

  // Adds an element and maintains the priority order (lowest F score first).
  enqueue(node) {
    this.queue.push(node);
    // Sort based on 'f' score. This is inefficient (O(N log N) per insertion)
    // but clearly demonstrates the priority concept for pedagogical purposes.
    this.queue.sort((a, b) => a.f - b.f);
  }

  // Removes and returns the element with the lowest 'f' score.
  dequeue() {
    return this.queue.shift(); // O(N) shift operation
  }

  isEmpty() {
    return this.queue.length === 0;
  }
}

/**
 * Calculates the Manhattan Distance heuristic.
 * H(n) = |r1 - r2| + |c1 - c2|
 * @param {number} r1 - Current row
 * @param {number} c1 - Current column
 * @param {number} r2 - Goal row
 * @param {number} c2 - Goal column
 * @returns {number} The estimated distance.
 */
function heuristic(r1, c1, r2, c2) {
  return Math.abs(r1 - r2) + Math.abs(c1 - c2);
}

/**
 * A* Pathfinding Algorithm for a weighted grid.
 * @param {number[][]} grid - The map (0 = impassable wall, >= 1 = movement cost)
 * @param {number[]} start - [row, col]
 * @param {number[]} goal - [row, col]
 * @returns {number[][] | null} The shortest path as an array of coordinates, or null if no path exists.
 */
function aStarSearch(grid, start, goal) {
  const R = grid.length;
  const C = grid[0].length;
  const [startR, startC] = start;
  const [goalR, goalC] = goal;

  // 1. Initialization
  const openSet = new PriorityQueue();
  // closedSet tracks nodes already evaluated to prevent cycles and redundant work.
  const closedSet = new Set();

  // Helper to create a unique key for the Set (e.g., "5,10")
  const getKey = (r, c) => `${r},${c}`;

  // Initialize the start node
  const startH = heuristic(startR, startC, goalR, goalC);
  const startNode = {
    r: startR,
    c: startC,
    g: 0, // Cost from start is 0
    h: startH,
    f: startH, // f = g + h
    parent: null
  };

  // Add the starting node to the priority queue
  openSet.enqueue(startNode);

  // Define possible movements (Up, Down, Left, Right)
  const directions = [
    [0, 1], [0, -1], [1, 0], [-1, 0]
  ];

  // 2. Main Search Loop
  while (!openSet.isEmpty()) {
    // Get the node with the lowest F score (the most promising node)
    const current = openSet.dequeue();
    const currentKey = getKey(current.r, current.c);

    // Check if we reached the goal
    if (current.r === goalR && current.c === goalC) {
      // Goal reached! Reconstruct the path.
      const path = [];
      let temp = current;
      while (temp !== null) {
        path.unshift([temp.r, temp.c]); // Add to the front
        temp = temp.parent;
      }
      return path;
    }

    // Move current node from open to closed set
    closedSet.add(currentKey);

    // 3. Explore Neighbors
    for (const [dr, dc] of directions) {
      const neighborR = current.r + dr;
      const neighborC = current.c + dc;
      const neighborKey = getKey(neighborR, neighborC);

      // Boundary Check
      if (neighborR < 0 || neighborR >= R || neighborC < 0 || neighborC >= C) {
        continue; // Out of bounds
      }

      // Obstacle Check: If grid value is 0, it's an impassable wall.
      if (grid[neighborR][neighborC] === 0) {
        continue;
      }

      // Skip if already fully processed (in closed set)
      if (closedSet.has(neighborKey)) {
        continue;
      }

      // Calculate tentative G score for the neighbor.
      // Cost of moving to the neighbor is the grid weight (>= 1)
      const moveCost = grid[neighborR][neighborC];
      const tentativeG = current.g + moveCost;

      // Check if this neighbor is already in the open set and if the new path is better.
      let neighborExistsInOpen = false;
      let existingNeighbor = null;
      // NOTE: This linear search for an existing neighbor in the openSet is the bottleneck
      // if using a simple array PQ. A map lookup is required for true efficiency.
      for (const node of openSet.queue) {
        if (node.r === neighborR && node.c === neighborC) {
          neighborExistsInOpen = true;
          existingNeighbor = node;
          break;
        }
      }

      if (!neighborExistsInOpen || tentativeG < existingNeighbor.g) {
        // This is a better path to the neighbor, or it's a new node.
        const neighborH = heuristic(neighborR, neighborC, goalR, goalC);
        const newNeighborNode = {
          r: neighborR,
          c: neighborC,
          g: tentativeG,
          h: neighborH,
          f: tentativeG + neighborH,
          parent: current // Set the parent pointer for path reconstruction
        };

        if (neighborExistsInOpen) {
          // A better path was found to a node already in the open set.
          // In a proper heap implementation, we would "decrease-key". Here, we
          // update the existing node's properties in place (crucial for A* correctness)
          existingNeighbor.g = tentativeG;
          existingNeighbor.f = newNeighborNode.f;
          existingNeighbor.parent = current;
          // Re-sort the queue to reflect the updated priority
          openSet.queue.sort((a, b) => a.f - b.f);
        } else {
          // It's a completely new node to explore
          openSet.enqueue(newNeighborNode);
        }
      }
    }
  }

  // 4. Failure Case
  return null; // Open set is empty and goal was not reached
}

// --- Example Usage ---
// Grid definition: 0 = Wall (impassable), 1 = Normal path, 5 = High cost area (slow terrain)
const mapGrid = [
  [1, 1, 1, 1, 1],
  [1, 5, 0, 5, 1], // 0 is a wall
  [1, 5, 1, 1, 1],
  [1, 1, 1, 5, 1],
  [1, 1, 1, 1, 1]
];

const startCoords = [0, 0]; // Top-left
const goalCoords = [4, 4];  // Bottom-right

const pathResult = aStarSearch(mapGrid, startCoords, goalCoords);
// console.log("Found Path:", pathResult);
```

## Step-by-Step Breakdown

1. **Initialization**: We define a `PriorityQueue` (simplified here) to manage the `openSet`, which holds nodes to be evaluated, prioritized by the lowest $f(n)$ score. A `closedSet` (using a JavaScript `Set` for fast lookups) tracks nodes already finalized. The start node is initialized with $g=0$ and its $f$ score calculated using the Manhattan heuristic $h$.
2. **Main Search Loop**: The loop continues as long as there are nodes left to explore in the `openSet`.
3. **Expansion**: In each iteration, the node with the minimum $f$ score is dequeued (`current`). If it is the goal, the path is reconstructed by tracing back through the `parent` pointers, and the function returns.
4. **Neighbor Exploration**: We iterate over the four cardinal neighbors. We perform boundary checks and check whether the neighbor is an impassable obstacle (value 0).
5. **Cost Calculation**: For a valid neighbor, we calculate the `tentativeG` score (current $g$ plus the cost to move to the neighbor, which is the cell's weight).
6. **Optimality Check**: We check whether the neighbor is already known (in the `openSet`). If it is, we only proceed if the `tentativeG` is *less* than the neighbor's currently recorded $g$ score. This is the core of A*'s ability to correct suboptimal early paths.
7. **Update/Enqueue**: If a better path is found, we update the neighbor's $g$, $f$, and `parent` pointer, and ensure it is correctly prioritized in the `openSet`.
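The cost model in step 5 can be sanity-checked with a small helper that sums the weights a returned path incurs. The name `pathCost` is illustrative and not part of the implementation above; it assumes the convention used there, namely that moving into a cell costs that cell's weight.

```javascript
// Hypothetical helper: total cost of a path under the convention that
// entering a cell costs that cell's grid weight (the start cell is free,
// matching g = 0 for the start node above).
function pathCost(grid, path) {
  let cost = 0;
  for (let i = 1; i < path.length; i++) {
    const [r, c] = path[i];
    cost += grid[r][c];
  }
  return cost;
}
```

On the example map, the route down the left edge and along the bottom row takes eight unit-cost steps, so an optimal path should come back with a cost of 8.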
## Example Walkthrough

**Map:**

```
[1, 1, 1, 1, 1]   (R0)
[1, 5, 0, 5, 1]   (R1)
[1, 5, 1, 1, 1]   (R2)
[1, 1, 1, 5, 1]   (R3)
[1, 1, 1, 1, 1]   (R4)
```

Start: (0, 0). Goal: (4, 4).

1. **Start**: Node (0, 0) is enqueued. $g=0$. $h=8$ (Manhattan distance to (4, 4)). $f=8$.
2. **Iteration 1**: Dequeue (0, 0). ClosedSet adds (0, 0).
   * Examine (0, 1): $g=1, h=7, f=8$. Enqueue.
   * Examine (1, 0): $g=1, h=7, f=8$. Enqueue.
   * *The openSet now contains two nodes with $f=8$. Let's assume (0, 1) is dequeued next due to tie-breaking.*
3. **Iteration 2**: Dequeue (0, 1). ClosedSet adds (0, 1).
   * Examine (0, 2): $g=2, h=6, f=8$. Enqueue.
   * Examine (1, 1) [Cost 5]: $g=1+5=6, h=6, f=12$. Enqueue.
4. **Iteration 3**: Dequeue (1, 0). ClosedSet adds (1, 0).
   * Examine (2, 0) [Cost 1]: $g=1+1=2, h=6, f=8$. Enqueue.
   * *The node (2, 0) is now among the nodes with the lowest $f$ score (8).*
5. **Iteration N (Path Correction)**: The algorithm continues expanding nodes with $f=8$, eventually reaching nodes like (2, 2). Suppose a path reaches (2, 2) via the top route with $g=10$. Later, a path reaches (2, 2) via the left route with $g=6$. When the $g=6$ path is found, the node (2, 2) in the `openSet` is updated because $6 < 10$, ensuring the globally shortest path is found, even if it requires re-evaluating parts of the map.
6. **Goal**: Once (4, 4) is dequeued, the parent chain is traced back to (0, 0) to yield the optimal path, favoring lower-weighted cells and avoiding the wall at (1, 2).

## Key Insights

- **Informed Search**: A\* is effective because the heuristic $h(n)$ guides the search toward the goal, preventing the exhaustive exploration that Dijkstra's algorithm might perform unnecessarily.
- **Common Mistake to Avoid**: If the heuristic $h(n)$ is **not admissible** (i.e., it overestimates the true cost), A\* is **not guaranteed** to find the shortest path. Manhattan distance is admissible for grid movement with uniform or increasing costs.
- **Performance Consideration**: The bottleneck in this JavaScript implementation is the `PriorityQueue` using array sorting/shifting, leading to poor performance in dense graphs. A true Binary Heap implementation is critical for achieving the theoretical $O(E \log V)$ complexity.

## Alternative Approaches

1. **Dijkstra's Algorithm**: Equivalent to A\* with the heuristic $h(n) = 0$. It guarantees the shortest path but explores outward uniformly, making it significantly slower than A\* in large, open maps.
2. **Breadth-First Search (BFS)**: Only suitable if all edge weights are uniform (cost = 1). If weights vary (as in our example, where terrain costs 1 or 5), BFS fails to find the minimum-cost path.
3. **Theta\* ($\Theta^*$)**: An extension of A\* that allows non-grid-aligned movement (shortcuts) when the line of sight is clear, often yielding smoother and shorter paths in continuous-space simulations, but it is significantly more complex to implement.
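The binary-heap upgrade flagged under Key Insights can be sketched as a drop-in for the array-based `PriorityQueue` (same `enqueue`/`dequeue`/`isEmpty` interface). The class name `MinHeap` is illustrative, not part of the original code:

```javascript
// Minimal binary min-heap keyed on a node's `f` score. Sketch only:
// enqueue and dequeue both run in O(log N), versus O(N log N) sorting
// and O(N) shifting in the array-based queue above.
class MinHeap {
  constructor() { this.heap = []; }

  isEmpty() { return this.heap.length === 0; }

  enqueue(node) {
    this.heap.push(node);
    // Bubble the new node up until the heap property holds.
    let i = this.heap.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.heap[parent].f <= this.heap[i].f) break;
      [this.heap[parent], this.heap[i]] = [this.heap[i], this.heap[parent]];
      i = parent;
    }
  }

  dequeue() {
    if (this.heap.length === 0) return undefined;
    const top = this.heap[0];
    const last = this.heap.pop();
    if (this.heap.length > 0) {
      this.heap[0] = last;
      // Sift the relocated node down to restore the heap property.
      let i = 0;
      while (true) {
        const l = 2 * i + 1;
        const r = 2 * i + 2;
        let smallest = i;
        if (l < this.heap.length && this.heap[l].f < this.heap[smallest].f) smallest = l;
        if (r < this.heap.length && this.heap[r].f < this.heap[smallest].f) smallest = r;
        if (smallest === i) break;
        [this.heap[i], this.heap[smallest]] = [this.heap[smallest], this.heap[i]];
        i = smallest;
      }
    }
    return top;
  }
}
```

Because decrease-key is awkward on a plain heap, a common workaround is lazy deletion: re-enqueue the improved copy of a node and skip stale entries (already in the closed set) as they are dequeued.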
0.65
medium
6
3,435
[ "algorithms", "software design" ]
[ "distributed systems" ]
[ "science" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.55, "creativity": 0.35 }
4317f174-0875-43db-ad11-2c781d2c5e2a
Vincenzo Natali, director overhyped
interdisciplinary
creative_writing
Vincenzo Natali, director of the overhyped “Splice” and “Cube”, has been officially hired to direct one of the greatest and potentially most visual Science Fiction novels in our beloved genre, “Neuromancer”. William Gibson kicks all sorts of ass. Previously, movies based on his short stories or books have left a lot to be desired (“Johnny Mnemonic”). William Gibson is probably my favorite writer so I’m ridiculously excited to hear that someone is giving “Neuromancer” a decent shot. “Neuromancer” is about a dude named Case who gets his brain fried for hacking the wrong fella. Case is offered repair in exchange for doing another job. “Neuromancer” is not your average Science Fiction novel. It’s going to take some serious skills to make the Matrix come alive. Vincenzo gave “Splice” a decent try that ended up a bit cheesy. He has passion for this project though and the blessing of the Gibson so I’m going to be optimistic. “Neuromancer” is home to… Continue Reading »

Here is the first television video spot for “Predators”, coming soon from the minds of Robert Rodriguez and Nimrod Antal. What’s it all about? Imagine being snatched from your happy Earth life and your work as a killer of one sort or another. You’re comfortable, you’re good at your job and suddenly, PoW!, you’re in an alien jungle hellhole running, arms akimbo, from gigantic and vicious hunters, eager to mount your various body parts over their big screen televisions. It’s the kind of entertainment that the whole family can enjoy! It’s coming out July 9th. It stars Adrien Brody, Topher Grace, Danny Trejo and Laurence Fishburne. Thanks ShocktilyouDrop.

Well, it’s the end. The first season of Stargate Universe is over. Drama just isn’t interesting unless the people you’re watching experience change. Space is stressful. Did the castaways evolve as human beings over the season? Did the year of dark corridors, invading aliens, bad food and grouchy Scotsmen culminate in a wicked finale, or did it all just fizzle?
The Destiny is a cursed ship. It is not friendly to its very sensitive biological inhabitants. For some reason, though, one that has not in any way been illuminated, the Destiny appears to be a big hunk of junk that everyone seems to want. Whether you are pink or blue skinned, the Destiny seems from afar like it’s brimming with secrets. Up close though, it turns out to be nothing more than a giant Space bus. At least this seems to be the case, in all that has been revealed so far. It’s probably in where this thing is headed that… Continue Reading »

And what a force! Let the Leia in your life channel her dark side in these brilliant looking “Star Wars” Storm Trooper and She-Vader leather corsets. You will gladly comply with her every order when she pops into one of these babies. Priced reasonably at 500 for the Trooper getup and 600 for the Vader-ness outfit, I’m sure your Light Saber will get fully activated. Buy them now at Evening Arwen. Thanks Io9 for the word.

Kind of, sort of based on the King novella The Colorado Kid is the SyFy show Haven, about an FBI agent sent to a small Maine town to investigate a murder who instead finds a much weirder story waiting to be discovered. It premieres July 9th and is filmed about a half hour away from me in Lunenburg, Nova Scotia. I’ll be watching. Thanks Scifiwire.

We’ve been hearing about Seth Green’s (Robot Chicken) Star Wars comedy in bits and pieces. On the red carpet at the MTV Movie Awards, Seth his own self provided a little more illumination. It’s in the early production stages and will be animated, which is a good thing if you have watched the hilarious Robot Chicken Star Wars stuff. Here’s Seth with more from his quick interview. “It’s just so early. When there’s good stuff to tell, I’ll tell you,” he said. “I don’t want to say anything. People are really dramatically misinterpreting it.” It’s encouraging to hear, but Green wouldn’t say much more.
Josh asked him to set fans on the right path, but the “Robot Chicken” architect replied, “It’s too early to set you straight, that’s the worst part about it.” Okay, there were a few facts that he would reveal. “It’s gonna be CG animation, it’s gonna be really funny and it’s gonna be ‘Star Wars’ in… Continue Reading »

“Ultramarines”, coming sometime this year, could be a blood buzzing good time. It’s based on the Warhammer 40K universe, which is essentially a future where war is religion, with a modern Roman twist. It’s pretty fun stuff. John Hurt narrates this teaser trailer in his typically epic fashion. Looks like we’ll also have Terence Stamp and Sean Pertwee lending their great and glorious voices to the tale as well. Thanks Quiet Earth for the tip.

My, how time has flown this season on Stargate Universe. It’s already season finale time. It’s a two-parter and it’s called Incursion. Let’s see which side, the good guys or those nasty Lucians, gets to pretend to drive the Destiny. When last we left the show, Colonel Young was putting a mighty squeeze play on the traitorous Telford by locking him away and venting all his beloved oxygen into space. As I said in my last recap, a fella with hypoxic hypoxia isn’t going to be telling many coherent stories. Adequate oxygen is necessary for both subterfuge and truth telling. Obviously, Young was up to his usual shenanigans. The laws of television drama, of course, prohibit Young from telling his minions what he’s doing. To tell them his plan is to tell us, and that just isn’t as much fun, and we like fun, don’t we? The gang, minus Mr. Greer, loyal to the core, almost mutiny but… Continue Reading »
Anyway, enjoy. “Inception” hits theatres July 16th. Embedding is disabled for this so click Here to watch it now. Thanks Slashfilm.

I should say Victoria’s Secret supermodel Rosie Huntington-Whiteley. Director Michael Bay’s official website has made it all official. She’ll fit in nicely. In her honour, please enjoy this selection of her best work. Thanks Coming Soon.

The Walking Dead, based on the delicious comic The Walking Dead by Robert Kirkman, started production today down Atlanta way. It’s going to be a good one kids, mark my words. We’ve got some seriously mouldy Zombie pics here from the set and, as a connoisseur of fine Zombie fare, these are some mighty good looking Zombies. The Walking Dead is being brought to you by a network that takes its adult television seriously, having brought us the truly outstanding Breaking Bad and Mad Men. Frank Darabont is directing the story of Rick Grimes, small town sheriff, who wakes up from a coma to see his world has been overrun with brain thirsty savage dead people. While you can be assured that there will be plenty of Zombie on Bullet action, The Walking Dead also deals with what happens to people and their relationships when society crumbles under the cadaverous onslaught of the hungry dead. It’ll be a 6… Continue Reading »

A couple of gnarly looking character posters for you from the oncoming “Predators”, a franchise operation under new management. Robert Rodriguez is the new man behind the counter and he appears to mean business. No more long lunch breaks bucko, it’s a full time job now, as these two posters clearly demonstrate. Royce and Isabelle, played by Adrien Brody (Splice!) and Alice Braga, look grim and determined to make it all the way through the work day and avoid being missiled, laser beamed or otherwise eviscerated by the local clientele, a snaggle toothed lot with high expectations of their hired “help”. “Predators” starts July 9th. Directed by Nimrod Antal. Thanks Beyond Disgusting.
When last we left the crew of the Destiny, Telford, our evil Colonel, was getting tortured to reveal all his and the Lucian Alliance’s dastardly plans for galactic mayhem. We have a preview of what they say is the first part of the two-part finale, called “Incursion”, which is really a direct continuation of last week’s show. In it we witness Colonel Young making some serious command decisions. Death threats are useless, really, when a fella knows he’s the only source of info. That, and pretty soon, lack of air makes for lack of reason. Will Young snuff out the body of our fair Dr. Rush to get at the truth? My prediction? Counterproductive. Thanks Scifiwire.

Let’s just get this out of the way: Richard Dean Anderson is great. Before my little trip into Universe, I didn’t have a lot of experience with Stargate and I only knew Richard from his fun filled MacGyver days. I love him. He’s got that gravitas thing and, in accompaniment, a light and funny approach. Please Stargate guys, find a way to get him on the Destiny forever. OMG a cliffhanger! Awesome. But let me be a good recapper and start at the beginning. Rush has the most vivid dream in history. But it’s pretty obvious it is neither a dream nor something originally in his own head, as he sees the reflection of another person in a car window. That is really a hi-rez dream. I’m jealous. Resolution like that would have made my dreams of Marina Sirtis way more fun. It’s not our favorite curmudgeonly Scottish scientist at all in his own dream, though. No, it’s that… Continue Reading »

Adewale Akinnuoye-Agbaje, otherwise known as the famous Mr. Eko on LOST, has joined the team and is heading into the frozen wastes of The Thing prequel. He plays Derek Jameson, pilot and soon to be Thing food, or destroyer? In the casting call, Derek was described thusly, [DEREK JAMESON] In his early 30s, African American, well-built, he is Carter’s friend and co-pilot. This is Jameson’s last season in Antarctica.
He’s moving to Florida and starting a jetboat business with his brother, a former big league prospect. The trip to Thule Station is just another flight for him, another day in the countdown before he leaves the South Pole. Trapped at the site with Carter when the helicopter malfunctions, Jameson eventually falls prey to the THING. I love Mr. Agbaje and his character on LOST was one of my favorites. I had heard that he was booted from the show because of his, ah, temperament issues, but perhaps he’s got that… Continue Reading »
0.7
medium
5
2,517
[ "domain basics" ]
[ "expert knowledge" ]
[ "science", "technology", "language_arts" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.65, "creativity": 0.45 }
54c7b57e-1fd5-486b-9849-3db6516bee6b
barometer instrument used measure
interdisciplinary
historical_context
A barometer is an instrument used to measure atmospheric pressure. It can measure the pressure exerted by the atmosphere by using water, air, or mercury. From the variation of air pressure, one can forecast short-term changes in the weather. Numerous measurements of air pressure are used to help find surface troughs, high pressure systems, and frontal boundaries. The principle that "decreasing atmospheric pressure predicts stormy weather" is the basis for a primitive weather prediction device called a weather glass or thunder glass. It can also be called a "storm glass" or a "Goethe barometer" (the writer Goethe popularized it in Germany). It consists of a glass container with a sealed body, half filled with water. A narrow spout connects to the body below the water level and rises above the water level, where it is open to the atmosphere. When the air pressure is lower than it was at the time the body was sealed, the water level in the spout will rise above the water level in the body; when the air pressure is higher than it was at the time the body was sealed, the water level in the spout will fall below the water level in the body. A variation of this type of barometer can be easily constructed. A standard mercury barometer has a glass column of about 30 inches (about 76 cm) in height, closed at one end, with an open mercury-filled reservoir at the base. Mercury in the tube adjusts until the weight of the mercury column balances the atmospheric force exerted on the reservoir. When the atmospheric pressure is high, it places more downward force on the reservoir, forcing mercury higher in the column. When the pressure is low, the reduced downward force on the reservoir allows the mercury to drop to a lower level in the column. A higher instrument temperature reduces the density of the mercury.
For this reason, the scale for reading the height of the mercury column needs to be adjusted to compensate for this effect, according to the indication of a mercury thermometer included in the instrument case. The first barometer of this type was devised in 1643 by Evangelista Torricelli. Torricelli had set out to create an instrument to measure the weight of air, or air pressure, and to study the nature of vacuums. He first used water, but it required a glass tube 60 feet long. He then used mercury, perhaps on a suggestion from Galileo Galilei, because it is significantly denser than water. To create a vacuum with mercury takes a tube of less than three feet, which makes its use more practical than a water barometer. Torricelli documented that the height of the mercury in a barometer changed slightly each day and concluded that this was due to the changing pressure in the atmosphere. He wrote: "We live submerged at the bottom of an ocean of elementary air, which is known by incontestable experiments to have weight." The mercury barometer's design gives rise to the expression of atmospheric pressure in inches or millimeters (torr): the pressure is quoted as the level of the mercury's height in the vertical column. A pressure of one atmosphere is equivalent to about 29.9 inches, or 760 millimeters, of mercury. The use of this unit is still popular in the United States, although it has fallen into disuse in favor of SI or metric units in other parts of the world. Barometers of this type normally measure atmospheric pressures between 28 and 31 inches of mercury. Some barometers give the atmospheric pressure in millibars (one millibar = 100 pascals, or one hectopascal). To convert a reading in inches of mercury to millibars, divide the pressure in inches of mercury by 0.0295.
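The conversions quoted above can be captured in small helpers; a minimal sketch (the function names are illustrative, and the 0.0295 factor is the approximate one given in the text, not the more precise 33.8639 mbar per inHg):

```javascript
// Barometric unit conversions using the relations quoted above:
// 1 atm ≈ 29.9 inHg ≈ 760 mmHg, and mbar = inHg / 0.0295.
const MBAR_PER_INHG = 1 / 0.0295; // ≈ 33.9 millibars per inch of mercury

function inHgToMillibars(inHg) {
  return inHg * MBAR_PER_INHG;
}

function inHgToMmHg(inHg) {
  return inHg * 25.4; // 1 inch = 25.4 mm exactly
}
```

For example, a reading of 29.9 inHg comes out to roughly 1013-1014 millibars, consistent with one standard atmosphere.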
Design changes to make the instrument more sensitive, simpler to read, and easier to transport resulted in variations such as the basin, siphon, wheel, cistern, Fortin, multiple folded, stereometric, and balance barometers. Fitzroy barometers combine the standard mercury barometer with a thermometer, as well as a guide on how to interpret pressure changes. An aneroid barometer uses a small, flexible metal box called an aneroid cell. This aneroid capsule (cell) is made from an alloy of beryllium and copper. The box is tightly sealed after some of the air is removed, so that small changes in external air pressure cause the cell to expand or contract. This expansion and contraction drives mechanical levers and other devices that move the needle on the face of the aneroid barometer. Many models include a manually set needle that is used to mark the current measurement so a change can be seen. A barograph, which records a graph of atmospheric pressure over time, uses an aneroid barometer mechanism to move a needle on a smoked foil or to move a pen upon paper, both of which are attached to a drum moved by clockwork. A barometer is commonly used for weather prediction, as high air pressure in a region indicates fair weather while low pressure indicates that storms are more likely. When used in combination with wind observations, reasonably accurate short term forecasts can be made. Simultaneous barometric readings from across a network of weather stations allow maps of air pressure to be produced, which were the first form of the modern weather map when created in the nineteenth century. When isobars (lines of equal pressure) are drawn on such a map, we get a contour map showing areas of high and low pressure. Localized high atmospheric pressure acts as a barrier to approaching weather systems, diverting their course.
Low atmospheric pressure, on the other hand, represents the path of least resistance for a weather system, making it more likely that low pressure will be associated with increased storm activities. Thus, if barometric pressure is falling, it indicates the approach of deteriorating weather or some form of precipitation. Conversely, if barometric pressure is rising, it indicates the coming of fair weather or no precipitation. The density of mercury will change with temperature, so a reading must be adjusted for the temperature of the instrument. For this purpose a mercury thermometer is usually mounted on the instrument. No such compensation is required for an aneroid barometer. As the air pressure will be decreased at altitudes above sea level (and increased below sea level) the actual reading of the instrument will be dependent upon its location. This pressure is then converted to an equivalent sea-level pressure for purposes of reporting and for adjusting aircraft altimeters (as aircraft may fly between regions of varying normalized atmospheric pressure owing to the presence of weather systems). Aneroid barometers have a mechanical adjustment for altitude that allows the equivalent sea level pressure to be read directly and without further adjustment if the instrument is not moved to a different altitude. All links retrieved May 12, 2016. New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license that can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. 
0.55
low
5
1,543
[ "domain basics" ]
[ "expert knowledge" ]
[ "science", "technology", "arts_and_creativity" ]
{ "clarity": 0.4, "accuracy": 0.4, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.45, "creativity": 0.3 }
488e6946-b9d1-452d-9bcf-973369b2648d
space, I’ve often shared
science
research_summary
In this space, I’ve often shared my love for National Park-based research. I count myself among the researchers devoting time and energy to documenting how climate change affects the ecosystems and natural resources in U.S. National Parks; we study everything from pikas to forests, Joshua trees to birds. But the underlying rate of warming in these National Parks was not on my radar, and I had not given much thought to the climate exposure of National Parks versus the rest of the United States. It turns out, the parks are literal hotspots on the landscape. Last fall, Dr. Patrick Gonzalez and coauthors from the University of Wisconsin published ‘Disproportionate magnitude of climate change in United States national parks’ in Environmental Research Letters. This study looked at historical and projected temperature and precipitation across all 417 U.S. National Parks. Between 1895 and 2010, mean annual temperature of the national park area increased at double the U.S. rate: parks warmed by 1.0°C (±0.2°C) per century, the rest of the U.S. land area by 0.5°C. Dr. Gonzalez is a forest ecologist and Associate Adjunct Professor at the University of California, Berkeley. He is also the Principal Climate Change Scientist of the U.S. National Park Service, but he answered my questions here under his Berkeley affiliation, not for the Park Service. I asked why he wanted to conduct a spatial analysis of historical and projected climate across all 417 US National Parks. What was the motivation for expanding on the earlier work of researchers who presented similar findings for the 289 large parks in the National Park System? “Up until our research, the severity of climate change across all the US national parks was unknown,” Gonzalez writes. “The previous work had only looked at subsets of parks. I work at a national level and it is important for me to give national policy-makers scientific information that is robust and comprehensive.
The time-consuming parts of the work were the individual analyses by park and the computational tasks of downscaling all available general circulation model output of future climate projections to 800 m spatial resolution, which had not previously been accomplished for the U.S.” In addition to the climate exposure of National Parks, Gonzalez and his team considered climate velocities. Climate velocity is the speed at which a plant or animal will need to move, migrate, or disperse — usually north or upslope — to “catch up” to its climate as it changes. Gonzalez found an interesting paradox in climate velocities: the park lands have experienced extreme temperature and precipitation shifts, but they also show lower climate velocity than the U.S. as a whole. The authors point out that this does not mean that plants and animals in National Parks are not in peril: “The lower climate velocities in the national park area are an artifact of that indicator being calculated as horizontal movement of areas of constant climate. Climate velocity can underestimate exposure in mountains.” The National Parks are more mountainous than the rest of the United States. This reflects our unsystematic, serendipitous history of protection; we collected the pretty places as national parks without considering the underlying biophysical diversity, and mountains are very pretty places. So while moving a couple of meters upslope might seem easier than moving hundreds of meters north to track a suitable climate, this is often an oversimplification. 
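Climate velocity is conventionally computed as the local warming trend divided by the local spatial temperature gradient, which is exactly why steep mountain terrain yields deceptively low values. A minimal sketch of that calculation — the grids, cell size, and rates below are invented for illustration and are not the paper's actual 800 m analysis:

```python
import numpy as np

def climate_velocity(trend_c_per_yr, temp_grid_c, cell_km):
    """Climate velocity in km/yr: temporal trend (degC/yr) divided by the
    magnitude of the spatial temperature gradient (degC/km)."""
    dT_dy, dT_dx = np.gradient(temp_grid_c, cell_km)  # spatial gradient, degC/km
    grad_mag = np.hypot(dT_dx, dT_dy)
    # Avoid division by zero on perfectly flat climate surfaces
    grad_mag = np.where(grad_mag == 0, np.nan, grad_mag)
    return trend_c_per_yr / grad_mag

# Toy example: the same uniform warming of 0.01 degC/yr over a gentle
# north-south gradient (0.005 degC/km, flat terrain) and a steep one
# (0.05 degC/km, mountain-like), on a 10x10 grid of 10 km cells.
y = np.arange(10)[:, None] * np.ones((1, 10))   # row index of each cell
flat = 20.0 - 0.005 * y * 10.0                  # gentle gradient
steep = 20.0 - 0.05 * y * 10.0                  # mountain-like gradient
v_flat = climate_velocity(0.01, flat, 10.0)     # ~2.0 km/yr everywhere
v_steep = climate_velocity(0.01, steep, 10.0)   # ~0.2 km/yr everywhere
```

The steeper spatial gradient yields the lower velocity for the same warming trend — the mountain "artifact" the authors describe, since a short move upslope crosses the same temperature difference as a long move north.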
“Despite the computational artifact, our results indicate that projected climate velocities in national parks could exceed maximum natural dispersal capabilities of many trees, small mammals, and herbaceous plants,” Gonzalez elaborates. “Any new protection of natural areas, whether close to or far from national parks, can add to global conservation of ecosystems for biodiversity and human well-being.” I asked Gonzalez if he had any thoughts on how the research could be interpreted for park visitors. I wanted to know if there is an effort to get this work not just to park managers on the ground, but to interpretive staff as well. “For national park interpreters, I’ve given many presentations directly to staff in individual parks, including interpreters,” he says. “I encourage all U.S. National Park Service staff to speak about the robust science of climate change and its human cause, which points us to solutions to saving America’s most special places.” Finally, I noticed that both this paper and the earlier National Park System climate exposure study, which covered 289 large parks, were published in open access journals. I asked whether this was an intentional pattern: were these research teams hoping to reach managers who may not have access to peer-reviewed journal articles? Gonzalez confirmed that “the open access of the journal of course enabled a much larger audience to directly download and read the original work. This greatly benefited national park staff and other natural resource managers, to whom we aimed to provide information useful for conservation under climate change. Intense interest immediately developed – people downloaded the pdf file more than once a minute in the first 24 hours of publication.” But their outreach was not limited to open access journals. 
Gonzalez points out, “public media published over 40 individual stories, including in the Washington Post, on page 1 of the San Francisco Chronicle, on public radio stations, and on television.” Gonzalez also wrote a concise summary for the website The Conversation. He says that the University of California, Berkeley, has greatly helped in the effort to reach natural resource managers by publicly posting the spatial data, and he directly provided customized analyses and maps for numerous individual national parks. Gonzalez adds, “I just presented the results to the U.S. Congress in a hearing where I testified on human-caused climate change in U.S. national parks. The open access of the journal was critical, but we engaged a broader effort to widely communicate the science.” Thank you to Dr. Gonzalez and his colleagues for providing the climate data that underlies so much ecological research across the National Park System! And thank you for modeling effective outreach and impressive science communication*! Gonzalez, P., Wang, F., Notaro, M., Vimont, D. J., & Williams, J. W. (2018). Disproportionate magnitude of climate change in United States national parks. Environmental Research Letters, 13(10), 104001. http://doi.org/10.1088/1748-9326/aade09 Monahan, W. B., & Fisichelli, N. A. (2014). Climate exposure of US National Parks in a new era of change. PLoS ONE, 9(7), e101302. https://doi.org/10.1371/journal.pone.0101302 *If I ever publish a paper that averages one pdf download every minute, I will throw the biggest party and give everyone temporary tattoos of the figures.
0.65
medium
6
1456
[ "intermediate science", "statistics" ]
[ "specialized research" ]
[ "technology" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.55, "creativity": 0.35 }
a95278c6-3dd1-4112-9e47-a01a4ef5a230
Endodontic therapy, root canal
technology
historical_context
Endodontic therapy, or a root canal, is done when the nerve of the tooth becomes infected or the pulp of the tooth becomes damaged. When the nerve or pulp is damaged, it breaks down, and bacteria begin to multiply inside the pulp chamber. The bacteria can cause an infection or abscessed tooth. The root canal treatment entails removing the nerve and pulp of the tooth and then cleaning and sealing the inner chamber. If you have any of these signs, root canal therapy may be necessary:
- Severe toothache pain when you chew or apply pressure
- Ongoing sensitivity to hot or cold temperatures
- “Darkening” of the tooth
- Nearby gums that are swollen or tender
- A pimple on the gums that’s persistent or recurring
Although most people think root canals are a painful procedure, most patients experience little or no pain or discomfort. A root canal procedure usually takes one to three visits to the office. First, a small hole is drilled through the top of the tooth into the inner chamber. The diseased tissue is removed, the chamber is cleansed and disinfected, and the tiny canals are reshaped. The chamber and canals are then filled with an elastic material, and you’re prescribed medication that helps prevent further infection. Sometimes the drilled hole will be temporarily filled until a crown is placed to permanently seal it. After the procedure, you should feel relief from the pain. It’s best to avoid chewing with the treated tooth, to prevent reinfection and to help keep the tooth from breaking before it can be fully restored. If you’re experiencing any of the above symptoms, make an appointment right away.
0.65
medium
4
332
[ "programming fundamentals", "logic" ]
[ "system design" ]
[]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.35, "creativity": 0.3 }
1350445c-0c42-43d9-82e9-fe95685086c8
Buy your Mac Italy
interdisciplinary
concept_introduction
Buy your Mac in Italy for a free extended warranty Italian law says Apple must provide two years of support free. Seriously. Here's a cultural difference that surprised me: Apparently, the law in Italy is that you get two years of free tech support. On anything you buy. Apple, which offers one year and charges beyond that, is finding itself in hot water in Italy. There's a noncompliance action pending against Apple, with a potential fine of 300,000 Euros (I'm guessing this is approximately the amount of money Apple makes in one second). According to Reuters: In Italy consumers who buy electronic products and other durable goods have the right to get two years of free assistance, irrespective of other warranties offered by a manufacturer. I wonder if this applies to anybody purchasing an Apple product in Italy, even if on vacation from the US? I also wonder what the street price of a MacBook Pro is in Rome. Anybody know? Maybe the best strategy is to buy your Mac in Italy.
0.7
medium
4
263
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
5771b63f-c170-47ad-a335-c40f439f7618
By Brad Ireland, 4th Virginia, Co. A
interdisciplinary
tutorial
By Brad Ireland, 4th Virginia, Co. A The personal sewing kit, affectionately called a “Housewife”, was an indispensable tool carried by Civil War soldiers both North and South. Soldiers were issued clothing in limited quantities. They couldn’t pop out to their local Wal-Mart to buy a new pair of pants every time they wore a hole in them. The soldiers had to learn how to mend their own clothing. The Housewife is also an important part of the reenactor’s kit, as rips occur in the field and buttons always seem to pop off just as we are gearing up to go into battle or on a march. A typical housewife contained needles, pins, thread, scissors, extra buttons, and sometimes patches of extra fabric for mending holes in clothes. Housewives were made in all sorts of materials from wool to cotton to velvet, and even metal. They came in many different shapes and varieties, though the most common seems to be the rolled-up rectangle. Some were extremely fancy and others were very plain. On the following pages of this essay, you will see documented original housewives. Use these as an example of what to look for when you decide to purchase one or make your own. Read the full article here.
0.65
medium
4
262
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
d5c2cc2c-051b-41db-bf25-95cc7cfab7f2
According Arthritis Foundation, arthritis
science
research_summary
According to the Arthritis Foundation, arthritis is – for the most part – a disease of inflammation. When joints swell, turn red and feel warm to the touch, those are inflammatory processes at work. One way to calm inflammation is with medicine prescribed by a doctor. Another way is to add a few key anti-inflammatory foods to your balanced diet. Among the most potent edible inflammation fighters are essential fatty acids called omega-3s – particularly the kinds of fatty acids found in fish. Experts recommend at least 3 to 4 ounces of fish, twice a week. Omega-3-rich fish include salmon, tuna, mackerel and herring. How does this work? Eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are called marine fatty acids because they come from fish. These omega-3s interfere with immune cells called leukocytes and with signaling proteins known as cytokines, which are both key players in the body’s inflammatory response. Research finds that people who regularly eat fish high in omega-3s are less likely to develop rheumatoid arthritis (RA). And in those who already have the disease, marine omega-3s may help reduce joint swelling and pain. The best sources of marine omega-3s are fatty fish, such as salmon, tuna, sardines and mackerel. Eating a 3- to 6-ounce serving of these fish two to four times a week is recommended for lowering inflammation and protecting the heart.
0.65
medium
4
298
[ "scientific method", "basic math" ]
[ "advanced experiments" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
0096ee47-ae99-4498-b772-de5c08d25646
healthcare workforce development landscape
life_skills
research_summary
The healthcare workforce development landscape is rich with research, toolkits, resources, collaborative tables, and other supports. Resources can be viewed by category: Healthcare Workforce Organizations & Collaboratives (Collaboratives, Chicagoland-specific, Healthcare-specific); Education & Training; Local & Targeted Hiring; Retention & Career Pathways; Publications, Toolkits, & Other Resources; and CHWC Archived Materials & Recordings (Quarterly Meetings, Learning Events, Publications, Documents, Other Learning Events).
- Webinar: WORC Frontline Employee Advancement Study – How do frontline workers, specifically certified nursing assistants, medical assistants, and patient service representatives, perceive their opportunities for advancement in…
- Document: “What opportunities?”: Understanding Committed Frontline Healthcare Workers’ Perceptions of Career Advancement Opportunities – How do frontline workers, specifically certified nursing assistants, medical assistants, and patient service representatives, perceive their opportunities for advancement in…
- Publication: Career Pathways: Building Tomorrow’s Workforce Today – Published by The Josh Bersin Company in 2022, in partnership with Guild, an education and training partner company, this report…
- Organizations & Collaboratives: Illinois Health & Hospital Association – The Illinois Health and Hospital Association is dedicated to advocating for Illinois’ more than 200 hospitals and nearly 40 health…
- Organizations & Collaboratives: Guild – Guild’s Career Opportunity platform combines a curated marketplace of learning programs, dedicated career coaches, and tools to explore career pathways…
- Organizations & Collaboratives: RAPID-Illinois – From the University of Illinois Chicago School of Public Health, the mission of RAPID-Illinois is to facilitate pathways to careers…
- Learning Event: Identifying and Addressing Challenges to Workforce Pathways Programs – Presented on February 7th, 2023. Rukiya Curvey-Johnson and Grant Higgins of Rush University Medical Center discuss top challenges in implementing…
- Organizations & Collaboratives: Jobs for the Future – Jobs for the Future (JFF) is a national nonprofit that drives transformation of the U.S. education and workforce systems to…
- Organizations & Collaboratives: Talent Solutions Connector – The Talent Solutions Connector was created to allow employers to share their positive experiences with talent solutions providers and make…
- Organizations & Collaboratives: Catalyst Learning – Catalyst Learning Company (CLC) provides education and career development solutions to healthcare organizations across the US. These programs upskill frontline…
- Organizations & Collaboratives: The National Institute for Medical Assistant Advancement – NIMAA’s mission is to provide educational opportunities that address critical workplace shortages in primary care. UpSkill NIMAA provides practicing Medical…
- Organizations & Collaboratives: CareerSTAT at National Fund for Workforce Solutions – CareerSTAT is a network of over 300 healthcare and workforce leaders. Together, they are promoting investment in the skills and…
0.6
medium
7
697
[ "advanced basics" ]
[ "advanced research" ]
[ "science" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.45, "depth": 0.35, "creativity": 0.35 }
4db1e95c-630e-406f-b58f-0f095f8676b9
Reminder: What the Image Gently Alliance is About
technology
creative_writing
Reminder: What the Image Gently Alliance is About August 22, 2017 Statement from the Image Gently Alliance on the Potential Risk to Children Associated with Ionizing Radiation from Medical Imaging There is ongoing dialogue in the medical and scientific communities about the level of health risk to children – if any – from exposure to low-level radiation in diagnostic imaging. This discussion includes any increased lifetime risk of developing cancer. There is little to no disagreement that unnecessary radiation exposure should be avoided. This can be achieved through informed use of imaging examinations or procedures that use ionizing radiation (CT, nuclear medicine, etc.):
- use imaging when clinically appropriate,
- use the appropriate imaging modality, and
- child-size the examination.
This translates to the Right Exam, at the Right Time, done the Right Way.
Frush DP, Lungren MP. The Image Gently Think-a-Head Campaign: Keep calm and Image Gently. J Am Coll Radiol 2017; 14(2): 301-302.
Frush DP. Counterpoint: Image Gently: Should It End or Endure? J Am Coll Radiol 2016; 13(10): 1199-1202.
0.6
medium
4
263
[ "programming fundamentals", "logic" ]
[ "system design" ]
[]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.5, "depth": 0.35, "creativity": 0.4 }
9638da2f-8465-408d-bac5-01cee6dfae2d
our modern era, where
interdisciplinary
data_analysis
In our modern era, where awareness of the intricate link between diet and health is growing, the focus is increasingly turning towards plant-based foods. A heightened consciousness regarding personal well-being and the environmental impacts of dietary choices has led to a significant surge in the adoption of plant-based diets. The interconnection between diet and health has never been more pronounced, with individuals seeking holistic wellness beyond mere sustenance. Plant-based diets, centered around fruits, vegetables, legumes, nuts, seeds, and whole grains, offer a compelling solution to this pursuit of well-being. As nutritional research consistently underscores the positive impact of plant-based diets on reducing the risk of chronic diseases such as heart disease, diabetes, and certain cancers, the appeal extends beyond individual health to encompass broader ecological considerations. A study by Oxford Martin School researchers emphasizes that a global shift towards diets rich in vegetables and fruits, with reduced reliance on meat, could save around 8 million lives by 2050. Furthermore, such a dietary transition could mitigate climate damages amounting to US$ 1.5 trillion and curtail greenhouse gas emissions by two-thirds. This exploration goes beyond mere dietary choices, extending to a broader understanding of the ecological consequences of our food preferences. As individuals embrace the notion of nourishing their bodies with plant-based foods, they actively contribute to a more sustainable and eco-friendly planet. This discourse navigates the intricate landscape of plant-based nutrition, unveiling its nutritional richness, environmental benefits, and the diverse palette it offers to those seeking a healthier and more sustainable lifestyle. 
Consumer Trends: A Shifting Landscape towards Plant-Based Diets
In recent years, two significant consumer trends have emerged, reshaping the landscape for food and ingredient companies. According to a 2019 global consumer survey addressing plant-based diets and climate change concerns, a substantial 40% of consumers reported actively attempting to reduce their intake of animal-sourced protein. Additionally, 10% declared abstaining from red meat altogether, showcasing a growing awareness of the environmental impact associated with meat consumption. A noteworthy shift in dietary preferences is evident from data illustrating the rise of plant-based lifestyles. In 2015, only 1% of the population identified as vegans or vegetarians, a number that escalated to over 2% in 2017. This trajectory indicates a significant surge in plant-based dietary choices over a short period. By contrast, 60% of respondents reported no restrictions on animal consumption, emphasizing the coexistence of varied dietary preferences within the consumer landscape. Region-specific trends further underscore this transformative shift. In Italy, the vegetarian population surged by an impressive 94.4% between 2011 and 2016, reflecting a growing inclination towards plant-based diets. Germany witnessed a substantial increase as well, with 7% of the population adopting mainly plant-based lifestyles in 2018, a noteworthy ascent from a mere 1% in 2015. The data also illuminates a rise in the adoption of meat-free practices among consumers. In Denmark, 51% of respondents in 2017 reported having at least one meat-free day per week. The trend persisted, with 30% stating in 2019 that they had significantly reduced meat consumption over the past five years, indicating a considerable shift towards plant-centric eating habits. However, amidst the surge in plant-based preferences, concerns have surfaced regarding ultra-processed plant-based foods. 
While evidence remains inconclusive, critics argue that the level of processing and consumption patterns associated with these foods may impact their overall healthiness. Notably, healthful plant-based diets have shown protective effects, whereas unhealthful plant-based diets, characterized by convenience and ultra-processed foods, seem comparable to diets of animal origin. What stands out in this evolving landscape is that while the segment of strict vegans or vegetarians reaches a maximum of 10% of the population, a more considerable 30–40% identify as flexitarians or express an interest in reducing meat consumption. This signals a broader consumer shift towards plant-centric diets, reflecting not just a niche preference but a significant transformation in dietary habits across diverse demographics. As this trend continues to gain momentum, food and ingredient companies are presented with both challenges and opportunities in meeting the evolving demands of an increasingly plant-focused consumer base.
Consumption of Protein in Plant-Based Diets: A Comprehensive Exploration
As the world shifts towards plant-based diets, one of the primary concerns often raised is the adequacy of protein intake. Protein, a fundamental building block for bodily functions, is traditionally associated with animal products. However, a closer examination reveals that plant-based diets can provide ample protein, with numerous benefits for both individual health and the planet. Plant-based protein sources represent a diverse and abundant array, ranging from legumes and nuts to grains and seeds. This article explores the richness of these sources, emphasizing their vital role in not only promoting individual health but also contributing to a more sustainable and environmentally friendly food system.
Legumes: A Nutrient-Rich Foundation
Legumes, including beans, lentils, and chickpeas, stand out as exemplary plant-based protein sources. 
Rich in protein, these legumes offer a versatile foundation for plant-based meals. Their protein content is complemented by a bounty of essential nutrients such as fiber, vitamins, and minerals. Beyond contributing to muscle development and repair, legumes play a crucial role in maintaining overall health and well-being.
Nuts and Seeds: Powerhouses of Nutrition
Nuts and seeds, including almonds, walnuts, chia seeds, and hemp seeds, contribute not only protein but also healthy fats and a spectrum of essential nutrients. Incorporating these into daily meals enhances protein intake while promoting overall health and satiety. These plant-based powerhouses add flavor and nutrition to a variety of dishes, showcasing the versatility and culinary appeal of plant-based eating.
Whole Grains: A Versatile Nutrient Profile
Whole grains, such as quinoa, brown rice, and oats, play a pivotal role in plant-based diets by offering a combination of carbohydrates, fiber, and protein. Serving as a versatile base for numerous plant-based dishes, these grains ensure a well-rounded nutrient profile. The inclusion of whole grains contributes to the satiety of meals while providing a nutrient-rich foundation for a balanced diet.
Plant-Based Protein Alternatives: A Growing Trend
The market has witnessed a surge in plant-based protein alternatives, catering to the increasing demand for convenient and nutritionally dense options. Pea protein, soy protein, and products like tofu and tempeh have become staples in the plant-based repertoire. These alternatives not only offer protein but also introduce a variety of textures and flavors, expanding the culinary possibilities of plant-based eating.
Digestibility and Bioavailability: Addressing Concerns
The digestibility and bioavailability of plant-based protein sources have been subjects of inquiry, often in comparison to their animal-derived counterparts. 
However, a diverse range of plant-based foods, when thoughtfully incorporated into the diet, can easily meet protein needs. Combining different protein sources throughout the day ensures the intake of a complete array of essential amino acids, dispelling concerns about the adequacy of plant-based protein.
Protein Consumption in Developed Countries: A Snapshot
Examining the consumption of protein-rich foods per capita in developed countries provides insights into dietary patterns. According to FAO data, Danish per capita meat consumption is approximately 80 kg, with vegetables at 100 kg and pulses at 1.09 kg. The data underlines the variation in protein sources, emphasizing the need for a diversified and plant-centric approach to address both health and sustainability concerns.
The Protein Puzzle: Balancing Global Trends
Protein consumption is entwined with global trends such as climate change, overconsumption of resources, population growth, urbanization, and an increase in life expectancy. This interconnection defines a "Protein Puzzle," encompassing complex issues and trade-offs. Animal agriculture, a significant source of protein, is resource-intensive, contributing to deforestation and greenhouse gas emissions. In brief, the exploration of plant-based protein sources transcends individual health benefits. It intertwines with global challenges, offering a sustainable solution to the protein puzzle. As consumers increasingly recognize the nutritional richness and environmental advantages of plant-based diets, a transformative shift towards a more plant-centric food system becomes not only a personal choice but a collective step towards a nourished planet.
Impact of COVID-19 on Plant-Based Food Market
The COVID-19 pandemic has ushered in significant changes for the plant-based food industry, prompting shifts in consumer preferences and highlighting the connection between public health and animal meat consumption. 
As the traditional meat processing industry faced disruptions, leading to the closure of slaughterhouses, plant-based alternatives experienced a surge in demand. In the U.S., meat substitute sales soared by 200% in the week ending April 18, 2020, reflecting a substantial shift in consumer behavior amid the global crisis. The dairy alternative industry witnessed a notable upswing in sales, with alternative dairy products gaining momentum in North America and Europe. In April 2020, oat milk retail sales in the U.S. surged by an impressive 476.7%, while dairy milk sales increased by 32.4% compared to the previous year. This surge, attributed to increased demand for oat milk and nutritional plant-based butter, showcased a unique opportunity for dairy alternatives. Moreover, the pandemic accelerated the "free-from" trend, with consumer preferences leaning towards soy- and gluten-free products. Soy- and gluten-free diets, driven by health benefits and therapeutic considerations, gained popularity. Rising incidences of food sensitivities and an increase in coeliac disease diagnoses contributed to the growth of soy- and gluten-free food products, impacting the soy and wheat protein-based food market. In essence, the global crisis has not only underscored the vulnerabilities of traditional meat processing but has also presented a substantial opportunity for the plant-based food sector, reshaping consumer choices and industry strategies.
Negative Impact on Ecosystems: A Consequence of Animal-Based Protein Production
The production of animal-based protein, particularly in the form of meat, exacts a substantial toll on the earth's ecosystems, posing a formidable challenge to the delicate balance of planetary boundaries. Earth's ecosystems, classified into nine planetary boundaries, are at risk or already overused due to human activities, with agricultural production significantly contributing to this predicament. 
Agriculture, covering 38% of the Earth's surface, is a key player in environmental degradation. It withdraws a staggering 70% of freshwater, and 35% of global crop production is directed toward animal feed. The combination of land for feed production and grazing results in 75% of agricultural land being utilized for raising animals. The impact extends further, with animal-based protein production alone contributing to 15% of annual CO2 emissions. European livestock, a major player in the production of animal-based protein, is often fed with protein-rich feeds, including those imported from South America, particularly soy. The link between deforestation in South America and land-use changes associated with soy and livestock production exacerbates the environmental toll, contributing to climate change. The inefficiency of land use for animal-based protein production becomes starkly evident when compared to plant-based alternatives. The same amount of agricultural land that yields a specific quantity of meat could be used up to ten times more efficiently to produce plant protein, potentially feeding 10 to 20 times more people. Shifting dietary focus away from meat has been identified as a crucial strategy for reducing environmental impact. To put it briefly, the negative impact on ecosystems stemming from animal-based protein production underscores the urgency for re-evaluating dietary choices and promoting sustainable practices. A shift towards plant-based diets not only addresses the environmental challenges associated with current consumption patterns but also contributes to the preservation of planetary boundaries critical for the health of our ecosystems. Nourishing the planet and our bodies through plant-based eating is a powerful and sustainable choice with far-reaching benefits. As we navigate an era of increased awareness about health and environmental sustainability, plant-based diets offer a compelling solution. 
From diverse nutritional profiles to culinary creativity, the world of plant-based foods is rich and rewarding. By embracing the benefits and diversity of plant-based foods, individuals can contribute to their well-being while making a positive impact on the planet. As we continue to explore the intersection of nutrition, environmental consciousness, and culinary innovation, plant-based diets emerge as a pathway to a healthier and more sustainable future for both individuals and the planet.
0.65
medium
7
2668
[ "advanced basics" ]
[ "advanced research" ]
[ "science", "technology", "social_studies" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.65, "creativity": 0.35 }
890f2d7f-b29a-405d-bb56-f9c7d1345900
Kung Hei Fat Choy
arts_and_creativity
problem_set
Kung Hei Fat Choy! This week many people around the world will be celebrating the Chinese New Year, and Hawaii is definitely included in the party. Our own Chinatown celebrates for almost the entire month with block parties, music, food, crafts, lion dances, parades and a Narcissus Queen Pageant. The entire community comes together for these festive events to bring in the New Year, hoping for good luck and prosperity. The Chinese believed that hanging red lanterns outside of their homes would scare away a certain mythical beast. According to legend, the first Chinese New Year began centuries ago with a fight against the mythical beast, Nian. Nian would arrive in villages on the first day of the New Year to ravage crops and livestock and eat children. The villagers began to put food outside of their doors, hoping that if Nian ate it, he wouldn’t attack anyone and would leave their village in peace. One day, a villager saw Nian run away from a small child wearing red clothing and realized that Nian was afraid of that particular color. From then on, the villagers would hang red lanterns and scrolls on their windows and doors around the New Year, as well as light loud firecrackers to frighten away Nian. Their efforts worked, and the mythical creature never bothered them again. VIDEO: Chinatown doesn’t mess around when ringing in the Year of the Dragon. The date of the Chinese New Year is determined by the Chinese lunisolar calendar and usually ranges between Jan. 21 and Feb. 20. This year, it fell on Jan. 23, 2012. The Chinese zodiac relates each year to a particular animal and its attributes, revolving in a 12-year cycle. This year is the year of the Dragon. The Dragon is the mightiest of the signs in the Chinese zodiac and is a symbol of good fortune and ambition. It is the only mystical creature in the zodiac, as the rest are all earthly animals, and is regarded with much respect in Chinese culture. 
People born in the year of the Dragon are said to have certain attributes, such as being quick-tempered, innovative, enterprising, self-assured, scrutinizing and passionate, to name a few. Many people in Hawaii follow Chinese astrology and relate to their zodiac signs. One festive way to celebrate the New Year is to attend a lion or dragon dance. Lion dances are performed to scare away evil spirits and bring good luck and fortune in the New Year, both to crowds of bystanders and even to businesses. Two people usually operate a lion costume and incorporate basic martial arts moves in the performance. Elaborate dragon dances are also performed, in which many people hold a huge dragon on poles and mimic this mythical river spirit’s undulating movements. Dragon dances represent and bring good luck to people and the New Year. A busy night in Honolulu’s Chinatown includes fresh jin dui, a black bean pastry. Although the Chinese New Year celebrations are coming to a close, an entire year awaits, full of promise and good fortune. I hope that your year is full of happiness, luck, adventure and laughter. Kung Hei Fat Choy! Video and Photo Credit: Alyssa S. Navares
0.6
medium
3
674
[ "foundational knowledge" ]
[ "advanced concepts" ]
[ "technology", "social_studies" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.3 }
00dd0861-d950-4d78-823b-6661cbb27490
work e-mail, any individual
language_arts
research_summary
With my work e-mail, anyone would also very quickly know where I go from 9-5, Monday-Friday. Next, I clicked on the MapQuest link; sure enough, I had directions to my residence. A really good way of guaranteeing your work is error-free is to hire an experienced proofreader – it isn’t as costly as you might think, and if you use a good service, they will scrutinise everything about your work to ensure that what you have written gets attention – for the right reasons! College Apps Completed – What’s Next? My childhood was my foundation for how I write and how I think. I caught on at an early age that writing is an art, and it has a very distinct way of reaching people depending on how you apply it. I used my own method of writing and adapted it to the criteria my teachers required. You see, the requirements are always changing from teacher to teacher or professor to professor, but the process that is used can easily be molded into position to make it fit perfectly. In today’s context, blogs are a very popular tool. Search engines love them because fresh content is added regularly. Your own blog could be an ideal place to share with readers what you are doing and how. Affiliate marketing – The concept of affiliate marketing is quite simple. It consists of marketing other people’s products and getting paid a commission when people order a product based on your recommendation. The affiliate marketing industry is one of the biggest on the web, and there is a wide spectrum of niches and fields you can pick from. N: The highest level of math tested on the SAT is geometry, so anything beyond that is superfluous. 
There are usually a number of questions that stump you, but try not to focus on those for too long. Go ahead and work on the ones you know, then come back to the ones you don’t. The first one is a functional benefit. Sometimes called a feature, the functional benefit can easily be identified as something like the fast overnight shipping from a company like FedEx, or all the office supplies you need under one roof at your local Staples. There are many good freelance writers who are used to writing for print media. These offline writers are certainly capable of becoming great Internet writers, but they may need to retrain themselves a bit. The kind of writing that may be successful when composing a book, essay, or newspaper article may not work as well on the Web. Research your products. If you are selling a product or service, then you should know more about that product than the average person. What are the benefits to the end user? How is this product superior to other products on the market? How is it made, distributed, and used? Is the product based on a new idea or an old and improved one? What have the beta customers been saying about the product? Know your article type. Assuming you are writing an article, you need to know what form your article will take – news release, profile, trend piece, news story.
0.65
medium
4
727
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "mathematics", "technology", "social_studies" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.4, "depth": 0.45, "creativity": 0.3 }
a95ebddd-eee3-4586-84f9-8603159be03a
Some products cause harm
science
worked_examples
Some products may cause harm to people or ecosystems, either because of the way they are designed, or because there is a reasonable chance that customers could misuse them or dispose of them incorrectly. The company must make potential customers aware of any such risks, to empower them to make well-informed decisions regarding their purchase, use and end-of-life processing of its products. Products may cause harm in a wide variety of ways. Some examples include: - Foods and beverages that affect a person’s health if consumed in excessive amounts. - Products that contain known carcinogens or endocrine disrupters. - Consumables containing non-biodegradable particles (e.g. plastic micro-beads in shampoo), whose emission into rivers and oceans, via wastewater, harms organisms. - Complex financial products, whose use may undermine a customer’s livelihood if their associated risks are not properly understood.
0.65
high
4
185
[ "scientific method", "basic math" ]
[ "advanced experiments" ]
[]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.5, "depth": 0.35, "creativity": 0.4 }
b99fb6bf-0e52-4fbc-94e3-31cc067ffae7
Genesis 19:1-29 – The Sins of Sodom
interdisciplinary
technical_documentation
Genesis 19:1-29 – The Sins of Sodom. Summary: Genesis 19 illustrates the nature of the wicked situation in Sodom that has prompted the divine inquiry, and its destruction takes place. The location of the city of Sodom is not known for certain; many scholars consider it to have been located southeast of the Dead Sea. The area lies in a geological rift, extending from Turkey to East Africa; the Dead Sea area is its lowest point (some 1300 feet below sea level). The area has extensive sulfur and bitumen deposits, with petrochemical springs (see Genesis 14:10; 19:24; Deuteronomy 29:23). An earthquake with associated fires ("brimstone" is sulfurous fire) may have ignited these deposits, producing an explosion that "overthrew" Sodom and neighboring cities (including Gomorrah). In other words, this was an ecological catastrophe occasioned by the geology of the region, but clearly linked here to human wickedness. What were the sins of Sodom? God's initial report regarding the sins of Sodom refers to an "outcry" (18:20-21; 19:13), language also used to describe oppressed Israel in Egypt (Exodus 2:23; 3:7). This language suggests that the sins of Sodom essentially involve issues of social injustice, as do the references to Sodom in Jeremiah 23:14 and Ezekiel 16:49. A wide range of behavior is mentioned in these texts, including neglect of the poor and needy, lies, greed, and luxury. Jesus' single reference to the sins of Sodom refers to the inhospitable treatment of the visitors by the townsmen (Matthew 10:14-15; 11:23-24). Of the nearly thirty references to Sodom and Gomorrah in the rest of the Bible, only one text explicitly refers to sexual behavior, and that seems to have reference to sex with angels, the visitors to Sodom (Jude 7). Many traditional interpretations of the sins of Sodom have focused on same-sex activity (19:5-8). At the same time, the text specifically states that the threat against the angelic visitors comes from every man in the city (19:4). 
If all these men had succeeded in doing what they had threatened to do, the result would have been gang rape, abusive violence, and savage inhospitality. The text thus does not speak of nonviolent sexual behavior or of homosexual orientation and activity as such. The abusive activity is thus best seen as parallel to male/male rape in time of war (witness, say, Bosnia) or in prisons, where the resident prisoners thereby seek to dominate newcomers (see also the similar story in Judges 19).
0.6
medium
4
601
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "science", "technology", "social_studies" ]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
f9c6b521-7ffc-4210-8ebe-855785b48512
Algorithm Explanation implementation focus
science
problem_set
## Algorithm Explanation

This implementation will focus on a fundamental financial mathematics concept: calculating the **Future Value (FV)** of a single sum of money with compound interest. The approach chosen is the standard compound interest formula, which is widely applicable and forms the basis for more complex financial calculations. The formula for future value with compound interest is:

$FV = PV * (1 + r)^n$

Where:

* $FV$ is the Future Value of the investment/loan, including interest
* $PV$ is the Present Value (the principal amount)
* $r$ is the annual interest rate (as a decimal)
* $n$ is the number of years the money is invested or borrowed for

This formula is chosen because it directly models the growth of an investment where interest earned in one period is added to the principal for the next period, thus earning "compound" interest. It's a cornerstone of financial mathematics.

## Implementation with Detailed Comments

```matlab
% financial_mathematics: Compound Interest Future Value Calculation
%
% This script calculates the future value of a single sum of money
% given the present value, annual interest rate, and the number of years.
%
% Author: AI Programming Instructor
% Date: 2023-10-27
%
% Time Complexity:  O(log n) for general exponentiation via the power
%                   operator (^); for fixed-precision floating-point
%                   arithmetic in MATLAB it is often closer to O(1) in
%                   practice for typical integer exponents.
% Space Complexity: O(1), as only a few variables are used to store
%                   input and output.

function future_value = calculate_future_value(present_value, annual_interest_rate, num_years)
    % calculate_future_value: Computes the future value of an investment.
    %
    % Inputs:
    %   present_value (double): The initial amount of money (principal).
    %   annual_interest_rate (double): The annual interest rate, expressed as
    %                                  a decimal (e.g., 5% is entered as 0.05).
    %   num_years (double): The number of years the money will be invested.
    %
    % Output:
    %   future_value (double): The calculated future value of the investment.

    % --- Input Validation ---
    % Validating inputs ensures the function behaves correctly and prevents
    % errors from invalid data.

    % The present value must be a positive scalar number: a non-positive
    % principal does not make sense for an investment, and a scalar ensures
    % we are dealing with a single principal amount.
    if ~isscalar(present_value) || ~isnumeric(present_value) || present_value <= 0
        error('Present value must be a positive scalar numeric value.');
    end

    % The rate must be a numeric scalar. While rates can be negative in some
    % financial contexts, the formula handles negative rates, so any numeric
    % scalar is allowed here.
    if ~isscalar(annual_interest_rate) || ~isnumeric(annual_interest_rate)
        error('Annual interest rate must be a numeric scalar value.');
    end

    % The number of years must be a non-negative numeric scalar. It can be
    % zero, in which case FV = PV.
    if ~isscalar(num_years) || ~isnumeric(num_years) || num_years < 0
        error('Number of years must be a non-negative numeric scalar value.');
    end

    % --- Core Calculation: FV = PV * (1 + r)^n ---

    % Step 1: Calculate the growth factor (1 + annual_interest_rate), the
    % multiplier for each year's growth. For example, a 5% interest rate
    % means the money grows by 1.05 times each year.
    growth_factor = 1 + annual_interest_rate;

    % Step 2: Calculate the total compounding effect over the years by
    % raising the growth factor to the power of the number of years.
    % Interest earned in previous years is added to the principal and earns
    % interest in subsequent years. Example, with a 5% rate over 2 years:
    %   Year 1: PV * (1.05)
    %   Year 2: (PV * 1.05) * (1.05) = PV * (1.05)^2
    compounding_effect = growth_factor ^ num_years;

    % Step 3: Multiply the present value by the total compounding effect to
    % get the final future value.
    future_value = present_value * compounding_effect;
end
```

* **Time Complexity: O(log n)**
    * The primary operation is `growth_factor ^ num_years`. For arbitrary exponents, exponentiation by squaring (or similar algorithms) is often used, which has logarithmic time complexity in the exponent (`num_years`).
    * In practice, for typical integer exponents within the range of `double` precision in MATLAB, the operation is highly optimized and might be perceived as near constant time. For a strict theoretical analysis of the exponentiation operation itself, however, O(log n) is more accurate.
* **Space Complexity: O(1)**
    * The function uses a fixed number of variables (`present_value`, `annual_interest_rate`, `num_years`, `growth_factor`, `compounding_effect`, `future_value`) regardless of the input values, so memory usage does not grow with the input size.

## Step-by-step Breakdown

1. **Input Validation**: The code first checks that the provided `present_value`, `annual_interest_rate`, and `num_years` are valid: numeric, scalar (a single value), and meeting logical constraints (e.g., `present_value` must be positive, `num_years` non-negative). This is a crucial step in robust programming to prevent unexpected behavior or errors.
2. **Calculate Growth Factor**: The `annual_interest_rate` is added to 1. This `growth_factor` represents how much the investment grows each year. 
For instance, a 5% rate (0.05) results in a `growth_factor` of 1.05, meaning the investment will be 1.05 times its value at the start of the year.
3. **Calculate Compounding Effect**: The `growth_factor` is raised to the power of `num_years`. This operation captures the essence of compound interest. If the rate is 5% and the period is 2 years, the money grows by 5% in the first year and then by 5% on the new, larger amount in the second year. This is mathematically represented by squaring the growth factor (1.05 * 1.05).
4. **Calculate Future Value**: The initial `present_value` is multiplied by the `compounding_effect`. This final multiplication yields the total amount of money at the end of the specified period, including all accumulated compound interest.

## Example Walkthrough

Let's trace the execution with the following inputs:

* `present_value = 1000`
* `annual_interest_rate = 0.05` (which is 5%)
* `num_years = 10`

1. **Function Call**: `calculate_future_value(1000, 0.05, 10)` is called.
2. **Input Validation**:
    * `present_value` (1000) is a positive scalar numeric. Validation passes.
    * `annual_interest_rate` (0.05) is a numeric scalar. Validation passes.
    * `num_years` (10) is a non-negative numeric scalar. Validation passes.
3. **Calculate Growth Factor**: `growth_factor = 1 + 0.05;` so `growth_factor` becomes `1.05`.
4. **Calculate Compounding Effect**: `compounding_effect = 1.05 ^ 10;` MATLAB computes `1.05` raised to the power of `10`, giving approximately `1.62889462677744`.
5. **Calculate Future Value**: `future_value = 1000 * 1.62889462677744;` so `future_value` becomes approximately `1628.89462677744`.
6. **Output**: The function returns `1628.89462677744`. This means an initial investment of $1000 at a 5% annual interest rate, compounded annually for 10 years, will grow to approximately $1628.89. 
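As a cross-check on the walkthrough's arithmetic, the same formula can be sketched in a few lines of Python (this transliteration is illustrative only; the record's own implementation is the MATLAB function above):

```python
def calculate_future_value(present_value, annual_interest_rate, num_years):
    """Future value of a single sum with annual compounding: FV = PV * (1 + r)**n."""
    if present_value <= 0:
        raise ValueError("Present value must be positive.")
    if num_years < 0:
        raise ValueError("Number of years must be non-negative.")
    return present_value * (1 + annual_interest_rate) ** num_years

# The walkthrough values: $1000 at 5% for 10 years.
fv = calculate_future_value(1000, 0.05, 10)
print(round(fv, 2))  # 1628.89
```

The closed-form expression reproduces the walkthrough's result directly, with no year-by-year loop.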
## Key Insights

* **The Power of Compounding**: This example clearly demonstrates how compound interest makes money grow exponentially over time. The interest earned in earlier years contributes to higher interest earnings in later years.
* **Importance of Decimal Rates**: A common mistake is entering interest rates as percentages (e.g., `5` instead of `0.05`). The code requires the rate as a decimal, so users must remember this conversion. The input validation helps catch some of these issues if a percentage value is too large to be a realistic decimal rate, but explicit user understanding is key.
* **Performance Consideration**: For very large numbers of years or extremely high precision requirements, the `^` operator's performance might become a factor. For typical financial calculations with standard floating-point types, however, it is highly efficient.

## Alternative Approaches

1. **Iterative Calculation**: Instead of using the direct power formula, one could simulate the growth year by year using a `for` loop, updating the principal in each iteration: `principal = principal * (1 + rate)`. While conceptually simpler for some to grasp, it is computationally less efficient for a large number of years (O(n) time complexity) and more prone to floating-point error accumulation than the direct formula.
2. **Using Financial Toolboxes**: MATLAB has dedicated Financial Toolboxes that offer more advanced functions (e.g., the `fv` function) for various financial calculations, including annuities, loans, and more complex cash flow scenarios. These toolboxes are optimized and often provide more features, but may be overkill for a simple FV calculation or unavailable in basic MATLAB installations.
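The iterative alternative in point 1 can be sketched as follows (again in Python for illustration, assuming an integer number of years; this loop is not part of the original record):

```python
def future_value_iterative(present_value, annual_interest_rate, num_years):
    """Simulate compound growth year by year: O(n) time versus the closed form."""
    balance = present_value
    for _ in range(num_years):  # num_years assumed to be a non-negative integer
        balance *= 1 + annual_interest_rate  # this year's interest joins the principal
    return balance

# Matches the closed-form result from the walkthrough (~1628.89).
print(round(future_value_iterative(1000, 0.05, 10), 2))
```

For ten iterations the loop and the closed form agree to well beyond two decimal places; the divergence the record warns about only matters for very large `num_years`.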
0.7
medium
6
2346
[ "intermediate science", "statistics" ]
[ "specialized research" ]
[ "mathematics", "technology", "life_skills" ]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.55, "creativity": 0.35 }
b1cda109-a955-4196-89af-d0fd943123aa
Groupthink: Collective Delusions Organizations
interdisciplinary
historical_context
Groupthink: Collective Delusions in Organizations and Markets Roland Bénabou* Princeton University This version: December 2011 Abstract This paper investigates collective denial and willful blindness in groups, organizations and markets. Agents with anticipatory preferences, linked through an interaction structure, choose how to interpret and recall public signals about future prospects. Wishful thinking (denial of bad news) is shown to be contagious when it is harmful to others, and self-limiting when it is beneficial. Similarly, with Kreps-Porteus preferences, willful blindness (information avoidance) spreads when it increases the risks borne by others. This general mechanism can generate multiple social cognitions of reality, and in hierarchies it implies that realism and delusion will trickle down from the leaders. The welfare analysis differentiates group morale from groupthink and identifies a fundamental tension in organizations’ attitudes toward dissent. Contagious exuberance can also seize asset markets, generating investment frenzies and crashes. *I am grateful for valuable comments to Daron Acemoglu, George Akerlof, Bruno Biais, Alan Blinder, Patrick Bolton, Philip Bond, Markus Brunnermeier, Andrew Caplin, Sylvain Chassang, Rafael Di Tella, Xavier Gabaix, Bob Gibbons, Boyan Jovanovic, Alessandro Lizzeri, Glenn Loury, Kiminori Matsuyama, Wolfgang Pesendorfer, Ben Polak, Eric Rasmusson, Ricardo Reis, Jean-Charles Rochet, Tom Romer, Julio Rotemberg, Tom Sargent, Hyun Shin, David Sraer, Jean Tirole, Glen Weyl, Muhamet Yildiz and participants at many seminars and conferences. Rainer Schwabe, Andrei Rachkov and Edoardo Grillo provided superb research assistance. Support from the Canadian Institute for Advanced Research and the Institute for Advanced Study in Toulouse is gratefully acknowledged. 
“The Columbia accident is an unfortunate illustration of how NASA’s strong cultural bias and its optimistic organizational thinking undermined effective decision-making.” (Columbia Accident Investigation Board Final Report, 2003) “The ability of governments and investors to delude themselves, giving rise to periodic bouts of euphoria that usually end in tears, seems to have remained a constant.” (Reinhart and Rogoff, “This Time Is Different: Eight Centuries of Financial Folly”, 2009). 1. Introduction In the aftermath of corporate and public-sector disasters, it often emerges that participants fell prey to a collective form of willful blindness and overconfidence: mounting warning signals were systematically cast aside or met with denial, evidence avoided or selectively reinterpreted, dissenters shunned. Market bubbles and manias exhibit the same pattern of investors acting “color-blind in a sea of red flags”, followed by a crash.\(^1\) To shed light on these phenomena, this paper analyzes how distorted beliefs spread through organizations such as firms, bureaucracies and markets. Janis (1972), studying policy decisions such as the Bay of Pigs invasion, the Cuban missile crisis and the escalation of the Vietnam war, identified in those that ended disastrously a cluster of symptoms for which he coined the term “groupthink”.\(^2\) Although later work was critical of his characterization of those episodes, the concept has flourished and spurred a large literature in social and organizational psychology. Defined in Merriam-Webster’s dictionary as “a pattern of thought characterized by self-deception, forced manufacture of consent, and conformity to group values and ethics”, groupthink was strikingly documented in the official inquiries conducted on the Challenger and Columbia space shuttle disasters. 
It has also been invoked as a contributing factor in the failures of companies such as Enron and Worldcom, decisions relating to the second Iraq war, and the recent financial crisis.\(^3\) \(^1\)I borrow here the evocative title of Norris’ (2008) account of Merrill Lynch’s mortgage securitization debacle. A year later, the Inspector General’s Report (2009) on the SEC’s failure concerning the Madoff scheme contained over 130 mentions of “red flags”. \(^2\)The eight symptoms were: (a) illusion of invulnerability; (b) collective rationalization; (c) belief in inherent morality; (d) stereotyped views of out-groups; (e) direct pressure on dissenters; (f) self-censorship; (g) illusion of unanimity; (h) self-appointed mindguards. The model developed here will address (a) to (g). \(^3\)On the shuttle accidents, see Rogers Commission (1986) and Columbia Accident Investigation Board (2003). On Enron, see Samuelson (2001), Cohan (2002), Eichenwald (2005) and Pearlstein (2006). On Iraq, see e.g., Hersh (2004), Suskind (2004) and Isikoff and Corn (2007). At the same time, one must keep in mind that the mirror opposite of harmful “groupthink” is valuable “group morale” and therefore ask how the two mechanisms differ, even though both involve the maintenance of collective optimism despite negative signals. To analyze these issues, I develop a model of (individually rational) *collective denial* and *willful blindness*. Agents are engaged in a joint enterprise where their final payoff will be determined by their own action and those of others, all affected by a common productivity shock. To distinguish groupthink from standard mechanisms, there are no complementarities in payoffs, nor any private signals that could give rise to herding or social learning. 
Each agent derives anticipatory utility from his future prospects, and consequently faces a tradeoff: he can accept the grim implications of negative public signals about the project’s value (realism) and act accordingly, or maintain hopeful beliefs by discounting, ignoring or forgetting such data (denial), at the risk of making overoptimistic decisions. The key observation is that this tradeoff is shaped by how others deal with bad news, creating cognitive linkages. When an agent benefits from others’ overoptimism, his improved prospects make him more accepting of the bad news which they ignore. Conversely, when he is made worse off by others’ blindness to adverse signals, the increased loss attached to such news pushes him toward denial, which is then contagious. Thinking styles thus become strategic substitutes or complements, depending on the sign of externalities (not cross-partials) in the interaction payoffs. When interdependence among participants is high enough, this *Mutually Assured Delusion* (MAD) principle can give rise to multiple equilibria with different “social cognitions” of the same reality. The same principle also implies that, in organizations where some agents have a greater impact on others’ welfare than the reverse (e.g., managers on workers), strategies of realism or denial will “trickle down” the hierarchy, so that subordinates will in effect *take their beliefs from the leader*. The underlying insight is quite general and, in particular, does not depend on the assumptions of anticipatory utility and malleable memory or awareness. To demonstrate this point, I analyze a variant of the model in which both are replaced by Kreps-Porteus (1978) preferences for late resolution of uncertainty. This also serves, importantly, to address collective willful ignorance (ex-ante avoidance of information) in the same way as the benchmark model addresses collective denial (ex-post distortion of beliefs). 
In line with the MAD principle, I show that if an agent’s remaining uninformed about the state of the world leads him to increase the *risks* borne by others, this pushes them toward also delaying becoming informed; as a result, ignorance becomes contagious and risk spreads through the organization. Conversely, when information avoidance has beneficial hedging spillovers, it is self-dampening.\(^4\) The model’s welfare analysis makes clear what factors distinguish valuable group morale from harmful groupthink, irrespective of anticipatory payoffs, which average out across states of the world. It furthermore explains why organizations and societies find it desirable to set up ex-ante commitment mechanisms protecting and encouraging dissent (constitutional guarantees of free speech, whistle-blower protections, devil’s advocates, etc.), even when ex-post everyone would unanimously want to ignore or “kill” the messengers of bad news. In market interactions, finally, prices typically introduce a substitutability between supply decisions that works against collective belief. Nonetheless, in asset markets with limited liquidity (new types of securities, startup firms, housing), *contagious exuberance* can again take hold, leading to investment frenzies followed by deep crashes. When signals about fundamentals turn from green to red, each participant who keeps investing contributes to driving the final market-clearing price further down. This makes it ultimately more costly for others to also overinvest, but at the same time magnifies the capital losses that realism would require them to immediately acknowledge on their outstanding positions. In equilibrium the stock effect dominates the flow effect, so that all prefer to keep believing in strong fundamentals rather than recognize the warning signals of a looming crash. ### 1.1. 
Related evidence **Asymmetric updating and information avoidance.** Besides the vast literature on overconfidence and overoptimism, there is a long-standing body of work more specifically documenting people’s tendency to selectively process, interpret and recall data in ways that lead to more favorable beliefs about their own traits or future prospects.\(^5\) While earlier studies relied on self-reports rather than incentivized choices, several recent papers offer rigorous confirmations of a differential response to good and bad news. Eil and Rao (2010) and Möbius et al. (2010) provide subjects with several rounds of objective data on their IQ rankings; the first paper uses physical attractiveness as well. They also elicit, using incentive-compatible scoring rules, subjects’ prior and posterior beliefs about their rank. Eil and Rao find that, compared with Bayes’ rule, subjects systematically underrespond to negative news and are much closer to proper updating for positive news. Möbius et al. similarly find significant underupdating in response to bad news; subjects also update less than fully in response to good news, but the gap with Bayes’ rule is significantly smaller. --- \(^4\)Thus, as in the benchmark (anticipatory utility) version, agents’ “patterns of thought” become substitutes or complements in a way that turns entirely on the first derivatives of the payoff structure. The difference is that these externalities now operate on the variance rather than the conditional expectation of agents’ utilities. The MAD mechanism is also shown to be robust along many other dimensions, such as nonseparable payoffs or limited sophistication (adaptive learning). \(^5\)See, e.g., Mischel et al. (1976) and Thompson et al. (1992) on the differential recall of favorable and unfavorable information, and Kunda (1987) on the biased processing of self-relevant data. 
In both studies, a significant fraction of subjects also display information aversion, paying money to avoid learning their exact IQ or beauty score after the last round.\footnote{In contrast, no updating bias or information avoidance occurs when rank is randomly assigned. For self-relevant information, both findings of underadjustment to bad news and a lesser underadjustment (possibly none) to good news accord well with the awareness-management model of Bénabou and Tirole (2002), which corresponds to equation (6) below (see also footnote 26). The experimental design is also such that the feedback received by participants is not credibly transferable to outsiders (and, for beauty, redundant), ruling out any Hirshleifer-type effect to explain a negative value of information.} Mijovic-Prelec and Prelec (2010) demonstrate costly self-deception about the likelihood of an exogenous binary event: although incentivized for accuracy, subjects reverse their predictions as a function of their stakes in the outcome. Similarly, Mayraz (2011) finds that subjects assigned to be buyers or sellers at some future price make (incentivized) predictions about it that vary systematically with their monetary stakes in its being high or low. These results establish the role of the anticipatory motive in belief distortion and show that the latter responds to incentives, as will be the case in the model. Hedden et al. (2008) use fMRI on subjects engaged in the first paper’s task to identify the neural correlates of self-deception. Self-deceivers (as revealed by their more systematic prediction reversals) exhibit distinctive activity patterns in regions of the brain generally associated to reward processing and in those associated with attentional and cognitive control. In the field, Choi and Lou (2010) find evidence of self-serving, asymmetric updating by mutual fund managers. 
Using a large panel of actively managed funds, they measure a manager's confidence in his stock-picking ability or private signal quality by the deviation, attributable to his active trades, between his portfolio weights and the relevant market index. Following confirming signals (positive realized excess returns over the year), fund managers trade more actively, thereby exhibiting increased self-confidence. Following disconfirming ones (negative realized excess returns) there is no equivalent decrease; in fact, zero adjustment cannot be rejected. Furthermore, this selective updating leads to suboptimal investments, as positive past excess returns are found to negatively predict subsequent risk-adjusted fund performance. Individual investors also display a good-news/bad-news asymmetry, both in the recall of their portfolios' past returns (Goetzmann and Peles (1997)) and in informational decisions: far more go online to look up the value of their portfolios on days when the market is up than when it is down (Karlsson et al. (2009)). The avoidance of decision-relevant information for fear of learning of a bad outcome is extensively documented in the medical sphere, where significant fractions of people avoid checkups and refuse tests for HIV infection or genetic predispositions to certain cancers, even when anonymity is ensured and in countries with universal health insurance and strict anti-discrimination regulations. This body of evidence and its relationship to anticipatory anxiety are reviewed in Caplin and Leahy (2001) and Caplin and Eliaz (2003).

**Organizational and market blindness.** These individual propensities to cognitive distortion naturally raise the question of equilibrium: what environments will make such behaviors socially contagious or self-limiting, and with what welfare implications? Surprisingly, this question has never been considered, even in the large literature on informational attitudes that followed Kreps and Porteus (1978).
Yet the issue is not only theoretically interesting, but also potentially important for making sense of notions such as "optimistic organizational thinking" and "governments and investors deluding themselves". While there is as yet no formal study of motivated cognition at the level of a firm or market, a number of case studies and official investigation reports provide supporting evidence for the idea.\(^7\) I summarize in online Appendix D several "patterns of denial", including, once again, actively avoiding information ex ante and changing standards of evidence ex post, that recur strikingly from NASA to the Fed, SEC and Fannie Mae, from Enron to investment banks, AIG and individual investors.\(^8\)

\(^7\)This type of data is of course largely qualitative and selected on failure, but also notable for its depth and extensiveness: records of meetings, confidential emails and memos, sworn testimony, financial transactions, technical tests and analyses by experts.

\(^8\)Another point made there is the insufficiency of pure moral hazard as the sole explanation. Instead, self-serving rationalizations ("ethical fading", e.g., Tenbrunsel and Messick (2004), Bazerman and Tenbrunsel (2011)) and overoptimistic hubris are key enablers of most corporate misconduct and financial fraud (see also Huseman and Driver (1979), Sims (1992), Anand et al. (2005) and Schrand and Zechman (2008)). Kindleberger and Aliber (2005), Shiller (2005) and Reinhart and Rogoff (2009) provide many similar examples, from which their conclusions of contagious "delusions", "manias", "irrational exuberance" and "financial folly" are derived.

### 1.2. Related theories

This work ties into multiple literatures. The first one centers on cognitive dissonance and other forms of self-deception, the second on anticipatory feelings and attitudes toward information.\(^9\) Most papers so far have focused on individual rather than social beliefs, and none has asked what makes wishful thinking infectious or self-limiting.
The analysis of group morale and groupthink in organizations relates the paper to a third line of work, which deals with heterogeneous beliefs and overoptimism in firms.\(^10\) Beliefs there are most often exogenous (reflecting different priors), whereas here they endogenously spread, horizontally or vertically, through all or part of the organization. Beyond economics, the paper relates to the work in management on corporate culture and to that in psychology on "social cognition". In models of social conformity and in models of herding, collective errors arise from divergences between individuals' private signals and their publicly observable statements or actions. Departing from these standard channels, the paper identifies a novel mechanism generating interdependent beliefs and behaviors, which: (i) requires neither private information nor lack of anonymity; (ii) accounts for both conformism and contrarianism, with clear predictions as to when each should be observed; (iii) is in line with the micro-experimental and case-study evidence of biased updating and information avoidance; (iv) generates many distinctive and potentially testable comparative-statics results.

\(^9\)On cognitive dissonance, see Akerlof and Dickens (1982), Schelling (1986), Kuran (1993), Rabin (1994), Bénabou and Tirole (2002, 2004, 2006b), Compte and Postlewaite (2004) and Di Tella et al. (2007). On anticipation, see Loewenstein (1987), Caplin and Leahy (2001), Landier (2000), Caplin and Eliaz (2005), Brunnermeier and Parker (2005), Bernheim and Thomadsen (2005), Kőszegi (2006, 2010), Eliaz and Spiegler (2006), Brunnermeier et al. (2007) and Bénabou and Tirole (2011). For an evolutionary account of self-deception see, e.g., von Hippel and Trivers (2011), who argue that it initially evolved to facilitate the deception of others, but once developed also affected different aspects of behavior.

A first alternative source of group error is social pressure to conform.\(^11\) For instance, if
agents are heard or seen by both a powerful principal (boss, group leader, government) and third parties whom he wants to influence, they may just toe the line for fear of retaliation. Their true beliefs should still show up ex post in any unmonitored actions they were able to take, yet in many cases of organizational failure no such discrepancy is observed.\footnote{Thus, in the weeks and days preceding the collapses of Enron and Lehman Brothers, most employees did not significantly alter or hedge their retirement portfolios, which were heavily loaded with company stock.} Self-censorship should also not occur when agents can communicate separately with the boss, who should then want to hear both good and bad news. There are nonetheless many instances where deliberately confidential and highly credible warnings were flatly ignored, with disastrous consequences for the decision-maker.\footnote{For instance, Enron V.P. Sherron Watkins' memo to CEO Ken Lay, and Fed governor Edward Gramlich's warnings to Chairman Greenspan (see online Appendix D).} A second important source of conformity is signaling or career concerns. Thus, when the quality of their information is unknown, agents whose opinion is at odds with those already expressed may keep it to themselves, for fear of appearing incompetent or lazy (Ottaviani and Sørensen (2001), Prat (2005)).

\(^10\)On the theoretical side, see, e.g., Rotemberg and Saloner (1993), Bénabou and Tirole (2003), Fang and Moscarini (2005), Van den Steen (2005), Gervais and Goldstein (2007) and Landier et al. (2009). On the empirical side, see, e.g., Malmendier and Tate (2005, 2008) or Camerer and Malmendier (2007).

\(^11\)One could also invoke some exogenous preference for agreeing with the majority, but this has little predictive content as to which settings are more conducive to the phenomenon (e.g., congruent or dissimilar objectives), or whether conformist preferences apply to genuine beliefs or only publicly stated opinions.
Significant mistakes in group decisions can result in contexts where differential information is important and anonymous communication or voting is not feasible.\footnote{With anonymity, information aggregation should be achievable even when agents have different a priori levels of expertise, by allocating ballots in proportion to these priors. Private information is also not always a key issue in collective errors. In the two space shuttle disasters, for instance, NASA mission managers and engineers were all looking at the same data; see online Appendix D.} The mechanism explored here, by contrast, is portable between environments with and without anonymity, including financial markets and the electoral arena, where investors and voters make decisions privately. The model's application to market manias and crashes links the paper to the literatures on bubbles and herding, but the mechanism is very different from those of existing models. First, in a standard cascade, each investor acts exactly as a cool-headed and benevolent statistician would advise him to. He thus goes against his own signal only in instances where the herd is truly more likely to have it right, and more generally displays the usual desire for accurate knowledge, in contrast to the rationalizations ("new economy", "this time is different", "they are not making any more land", etc.) repeatedly described by observers and historians.\footnote{See, e.g., Banerjee (1992), Bikhchandani, Hirshleifer and Welch (1992), Caplin and Leahy (1994), Chamley and Gale (1994). In versions of herding models with naive agents (e.g., Eyster and Rabin (2009)), agents put excessive weight on the actions of others, but still without any kind of wishful thinking or motivated reasoning; they just lack statistical or strategic sophistication. Experimental tests show that people in fact overweigh their \textit{own} information (a form of overconfidence) relative to that embodied in other players' moves, making cascades relatively rare and short-lived (e.g., Goeree et al. (2007), Weizsäcker (2010)).}

Second, in herding models the problem is a failure to aggregate private signals, which becomes less relevant when more of this data becomes common knowledge, for example through statistical sources or the media. In market groupthink, by contrast, investors have access to very similar information, but their processing of it is distorted by a contagious form of motivated thinking.\footnote{In the recent financial crisis, most of the key data on household debt, mounting mortgage defaults, historical boom and bust cycles in real estate prices, etc., was easily accessible to the major players, including regulators, and even loudly advertised by a few prominent Cassandras.}

Section 2 presents the benchmark model and propositions on collective realism and denial. Section 3 derives related results for risk preferences and the contagion of informational attitudes. Section 4 examines welfare and the treatment of dissent. Section 5 deals with asset-market manias and crashes, and Section 6 concludes. In online Appendix B, the model is extended to deal with fatalism and apathy in the face of crises. Key proofs are gathered in the paper's main Appendix A, more technical ones in online Appendix C.

## 2. Groupthink in teams and organizations

### 2.1. Benchmark model

- **Technology.** A group of risk-neutral agents, $i \in \{1, \ldots, n\}$, are engaged in a joint project (team, firm, military unit) or other activities generating spillovers. At $t = 1$, each chooses effort $e^i = 0$ or 1, with cost $ce^i$, $c > 0$. At $t = 2$, he will reap expected utility
\begin{equation} U_2^i \equiv \theta \left[ \alpha e^i + (1 - \alpha)e^{-i} \right], \end{equation}
where $e^{-i} \equiv \frac{1}{n-1} \sum_{j \neq i} e^j$ denotes the average effort of others and $1 - \alpha \in [0, 1 - 1/n]$ the degree of interdependence, reflecting the joint nature of the enterprise or the presence of cross-interests.\footnote{Another source of interdependence is social preferences: altruism, family or kinship ties, social identity, etc. Thus, (1) is equivalent to $U_2^i \equiv \beta \theta e^i + (1 - \beta)U_2^{-i}$ with $1 - \alpha \equiv (1 - \beta)(n - 1)/(n - \beta)$. Altruistic concerns are explicitly studied in online Appendix B.} Depending on $\alpha$, the choice of $e^i$ ranges from a pure private good (or bad) to a pure public one. This payoff structure is maximally simple: all agents play symmetric roles, there is a fixed value to inaction $e = 0$, normalized to 0, and no interdependence of any kind between effort decisions. These assumptions serve only to highlight the key mechanism, and are all relaxed later on. The productivity of the venture is a priori uncertain: its expected value is $\theta = \theta_H$ in state $H$ and $\theta = \theta_L$ in state $L$, with $\Delta \theta \equiv \theta_H - \theta_L > 0$ and $\theta_H > 0$ without loss of generality. Depending on the context, $\theta$ can represent the value of a firm's product or business plan, the state of the market, the suitability of a political or military strategy, or the quality of a leader. Given (1), $\theta$ defines the expected social value of a choice $e^j = 1$, relative to what the alternative course of action would yield.
Thus, for $\theta_L \geq 0$ each agent would prefer that others always choose $e^j = 1$, whereas for $\theta_L < 0$ he would like them to pursue the "appropriate" course of action for the organization, choosing $e^j = 1$ in state $H$ and $e^j = 0$ in state $L$.\footnote{It is thus not the sign of $\theta_L$ per se that is relevant, but how $\theta_L$ compares to the (social) return to taking the alternative action $e = 0$ in state $L$. The latter's normalization to zero is relaxed in Section 2.4.}

- **Preferences.** The payoffs received during period 1 include the cost of effort, $-ce^i$, but also the anticipatory utility experienced from thinking about one's future prospects, $sE_1^i[U_2^i]$, where $s \geq 0$ (for "savoring" or "susceptibility") parametrizes the well-documented psychological and health effects of hopefulness, dread and similar emotions.\footnote{The parameter $s$ typically increases with the length of period 1, during which uncertainty remains. The linear specification $sE_1^i[U_2^i]$ avoids building in either information loving or information aversion (which will be studied in Section 3).} At the start of period 1, agent $i$ chooses effort to maximize the expected present value of payoffs, discounted at rate $\delta \in (0, 1]$:
\begin{equation} U_1^i = -ce^i + sE_1^i[U_2^i] + \delta E_1^i[U_2^i]. \end{equation}
Given (1), his effort is determined solely by his beliefs about $\theta$: $e^i = 1$ if $(s + \delta)\alpha E_1^i[\theta] > c$, independently of what anyone else may be doing. I shall assume that
\begin{equation} \theta_L < \frac{c}{(s + \delta)\alpha} < \frac{c}{\delta\alpha} < q\theta_H + (1 - q)\theta_L. \end{equation}
An agent acting on his prior alone will thus choose $e^i = 1$, whereas one who knows for sure that the state is $L$ will abstain. Actual beliefs at $t = 1$ will depend on the news received at $t = 0$ and how objectively or subjectively the agent processes them, as described below.
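As a quick numerical check, the effort rule and condition (3) can be sketched as follows; all parameter values are hypothetical, chosen only so that (3) holds.

```python
# Hypothetical parameters illustrating condition (3); not from the paper.
theta_H, theta_L = 2.0, -0.5   # state-contingent expected productivities
q = 0.7                        # prior probability of state H
s, delta, alpha = 0.6, 0.9, 0.5
c = 0.45                       # effort cost

prior_mean = q * theta_H + (1 - q) * theta_L

# Condition (3): theta_L < c/((s+delta)*alpha) < c/(delta*alpha) < prior mean
assert theta_L < c / ((s + delta) * alpha) < c / (delta * alpha) < prior_mean

def effort(belief_theta):
    """Effort rule: e^i = 1 iff (s + delta) * alpha * E_1[theta] > c."""
    return int((s + delta) * alpha * belief_theta > c)

print(effort(prior_mean))  # 1: an agent acting on his prior invests
print(effort(theta_L))     # 0: an agent sure of state L abstains
```

This merely restates the two implications drawn from (3) in the text: investment under the prior, abstention under certainty of state $L$.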
In doing so, he aims to maximize the discounted utility of all payoffs
\begin{equation} U_0^i = -M^i + \delta E_0^i \left[ -ce^i + sE_1^i \left[ U_2^i \right] \right] + \delta^2 E_0^i \left[ U_2^i \right], \end{equation}
where $E_t^i$ denotes expectations at $t = 0, 1$ and $M^i$ the date-0 costs of his cognitive strategy. The main behavioral implications of these preferences arise from the tradeoff between accurate and hopeful beliefs embodied in (4).\footnote{On the general behavioral implications of models with utility from anticipation, see Kőszegi (2010).} To the extent that his cognitive "technology" allows it, an agent will update in a distorted manner (underadjusting to bad news, as in Eil and Rao (2010) and Möbius et al. (2010)), and consequently invest even after seeing data showing that he should not. In short, he will engage in wishful thinking.\footnote{Namely, "the attribution of reality to what one wishes to be true or the tenuous justification of what one wants to believe" (Merriam-Webster), and "the formation of beliefs and making decisions according to what might be pleasing to imagine instead of by appealing to evidence, rationality or reality" (Wikipedia).}

- **Information and beliefs.** To represent agents' "patterns of thought", I use an extended version of the selective-recall technology in Bénabou and Tirole (2002). At $t = 0$, everyone observes a common signal that defines the state of the world: $\sigma = H, L$, with probabilities $q$ and $1 - q$ respectively.\footnote{Note that $\theta_\sigma$ is only the expected value of the project conditional on $\sigma$, so a low (high) signal need not preclude a high (low) final realization.} Each agent then chooses (consciously or not) how much attention to pay to the news, how to interpret it, whether to "keep it in mind" or "not think about it", etc.
Formally, after observing $\sigma$ he can: (a) Accept the facts realistically, truthfully encoding $\hat{\sigma}^i = \sigma$ into memory or awareness (his date-1 information set). (b) Engage in denial, censoring or rationalization, encoding $\hat{\sigma}^i = H$ instead of $\sigma = L$, or $\hat{\sigma}^i = L$ instead of $\sigma = H$. In addition to impacting later decisions, this may entail an immediate cost $m \geq 0$.\footnote{This can involve material resources (eliminating evidence, avoiding certain people, searching for and rehearsing desirable signals) or mental ones (stress from repression, cognitive dissonance, guilt). As explained below, any arbitrarily small $m > 0$ suffices to rule out uninteresting equilibria in which there is signal distortion in both states ("inefficient encoding"). Beyond this, all the paper's key results apply equally with $m = 0$, though non-zero costs are more realistic, particularly for the welfare analysis.} (c) When indifferent between these two courses of action, use a mixed strategy.\footnote{Agents thus do not commit in advance to a (state-contingent) mixture of realism and denial, but respond optimally to the news they receive. It seems unlikely that someone could constrain a priori how he will interpret or recall different signals, particularly in a social context where he may be exposed to others' response to the news. Such commitment is more plausible at the organizational level, and this is analyzed in Section 4. For a sophisticated Bayesian, cognitive commitment (when feasible) would be equivalent to coarsening the signal structure $\sigma = H, L$; such ex-ante informational choices are studied in Section 3.} This simple informational structure captures a broad range of situations. The perfect correlation between agents' signals could be relaxed, but serves to make clear that the model has nothing to do with herding or cascades, where privately informed agents make inferences from each other's behavior.
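As a minimal illustration (hypothetical code, not from the paper), the encoding stage can be simulated as a stochastic map from the true signal to the encoded one; censoring of $\sigma = H$ is omitted since, as argued below, only bad news is ever censored in equilibrium.

```python
import random

def encode(signal, lam, rng):
    """Encode a signal given realism rate lam = Pr[keep bad news in mind].

    Good news is never censored; bad news is truthfully encoded with
    probability lam and rationalized into "H" otherwise.
    """
    if signal == "H":
        return "H"
    return "L" if rng.random() < lam else "H"

rng = random.Random(0)
draws = [encode("L", lam=0.6, rng=rng) for _ in range(10_000)]
freq = draws.count("L") / len(draws)
print(freq)  # close to lam = 0.6
```

The long-run frequency of truthfully encoded bad signals recovers the awareness strategy $\lambda^i$ defined just below.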
The prior distribution $(q, 1 - q)$ could be conditional on some earlier signal being good news, such as the appearance of a new technology or market opportunity. This positive news may also have warranted some initial investment in the activity, including the formation of the group itself. Intuition suggests that it is only in state $L$ that an agent may censor his signal: given (1) and the utility from anticipation, he would never want to substitute bad news for good.\footnote{An agent who likes pleasant surprises and dislikes disappointments, on the other hand, may want to. Such preferences correspond (maintaining linearity) to $s = -\delta s'$, $0 < s' < 1$, so that the last two terms in (4) become $\delta^2 E_0 \left[ U_2 - s' E_1 \left[ U_2 \right] \right]$. All the results could be transposed to the case $s \leq 0$, leading to a (less empirically relevant) model of collective "defensive pessimism". By focusing on $s \geq 0$, I am implicitly assuming that the disappointment-aversion motive, if present, is dominated by anticipatory concerns. Such is the case, for instance, if the "waiting" period 1 is long enough. The potential social or evolutionary value of anticipatory concerns is discussed in Section 4.} Appendix C verifies that such is indeed the case as long as $m > 0$, no matter how small; I therefore focus here on cognitive decisions in state $L$ and denote $$\lambda^i \equiv \Pr \left[ \hat{\sigma}^i = L | \sigma = L \right]$$ the awareness strategy of agent $i$. Later on I will consider payoff structures more general than (1), under which either state may be censored. While people can selectively process information, their latitude to self-deceive is generally not unconstrained. At $t = 1$, agent $i$ no longer has direct access to the original signal, but if he is aware of his tendency to discount bad news he will take it into account.
Thus, when $\hat{\sigma}^i = L$ he knows for sure that the state is $L$, but when $\hat{\sigma}^i = H$ his posterior belief is only $$\Pr \left[ \sigma = H \mid \hat{\sigma}^i = H, \lambda^i \right] = \frac{q}{q + (1 - q)\chi(1 - \lambda^i)} \equiv r(\lambda^i),$$ where $\lambda^i$ is his equilibrium rate of realism (awareness of bad news) and $\chi \in [0, 1]$ parametrizes cognitive sophistication. I shall focus on the benchmark case of rational Bayesians ($\chi = 1$), but the analysis goes through for any $\chi$, including full naiveté ($\chi = 0$).\footnote{The paper's positive results become only stronger with $\chi < 1$, as self-deception is more effective. In the welfare analysis, an extra term is simply added to the criterion computed with $\chi = 1$; see footnote 50. Note also that (6) generates both empirical findings discussed in footnote 6, for any $\lambda^i < 1$ and $\chi < q/(1-q)$.} To analyze the equilibria of this game, I proceed in three steps. First, I fix everyone but agent $i$'s awareness strategy at some arbitrary $\lambda^{-i} \in [0, 1]$ and look for his "best response" $\lambda^i$.\footnote{With imperfect recall, each agent's problem is itself a game of strategic information transmission between his date-0 and date-1 "selves". Condition (3) and $m > 0$ will rule out any multiplicity of intrapersonal equilibria, simplifying the analysis and making clear that the groupthink phenomenon is one of \textit{collectively sustained} cognitions. Note also that the focus on symmetric group equilibria, implicit in equating all $\lambda^i$'s to a common $\lambda^{-i}$, is without loss of generality when there are many identical agents, as all best-respond to the aggregate.
For finite $n$ and/or heterogeneous groups, there can also be asymmetric equilibria; see Section 2.4.} Second, I identify the general principle that governs whether individual cognitions are strategic \textit{substitutes} (the more others delude themselves, the better informed I want to be) or \textit{complements} (the more others delude themselves, the less I also want to face the truth). Finally, I derive conditions under which groupthink arises in its most striking form, where both collective realism and collective denial constitute self-sustaining \textit{social cognitions}.

### 2.2. Best-response awareness

Following bad news, agents who remain aware that $\theta = \theta_L$ do not exert effort, while those who managed to ignore or rationalize away the signal have posterior $r(\lambda^j) \geq q$ and choose $e^j = 1$. Responding as a realist to a signal $\sigma = L$ thus leads for agent $i$ to intertemporal expected utility ($R$ is for "realism")
$$U_{0,R}^i = \delta(\delta + s) \left[ \alpha \cdot 0 + (1 - \alpha)(1 - \lambda^{-i}) \right] \theta_L,$$
reflecting his knowledge that only the fraction $1 - \lambda^{-i}$ of other agents who are in denial will exert effort. If he censors, on the other hand, he will assign probabilities $r(\lambda^i)$ to the state being $H$, in which case everyone exerts effort with productivity $\theta_H$, and $1 - r(\lambda^i)$ to it being really $L$, in which case only the other optimists like him are working and their output is $(1 - \lambda^{-i})\theta_L$.
Hence ($D$ is for “denial”): $$U_{0,D}^i = -m + \delta \left( -c + \delta \left[ \alpha + (1 - \alpha)(1 - \lambda^{-i}) \right] \theta_L \right) + \delta s \left( r(\lambda^i)\theta_H + (1 - r(\lambda^i)) \left[ \alpha + (1 - \alpha)(1 - \lambda^{-i}) \right] \theta_L \right).$$ Agent $i$’s incentive to deny reality, given that a fraction $1 - \lambda^{-i}$ of others do so, is thus: $$U_{0,D}^i - U_{0,R}^i = -m - \delta \left[ c - (\delta + s) \alpha \theta_L \right] + \delta s r(\lambda^i) \left[ (1 - \alpha) \lambda^{-i} \theta_L + \Delta \theta \right].$$ The second term is the net loss from mistakenly choosing $e^i = 1$ due to overoptimistic beliefs. The third term is the gain in anticipatory utility, proportional to $s$ and the posterior belief $r(\lambda^i)$ that the state is $H$, which has two effects. First, the agent raises his estimate of the fraction choosing $e = 1$, from $1 - \lambda^{-i}$ to 1; at the true productivity $\theta_L$, this contributes $(1 - \alpha) \lambda^{-i} \theta_L$ to his expected welfare. Second, he believes the project’s value to be $\theta_H$ rather than $\theta_L$, so that when everyone chooses $e = 1$ his welfare is higher by $\Delta \theta = \theta_H - \theta_L$. Let $\Psi(\lambda^i, s | \lambda^{-i})$ denote the right-hand side of (9), representing agent $i$’s net incentive for denial. Since it is increasing in his “habitual” degree of realism $\lambda^i$, there is a unique fixed point (personal equilibrium), which characterizes the optimal awareness strategy: (a) $\lambda^i = 1$ if $\Psi(1, s | \lambda^{-i}) \leq 0$. By (9), and noting that $\alpha \theta_L + \Delta \theta + (1 - \alpha) \lambda^{-i} \theta_L \geq \min \{ \Delta \theta, \theta_H \} > 0$, this means $$s \leq \frac{m/\delta + c - \delta \alpha \theta_L}{\alpha \theta_L + \Delta \theta + (1 - \alpha) \lambda^{-i} \theta_L} \equiv \underline{s}(\lambda^{-i}).$$ (b) $\lambda^i = 0$ if $\Psi(0, s | \lambda^{-i}) \geq 0$. 
By (9), and noting that $\alpha \theta_L + q \left[ \Delta \theta + (1 - \alpha) \lambda^{-i} \theta_L \right] \geq \min \{ q \Delta \theta, q \theta_H + (1 - q) \theta_L \} > \min \{ q \Delta \theta, c/(s + \delta) \} > 0$, this means $$s \geq \frac{m/\delta + c - \delta \alpha \theta_L}{\alpha \theta_L + q \left[ \Delta \theta + (1 - \alpha) \lambda^{-i} \theta_L \right]} \equiv \bar{s}(\lambda^{-i}).$$ Moreover, $\underline{s}(\lambda^{-i}) < \bar{s}(\lambda^{-i})$, since $\Delta \theta + (1 - \alpha) \lambda^{-i} \theta_L \geq \Delta \theta + (1 - \alpha) \lambda^{-i} \min \{ \theta_L, 0 \} \geq \Delta \theta + \min \{ \theta_L, 0 \} = \min \{ \theta_H, \Delta \theta \} > 0$. (c) $\lambda^i \in (0, 1)$ is the unique solution to $\Psi(\lambda^i, s | \lambda^{-i}) = 0$ for $\Psi(0, s | \lambda^{-i}) < 0 < \Psi(1, s | \lambda^{-i})$, which corresponds to $\underline{s}(\lambda^{-i}) < s < \bar{s}(\lambda^{-i})$. This best response to how others think is illustrated by the dashed curves in Figures 2-3, as a function of either $s$ or $c$, which have opposite effects. Variations in $s$ provide more transparent intuitions (e.g., $s = 0$ is the classical benchmark), whereas variations in $c$ are directly observable and experimentally manipulable. All results are therefore stated in a dual form that covers both approaches. **Lemma 1. (Optimal awareness)** For any cognitive strategy $\lambda^{-i}$ used by other agents, there is a unique optimal awareness rate $\lambda^i$ for agent $i$: (i) $\lambda^i = 1$ for $s$ up to a lower threshold $\underline{s}(\lambda^{-i}) > 0$, $\lambda^i$ is strictly decreasing in $s$ between $\underline{s}(\lambda^{-i})$ and an upper threshold $\bar{s}(\lambda^{-i}) > \underline{s}(\lambda^{-i})$, and $\lambda^i = 0$ for $s$ above $\bar{s}(\lambda^{-i})$. 
(ii) Similarly, $\lambda^i = 0$ for $c$ below a threshold $\underline{c}(\lambda^{-i})$, $\lambda^i$ is strictly increasing in $c$ between $\underline{c}(\lambda^{-i})$ and a threshold $\bar{c}(\lambda^{-i}) > \underline{c}(\lambda^{-i})$, and $\lambda^i = 1$ for $c$ above $\bar{c}(\lambda^{-i})$. As one would expect, the more important anticipatory feelings are to an agent's welfare, and the lower the cost of mistakes, the more bad news will be repressed. The next result brings to light the key insight concerning the social determinants of wishful thinking.

**Proposition 1. (MAD principle)** (i) An agent's degree of realism $\lambda^i$ decreases with that of others, $\lambda^{-i}$ (substitutability), if $\theta_L > 0$, and increases with it (complementarity) if $\theta_L < 0$. (ii) $\lambda^i$ increases with the degree of spillovers $1 - \alpha$ if $\theta_L > 0$, and decreases if $\theta_L < 0$.

The intuition for what I shall term the "Mutually Assured Delusion" (MAD) principle is simple. If others' blindness to bad news leads them to act in a way that is better for an agent than if they were well informed ($\theta_L > 0$), it makes the news not as bad, thus reducing his own incentive to engage in denial. But if their avoidance of reality makes things worse than if they reacted appropriately to the true state of affairs ($\theta_L < 0$), future prospects become even more ominous, increasing the incentive to look the other way and take refuge in wishful thinking. In the first case, individuals' ways of thinking are strategic substitutes, in the latter they are strategic complements. It is worth emphasizing that this "psychological multiplier", less than 1 in the first case and greater in the second, arises even though agents' payoffs are separable and there is no scope for social learning. Proposition 1 shows that the scope for contagion hinges on whether overoptimism has positive or negative spillovers.
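The best-response rule of Lemma 1 and the sign flip of Proposition 1 can be checked numerically. The sketch below implements equation (9) with $\chi = 1$; all parameter values are hypothetical.

```python
# Numerical sketch of best-response awareness (Lemma 1) and the MAD
# principle (Proposition 1). All parameter values are hypothetical.
def psi(lam_i, lam_o, s, theta_L, q=0.7, theta_H=2.0,
        alpha=0.5, delta=0.9, c=0.45, m=0.01):
    """Net incentive for denial, Psi(lam_i, s | lam_o), equation (9)."""
    r = q / (q + (1 - q) * (1 - lam_i))           # posterior (6), chi = 1
    gain = delta * s * r * ((1 - alpha) * lam_o * theta_L
                            + (theta_H - theta_L))
    return -m - delta * (c - (s + delta) * alpha * theta_L) + gain

def best_response(lam_o, s, theta_L):
    """Agent i's optimal awareness rate given others' rate lam_o."""
    if psi(1.0, lam_o, s, theta_L) <= 0:
        return 1.0                                 # full realism
    if psi(0.0, lam_o, s, theta_L) >= 0:
        return 0.0                                 # full denial
    lo, hi = 0.0, 1.0                              # Psi increases in lam_i
    for _ in range(60):                            # bisect Psi = 0
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if psi(mid, lam_o, s, theta_L) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# theta_L < 0: complements -- more realism by others raises my own.
print(best_response(1.0, 0.4, -0.5) > best_response(0.0, 0.4, -0.5))  # True
# theta_L > 0: substitutes -- more realism by others lowers my own.
print(best_response(1.0, 0.15, 0.5) < best_response(0.0, 0.15, 0.5))  # True
```

The bisection exploits the monotonicity of $\Psi$ in $\lambda^i$ noted in the text, which guarantees a unique personal equilibrium.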
Examples of both types of interaction are provided below, using financial institutions as the main illustration.

- **Limited-stakes projects, public goods:** $\theta_L > 0$. The first scenario characterizes activities with limited downside risk, in the sense that pursuing them remains socially desirable for the organization even in the low state where the private return falls short of the cost. This corresponds for instance to a bank's employees issuing "plain vanilla" mortgages or lending to safe, brick-and-mortar companies, activities that remain generally profitable even in a mild recession, though less so than in a boom. Other areas in which an individual's motivation and "can-do" optimism are always valuable to others include team sports, political mobilization and other forms of good citizenship.

- **High-stakes projects:** $\theta_L < 0$. The second scenario corresponds to ventures in which the downside is severe enough that persisting has *negative social value* for the organization. The archetype is a firm like Enron, Lehman Brothers, Citigroup or AIG, whose high-risk strategy could be either extremely profitable (state $H$) or dangerously misguided (state $L$), in which case most stakeholders are likely to bear heavy losses: layoffs, firm bankruptcy, evaporated stock values, pensions and reputations, costly lawsuits or even criminal prosecution. In such contexts, the greater is other players' tendency to ignore danger signals about "tail risk" and forge ahead with the strategy (accumulating yet more subprime loans and CDOs on the balance sheet, increasing leverage, setting up new off-the-books partnerships), the deeper and more widespread the losses will be if the scheme was flawed, the assets "toxic", or the accounting fraudulent. Therefore, when red flags start mounting, the greater is the temptation for everyone whose future is tied to the firm's fate to also look the other way, engage in rationalization, and "not think about it".
The proposition's second result shows how cognitive interdependencies (of both types) are amplified, the more closely tied an individual's welfare is to the actions of others. Groupthink is thus most important for closed, cohesive groups whose members perceive that they largely share *a common fate* and have few exit options. This is in line with Janis' (1972) findings, but with a more operational notion of "cohesiveness", $1 - \alpha$. Such vesting can be exogenous or arise from a prior choice to join the group, in which case wishful beliefs about its future prospects also correspond to ex-post rationalizations of a sunk decision.

\(^{28}\)Enron's employees, whose pension portfolios had on average 58% in company stock, could have moved out at nearly any point, but most never did (Samuelson (2001)). At Bear Stearns, 30% of the stock was held until the last day by employees (with presumably good access to diversification and hedging instruments), who thus lost their capital together with their job. The pattern was similar at many other financial institutions.

\(^{29}\)This intuition is reflected in (9), through the term $(1 - \alpha)\lambda^{-i}\theta_L$. A lower $\alpha$ also increases the cost of suboptimal effort when $\theta_L > 0$ and lowers it when $\theta_L < 0$, reinforcing this effect (term $c - (\delta + s)\alpha\theta_L$).

\(^{30}\)Such a prior investment stage is modeled in Section 5, in the context of asset markets.

### 2.3. Social cognition

I now solve for a full social equilibrium in cognitive strategies, looking for fixed points of the mapping $\lambda^{-i} \rightarrow \lambda^i$. The main intuition stems from Proposition 1 and is illustrated by the solid lines in Figures 2 and 3. From (10)-(11), $\lambda = 1$ is an equilibrium (realism is the best response to realism) for $s \leq \underline{s}(1)$, and similarly $\lambda = 0$ is an equilibrium (denial is the best
response to denial) for $s \geq \bar{s}(0)$, where \begin{align} \underline{s}(1) &= \frac{m/\delta + c - \delta \alpha \theta_L}{\theta_H}, \\ \bar{s}(0) &= \frac{m/\delta + c - \delta \alpha \theta_L}{\alpha \theta_L + q \Delta \theta}. \end{align} When $\theta_L > 0$ (cognitive substitutes), $\underline{s}(\lambda^{-i})$ and $\bar{s}(\lambda^{-i})$ are both decreasing in $\lambda^{-i}$, so $\underline{s}(1) < \bar{s}(1) < \bar{s}(0)$ and the two pure equilibria correspond to distinct ranges. When $\theta_L < 0$ (cognitive complements), on the other hand, both thresholds are increasing in $\lambda^{-i}$, and if that effect is strong enough one can have $\bar{s}(0) < \underline{s}(1)$, creating a range of overlap. **Proposition 2. (Groupthink)** (i) If the following condition holds, \begin{equation} (1-q)(\theta_H - \theta_L) < (1-\alpha)(-\theta_L), \end{equation} then $\bar{s}(0) < \underline{s}(1)$ and for any $s$ in this range, both realism ($\lambda = 1$) and collective denial ($\lambda = 0$) are equilibria, with an unstable mixed-strategy equilibrium in between. Under denial agents always choose $e^j = 1$, even when it is counterproductive. (ii) If (14) is reversed, $\underline{s}(1) < \bar{s}(0)$. The unique equilibrium is $\lambda = 1$ to the left of $(\underline{s}(1), \bar{s}(0))$, a declining function $\lambda(s)$ inside the range, and $\lambda = 0$ to the right of it. (iii) The same results characterize the equilibrium set as a function of $c$, with a nonempty range of multiplicity $[\bar{c}(1), \underline{c}(0)]$ if and only if (14) holds. Condition (14) reflects the MAD principle at work. The left-hand side is the basic incentive to think that actions are highly productive ($\theta_H$ rather than $\theta_L$) when there are no spillovers ($\alpha = 1$) or, equivalently, fixing everyone else’s behavior at $e = 1$ in both states.
The right-hand side corresponds to the expected losses –relative to what the correct course of action would yield– inflicted on an agent by others’ delusions, and which he can (temporarily) avoid recognizing by denying the occurrence of the bad state altogether. These endogenous losses, which transform reality from second best to third best, must be of sufficient importance relative to the first, unconditional, motive for denial. - **Comparative statics.** The proposition also yields several testable predictions. First, there is the stark reversal in how agents respond to others’ beliefs (or actions) depending on the sign of $\theta_L$. Second, complete comparative statics on the equilibrium set are obtained. Focusing on the more interesting case where (14) holds: (a) The more vested in the group outcome are its members, the more likely is collective denial – a form of *escalating commitment*: as $1-\alpha$ increases, both $\bar{s}(0)$ and $\underline{s}(1)$ decrease (since $\theta_L < 0$) and therefore so do the highest and lowest equilibrium values of $\lambda$. In particular, it is easy to find (Corollary 1 in the Appendix) a range of parameters for which an isolated agent *never* self-deceives, but when interacting with others, all of them *always* do so. (b) A more desirable high state $\theta_H$ has the same effects. A more likely one (higher $q$) also lowers the equilibrium threshold for $\lambda = 0$, but leaves that for $\lambda = 1$ unchanged; consequently, it expands the range where multiplicity occurs. (c) A worse low state $\theta_L$ has two effects. First, the private cost of a wrong decision rises, making a realistic equilibrium easier to sustain as there is no harmful delusion of others to “escape from”: $\underline{s}(1)$ increases. 
When others are in denial, however, a lower $\theta_L$ also worsens the damage they do.\footnote{From (13), $\text{sgn}\{\partial \bar{s}(0)/\partial \theta_L\} = \text{sgn}\{1/\alpha - 1/q - \delta \theta_H/(m/\delta + c)\}$, with $1/\alpha - 1/q > 0$ by (14).} If $1/\alpha - 1/q$ is small, this effect is dominated by the previous one, so $\bar{s}(0)$ increases: sufficiently bad news will force people to “snap out” of collective delusion. With closely tied fates or high priors ($1/\alpha - 1/q$ large enough), on the other hand, the “scaring” effect dominates. Thus $\bar{s}(0)$ decreases, the range of multiplicity widens, and a worsening of bad news can now cause a previously realistic group to take refuge in groupthink. - **Implications.** The types of enterprises most prone to collective delusions are thus: (a) Those involving new and complex technologies or products that combine a generally profitable upside with a lower-probability but potentially disastrous downside – a “black swan” event. High-powered incentives, such as performance bonuses affected by common market uncertainty, have similar effects, as do highly leveraged investments that put the firm at risk of bankruptcy. (b) Those in which participants have only *limited exit options* and, consequently, a lot riding on the soundness or folly of others’ judgements. Such dependence typically arises from irreversible or illiquid *prior investments*: specific human capital, company pension plan, professional reputation, etc.
Alternatively, it could reflect the *large-scale* public good nature of the problem: state of the economy, quality of the government or other society-wide institutions which a single individual has little power to affect, global warming, etc.\footnote{This point is pursued in Bénabou (2008), where I study the dynamics of country-level ideologies concerning the relative efficacy of markets and governments.} Finally, the model shows how a propensity to “can-do” optimism (high $s$) can be very beneficial at the entrepreneurial stage—starting a business, mobilizing energies around a new project ($\theta_L > 0$)—but turn into a source of danger once the organization has grown and is involved in more high-stakes ventures (e.g., a mean-preserving spread in $\theta$, with $\theta_L < 0$).\footnote{Similarly, through most of human history collective activities (hunting, foraging, fighting, cultivation) were typically characterized by $\theta_L > 0$, making group morale valuable and susceptibility to optimism (a high $s$ or low $m$) an evolutionarily advantageous trait. (For a related account, see von Hippel and Trivers (2011).) Modern technology and finance now involve many high-stakes activities ($\theta_L << 0 << \theta_H$), for which those same traits can be a source of trouble. With leverage, for instance, payoffs become $\theta'_H \equiv \theta_H + B(\theta_H - R)$ and $\theta'_L \equiv \theta_L + B(\theta_L - R)$, where $B$ is borrowing and $R \in (\theta_L, \theta_H)$ the gross interest rate.} ### 2.4. Asymmetric roles: hierarchies and corporate culture I now relax all symmetry assumptions, as well as the state-invariance of payoffs to “inaction” ($e = 0$). I then use this more general framework to show how, in hierarchical organizations, cognitive attitudes will “trickle down” and subordinates follow their leaders into realism or denial.
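Before developing the asymmetric framework, the symmetric cutoffs $\underline{s}(1)$ and $\bar{s}(0)$ and condition (14) of Proposition 2 can be checked numerically. A minimal sketch (all parameter values below are illustrative, not calibrated to anything in the paper): with $\theta_L < 0$ and (14) satisfied, $\bar{s}(0) < \underline{s}(1)$, so both collective realism and collective denial are equilibria for $s$ in between.

```python
# Numerical sketch (hypothetical parameters) of the realism/denial cutoffs in
# Proposition 2. With cognitive complements (theta_L < 0), condition (14)
# makes the cutoffs overlap, producing a groupthink region of multiplicity.

def cutoffs(m, delta, c, alpha, q, theta_H, theta_L):
    num = m / delta + c - delta * alpha * theta_L
    s_realism = num / theta_H                                     # s_(1): realism an equilibrium iff s <= s_(1)
    s_denial = num / (alpha * theta_L + q * (theta_H - theta_L))  # s_bar(0): denial an equilibrium iff s >= s_bar(0)
    return s_realism, s_denial

m, delta, c = 0.5, 1.0, 0.3
alpha, q = 0.2, 0.9            # low alpha: strong vesting in others' actions
theta_H, theta_L = 1.0, -2.0   # high-stakes case

cond14 = (1 - q) * (theta_H - theta_L) < (1 - alpha) * (-theta_L)
s1, s0 = cutoffs(m, delta, c, alpha, q, theta_H, theta_L)
print(f"condition (14) holds: {cond14}")
print(f"s_bar(0) = {s0:.3f} < s_(1) = {s1:.3f}: multiplicity on ({s0:.3f}, {s1:.3f})")
```

Raising $1-\alpha$ or making $\theta_L$ more negative widens this overlap, in line with the comparative statics above.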
Let the payoff structure (1) be extended to \begin{equation} U^i_2 \equiv \sum_{j=1}^{n} \left( a^{ji}_\sigma \ e^j + b^{ji}_\sigma \ (1 - e^j) \right), \text{ for all } i = 1, \ldots n \text{ and } \sigma \in \{H, L\}. \end{equation} Each agent $j$’s choice of $e^j = 1$ thus creates a state-dependent value $a^{ji}_\sigma$ for agent $i$, while $e^j = 0$ generates value $b^{ji}_\sigma$; for $i = j$, these correspond to agent $i$’s private returns to action and inaction. All payoffs remain linearly separable for the same expositional reason as before, but complementarities or substitutabilities are easily incorporated (see Section 2.5). Agents may also differ in their preference and cognitive parameters $c^i, m^i, \delta^i$, their proclivity to anticipatory feelings $s^i$ or even their priors $q^i$. The generalization of (3) is then \begin{equation} a^{ii}_L - b^{ii}_L < \frac{c^i}{s^i + \delta^i} < q^i \left( a^{ii}_H - b^{ii}_H \right) + (1 - q^i) \left( a^{ii}_L - b^{ii}_L \right), \end{equation} while that of $\theta_H > \theta_L$ ($H$ is the better state under full information), is \begin{equation} \sum_{j=1}^{n} a^{ji}_H > \sum_{j=1}^{n} b^{ji}_L. \end{equation} Following the same steps as in the symmetric case and denoting $\Lambda^{-i}$ the vector of other agents’ strategies, it is easily seen that agent $i$’s best response $\lambda^i$ is similar to that in Lemma 1, but with the cutoffs for realism and denial now given by \begin{align} \underline{s}^i(\Lambda^{-i}) &\equiv \frac{m^i/\delta^i + c^i - \delta^i(a_L^{ii} - b_L^{ii})}{\sum_{j=1}^{n}(a_H^{ji} - a_L^{ji}) + \sum_{j \neq i}\lambda^j(a_L^{ji} - b_L^{ji}) + a_L^{ii} - b_L^{ii}}, \\ \bar{s}^i(\Lambda^{-i}) &\equiv \frac{m^i/\delta^i + c^i - \delta^i(a_L^{ii} - b_L^{ii})}{q\left[\sum_{j=1}^{n}(a_H^{ji} - a_L^{ji}) + \sum_{j \neq i}\lambda^j(a_L^{ji} - b_L^{ji})\right] + a_L^{ii} - b_L^{ii}}. 
\end{align} Thus $\lambda^i$ is (weakly) increasing in $\lambda^j$, representing cognitive complementarity, whenever $a_L^{ji} - b_L^{ji} < 0$, meaning that $j$’s delusions (leading to $e^j = 1$ when $\sigma = L$) are harmful to $i$; conversely, $a_L^{ji} - b_L^{ji} > 0$ leads to substitutability. This is a bilateral version of the MAD principle. Similarly, agent $i$ is more likely to engage in denial when surrounded by deniers ($\lambda^j \equiv 0$) than by realists ($\lambda^j \equiv 1$) if and only if $\sum_{j \neq i}(a_L^{ji} - b_L^{ji}) < 0$, meaning that others’ mistakes are harmful on average, and generalizing $\theta_L < 0$. Multiple equilibria occur when this (expected) loss is sufficiently large relative to the “unconditional” incentive to deny: \begin{equation} (1-q)\sum_{j=1}^{n}(a_H^{ji} - a_L^{ji}) < \sum_{j \neq i}(b_L^{ji} - a_L^{ji}), \end{equation} which clearly generalizes (14). **Proposition 3. (Organizational cultures)** Let (16)-(20) hold for all $i = 1, \ldots, n$. There exists a non-empty range $[\bar{s}^i(0), \underline{s}^i(1)]$ (respectively, $[\bar{c}^i(1), \underline{c}^i(0)]$) for each $i$, such that if $(s^1, \ldots s^n) \in \Pi_{i=1}^{n}[\bar{s}^i(0), \underline{s}^i(1)]$ (respectively, if $(c^1, \ldots c^n) \in \Pi_{i=1}^{n}[\bar{c}^i(1), \underline{c}^i(0)]$), both collective realism ($\lambda^i \equiv 1$) and collective denial ($\lambda^i \equiv 0$) are equilibria.\footnote{As usual, there is also an odd number of mixed-strategy equilibria in-between. I do not focus on these, as they are complicated to characterize (especially with asymmetric agents) and do not add any insight.} - **Directions of cognitive influence.** Going beyond multiplicity, interesting results emerge for organizations in which members play asymmetric roles.
Thus, (18)-(19) embody the intuition that an agent’s way of thinking is most sensitive to how the people whose decisions have the greatest impact on his welfare (in state $L$) deal with unwelcome news: \begin{equation} \left|\frac{\partial \underline{s}^i}{\partial \lambda^j}\right| >> \left|\frac{\partial \underline{s}^j}{\partial \lambda^i}\right| \quad \text{and} \quad \left|\frac{\partial \bar{s}^i}{\partial \lambda^j}\right| >> \left|\frac{\partial \bar{s}^j}{\partial \lambda^i}\right| \quad \text{iff} \quad \frac{|b_L^{ji} - a_L^{ji}|}{|b_L^{ij} - a_L^{ij}|} >> \max\left\{\left(\frac{\underline{s}^j}{\underline{s}^i}\right)^2, \left(\frac{\bar{s}^j}{\bar{s}^i}\right)^2\right\}. \end{equation} This condition is ensured in particular when \(|a_L^{ij} - b_L^{ij}| << |a_L^{ii} - b_L^{ii}|\) and \[ b_L^{ji} - a_L^{ji} >> \max \left\{ \sum_{k \neq i, j} |a_L^{ki} - b_L^{ki}|, \quad |a_L^{ii} - b_L^{ii}|, \quad \sum_{k=1}^{n} |a_H^{ki} - a_L^{ki}| \right\}. \] Consider, for instance, the simplest form of hierarchy: two agents, 1 and 2, such as a manager and worker. If \(a_L^{12} - b_L^{12}\) is sufficiently negative while \(|a_L^{21} - b_L^{21}|\) is relatively small, agent 2 suffers a lot when agent 1 loses touch with reality, while the converse is not true. Workers thus risk losing their job if management makes overoptimistic investment decisions, whereas the latter has little to lose if workers put in more effort than realistically warranted. When the asymmetry is sufficiently pronounced it leads to a testable pattern of predominantly top-down cognitive influences, illustrated in Figure 4. **Proposition 4.
(Cognitive trickle-down)** There exists a nonempty range of parameters such that \([\underline{s}^1(1), \bar{s}^1(0)] \subset [\underline{s}^2(0), \underline{s}^2(1)] \equiv S\) and, for all \((s^1, s^2) \in S \times S\), the equilibrium is unique and such that: (i) The qualitative nature of the manager’s cognitive strategy –complete realism, complete denial, or mixing– depends only on her own \(s^1\), not on the worker’s \(s^2\). (ii) If the manager behaves as a systematic denier (respectively, realist), so does the worker: where $\lambda^1 = 1$ it must be that $\lambda^2 = 1$, and similarly $\lambda^1 = 0$ implies $\lambda^2 = 0$. (iii) Only when both agents are in partial denial (between the two curves in Figure 4) does the worker’s degree of realism also influence that of the manager. Let agent 2 now be replicated into $n - 1$ identical workers, each with influence $[a_\sigma^{j1} e^j + b_\sigma^{j1} (1 - e^j)]/(n - 1)$ over the manager, but subject to the same influence from him as before, $a_\sigma^{1j} e^1 + b_\sigma^{1j} (1 - e^1)$. Figure 4 then remains operative, showing how the leader’s attitude toward reality tends to spread to all his subordinates, while being influenced by theirs only in a limited way, and over a limited range. This result has clear applications to corporate and bureaucratic culture, explaining how people will contagiously invest excessive faith in a leader’s “vision”. Likewise in the political sphere, a dictator need not exert constant censorship or constraint to implement his policies, as crazy as they may be: he can rely on people’s mutually reinforcing tendencies to rationalize as “not so bad” the regime they (endogenously) have to live with. The above is of course an oversimplified representation of an organization; yet the same principles will carry over to more complex hierarchies with multiple tiers (by “chaining” condition (21) across levels $i, j, k$, etc.), strategic interactions, control rights, transfer payments, etc.
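The trickle-down asymmetry can be illustrated with the realism cutoff (18). In the sketch below (hypothetical payoff numbers; for brevity all $b$’s are normalized to zero, so $a_L^{ji} - b_L^{ji}$ reduces to $a_L^{ji}$), the manager’s state-$L$ spillover on the worker is large while the reverse spillover is tiny, so the worker’s cutoff moves an order of magnitude more with $\lambda^1$ than the manager’s does with $\lambda^2$.

```python
# Sketch (illustrative payoff numbers, not from the paper) of the asymmetric
# realism cutoff (18) for a two-agent hierarchy: manager (1) and worker (2).
# The worker's cutoff shifts sharply with the manager's realism lambda1,
# while the manager's cutoff barely responds to the worker's lambda2 --
# the trickle-down pattern of Proposition 4.

def realism_cutoff(m, delta, c, a_self_H, a_self_L, a_other_H, a_other_L, lam_other):
    """underline{s}^i from (18), two agents, with all b's set to zero."""
    num = m / delta + c - delta * a_self_L
    den = ((a_self_H - a_self_L) + (a_other_H - a_other_L)
           + lam_other * a_other_L + a_self_L)
    return num / den

m, delta, c = 0.5, 1.0, 0.3
a12_H, a12_L = 0.5, -2.0     # manager -> worker spillover (large downside)
a21_H, a21_L = 0.1, -0.05    # worker -> manager spillover (small)
a_self_H, a_self_L = 1.0, -0.5

worker = [realism_cutoff(m, delta, c, a_self_H, a_self_L, a12_H, a12_L, lam)
          for lam in (0.0, 1.0)]
manager = [realism_cutoff(m, delta, c, a_self_H, a_self_L, a21_H, a21_L, lam)
           for lam in (0.0, 1.0)]
print(f"worker cutoff:  lambda1=0 -> {worker[0]:.3f}, lambda1=1 -> {worker[1]:.3f}")
print(f"manager cutoff: lambda2=0 -> {manager[0]:.3f}, lambda2=1 -> {manager[1]:.3f}")
```

With these numbers the worker’s cutoff rises by roughly ten times as much as the manager’s, mirroring the one-way arrows in Figure 4.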
Such extensions lie outside the scope of this paper and are left to future work. ### 2.5. Robustness While the benchmark model developed in this section involves a number of specific assumptions, the insights it delivers are very general. This subsection (which can be skipped) explains how the main results extend to a series of increasingly different settings. \footnote{In Rotemberg and Saloner (1993), a manager’s “vision” (prior beliefs or preferences that favor some activities over others) serves as a commitment device to reduce workers’ concerns about ex-post expropriation of their innovations. In Prendergast (1993), managers’ use of subjective performance evaluations to assess subordinates’ effort at seeking new information leads the latter to distort their reports in the direction of the manager’s (expected) signal. Both mechanisms thus lead workers to conform their behavior to managers’ prior beliefs. Unlike here, however, in neither case do they actually espouse those beliefs, nor would the manager ever want them to report anything but the truth. In Hermalin (1998), a leader with private information about the return to team effort works extra-hard to convince his coworkers to do so; the resulting separating equilibrium shifts up the whole profile of efforts (ameliorating the free-rider problem) but involves no mistaken belief by anyone. Manager and workers also share beliefs in Van den Steen (2005) but, rather than via learning, this arises from agents with diverse priors sorting themselves through the labor market. Managers with a strong “vision” thus tend to attract employees with similar priors, as this helps alleviate incentive and coordination problems within the firm.} - **Strategic interactions.** The focus has so far been on environments in which an agent’s welfare depends on others’ actions, but his return to acting does not.
Quite intuitively, strategic complementarities in payoffs will reinforce the tendency for contagion, whereas substitutabilities will work against it.\(^{36}\) To see this, let agent \(i\)’s expected payoff in state \(\sigma = H, L\) now be \(\Pi^i_\sigma(e^i, e^{-i})\), where \(e^{-i}\) denotes the vector of others’ actions; his incentive to act is then \(\pi^i_\sigma(e^{-i}) \equiv \Pi^i_\sigma(1, e^{-i}) - \Pi^i_\sigma(0, e^{-i})\). In state \(L\), the differential in \(i\)’s anticipatory value of denial that results from others’ “blind” persistence, previously given by \(-s(1-\alpha)\theta_L\), is now \(-s[\Pi^i_L(1, 0) - \Pi^i_L(1, 1)]\), which embodies the same MAD intuition. The new ingredient is that others’ persistence now also changes the return to investing in state \(L\) (previously a fixed \(\alpha\theta_L\)), by \(\pi^i_L(1) - \pi^i_L(0)\), with sign governed by \(\Sigma_{j \neq i} \frac{\partial^2 \Pi^i_L}{\partial e^i \partial e^j}\). When actions are complements, delusion is thus less costly if others are also in denial, whereas with substitutes (as in the asset market of Section 5) it is more costly. - **Signal structure.** Instead of “tuning out” bad news, selective awareness can take the form of spending resources to retain good ones –through rehearsal, preserving evidence, etc. This case, in which attention or recall is naturally imperfect but can be raised at some cost, is equivalent to setting \(m < 0\), with all key results unchanged. The use of binary signals and actions is also inessential: with a richer state space, self-deception takes the form of a partitional coarsening of signals, as is standard in models of communication. - **Sophistication.** While the model is an equilibrium one, strategic sophistication and common knowledge of rationality are inessential to the main results.
For denial to be contagious, for instance, an agent does not need to know *why* others around him are escalating a risky corporate strategy, or accumulating dubious assets (Section 5) in spite of mounting red flags. It suffices that he see that they do (\(e^j = 1\) when \(\sigma = L\)) and simply understand that this worsens his prospects: greater leverage implies a higher probability of firm bankruptcy if profits fall short, greater market buildup a deeper crash if fundamentals are weak, etc. Formally, the key property is that the slope of an agent’s cognitive best-response (\(\lambda^i\)) to others’ material actions (\(e^{-i}\)) in state \(L\) hinges on whether he is made better or worse off by their mistakes (e.g., the sign of $\theta_L$). \(^{36}\)Sources of complementarity may include technological gains from coordination or a desire for social conformity –whether intrinsic or resulting from sanctions imposed on norm violators. At the same time, without anticipatory feelings, preferences for late resolution of uncertainty or some other non-standard role of beliefs, no amount of complementarity can generate results similar to the model’s: agents with standard preferences, including “social” ones, always have (weakly) positive demand for knowledge and thus never engage in reality denial or information avoidance. A bounded-rationality version of the model, in which agents simply best-respond to past aggregate investment, is thus shown in the Appendix to yield results very similar to those of the fully rational case.\footnote{The same would be true with other standard specifications of adaptive learning, such as fictitious play (e.g., Fudenberg and Levine (1998)) or replicator dynamics.} - **Preferences and cognition.**
Most importantly, the model’s findings on cognitive influences (complements and substitutes, horizontal and vertical) and their determinants are *entirely independent* of the assumptions of anticipatory utility and malleable memory used to represent individual belief distortion. As shown below, replacing this specification with Kreps-Porteus (1978) preferences leads to closely related (but also complementary) results. More generally, the MAD idea provides a portable template for belief contagion that could be applied to any individual-decision model generating informational preferences. ## 3. Contagious ignorance: the role of risk In this section I derive versions of the MAD principle and groupthink results that are based on intertemporal risk attitudes rather than anticipatory utility, and where willful blindness takes the form of *ex-ante* information avoidance (not wanting to know) rather than *ex-post* belief distortion (reality denial). There are three reasons for doing so. First, as seen earlier, both types of behavior are observed in experiments and real-world situations. Second, the role of risk in cognitive distortions is of intrinsic interest. Finally, this will make clear that the paper’s results are not tied to any particular assumption about the individual motive for non-standard updating, nor the form that the latter takes. They concern instead the *social transmission of beliefs*, which a simple and general insight relates to the structure of interactions among agents. In the present case, it implies that willful ignorance will be contagious (complementarity) when its collateral effect is to *magnify the risks* borne by others, and self-dampening (substitutability) when it *attenuates* those risks. - **Technology.**
I maintain the general interaction structure of Section 2.4, which will bring to light most clearly the roles played by different types of risks.\footnote{In the restricted symmetric model of Section 2.1, by contrast, parameters such as $\theta_L$ or $\alpha$ affect both the variance and mean of payoffs. Thus, while results qualitatively similar to those of Proposition 6 can be obtained, they are not easily interpretable and the conditions required are much more constraining.} For simplicity, all payoffs are now received in the last period ($t = 2$), with\footnote{Any costs incurred in period 1 are thus “folded into” the final payoffs, with appropriate discounting.} \begin{align*} (23) \quad a_H^{ii} - b_H^{ii} &> 0 > a_L^{ii} - b_L^{ii} \equiv -f_L^i, \\ (24) \quad qa_H^{ii} + (1-q)a_L^{ii} &> qb_H^{ii} + (1-q)b_L^{ii}, \\ (25) \quad d_L^i &\equiv \sum_{j \neq i} (b_L^{ji} - a_L^{ji}), \\ (26) \quad A_H^i &\equiv \sum_{j=1}^n a_H^{ji} \gtrless \sum_{j=1}^n b_L^{ji} \equiv B_L^i. \end{align*} The first equation specifies that the privately optimal action for agent $i$ is $e^i = 1$ in state $H$ and $e^i = 0$ in state $L$. The second one implies that when uninformed, a risk-neutral agent will choose $e^i = 1$; if the state turns out to be $L$, he then incurs a loss of $f_L^i > 0$ ($f$ stands for “fault”). The third equation defines the total impact on agent $i$ that results when everyone else chooses $e^j = 1$ in state $L$, which they will do if uninformed. The most natural case is that where $d_L^i \geq 0$ (so $d$ stands for collateral “damage”), but I also allow $d_L^i < 0$. The last equation compares which of state $H$ or $L$ is better for agent $i$ when everyone is informed; the most plausible case is $A_H^i > B_L^i$, but this is not required for any of the results. - **Preferences.** I simply replace the combination of anticipatory preferences and malleable memory used so far with Kreps-Porteus (1978) preferences.
Thus, at date 1 agents evaluate final lotteries according to an expected utility function $U_1 = E_1[u(x)]$, and at date 0 they evaluate lotteries over date-1 utilities $U_1$ according to an expected utility function $E_0[v(U_1)]$. Expectations are now standard rational forecasts (there is no forgetting) and agents’ only informational choice is *whether or not to learn* the signal $\sigma = H, L$ at $t = 0$. Both options are taken to be costless, but it would be trivial to allow for positive costs of becoming informed or remaining uninformed. For comparability with the previous results I take agents to be risk-neutral at date 1, $u(x) \equiv x$. The function $v(x)$, on the other hand, is strictly concave, generating a *ceteris paribus* preference for the *late resolution of uncertainty*. To avoid corner solutions I take $v(x)$ to be defined over all of $\mathbb{R}$, and for some results will also require (without much loss of generality) that there exist $\gamma > 1$ and $\gamma' > 1$ such that\footnote{For instance, $v(x) = 1 - \gamma + \gamma (x+1)^{1/\gamma}$ for $x \geq 0$, $v(x) = 2 - (1-x/\gamma)^\gamma$ for $x \leq 0$. More generally, any strictly increasing and concave function $v(x)$ defined on $\mathbb{R}_+$, with $0 < v'(0) < +\infty$ and $v(+\infty) = +\infty$ can be extended by symmetry around the perpendicular to its tangent at $(0,v(0))$: for all $x \leq 0$, $v(x) \equiv v(0) - v'(0)v^{-1}(v(0) - v'(0)x)$. The assumptions $\text{supp}(v) = \mathbb{R}$ and (27) could also be substantially weakened.} \begin{equation} \lim_{x \to +\infty} [v(x)/x^{1/\gamma}] \quad \text{and} \quad \lim_{x \to -\infty} [-v(x)/(-x)^{\gamma'}] \quad \text{are well-defined and positive}. \end{equation} (Any period-1 effort costs are folded into these final payoffs: $a_H^{ii}$ here corresponds to $a_H^{ii} - c^i/\delta^i$ of Section 2.4.) At $t = 0$, when deciding whether or not to learn the state of the world, agents face a tradeoff between their preference for late resolution and the decision value of information.
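This tradeoff can be made concrete in a two-outcome sketch (illustrative numbers and an exponential $v$, chosen purely for convenience, not taken from the paper): with a strictly concave $v$ and no decision stakes, Jensen’s inequality makes ignorance strictly preferred; once the cost $f$ of acting wrongly in state $L$ is large enough, information wins.

```python
import math

# Minimal sketch of the date-0 tradeoff under Kreps-Porteus preferences:
# date-1 risk neutrality (u(x) = x) plus a concave v over date-1 expected
# payoffs creates a preference for late resolution, weighed against the
# decision value of knowing the state.

def value_of_ignorance(v, q, A_H, B_L, f):
    """E0-utility of staying uninformed minus that of learning sigma at t=0.
    Uninformed, the agent acts as if H and loses f when the state is L."""
    uninformed = v(q * A_H + (1 - q) * (B_L - f))   # one smooth prospect
    informed = q * v(A_H) + (1 - q) * v(B_L)        # early resolution of risk
    return uninformed - informed

v = lambda x: 1 - math.exp(-x)   # strictly concave: dislikes early resolution
q, A_H, B_L = 0.5, 2.0, 0.0
print(f"small stakes f=0.0: phi = {value_of_ignorance(v, q, A_H, B_L, 0.0):+.3f}")  # > 0: prefers not to know
print(f"large stakes f=2.0: phi = {value_of_ignorance(v, q, A_H, B_L, 2.0):+.3f}")  # < 0: prefers to know
```

The switch in sign as $f$ grows is exactly the threshold $\underline{f}^i$ defined below in (31).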
The novel feature of the problem considered here is that each one’s prospects also depend on how others act, and therefore on who else chooses to be informed or remain ignorant. - **The MAD principle for risks.** Consider an agent $i$ and let $d \in \mathbb{R}$ parametrize the losses he will incur due to the mistakes of those who choose $e^j = 1$ in state $L$. Thus $d = \sum_{j \in J} (b_L^{ji} - a_L^{ji})$, where $J$ denotes the uninformed subset. Agent $i$’s final payoffs are given by the lottery $\mathcal{I}(d)$ if he finds out the state at $t = 0$ and by $\mathcal{N}(d)$ if he does not, where: \begin{equation} \mathcal{I}(d) \equiv \begin{cases} q : & A_H^i \\ 1 - q : & B_L^i - d \end{cases}, \quad \mathcal{N}(d) \equiv \begin{cases} q : & A_H^i \\ 1 - q : & B_L^i - f_L^i - d \end{cases}. \end{equation} He therefore prefers to remain ignorant if \begin{equation} \varphi^i(d) \equiv v \left( qA_H^i + (1 - q) \left( B_L^i - f_L^i - d \right) \right) - qv(A_H^i) - (1 - q)v(B_L^i - d) > 0. \end{equation} Consider first the case in which everyone else is informed or, equivalently, agent $i$ is insulated from their mistakes.\footnote{For simplicity, agents here have a common prior, $q^i = q$. This can easily be relaxed, as in the previous section, and so can their having the same utility function $v$.} Thus $d = 0$, and he prefers to know the state if \begin{equation} \varphi^i(0) = v \left( qA_H^i + (1 - q) \left( B_L^i - f_L^i \right) \right) - qv(A_H^i) - (1 - q)v(B_L^i) < 0. \end{equation} Since $v$ is strictly increasing, this holds when faulty decisions are costly enough, \begin{equation} f_L^i > \underline{f}^i, \end{equation} where $\underline{f}^i > 0$ is defined by equality in (30). Consider now the role of $d$: as it rises, (28) makes clear how others’ ignorance renders agent $i$’s future more risky, increasing the variance in both feasible prospects $\mathcal{I}(d)$ and $\mathcal{N}(d)$.
This extra risk, which he cannot avoid, makes finding out whether the state is $H$ or $L$ more frightening and thus reduces his willingness to know. The following results, illustrated in Figure 5, characterize more generally each agent’s attitude towards information.\footnote{This corresponds in particular to a single agent facing payoffs given by (28) in which $d = 0$ and $A_H^i$, $B_L^i$ represent exogenous state-contingent prizes.} Lemma 2. The function $\varphi^i(d)$ is strictly quasiconvex, reaching a negative minimum at $$d_*^i \equiv -\left(A_H^i - B_L^i\right) + \left(\frac{1-q}{q}\right)f_L^i,$$ independent of $v(\cdot)$. Furthermore, if $v(\cdot)$ satisfies (27) then $\varphi^i(d) \to +\infty$ as $|d| \to +\infty$, so there exist finite thresholds $\underline{d}^i < d_*^i < \overline{d}^i$ such that $\varphi^i(d) > 0$ if and only if $d \notin [\underline{d}^i, \overline{d}^i]$. The intuition is clearest when $d$ is positive and relatively large, meaning that others’ mistakes impose nontrivial collateral damages in state $L$; this is also the most empirically relevant case. What matters is payoff risk, however, so information aversion also occurs when others’ ignorance has a sufficiently positive payoff—that is, when $d$ is negative enough.\footnote{Note also that $(\varphi^i)'(d) > 0$ on $\mathbb{R}_+$ as long as $d_*^i < 0$, or equivalently $qA_H^i + (1-q)(B_L^i - f_L^i) > B_L^i$. This condition is most plausible, as it means that a single risk-neutral agent at date 1 prefers the lottery $\mathcal{N}(0)$ to the degenerate one in which the state is $L$ with probability 1.
In the benchmark model of Section 2.1, for instance, $A_H^i = \theta_H - c/\delta^i$, $B_L^i = 0$ and $f_L^i = c^i/\delta^i - \alpha\theta_L$, so $d_*^i < 0$ is always implied by (24).} The size of the collateral stakes $|d|$, or more precisely its contribution to $|d - d_*^i|$, plays here the same role for agents who dislike variance in their date-1 utility $U_1^i$ as $d$ itself (or $-(1-\alpha)\theta_L$ in the symmetric case) played earlier for agents disliking a low level of $U_1^i$. The term $d_*^i$ corrects in particular for the fact that it is not just the sum of risks that matters, but also their correlation: remaining uninformed leads to a costly mistake ($f_L^i$) when $L$ occurs, which is also when the agent incurs $d$ from others’ ignorance.\footnote{This increases the value of information for $d > 0$ and lowers it for $d < 0$, thus raising the threshold $d_*^i$ beyond which higher $d$'s makes the agent less willing to become informed ($\varphi' > 0$). For $f_L^i = 0$, $|d - d_*^i| = |A_H^i - (B_L^i - d)|$ is just the spread in payoffs common to $\mathcal{I}(d)$ and $\mathcal{N}(d)$. Note also how the opposite roles of avoidable and unavoidable risks are reflected in $\varphi^i$, which is concave in $f_L^i$ and quasiconvex in $d$.} These results lead to a full characterization of agents’ cognitive best responses. Proposition 5 (MAD principle for risks). (i) Given any two subsets of agents $J$ and $J'$ not containing $i$, denote $d = \sum_{j \in J} (b_L^{ji} - a_L^{ji})$ and $d' = \sum_{j \in J'} (b_L^{ji} - a_L^{ji})$. Agent $i$’s incentive to avoid information is higher when the set of uninformed agents is $J'$ rather than $J$ if and only if $(d' - d)(d - d^i_*) > 0$. (ii) Let each agent be equally affected by the mistakes of all others: $b_L^{ji} - a_L^{ji} = d$ for all $i, j$ with $j \neq i$. 
The informational choices of all agents are strategic complements if $d$ lies outside the interval $[\min\{d^i_*, 0\}, \max\{d^i_*, 0\}]$, and strategic substitutes if it lies within. The first part of the proposition demonstrates the role of collateral risk most generally. First, if $J \subset J'$, more agents remaining ignorant make $i$ more averse to information when they add to the total risk he bears, in the sense of moving $d$ further away from $d^i_*$. Second, taking $J$ and $J'$ disjoint (for example, $i$’s hierarchical superiors and subordinates, respectively) shows that an agent’s wanting or not wanting to know is most sensitive to how the people whose ignorance imposes the greatest risk on him deal with uncertainty. This naturally leads, as in Section 2.4, to a trickle-down of attitudes towards information –from management to workers, political leader to followers, etc. The second part of the proposition is illustrated in Figure 5 by a simple rescaling of $d$. In this “horizontal” case the value of ignorance is $\varphi^i((1 - \lambda^{-i})d)$, where $1 - \lambda^{-i}$ is the fraction of others who choose to remain uninformed and $d$ is now the “normalized” damage. - **Groupthink as contagious ignorance.** When the total uncertainty he faces due to the ignorance of others ($d = d^i_L$ defined in (25)) is large enough, an agent who would otherwise have positive demand for information ($f^i_L > \underline{f}^i$) will prefer to also avoid learning the state of the world. Thus $\varphi^i(0) < 0 < \varphi^i(d^i_L)$, meaning that knowledge is a best reply to knowledge and ignorance a best reply to ignorance, in a manner that echoes Propositions 2 and 3. As a consequence, *risk also spreads* and becomes systemic throughout the organization. Proposition 6 (Endogenous systemic risk). Let (23), (24) and (31) hold for all $i$, and $v(\cdot)$ satisfy (27).
There exists a non-empty set $D^i \equiv (-\infty, \underline{d}^i] \cup (\bar{d}^i, +\infty)$ for each $i$, with $\underline{d}^i < 0 < \bar{d}^i$, such that if $(d^1_L, \ldots, d^n_L) \in \prod_{i=1}^n D^i$, both collective realism (every agent becoming informed at date 0) and collective willful ignorance (every agent choosing to remain uninformed) are equilibria. In the latter, each agent $i$’s willingness to pay to avoid information is positive and increasing in $|d^i_L|$ on each side of $D^i$. • **The role of risk preferences.** Given a structure of interactions, intuition suggests that for multiple regimes to arise, agents’ preference for late resolution should be neither too large nor too small. Indeed, if (29) (respectively, (30)) holds for some function $v$, it also holds for any $w$ that is increasing and more (respectively, less) concave.\footnote{By definition, $w$ is more concave than $v$ if $w = \omega \circ v$, for some increasing and concave function $\omega$.} **Proposition 7.** Let $\{v_\gamma(x), \gamma \geq 1\}$ be a family of concave functions on $\mathbb{R}$ such that $v_{\gamma'}$ is strictly more concave than $v_\gamma$ whenever $\gamma' > \gamma$. Given a payoff structure $(a^{ij}_\sigma, b^{ij}_\sigma)_{\sigma=H,L}^{i,j=1,\ldots,n}$ satisfying (23)-(26), there exists a range $[\underline{\gamma}, \bar{\gamma}]$ such that the informed and uninformed organizational equilibria coexist if and only if $\gamma \in [\underline{\gamma}, \bar{\gamma}]$.\footnote{For arbitrary $v_\gamma$’s and parameter configurations, the interval could be empty ($\underline{\gamma} > \bar{\gamma}$).
By Lemma 2, sufficient conditions for $\underline{\gamma} < \bar{\gamma} \leq +\infty$ are that: (i) $v_\gamma$ satisfy the asymptotic conditions (27) for at least one value of $\gamma$; (ii) for this $v_\gamma$, the threshold $\underline{f}^i$ is less than $f^i_L$, for all $i$; (iii) $|d^i_L - d^i_*|$ is large enough, for all $i$.} The bounds $\underline{\gamma}$ and $\bar{\gamma}$ can be derived explicitly in the case of quadratic utility: $v(x) = x - \gamma x^2/2$ for $x \in (-\infty, 1/\gamma)$. Conditions (29) and (30) then become \begin{align} \frac{2f^i_L}{\gamma} &< q \left( A^i_H - B^i_L + d^i_L + f^i_L \right)^2 - f^i_L \left( f^i_L - 2B^i_L + 2d^i_L \right), \\ \frac{2f^i_L}{\gamma} &> q \left( A^i_H - B^i_L + f^i_L \right)^2 - f^i_L \left( f^i_L - 2B^i_L \right), \end{align} which respectively define $\underline{\gamma}$ and $\bar{\gamma}$. Proposition 11, given in the Appendix, shows that $\underline{\gamma} < \bar{\gamma}$ and a range of equilibrium multiplicity exists, provided $|d^i_L|$ is large enough. • **Modeling choices.** Compared to anticipatory utility and imperfect recall, Kreps-Porteus preferences have the advantage of well-established axiomatic foundations. On the other hand, the results they lead to are much less tractable analytically. The thresholds determining equilibrium do not generally admit closed-form solutions, whereas in Propositions 1-3 they were obtained explicitly, with readily interpretable comparative statics. It may also be quite difficult for an agent to avoid informative signals, especially in a social context, so the relevant question is more often how to deal with the information one does have. From here on I therefore revert to the benchmark specification of Section 2.1, simply noting that for each application one could again derive parallel results based on risk attitudes. 4. 
Welfare, Cassandra’s curse and free speech protections

Are members of a group in collective denial worse or better off than if they faced the truth – as an alternative equilibrium or by means of some collective commitment mechanism? I adopt here the ex-ante, behind-the-veil perspective of organizational designers who could choose the structure of payoffs (activities, incentives, employees’ types) and information (hard or soft signals, treatment of dissenters) to maximize total surplus. Computing welfare as of $t = 0$ is also consistent with a revealed-preferences approach: from agents’ willingness to pay to ensure collective realism or denial, inferences can be made about their deep preference parameters, such as $s$. Consider first state $\sigma = L$. When agents are realists (setting $\lambda^j = 1$ in (7)), equilibrium welfare is $U_{L,R}^* = 0$. When they are deniers (setting $\lambda^j = 0$ in (8)), it is given by: $$U_{L,D}^*/\delta = -m/\delta - c + \delta \theta_L + s\left[q\theta_H + (1-q)\theta_L\right].$$ As illustrated in Figure 6, whether collective denial of bad news is harmful or beneficial thus depends on whether $s$ lies below or above the threshold $$s^* \equiv \frac{m/\delta + c - \delta \theta_L}{q\theta_H + (1-q)\theta_L}. $$ **Proposition 8.** Welfare following bad news (state $L$): 1. If $\theta_L < 0$, then $s^* > \max \{\bar{s}(0), \underline{s}(1)\}$. Whenever realism ($\lambda = 1$) is an equilibrium, it is superior to denial ($\lambda = 0$). Moreover, there exists a range in which realism is not an equilibrium but, if it can be achieved through collective commitment, yields higher welfare. 2. If $\theta_L > 0$, then $s^* < \bar{s}(0)$. The equilibrium involves excessive realism for $s \in (s^*, \bar{s}(0))$ and excessive denial for $s \in (\underline{s}(1), s^*)$, when this interval is nonempty.
Given how damaging collective delusion is in state $L$ with $\theta_L < 0$, it makes sense that when realism can also be sustained as an equilibrium it dominates, and that when it cannot the group may try to commit to it. Conversely, with $\theta_L > 0$, *boosting morale* in state $L$ ameliorates the *free-rider problem*, so the group would want to commit to ignoring adverse signals when $s \geq s^*$ but the only equilibrium involves realism.\footnote{One may nonetheless ask what would change if welfare were evaluated based on $U_1^i$ rather than $U_0^i$ (though it would then not be measurable through organizational-design decisions). This turns out to make no difference, apart from a trivial parameter renormalization: see footnote 50.}\footnote{If $\theta_L$ is high enough that $\delta \theta_L > c + m/\delta$, then $s^* < 0$: overoptimism in state $L$ is socially beneficial even absent anticipatory emotions ($s = 0$). A good example is team morale in sports.} Consider now welfare in state $H$. Given (3), everyone chooses $e^i = 1$ in both equilibria. Under denial, however, agents can never be sure of whether the state is truly $H$, or it was really $L$ and they censored the bad news. As a result of this “spoiling” effect, welfare is only \begin{equation} U_{H,D}^*/\delta = -c + \delta \theta_H + s \left[ q \theta_H + (1-q) \theta_L \right] < -c + (\delta + s) \theta_H = U_{H,R}^*/\delta. \end{equation} Averaging over the two states, finally, the mean belief about $\theta$ remains fixed (by Bayes’ rule), so the net welfare impact of denial, $\Delta W_0 \equiv q \left( U_{H,D}^* - U_{H,R}^* \right) + (1-q) \left( U_{L,D}^* - U_{L,R}^* \right)$, is just \begin{equation} \Delta W_0 = (1-q)\delta \left[ (\delta + s) \theta_L - c - m/\delta \right], \end{equation} realized in state $L$.
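The averaging step behind this expression can be checked numerically. The sketch below simply transcribes the four equilibrium welfare levels and verifies that their weighted average collapses to the closed form; all parameter values are illustrative, not taken from the paper.

```python
# Numerical check of the welfare decomposition: averaging the state-H and
# state-L welfare gaps from denial recovers
#   Delta W_0 = (1 - q) * delta * [(delta + s) * theta_L - c - m/delta].
# Parameter values are illustrative only.

def welfare_gap_direct(q, s, delta, c, m, th_H, th_L):
    mean = q * th_H + (1 - q) * th_L                 # fixed mean belief about theta
    U_HD = delta * (-c + delta * th_H + s * mean)            # denial, state H
    U_HR = delta * (-c + (delta + s) * th_H)                 # realism, state H
    U_LD = delta * (-m / delta - c + delta * th_L + s * mean)  # denial, state L
    U_LR = 0.0                                               # realism, state L
    return q * (U_HD - U_HR) + (1 - q) * (U_LD - U_LR)

def welfare_gap_closed_form(q, s, delta, c, m, th_H, th_L):
    return (1 - q) * delta * ((delta + s) * th_L - c - m / delta)

params = dict(q=0.7, s=0.5, delta=0.9, c=1.0, m=0.2, th_H=2.0, th_L=-1.0)
assert abs(welfare_gap_direct(**params) - welfare_gap_closed_form(**params)) < 1e-9
```

The anticipatory terms $s\,[q\theta_H + (1-q)\theta_L]$ appear with weight $q$ in state $H$ and $1-q$ in state $L$, which is exactly why they cancel against $q s \theta_H + (1-q) s \theta_L$ and drop out of the average.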
In assessing the overall value of social beliefs one can thus focus on material outcomes and ignore anticipatory feelings, which are much more difficult to measure but wash out across states of nature.\footnote{This is also true when evaluating (unconditional) utilities from the point of view of date 1. The welfare differential across denial and realistic group outcomes is then $\Delta W_1 = (1-q) \left[ (\delta + s) \theta_L - c \right]$, which just amounts to renormalizing $c$ to $c + m/\delta$ in $\Delta W_0/\delta$. Furthermore, $m$ can be taken (if desired) as arbitrarily small or even zero; see footnote 23.} **Proposition 9.** (1) Welfare following good news (state $H$) is always higher, the more realistic agents are when faced with bad news (the higher is $\lambda$). (2) If $\theta_L \leq 0$, denial always lowers ex-ante welfare. If $\theta_L > 0$, it improves it if and only if $(\delta + s)\theta_L > c + m/\delta$. These results, also illustrated in Figure 6, lead to a clear distinction between two types of collective beliefs and the settings that give rise to them. They are also testable, since $\Delta W_0$ measures agents’ willingness to pay (positive or negative) for organizational designs or commitment devices that ensure collective realism. - **Beneficial group morale.** When $\theta_L > 0$, $e = 1$ is socially optimal even in state $L$, but since $\alpha(s + \delta)\theta_L < c$ it is not privately optimal. If agents can all manage to ignore bad news at relatively low cost, either as an equilibrium or through commitment, they will be better off not only ex-post but also ex-ante: $\Delta W_0 > 0$. 
This is in line with a number of recent results showing the functional benefits of overoptimism (achieved through information manipulation or appropriate selection of agents by a principal) in settings where agents with the correct beliefs would underprovide effort.\footnote{In a team or firm context see, e.g., Bénabou and Tirole (2003), Fang and Moscarini (2005), Van den Steen (2005) and Gervais and Goldstein (2007). In a self-control context see Bénabou and Tirole (2002), and in an intergenerational context see Dessi (2008).} - **Harmful groupthink.** The novel case is the one in which contagious delusions can arise, $\theta_L < 0$, and it also leads to a more striking conclusion: not only can such reality avoidance greatly damage welfare in state $L$, but even when it improves it those gains are always dominated by the losses induced in state $H$: $\Delta W_0 < 0$.\footnote{The “shadow of doubt” cast over the good state by the censoring of the bad state could also distort some decisions in state $H$, given more than two action choices. If, on the other hand, agents are less than fully aware of their own tendency to self-deception, the losses in state $H$ are attenuated and ex-ante gains become possible. Thus, with $\chi < 1$ in (6), $q$ is simply replaced by $q/[q + \chi(1-q)]$ in (35) and (37), and $\Delta W_0$ is consequently augmented by $s\delta(1-\chi)q(1-q)/[q + \chi(1-q)]$.} This normative result also has positive implications for how organizations and polities deal with dissenters, revealing an important form of time inconsistency between ex-ante and ex-post attitudes. - **The curse of Cassandra.** Let $\theta_L < 0$ and consider a denial equilibrium, as in Figure 6. Suppose now that, in state $L$, an individual or subgroup with a lower $s$ or different payoffs attempts to bring the bad news back to everyone’s attention.
If this occurs after agents have sunk their investments, it simply amounts to deflating expectations in (2), so they will refuse to listen, or may even try to “kill the messenger” (pay a new cost to forget). Anticipating that others will behave in this way, in turn, allows everyone to more confidently invest in denial at $t = 0$. To avoid this deleterious outcome, organizations and societies will find it desirable to set up *ex-ante guarantees* such as whistle-blower protections, devil’s advocates, constitutional rights to free speech, independence of the press, etc. These will ensure that bad news will most likely “resurface” ex-post in a way that is hard to ignore, thus lowering the ex-ante return to investing in denial. Similar results apply if the dissenter arrives at an interim stage, after people have censored but before investments are made. For $s < s^*$ they should welcome the opportunity to correct course, but in practice this can be hard to achieve, as it requires full coordination. With payoff heterogeneity, dissenters’ motives may also be suspect. Things are even starker for $s > s^*$, meaning that people strongly value hope and dislike anxiety. Facing the truth (state $L$) now lowers everyone’s utility, generating a *universal unwillingness to listen* – the curse of Cassandra. Free-speech guarantees, anonymity and similar protections nonetheless remain *desirable ex-ante*, as they avoid welfare losses in state $H$ and, on average, save the organization or society from wasting resources on denial and repression.

5. Market exuberance

5.1. The dynamics of manias and crashes

I now consider delusions in asset markets. To take recent examples, state $H$ may correspond to a “new economy” in which high-tech startups will flourish and their prospects are best assessed using “new metrics”; to a permanent rise in housing values; or to any other positive and lasting shift in fundamentals.
Conversely, state $L$ would reflect an inevitable return to the “old” economy and valuations, the unsustainability of many adjustable-rate mortgages, no-doc loans and other subprime debt, or the presence of extensive fraud. Investors finding reasons to believe in $H$ even as evidence of $L$ accumulates corresponds to what Shiller (2005) terms “new-era thinking”, of which he relates many examples. This section provides the first analytical model of this phenomenon.\footnote{As explained earlier, neither rational bubbles nor informational cascades involve any element of wishful thinking, motivated rationalization or information avoidance. In both cases, all investors act exactly as a benevolent statistician would advise or allow them to.} To this end, I extend the basic framework in two ways, adding an ex-ante investment stage and deriving final payoffs from market prices: see Figure 7.\footnote{The initial investment stage is an example of endogenizing the degree (previously, $1 - \alpha$) of agents’ interdependence or “vesting” in the collective outcome.} A continuum of firms or investors $i$ can each produce $k^i \leq K$ units of a good or asset (housing, office space, mortgage-backed security, internet startup) in period 0 and an additional $e^i \leq E$ units in period 1, where $K$ and $E$ reflect capacity constraints or “time to build” technological limits. The cost of production in period 0 is set to 0 for simplicity, while in period 1 it is equal to $c$. All units are sold at $t = 2$, at which time the expected market price $P_\sigma(Q)$ will reflect total supply $Q \equiv \bar{k} + \bar{e} \in [0, K + E]$ and stochastic market conditions $\theta_\sigma$, with $\sigma = H, L$ and $P'_\sigma(Q) < 0$. Between the two investment phases agents all observe the signal $\sigma$, then decide how to process it, with the same information structure and preferences as before.
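The price mechanics of this setup can be sketched in a few lines. The linear inverse demand $P_\sigma(Q) = \theta_\sigma - bQ$ and every parameter value below are hypothetical; the text only requires $P'_\sigma(Q) < 0$.

```python
# Price impact of collective denial after bad news, in the Section 5.1 setup.
# Linear inverse demand and all parameter values are illustrative assumptions.

K, E = 10.0, 4.0              # period-0 and period-1 capacity limits
b = 0.05                      # assumed price sensitivity
theta_H, theta_L = 2.0, 1.0   # market conditions in states H and L

def price(theta, Q):
    """Date-2 price given market conditions theta and total supply Q."""
    return theta - b * Q       # P'_sigma(Q) = -b < 0, as required

# After the signal L: realists stop at Q = K, deniers push supply to K + E.
P_realism = price(theta_L, K)
P_denial = price(theta_L, K + E)
assert P_denial < P_realism    # continued investment deepens the crash
```

This already displays the externality at the heart of the contagion analysis that follows: others' denial raises total supply and thus lowers the price every holder of the period-0 stock $K$ will receive.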
The absence of an interim or futures market before date 2 is a version (chosen for simplicity) of the kind of “limits to arbitrage” commonly found in the finance literature. Specifically, I assume that: (i) goods produced in period 0 cannot be sold before period 2, for instance because they are still work-in-progress whose quality or market potential is not verifiable: startup company, unfinished residential development or office complex, new type of financial instrument, etc.; (ii) short sales are not feasible. Limited liquidity and arbitrage are empirically descriptive of the types of markets which the model aims to analyze.\footnote{Shiller (2003) cites several studies documenting the fact that short sales have never amounted to more than 2\% of stocks, whether in number of shares or value. Gabaix et al. (2007) provide specific evidence of limits to arbitrage in the market for mortgage-backed securities.} In the recent financial crisis, a dominant fraction of the assets held by major U.S. investment banks did not have an active trading market and objective price, but were instead valued according to the bank’s own models and projections, or even according to management’s “best estimates”.\footnote{Reilly (2007) reports that only 36\% of Lehman Brothers’ 2007-QII balance sheet and 18\% of Bear Stearns’ were Level 1 assets in the FASB nomenclature, namely those which “trade in active markets with readily available prices”. Level 2 assets (“mark to model”) accounted for 56\% and 74\% respectively, and Level 3 assets (“reflect management’s best estimates of what market participants would use in pricing the assets”) for 8\% in both cases. For Level 2, moreover, the major trading houses commonly used computer programs designed for “plain vanilla” loans to value novel and highly complex securities (Hansell (2008)).} Similarly, the notional value of outstanding Collateralized Debt Obligation (CDO) tranches stood in 2008 at about \$2 trillion worldwide, and that of Credit Default Swaps (CDS) at around \$50 trillion; and yet for most of them there was and still is no established, centralized marketplace where they could easily be traded. These are instead very illiquid (“buy and hold”) and hard-to-price assets: originating in private deals, highly differentiated and exchanged only over-the-counter.\footnote{In housing, the market for regional-index futures (Case-Shiller) is also still small and fairly illiquid.}

Suppose that, ex-ante, the market is sufficiently profitable that everyone invests up to capacity at the start of period 0: $k^j = \overline{k} = K$. Moreover, following (3), let $$P_L(K) < \frac{c}{s + \delta} < \frac{c}{\delta} < qP_H(K + E) + (1 - q)P_L(K + E). \tag{39}$$ It is thus a dominant strategy for an agent at $t = 1$ to invest the maximum $e^i = E$ if his posterior is no worse than the prior $q$, and to abstain if he is sure that the state is $L$. Consider now what unfolds when agents observe the signal $L$ at the end of period 0. - **Realism.** If market participants acknowledge and properly respond to bad news ($\lambda^i \equiv 1$) they will not invest further at $t = 1$, so the price at $t = 2$ will be $P_L(K)$. For an individual investor $i$ with stock $k^i$, the net effect of ignoring the signal is then $$U_{0,D}^i - U_{0,R}^i = -m + \delta \left[ (\delta + s)P_L(K) - c \right]E + \delta sr(\lambda^i) \left[ P_H(K + E) - P_L(K) \right](k^i + E). \tag{40}$$ The second term reflects the expected losses from investing at $t = 1$, while the last one represents the value of maintaining hope that the market is strong or will eventually recover, in which case total output will be $K + E$ and the price $P_H(K + E)$. Realism is an equilibrium if $U_{0,D}^i \leq U_{0,R}^i$ for $\lambda^i = 1$ and $k^i = K$, or $$s \leq \frac{m/\delta + [c - \delta P_L(K)]E}{[P_H(K + E) - P_L(K)](K + E) + P_L(K)E} \equiv \underline{s}(1).
\tag{41}$$ - **Denial.** If the other participants remain bullish in spite of adverse signals, they will keep investing at $t = 1$, causing the already weak market to crash: at $t = 2$, the price will fall to $P_L(K + E) < P_L(K)$. The net value of denial for investor $i$ is now \begin{equation} U_{0,D}^i - U_{0,R}^i = -m + \delta \left[ (\delta + s) P_L(K + E) - c \right] E \\ + \delta sr(\lambda^i) \left[ P_H(K + E) - P_L(K + E) \right] (k^i + E). \end{equation} In the second term, the expected losses from overinvestment are higher than when other participants are realists. Through this channel, which reflects the usual substitutability of investments in a market interaction, each individual’s cost of delusion increases when others are deluded. On the other hand, the third term makes clear that the psychological value of denial is also greater, since acknowledging the bad state now requires recognizing an even greater capital loss on preexisting holdings. This is again the MAD principle at work. Denial is an equilibrium if $U_{0,D}^i \geq U_{0,R}^i$ for $\lambda^i = 0$ and $k^i = K$, or \begin{equation} s \geq \frac{m/\delta + [c - \delta P_L(K + E)] E}{q \left[ P_H(K + E) - P_L(K + E) \right] (K + E) + P_L(K + E)E} \equiv \bar{s}(0). \end{equation} In such an equilibrium, each investor keeps optimistically accumulating assets that have in fact become “toxic”, both to his own balance sheet and to the market at large. When does other participants’ exuberance make each individual more likely to also be exuberant? Intuitively, contagion occurs when the substitutability effect, which bears on the marginal units $E$ produced in period 1, is dominated by the capital-loss effect on the outstanding position $K$ inherited from period 0. Formally, $\bar{s}(0) < \underline{s}(1)$ requires that $K$ be large enough relative to $E$, though not so large as to preclude (41). **Proposition 10. 
(Market manias and crashes)** If \begin{equation} P_H(K + E) \left( 1 + E/K \right) < c/\delta < P_H(K + E), \end{equation} there exists $q^* < 1$ such that, for all $q \in [q^*, 1]$, there is a non-empty interval for $s$ (or $c$) in which both realism and evidence-blind “exuberance” are equilibria, provided $m$ is not too large. Contagious exuberance leads to overinvestment, followed by a deep crash. The model provides a microfounded and psychologically based account of market groupthink, investment frenzies and ensuing crashes.\footnote{As always, equilibrium multiplicity represents more broadly the potential to greatly amplify small shocks, translating here into a “fragility” of the market to recurrent manias.} It also identifies key features of the markets prone to such cycles, distinguishing them from those studied in traditional models of bubbles or herding. First, there must be a “story” about shifts in fundamentals that is minimally plausible \textit{a priori} ($q$ must not be too low): technology, demographics, globalization, etc. The key result is that investors’ beliefs in the story can then quickly become resistant to any contrary evidence.\footnote{By contrast, in standard models of stochastic bubbles everyone realizes they are trading a “hot potato” whose value does not reflect any fundamentals, must eventually collapse and can do so at any instant. Limited liquidity also plays no role there, nor does it in models of herding.} Second, when the new opportunity first appears ($q$ rising above the threshold), there is an initial phase of investment buildup and rising price expectations.\footnote{In the interim period there is no objective market price, but all participants’ “mark to model” or “best estimates” values remain at $qP_H(K+E)+(1-q)P_L(K+E)$, which reflects only the increased prior $q$ instead of falling to the very low $P_L(K+E)$ actually warranted by the red flags which they are ignoring ($\sigma = L$).
Note also that the most economically important aspect of market manias is not price volatility or mispricing per se but the resulting misallocation of resources, which is what the present analysis focuses on.} Finally, the assets in question must involve both significant uncertainty and limited liquidity, as discussed earlier. These conditions are typical of assets tied to new technologies or financial instruments, whose potential will take a long time to be fully revealed. The model’s comparative statics also shed light on other puzzles. From (40)-(43), we have: (a) \textit{Escalating commitment} at the individual level: the more an agent has invested to date, the more likely he is to continue in spite of bad news, thus displaying a form of the \textit{sunk cost fallacy}: by (42), $\partial(U^i_{0,D} - U^i_{0,R})/\partial k^i > 0$. Moreover, while $k^i$ represents here an outstanding inventory or financial position, any other illiquid asset with market-dependent value, such as sector-specific human capital in banking or finance, has the same effect.\footnote{An initial stake raises the propensity to wishful exuberance, but is not a precondition. Equation (40) or (42) can be positive (for $\lambda^i = 0$) even with $k^i = 0$, given a sufficient sensitivity to anticipatory feelings, $s^i$.} (b) \textit{Market momentum}: the larger the market buildup ($k^{-i} = K$), the more likely is each agent to continue investing in spite of bad news, if demand is (sufficiently) less price sensitive in the low state than in the high one. Indeed, the incentive to discount bad news rises with prospective capital losses, which in a denial equilibrium are proportional to $P_H(K+E) - P_L(K+E)$ and therefore increasing in $K$ when $\partial^2 P/\partial Q \partial \theta > 0$. 
This occurs for instance with linear demand $Q(P,\theta) = \theta (a - bP)$, or when demand is concave and good fundamentals correspond to a scarcity of a close substitute: $P_\sigma(Q) = P(Q + Z(\theta_\sigma))$, with $Z', P', P'' < 0$.\footnote{By (42), $\partial(U^i_{0,D} - U^i_{0,R})/\partial K|_{e^j=E} > 0$ at $r(\lambda^i) = q$, so that agent $i$’s best response is $\lambda^i = 0$ (and $e^i = E$), if and only if $\left[ P_H'(K + E) - P_L'(K + E) \right] / \left[ -P_L'(K + E) \right] > \left[ (\delta + s)/sq \right] \left[ E/(k^i + E) \right]$. This inequality holds if $\partial^2 P/\partial Q \partial \theta$ is large enough and $k^i/E$ (equal to $K/E$ in equilibrium) high enough that the right-hand side is less than 1. With linear demand, it becomes $(\theta_H - \theta_L)/\theta_H > \left[ (\delta + s)/sq \right] \left[ E/(k^i + E) \right]$.} This simple asset-market model could be extended in several ways. First, in a dynamic context, outstanding stocks will result stochastically from the combination of previous investment decisions and demand realizations. Second, one could relax the strong form of limits to arbitrage imposed by the assumption that trades occur only at $t = 2$. Forward or short trades could instead involve transactions costs, risk due to limited market liquidity or, for large positions, an adverse price impact.\footnote{Trying to sell (or sell short) in period 1 could also be self-defeating, as it would reveal again to the market that the state is $L$, generating an immediate price collapse. For a model of how market thinness generates endogenous limits to arbitrage and delays in trade, see Rostek and Weretka (2008).} Finally, rather than ignoring red flags, the contagion analysis could be recast (as in Section 3) in terms of market participants’ unwillingness to seriously examine the true nature – investment-grade, or highly “toxic” – of the assets being accumulated.

5.2. Regulators, politicians and economists

Another set of actors with “value at risk” in an exuberant market are politicians and regulators, whose reputation and career will suffer if the disaster scenario (state $L$, worsened by market participants’ overinvestment) occurs. This should normally make them try to dampen the market’s enthusiasm, but if the buildup has proceeded far enough (high $K$) that large, economy-wide losses are unavoidable in the bad state, they will also become “believers” in a rosy future or smooth landing. Consequently, they will fail to take measures that could have limited (though not avoided) the damage, thus further enabling the investment frenzy and subsequent crash.\footnote{On serial blindness to red flags and deliberate information-avoidance by former Fed chairman Alan Greenspan and other top financial regulators, see Goodman (2008), SEC (2008, 2009) and Appendix D.} Some academics and policy advisers may also have intellectual capital vested in the virtues of unfettered financial markets: a severe crisis proving such faith to be excessive would damage its value and the general credibility of laissez-faire arguments, increasing demand for regulation in other parts of the economy.

6. Conclusion

This paper developed a model of how wishful thinking and reality denial spread through organizations and markets. The underlying mechanism does not rely on complementarities in technology or preferences, agents’ herding on a subset of private signals, or exogenous biases in inference. It is also quite robust to alternative ways of modeling the psychological motives and cognitive operations underlying individual belief distortion, as well as agents’ degree of sophistication.
This “Mutually Assured Delusion” principle is broadly applicable, helping to explain corporate cultures characterized by dysfunctional groupthink or valuable group morale, why willful ignorance and delusions flow down hierarchies, and the emergence of market manias sustained by “new-era” thinking, followed by deep crashes. In each of these applications, the institutional and market environment was kept simple, so as to make clear the workings of the underlying mechanism. Enriching these context-specific features would be valuable and permit new applications. For hierarchical organizations, richer payoff and information structures could be incorporated, along with greater heterogeneity of interests among agents. Potential applications include the spread of organizational corruption (e.g., Anand et al. (2005)), corporate politics (e.g. Zald and Berger (1998)) and organizational-design questions such as the optimal mix of agents, network structure and communication mechanisms (e.g. Calvó-Armengol et al. (2009), Van den Steen (2010)). For financial institutions, one could examine how different contractual and regulatory structures may create complementarities in their willingness to find out, or avoid finding out, the true quality of the assets on their balance sheets. A somewhat different class of collective delusions are mass panics and hysterias. While the model does generate episodes of excessive doubt, overcautiousness and even fatalistic apathy (see online Appendix B), these seem too mild to capture what goes on in a full-fledged panic.\footnote{Recall first that when agents censor bad news, they never fully believe in the good state ($\sigma = H$), even when it actually occurs ($r(\lambda^i) < 1$ for any $\chi > 0$). 
Second, investors who fear (perhaps from having been burned once) falling prey to the next wave of collective overoptimism will shy away from even positive expected-value investments (this occurs when condition (A.24) in the appendix is reversed).} Understanding the sources and transmission mechanisms that underlie delusional group pessimism, rather than optimism, is an interesting question for further research. Appendix A: Main Proofs In the proofs given here, I maintain the text’s focus on cognitive decisions in state $L$, fixing everyone’s recall strategy in state $H$ to $\lambda_H = 1$. In online Appendix C (Lemmas 5 and 6), I show that this is not a binding restriction: with the payoffs (1) there is no equilibrium with $\lambda_H < 1$ and no profitable individual deviation to $\lambda_H^i < 1$ from an equilibrium with $\lambda_H = 1$.\footnote{Under the very weak condition that each agent encodes his own information (for future recall) in a cost-effective manner, which Lemma 5 shows can always be ensured. This is seen most clearly for $\lambda_H^i = \lambda_L^i = 0$, which is informationally equivalent to $\lambda_H^i = \lambda_L^i = 1$ but wastes $m$ in each state.} These results, as well as Proposition 12, are proved using the more general specification \begin{equation} U_2^i \equiv \theta \left[ \alpha e^i + (1 - \alpha)e^{-i} \right] + \gamma, \end{equation} where $\gamma$, like $\theta$, is now also state-dependent and $\Delta \gamma \equiv \gamma_H - \gamma_L$ can be of either sign. **Proof of Proposition 1.** Parts (ii) and (iii) follow from the monotonicity of $\Psi$ in $\theta_L$ and $\alpha$. Note that no assumption of symmetry in strategies was imposed ($\lambda^{-i}$ could, a priori, be the mean of heterogenous recall rates). Therefore, the only equilibria are the symmetric ones described in the proposition. 
■ **Proof of Proposition 2.** By Lemma 1, $\lambda = 1$ is an equilibrium when $s \leq \underline{s}(1)$, or $\Psi(1, s|1) \leq 0$ and $\lambda = 0$ is an equilibrium when $s \geq \bar{s}(0)$, or $\Psi(0, s|0) \geq 0$. Finally, $\lambda \in (0, 1)$ is an equilibrium if and only if $\Psi(\lambda, s|\lambda) = 0$. Now, from (9) and (6), \begin{equation} \Psi(\lambda, s|\lambda) = -m/\delta - c + (\delta + s) \alpha \theta_L + sq \left( \frac{\Delta \theta + (1 - \alpha)\lambda \theta_L}{q + (1 - q)(1 - \lambda)} \right). \end{equation} This function is either increasing or decreasing in $\lambda$, depending on the sign of $(1 - \alpha)\theta_L + (1 - q) \Delta \theta$. One can also check, using (10)-(11), that the same expression governs the sign of $\underline{s}(1) - \bar{s}(0)$. The equilibrium set is therefore determined as follows: (a) If (14) does not hold, $\Psi(\lambda, s|\lambda)$ is increasing, so $\Psi(0, s|0) < \Psi(1, s|1)$, or equivalently $\underline{s}(1) < \bar{s}(0)$ by (10)-(11). There is then a unique equilibrium, equal to $\lambda = 1$ if $\Psi(1, s|1) \leq 0$, interior if $\Psi(0, s|0) < 0 < \Psi(1, s|1)$, and equal to $\lambda = 0$ if $0 < \Psi(0, s|0)$. (b) If (14) does hold, $\Psi(\lambda, s|\lambda)$ is decreasing, so $\Psi(1, s|1) < \Psi(0, s|0)$, or equivalently $\bar{s}(0) < \underline{s}(1)$ by (10)-(11). Then: (i) $\lambda = 1$ is the unique equilibrium for $\Psi(0, s|0) \leq 0$, meaning that $s \leq \bar{s}(0)$, while $\lambda = 0$ is the unique equilibrium for $\Psi(1, s|1) \geq 0$, meaning that \( s \geq \underline{s}(1) \); for \( \Psi(1, s|1) < 0 < \Psi(0, s|0) \), or \( \bar{s}(0) < s < \underline{s}(1) \), both \( \lambda = 1 \) and \( \lambda = 0 \) are equilibria, together with the unique solution to \( \Psi(\lambda, s|\lambda) = 0 \), which is interior. 
■ **Corollary 1.** Denote by \( \underline{s}(\lambda^{-i}, \alpha) \) and \( \bar{s}(\lambda^{-i}, \alpha) \) the thresholds respectively given by (10) and (11), and by \( \tilde{s} \equiv \underline{s}(\lambda^{-i}, 1) \), which is independent of \( \lambda^{-i} \). Let \( \alpha' < 1 \) be such that \( \delta [\alpha'\theta_L + \theta_H] > c \) and (14) holds. Then, for all \( m \) small enough, \( \bar{s}(0, \alpha') < \underline{s}(1, \alpha') < \tilde{s} \) and: (i) For \( \underline{s}(1, \alpha') < s < \tilde{s} \), \( \lambda = 1 \) is the unique equilibrium when \( \alpha = 1 \), and \( \lambda = 0 \) the unique equilibrium when \( \alpha = \alpha' \); (ii) For \( \bar{s}(0, \alpha') < s < \underline{s}(1, \alpha') \), \( \lambda = 1 \) is the unique equilibrium when \( \alpha = 1 \), and \( \{0, 1\} \) is the stable equilibrium set when \( \alpha = \alpha' \).

**Proof.** The fact that \( \bar{s}(0, \alpha') < \underline{s}(1, \alpha') \) is simply equation (14), while \( \underline{s}(1, \alpha') < \tilde{s} \) if \[ (A.3) \quad [m/\delta + c - \delta \alpha'\theta_L] [\alpha'\theta_L + \Delta \theta] < [m/\delta + c - \delta \theta_L] [\alpha'\theta_L + \Delta \theta + (1 - \alpha')\theta_L]. \] For \( m = 0 \), this becomes: \[ (c - \delta \alpha'\theta_L) (\alpha'\theta_L + \Delta \theta) < (\alpha'\theta_L + \Delta \theta + (1 - \alpha')\theta_L) (c - \delta \theta_L) \iff \] \[ (1 - \alpha')\delta \theta_L [\alpha'\theta_L + \Delta \theta] < (1 - \alpha')\theta_L (c - \delta \theta_L) \iff \delta [\alpha'\theta_L + \Delta \theta] > c - \delta \theta_L, \] since \( \theta_L < 0 \). Therefore, since \( \delta [\alpha'\theta_L + \theta_H] > c \) and \( \Delta\theta = \theta_H - \theta_L \), (A.3) holds for \( m \) small enough. With \( \alpha = 1 \), the uniqueness of equilibrium follows from \( s < \tilde{s} = \underline{s}(1, 1) \) and Proposition 2.2. With \( \alpha = \alpha' \), results (i) and (ii) respectively follow from parts 2 and 1 of Proposition 2.
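The $m = 0$ manipulation above can be spot-checked mechanically. In the sketch below (standard library only; the symbol ranges are arbitrary assumptions), the difference between the two sides is verified to factor as $(1 - \alpha')\theta_L\left[(c - \delta\theta_L) - \delta(\alpha'\theta_L + \Delta\theta)\right]$ at random parameter draws:

```python
# Spot-check of the m = 0 algebra in the proof of Corollary 1: the difference
# between the two sides of the displayed inequality should factor as
# (1 - a)*tL*[(c - d*tL) - d*(a*tL + dT)], where a = alpha', tL = theta_L,
# dT = Delta(theta), d = delta. The random draws are purely illustrative.
import random

random.seed(0)
for _ in range(1000):
    c, d, a, tL, dT = (random.uniform(-3, 3) for _ in range(5))
    lhs = (c - d * a * tL) * (a * tL + dT)
    rhs = (a * tL + dT + (1 - a) * tL) * (c - d * tL)
    factored = (1 - a) * tL * ((c - d * tL) - d * (a * tL + dT))
    assert abs((rhs - lhs) - factored) < 1e-9
print("factorization verified")
```

Since $\theta_L < 0$, the factored form makes the final inequality flip transparent.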
■ **Proof of Proposition 3.** Setting \( \lambda^j \equiv 1 \) in (18) and \( \lambda^j \equiv 0 \) in (19) yields the result. ■

**Adaptive-learning version of the model.** Let the game summarized by Figure 1 be repeated many times, and index those where state \( L \) occurs by \( \tau \in \mathbb{N} \). At any stage \( t = 1 \), agent \( i \)'s optimal decision depends only on his own belief about \( \theta \). At stage \( t = 0 \), by (1)-(2) the only aspect of other agents' play affecting his future payoffs is the aggregate action \( e_\tau^{-i} \) they will choose at \( t = 1 \), impacting him by \( (1 - \alpha)\theta e_\tau^{-i} \). Instead of forecasting \( e_\tau^{-i} \) by using as before the equilibrium cognitive response \( \lambda_\tau^{-i} \) to \( \sigma = L \), let each agent now simply best-respond to the aggregate investment level \( e_{\tau-1} \) observed in the previous (similar) round.\(^{68}\) For simplicity, and without loss of generality, assume also that: \(^{68}\)The state \( \sigma \) drawn in any repetition of the stage game is also assumed to be observable ex-post (at stage \( t = 2 \), when material payoffs are realized), even by those who temporarily forgot it. Such ex-post observability is in any case irrelevant for full groupthink (\( \lambda^j \equiv 0 \)), where everyone invests in both states. (i) Consistently with the idea of bounded rationality, agents are unsophisticated about their own cognitive processes, as they are with respect to those of others: $\chi = 0$ in (6); (ii) Agents form a continuum, with parameter $s$ distributed according to $F(s)$ on $[s_{\text{min}}, s_{\text{max}}]$; heterogeneity could also be with respect to $c$ or $\theta_H$, or idiosyncratic signals about these variables. The continuum assumption will "smooth out" best responses and also equate $e^{-i}_\tau$ with the aggregate response (including $i$'s), denoted $e_\tau$.
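This adaptive process is straightforward to simulate. The sketch below iterates the resulting best-response map (the law of motion (A.5) derived next), taking $F$ to be uniform on the range of the realism cutoff and using illustrative parameter values (assumptions, not calibrations); with $\theta_L < 0$, different initial conditions $e_0$ lead to different steady states:

```python
# Minimal simulation of the adaptive process: lambda_t = F(s_hat(lambda_{t-1})),
# e_t = 1 - lambda_t, with F uniform on the range of the cutoff function s_hat.
# All functional forms and parameter values are illustrative assumptions.
m, delta, c, alpha, q = 0.1, 1.0, 0.5, 0.5, 0.5
theta_L, theta_H = -2.0, 1.0           # theta_L < 0: the interesting case
d_theta = theta_H - theta_L

def s_hat(lam):
    """Cutoff salience below which an agent remains a realist (cf. (A.5))."""
    return (m / delta + c - delta * alpha * theta_L) / (
        alpha * theta_L + d_theta + (1 - alpha) * lam * theta_L)

s_min, s_max = s_hat(0.0), s_hat(1.0)  # support of the uniform F

def F(s):
    return min(1.0, max(0.0, (s - s_min) / (s_max - s_min)))

def steady_state(e0, T=200):
    lam = 1.0 - e0                     # initial share of realists
    for _ in range(T):
        lam = F(s_hat(lam))
    return lam

# Two initial conditions, two different long-run outcomes (history dependence):
print(steady_state(e0=1.0), steady_state(e0=0.0))  # -> 0.0 1.0
```

When $\theta_L > 0$ instead, the same iteration converges to a unique steady state from any $e_0$, in line with the cobweb-dynamics discussion below.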
With agents thus \textit{best responding to past play}, the optimal choice between $\hat{\sigma}^i_\tau = L$ and $\hat{\sigma}^i_\tau = H$ is still governed by comparing (7) and (8), but with $(1 - \lambda^{-i})\theta_L$ replaced by $e_{\tau-1}\theta_L$; in addition, $r(\lambda^i)$ simply becomes 1, since $\chi = 0$. The set of realists at any stage $\tau \geq 1$ of this adaptive process therefore consists of the agents with $s^i \leq \underline{s}(1 - e_{\tau-1})$, where the function $\underline{s}(\cdot)$ is still given by (10); their proportion is thus $\lambda_\tau = F\left[\underline{s}(1 - e_{\tau-1})\right]$. Since realists choose $e^i_\tau = 0$ and deniers $e^i_\tau = 1$, moreover, we have $e_\tau = 1 - \lambda_\tau$. Hence the law of motion \begin{equation} \lambda_\tau = F\left(\frac{m/\delta + c - \delta\alpha\theta_L}{\alpha\theta_L + \Delta\theta + (1 - \alpha)\lambda_{\tau-1}\theta_L}\right), \quad \forall \tau \geq 1. \tag{A.5} \end{equation} For $\theta_L > 0$, $\lambda_\tau$ is decreasing in $\lambda_{\tau-1}$, generating stable cobweb dynamics converging to a unique equilibrium (steady-state), and a multiplier less than 1 for responses to any local change in parameters. By contrast, when $\theta_L < 0$ the transition function is increasing, generating monotone dynamics, a scope for multiple equilibria (reached from different initial conditions $e_0$) and a multiplier locally greater than 1 (and increasing in $-\theta_L$).\footnote{Thus, $\lambda = 1$ and $\lambda = 0$ are both equilibria when $[s_{\text{min}}, s_{\text{max}}] \subset [\underline{s}(0), \underline{s}(1)]$, which can be ensured only when $\theta_L < 0$. There is even a continuum of equilibria for $[s_{\text{min}}, s_{\text{max}}] \equiv [\underline{s}(0), \underline{s}(1)]$ and $F(s) \equiv (\underline{s})^{-1}(s)$. Even with a unique equilibrium (or selecting the one reached from $e_0 = 1$), the multiplier can be made arbitrarily large by appropriate choice of $\theta_L$.
Finally, in the limit where $F$ degenerates to a mass-point (homogeneous agents), the fixed points of (A.5) coincide exactly with the equilibrium set of Proposition 2 (for $\chi = 0$).} \blacksquare

\textbf{Proof of Lemma 2 and Propositions 5-6.} From (29), we have \begin{equation} \varphi'(d) \equiv -(1 - q)\left[v'\left(qA^i_H + (1 - q)(B^i_L - f^i_L - d)\right) - v'(B^i_L - d)\right], \end{equation} so $\varphi'(d) > 0$ if and only if $B^i_L - d < qA^i_H + (1 - q)(B^i_L - f^i_L - d)$, or $d > d^i_*$ as defined in (32). Therefore, $\varphi(d)$ is strictly quasiconvex, with a minimum at $d^i_*$. Moreover, $qA^i_H + (1 - q)(B^i_L - f^i_L - d^i_*) = B^i_L - d^i_*$, implying $\varphi(d^i_*) = v(B^i_L - d^i_*) - qv(A^i_H) - (1 - q)v(B^i_L - d^i_*)$, or \begin{equation} \varphi(d^i_*) = q\left[v\left(B^i_L - d^i_*\right) - v(A^i_H)\right] = q\left[v(A^i_H - f^i_L(1 - q)/q) - v\left(A^i_H\right)\right] < 0. \end{equation} (2) As $d$ tends to $+\infty$, $\varphi^i(d) \approx v(-d(1-q)) - (1-q)v(-d)$, which behaves as $\left[(1-q) - (1-q)^{\gamma'}\right] \times d^{\gamma'}$ and thus tends to $+\infty$, since $\gamma' > 1$. Similarly, as $d$ tends to $-\infty$, $\varphi^i(d) \approx v(-d(1-q)) - (1-q)v(-d)$, which behaves as $\left[(1-q)^{1/\gamma'} - (1-q)\right] \times (-d)^{1/\gamma'}$ and thus also tends to $+\infty$, since $1/\gamma' < 1$. The rest of Lemma 2 and Proposition 5 follow immediately, as does Proposition 6 since (31) implies $\varphi^i(0) < 0$, hence $\underline{d}^i < 0 < \bar{d}^i$. ■

**Proposition 11.** Let $v(x) \equiv x - \gamma x^2/2$, and let (23), (24) and (31) hold for all $i$. If $|d_L^i|$ is large enough, for all $i$, there is a non-empty range $[\underline{\gamma}, \bar{\gamma}]$ such that the informed and uninformed equilibria coexist if and only if $\gamma \in [\underline{\gamma}, \bar{\gamma}]$.
**Proof.** Condition (29) takes the form $$qA_H^i + (1-q)\left(B_L^i - f_L^i - d_L^i\right) - (\gamma/2) \left[qA_H^i + (1-q)\left(B_L^i - f_L^i - d_L^i\right)\right]^2 >$$ $$qA_H^i + (1-q)\left(B_L^i - d_L^i\right) - (\gamma/2) \left[q\left(A_H^i\right)^2 + (1-q)\left(B_L^i - d_L^i\right)^2\right] \iff$$ $$(1-q)f_L^i + (\gamma/2) \left[qA_H^i + (1-q)\left(B_L^i - f_L^i - d_L^i\right)\right]^2 < (\gamma/2) \left[q\left(A_H^i\right)^2 + (1-q)\left(B_L^i - d_L^i\right)^2\right],$$ which is equivalent to (33). Similarly, (30) is equivalent to (34). Together, they define a nonempty range for $\gamma$ if and only if $$q\left(A_H^i - B_L^i + f_L^i\right)^2 - f_L^i\left(f_L^i - 2B_L^i\right) < q\left(A_H^i - B_L^i + d_L^i + f_L^i\right)^2 - f_L^i\left(f_L^i - 2B_L^i + 2d_L^i\right) \iff$$ $$2f_L^id_L^i < q\left(\left(d_L^i\right)^2 + 2d_L^i\left(A_H^i - B_L^i + f_L^i\right)\right).$$ If $d_L^i > 0$, which is the main case of interest, this inequality becomes: $$d_L^i > 2\left[(1-q)f_L^i/q - \left(A_H^i - B_L^i\right)\right] = 2d_*^i,$$ which holds for $d_L^i$ large enough (e.g., for all $d_L^i > 0$ when $d_*^i < 0$). If $d_L^i < 0$, the condition is reversed, and thus holds for $-d_L^i$ large enough (e.g., for all $d_L^i < 0$ when $d_*^i > 0$). Recalling finally that the highest possible payoff, $A_H^i$, must lie in the interval $(-\infty, 1/\gamma)$ over which $v(x) = x - \gamma x^2/2$ is increasing, it must also be that $\gamma A_H^i < 1$, or $$2f_L^iA_H^i + f_L^i\left(f_L^i - 2B_L^i + 2d_L^i\right) < q\left(A_H^i - B_L^i + d_L^i + f_L^i\right)^2 \iff$$ $$q\left(A_H^i - B_L^i + d_L^i\right)^2 > 2(1-q)f_L^i\left(A_H^i - B_L^i + d_L^i\right) + (1-q)\left(f_L^i\right)^2.$$ Define the polynomial $P(X) \equiv qX^2 - 2(1-q)f_L^iX - (1-q)\left(f_L^i\right)^2$.
The discriminant is \[ \Delta' = (1 - q) \left( f_L^i \right)^2, \] therefore the required condition is \[ (A.10) \quad \left( q/f_L^i \right) \left( A_H^i - B_L^i + d_L^i \right) \notin \left( (1 - q) - \sqrt{1 - q}, \ (1 - q) + \sqrt{1 - q} \right), \] which again holds if \(|d_L^i|\) is sufficiently large. ■

**Proof of Proposition 8.** Part (1) follows directly from (36) and (12)-(13). In Part (2), it is easily seen that \(s^* < \bar{s}(0)\), but \(s^* < \underline{s}(1)\) requires \((1 - q) \Delta \theta[m/\delta + c - \delta \alpha \theta_L] < \delta (1 - \alpha) \theta_L \theta_H\), which can go either way. ■

**Proof of Proposition 10.** Assume for now that at \(t = 0\), everyone else invests \(k^{-i} = K\). Since investing (respectively, abstaining) at \(t = 1\) is a dominant strategy given posterior \(\mu^j = r(\lambda^j) \geq q\) (respectively, \(\mu^j = 0\)), the price in state \(L\) will be \(P_L(K + (1 - \lambda^{-i})E)\) and the date-0 expected utilities of realism and denial equal to \[ (A.11) \quad U_{L,R}(\lambda^i, \lambda^{-i}; k^i)/\delta = (\delta + s)P_L(K + (1 - \lambda^{-i})E)k^i, \] \[ (A.12) \quad U_{L,D}(\lambda^i, \lambda^{-i}; k^i)/\delta = -m/\delta + (\delta + s)P_L(K + (1 - \lambda^{-i})E)(k^i + E) - cE \\ + sr(\lambda^i) \left[ P_H(K + E) - P_L(K + (1 - \lambda^{-i})E) \right](k^i + E). \] The net incentive for denial, \(\Delta U_L \equiv U_{L,D} - U_{L,R}\), is thus given by \[ (A.13) \quad [\Delta U_L(\lambda^i, \lambda^{-i}; k^i) + m]/\delta = \left[ (\delta + s)P_L(K + (1 - \lambda^{-i})E) - c \right]E \\ + sr(\lambda^i) \left[ P_H(K + E) - P_L(K + (1 - \lambda^{-i})E) \right](k^i + E). \] Setting \(r(\lambda^i) = 1\), realism is a (personal-equilibrium) best response to \(\lambda^{-i}\) for an agent entering period 1 with stock \(k^i\) if \[ (A.14) \quad m/\delta \geq \left[ (\delta + s)P_L(K + (1 - \lambda^{-i})E) - c \right]E \\ + s \left[ P_H(K + E) - P_L(K + (1 - \lambda^{-i})E) \right](k^i + E).
\] Conversely, denial \((r(\lambda^i) = q)\) is a (personal-equilibrium) best response for \(i\) if \[ (A.15) \quad m/\delta \leq \left[ (\delta + s)P_L(K + (1 - \lambda^{-i})E) - c \right]E \\ + sq \left[ P_H(K + E) - P_L(K + (1 - \lambda^{-i})E) \right](k^i + E). \] For given \(k^i\) and \(\lambda^{-i}\), these two conditions are mutually exclusive. When neither holds, there is a unique \(\lambda^i \in (0, 1)\) that equates \(\Delta U_L\) to zero, defining a mixed-strategy (personal-equilibrium) best response. The next step is to solve for (symmetric) social equilibria.

1. **Realism.** From (A.14), $\lambda^i = \lambda^{-i} = 1$ is an equilibrium in cognitive strategies if \begin{equation} [(\delta + s)P_L(K) - c]E + s[P_H(K + E) - P_L(K)](k^i + E) \leq m/\delta. \tag{A.16} \end{equation} This condition holds for all $k^i \leq K$ if and only if \begin{equation} s \leq \frac{m/\delta + [c - \delta P_L(K)]E}{[P_H(K + E) - P_L(K)](K + E) + P_L(K)E} \equiv \underline{s}(1; K). \tag{A.17} \end{equation} Moving back to the start of period 0, one now verifies that it is indeed an equilibrium for everyone to invest $k^i = K$. Since agents will respond to market signals $\sigma = H, L$, the expected price is $qP_H(K + E) + (1 - q)P_L(K) > 0$, whereas the cost of period-0 production is 0 (more generally, sufficiently small). Thus, it is optimal to produce to capacity.

2. **Denial.** From (A.15), $\lambda^i = \lambda^{-i} = 0$ is a cognitive equilibrium if \begin{equation} [(\delta + s)P_L(K + E) - c]E + sq[P_H(K + E) - P_L(K + E)](k^i + E) \geq m/\delta. \tag{A.18} \end{equation} This condition holds for $k^i = K$ if \begin{equation} s > \frac{m/\delta + [c - \delta P_L(K + E)]E}{q[P_H(K + E) - P_L(K + E)](K + E) + P_L(K + E)E} \equiv \bar{s}(0; q, K). \tag{A.19} \end{equation} An agent with low $k^i$, however, has less incentive to engage in denial. In particular, for $s < \underline{s}(1; K)$, (A.16) holds at $k^i = 0$, which precludes (A.18) from holding at $k^i = 0$.
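To illustrate the coexistence region, the realism threshold $\underline{s}(1; K)$ and the denial threshold $\bar{s}(0; q, K)$ defined above can be computed under an assumed linear inverse demand $P_\sigma(Q) = a_\sigma - bQ$; the functional form and all parameter values below are illustrative assumptions, not calibrations from the model:

```python
# Illustration of the coexistence region in the market game, under an assumed
# linear inverse demand P_sigma(Q) = a_sigma - b*Q. Every numerical value
# below is an illustrative assumption.
m, delta, c = 0.01, 1.0, 2.0
K, E, q = 10.0, 1.0, 0.99
a_H, a_L, b = 1.75, 1.5, 0.05

def P(sigma, Q):
    return (a_H if sigma == "H" else a_L) - b * Q

# Realism threshold s_(1; K): realism is an equilibrium for s below this.
s_realist_max = (m / delta + (c - delta * P("L", K)) * E) / (
    (P("H", K + E) - P("L", K)) * (K + E) + P("L", K) * E)

# Denial threshold s_bar(0; q, K): denial is an equilibrium for s above this.
s_denial_min = (m / delta + (c - delta * P("L", K + E)) * E) / (
    q * (P("H", K + E) - P("L", K + E)) * (K + E) + P("L", K + E) * E)

print(s_denial_min < s_realist_max)  # nonempty coexistence interval: True

# Check the sufficient condition labelled (A.25) below at the interval midpoint.
s = 0.5 * (s_denial_min + s_realist_max)
P_bar = q * P("H", K + E) + (1 - q) * P("L", K + E)
lhs = ((s - s_denial_min) / s) * P_bar * (K + E) / (1 - q)
rhs = m / (delta * (delta + s)) + (c / (delta + s) - P("L", K + E)) * E
print(lhs >= rhs)  # True
```

With $q$ close to 1 the interval $(\bar{s}(0; q, K), \underline{s}(1; K))$ is nonempty for these values, consistent with the role of Lemmas 3 and 4 below.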
Let $\bar{k}(s, q)$ therefore denote the unique solution in $k^i$ to the linear equation \begin{equation} [(\delta + s)P_L(K + E) - c]E + sq[P_H(K + E) - P_L(K + E)](k^i + E) = m/\delta. \tag{A.20} \end{equation} Subtracting the equality obtained by evaluating (A.18) at $s = \bar{s}(0; q, K)$ yields \begin{align*} sq[P_H(K + E) - P_L(K + E)](K - \bar{k}) \\ = (s - \bar{s})P_L(K + E)E + (s - \bar{s})q[P_H(K + E) - P_L(K + E)](K + E), \end{align*} where the arguments are dropped from $\bar{k}$ and $\bar{s}$ when no confusion results. Thus, \begin{equation} K - \bar{k} = \frac{s - \bar{s}}{s} \times \left(\frac{qP_H(K + E) + (1 - q)P_L(K + E)}{q[P_H(K + E) - P_L(K + E)]}E + K\right) > \frac{s - \bar{s}}{s} \times (K + E). \tag{A.21} \end{equation} Note that $\bar{k} \leq K$ (and is thus feasible) if and only if $s \geq \bar{s}$. One can now examine the optimal choice of $k^i$ at $t = 0$, which will be either $k^i = K$ or some $k^i \leq \bar{k}$.

(a) For $k^i > \bar{k}(s, q)$, (A.20) implies that denial is the unique best response to $\lambda^{-i} = 0$, leading agent $i$ to produce $e^i = E$ in both states at $t = 1$. These units and the initial $k^i$ will be sold at the expected price $\bar{P}_q(K + E) \equiv qP_H(K + E) + (1 - q)P_L(K + E) > 0$. Therefore, producing $K$ in period 0 is optimal among all levels $k^i > \bar{k}(s, q)$, and yields ex-ante utility \begin{equation} U_D(0, K, K)/\delta = (\delta + s)\bar{P}_q(K + E)(K + E) - cE - (1 - q)m/\delta. \tag{A.22} \end{equation}

(b) For $k^i \leq \bar{k}(s, q)$, on the other hand, agent $i$'s continuation (personal-equilibrium) strategy is some $\lambda^i = \lambda(k^i) \geq 0$: in state $L$ he weakly prefers to be a realist, achieving \begin{equation} U(\lambda^i, 0, k^i, K)/\delta = (\delta + s)\bar{P}_q(K + E)\left(k^i + E\right) - cE \\ -(1 - q)\left\{(1 - \lambda^i)m/\delta - \lambda^i\left[c - (\delta + s)P_L(K + E)\right]E\right\}. \tag{A.23}
\end{equation} The agent prefers $k^i = K$ to any $k^i \leq \bar{k}(s, q)$ if $U_D(0, K, K) > U(\lambda^i, 0, k^i, K)$, or \begin{equation} (\delta + s)\bar{P}_q(K + E)(K - k^i) > (1 - q)\lambda^i\left\{m/\delta + \left[c - (\delta + s)P_L(K + E)\right]E\right\}. \tag{A.24} \end{equation} Using (A.21) and $\lambda^i \leq 1$, it suffices that \begin{equation} \left(\frac{s - \bar{s}(0; q, K)}{s}\right)\left(\frac{\bar{P}_q(K + E)(K + E)}{1 - q}\right) \geq \frac{m}{\delta(\delta + s)} + \left(\frac{c}{\delta + s} - P_L(K + E)\right)E. \tag{A.25} \end{equation} Since $\bar{P}_q(K + E)$ tends to $P_H(K + E)$ as $q$ tends to 1, (A.25) will hold for $q$ close enough to 1, provided $s - \bar{s}(0; q, K)$ remains bounded away from 0. Lemmas 3 and 4 (in online Appendix C) formalize this idea, showing that there exist a threshold $q^*(K) < 1$ and a nonempty interval $S^*(K)$ such that, for all $q > q^*(K)$: $S^*(K) \subset (\bar{s}(0; q, K), \underline{s}(1; K))$ and (A.25) holds for all $s \in S^*(K)$. Consequently, when $q > q^*(K)$ both $(k^i = K, \lambda^i = 1)$ and $(k^i = K, \lambda^i = 0)$ are equilibria of the two-stage market game, for any $s \in S^*(K)$. Indeed, we showed that: (i) for $s < \underline{s}(1; K)$, when others play $(k^{-i} = K, \lambda^{-i} = 1)$ agent $i$ finds it optimal to also invest $k^i = K$ and then be a realist; (ii) for $s > \bar{s}(0; q, K)$, when others play $(k^{-i} = K, \lambda^{-i} = 0)$ he finds it optimal to invest $K$ in period 0 even though he knows that this will cause him to engage in denial if state $L$ occurs. ■
The theoretician James Bullock from the University of California would love to see a particle of dark matter discovered in the laboratory. With the aid of computer simulations and observational data from several ground-based and space-borne telescopes, this American researcher studies how dark matter haloes evolve over millions of years. Bullock, an expert in cosmology and particle physics, centres his investigations on understanding how galaxies, including the Milky Way and the Local Group of which it is a member, were formed in the primordial Universe.

- You have recently participated in research that reveals the minimum mass for dwarf galaxies in the Milky Way. Could you explain the importance of this study?

Our results suggest that there may be a mass threshold below which galaxies simply do not form. We believe that the presence of dark matter around galaxies is essential to their ability to accumulate matter and form stars. The more dark matter there is, the easier it is for galaxies to collect matter and begin to shine. It may very well be that we've discovered a limiting mass, below which galaxies simply cannot collect enough matter to form stars and start to shine. In this interpretation, there would be a large number of lower-mass dark matter clumps out there around the Milky Way, but they are smaller than this mass limit and therefore contain no stars to help us discover them. Another possibility is that dark matter clumps smaller than this mass simply do not exist. If so, then it may provide very important evidence about the particle that makes up the dark matter. Specifically, if the dark matter is "cold" then it can form very small clumps, or halos, and these cold dark matter halos can be much, much less massive than the 10 million solar masses that we've measured. However, if the dark matter is "warm" then it cannot form into clumps that are smaller than a characteristic mass, and this minimum mass is linked directly to the properties of the particle (like its mass). If we have actually discovered the minimum-mass dark matter halo (and not just the minimum-mass galaxy), then it provides very important constraints on the type of particle that makes up the dark matter.

- The number of low-mass dwarf galaxies detected so far in the outskirts of the Milky Way is 10 to 100 times smaller than theory predicts. Do we face a crisis in the model or a detection problem?

The question of "missing satellites" comes from this association between the dwarf galaxies we see and the clumps of dark matter that we believe surround each of them. It turns out that the Cold Dark Matter theory predicts that the Milky Way should, in fact, be surrounded by many small clumps of dark matter, which have masses that are not so different from the ones we measure for the dwarf galaxies. The problem is that the theory predicts thousands of these clumps, while we only see about 20 dwarf galaxies. This has given rise to the idea that there are a bunch of satellites out there that are "missing". I think that it's most likely a detection problem. The faintest of the dwarfs contains just a few hundred stars, and is about a hundred million times fainter than the Milky Way. That's about the same difference as between a firecracker and a lightning bolt (although they both keep shining and don't just go away). Because these galaxies are so inherently faint, they can only be detected if they are relatively close to our position in the galaxy. This is the crucial point of our study. Imagine that you are on a very long road, at night. If you tried to count cars on the road, you would be limited to the ones that are close enough for their lights to be visible to you. Of course, there are likely many more cars out there, but they are just too distant to be seen. This is especially important if the car lights are rather dim. However, if you knew the spacing between cars on the road, you could infer how many cars there were in total based on the number that you could see. We recently wrote a paper where we did the same thing, except that we were counting dwarf galaxies. We showed that there could easily be hundreds of undiscovered satellites out there.

- What are the weak points in the cosmology of Cold Dark Matter? Do you believe in an alternative theory?

There are some issues, but those problems are overwhelmed by successes. Most of the problems (like the "missing satellites") can be explained by appealing to astrophysical uncertainties and issues with our ability to measure galaxy properties. Nonetheless, it's important for us to take all the problems seriously, because there is always a chance that nature is trying to tell us something.

- What cosmological question would you most like to see settled?

It would be amazing if the dark matter were actually discovered as a particle in the laboratory. I would like to know just what this stuff is, and how it fits in with our overall theories of particle physics and cosmology.
0.65
medium
4
1,015
[ "scientific method", "basic math" ]
[ "advanced experiments" ]
[ "mathematics", "technology" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.35, "creativity": 0.3 }
8c799ae2-98b9-4ebf-8ce0-42fd7f0f9f22
Schools violence essay
language_arts
research_summary
Schools violence essay Violence is embedded in our school system. In every school handbook there is a specific reference to hazing, fighting and bringing weapons, and there have been an uncountable number of cases of violence in school since the first children went to school in little red buildings. School violence essays: over 180,000 school violence essays, school violence term papers, school violence research papers and book reports; 184,990 essays, term and research papers available for unlimited access. Violence in schools: causes and solutions. School violence is not a problem that should be solely treated on an individual basis with the application of discipline measures. Essay help, dissertation help, research paper help, editing service, proofreading. I started writing this essay originally with a different view in mind, but when I went back to read it, it simply wasn't what I wanted to say. My mother suggested I wait a few days and come back to it, but I was itching too much to get this off my chest. I became restless and started surfing the web. School violence essays: violence in school systems has existed for a long period of time. School violence consists of fights, vandalism, shootings, and any act that physically or psychologically harms another child. Any of these acts should be taken seriously, even among kindergarten kids. Essays - largest database of quality sample essays and research papers on school violence. School violence is a major problem around the world. The effects of school violence can lead to division and severe mental and physical trauma for both perpetrators and victims alike. The number of teachers who say they've been physically attacked by students is the highest yet. View school violence research papers on academia.edu for free. 
School violence can be prevented: research shows that prevention efforts - by teachers, administrators, parents, community members, and even students - can reduce violence and improve the overall school environment. No one factor in isolation causes school violence. School violence research topics: school violence essay topics in the form of discussion questions and activity ideas for teachers are included so that students can extend their understanding of school and campus crime and violence beyond what they read in the articles. Read school violence free essays and over 88,000 other research documents. School violence: in the world today there are many different issues that I feel need to be addressed. Sample of a violence in schools essay (you can also order a custom-written violence in schools essay). Sample cause and effect essay on school violence: here's a sample cause and effect essay on school violence. School violence involves aggressive behavior of some students that leads them to pick up weapons and develop gang culture. The issue of school violence has recently become a widely debated topic in the USA, and the solution to this problem is still to be found. School violence (bullying): school violence has gone on for as long as schools have been open. Psychologists categorize the different types of school violence. School shootings seem to be the new type of youth violence that is sweeping across the nation. Free essay on school violence available totally free at echeat.com, the largest free essay community. School violence: this essay on school violence and 63,000+ other term papers, college essay examples and free essays are available now on reviewessays.com. 
Schools violence essay The goal of this paper is to describe options currently available to schools and to analyze the key components of various approaches to help determine their potential. School violence prevalence, fears, and prevention by Jaana: issue papers explore related topics. This essay focuses on violence in schools: over the past many years, there has been a severe outbreak of violence in schools in our country. Unlike most editing & proofreading services, we edit for everything: grammar, spelling, punctuation, idea flow, sentence structure, & more. Get started now. Read school violence free essays and over 88,000 other research documents. School violence: violence is a problem many schools are facing today across the United States, and violence in schools continues to grow. New topics: violence in school essay, school cartoon violence, family violence, game violence, media violence, preventing violence, television violence. Keywords: school violence essay, school violence in Vietnam essay. School violence is one of the most serious problems nowadays because of its detrimental effects on the formation of human character and the future of a nation. Campus gun control works, from Boston Review: despite recent shootings, schools, including college campuses, exemplify the success of gun control.
0.55
medium
5
905
[ "domain basics" ]
[ "expert knowledge" ]
[ "science", "social_studies" ]
{ "clarity": 0.4, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.3 }
a9deba0b-608a-4151-bbf7-0d5261f8430c
Language and Linguistics
language_arts
data_analysis
Language and Linguistics Comparative Linguistics and Typology Computer Assisted Language Learning Graphemics and Orthography Morphology and Syntax Philosophy of Language Phonetics and Phonology Other related areas: Arts : Education : Language Arts Computers : Artificial Intelligence : Natural Language Society : Philosophy : Philosophy of Logic Web site listings: Alliance Linguistics Directory The leading Linguistics websites catalogued by experts from Oxford, Stanford and Yale Universities. Binnick's Linguistic Bibliography Annotated bibliography of tense, verbal aspect, aktionsart and related areas. Britann's Language Center A collection of links with resources to over 70 well-known languages including brief descriptions of the major subfields in Linguistics. Links to corpora, software, papers, bibliographies and additional sites. Fields of Linguistics A collection of articles which outline the various areas of specialization in linguistics. Introduction to Linguistics Linguistics for Beginners is a multimedia, interactive introduction to the subject. You will find all major topics of current linguistic studies examined here. Itzalist Language Directory Search engine and compendium of resources on language education, text translators, dictionaries, alphabets and other related resources. Language and Linguistics Essays on language and linguistics and links to related materials. Basic concepts, definition of "ethnolinguistics." From Maricopa Community College. Lexicon of Linguistics Searchable database of linguistic terminology updated with many new terms in the areas of Generative Grammar (Minimalism) and Phonetics. Includes bibliography. Devoted to exploration of our global linguistic environment with extracts from the Linguasphere Register of the World's Languages and Speech Communities. Linguistic Data Consortium Sharing of linguistic data, tools and standards resources. 
Linguistic Enterprises: Private Sector Employment for Linguists A job-search site for linguists seeking employment in the private sector. Multidimensional exploration of online linguistic databases. Specializes in negatively-valued words and expressions from all languages and cultures, focussing on the origin, etymology, meaning, use and influence of verbal aggression and abuse of any kind. Museum of Human Language Virtual museum with columns. Definition of a language, linguistics, and language function. Listings of international geographical names, orthography and transcription with commentaries on language. The Homepage of Integrational Linguistics A specific approach to linguistics combining a comprehensive theory of language and a theory of grammars. The LINGUIST List Web site for the LINGUIST mailing list: professional communication and networking for the world-wide community of linguists. The Linguistics Project Interdisciplinary, international, internet-community on language and linguistics. The sci.lang FAQ Answers to frequently asked questions about dialects, languages and their relationships, linguistics, and phonetic systems.
0.65
medium
6
594
[ "intermediate understanding" ]
[ "research" ]
[ "science", "technology", "social_studies" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.45, "creativity": 0.35 }
740ca40b-348b-4b4d-a6f4-21b29ed1cee6
TOWNSEND, SARAH ANN Benton
science
research_summary
TOWNSEND, SARAH ANN - Benton County, Arkansas | SARAH ANN TOWNSEND - Arkansas Gravestone Photos Sarah Ann TOWNSEND Bentonville City Cemetery Benton County, Arkansas Calvin, son of James A. Townsend & Louise Dutton; born December 16, 1875, Larue, Benton Co. AR; died October 20, 1966, Bentonville, Benton Co. AR Sarah, born September 11, 1879; died September 4, 1971 Married August 5, 1897, Carroll Co. AR Contributed on 11/29/07 by wfields55 Thank you for visiting Arkansas Gravestones. On this site you can upload gravestone photos, locate ancestors and perform genealogy research. If you have a relative buried in Arkansas, we encourage you to upload a digital image using our Submit a Photo page. Contributing to this genealogy archive helps family historians and genealogy researchers locate their relatives and complete their family tree. Submitted: 11/29/07 • Approved: 11/29/07 • Last Updated: 7/29/12 • R1917-G1916-S3
0.65
medium
5
423
[ "introductory science", "algebra" ]
[ "research methodology" ]
[]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
052b1673-d21c-4032-a738-60eb17bfcd01
looks like you're using
interdisciplinary
historical_context
It looks like you're using Internet Explorer 11 or older. This website works best with modern browsers such as the latest versions of Chrome, Firefox, Safari, and Edge. If you continue with this browser, you may see unexpected results. In addition to the overview provided in each box below, you can use the arrow buttons to view collection items that were part of the exhibit. If you click on the image or caption title, the image will be enlarged in a new tab with the option to zoom in further. One of the treasures of the Quayle Bible Collection is a 16th century tapestry relating the early life of David. This tapestry is the beginning point of the exhibit, but to understand the story of David, as it is told in the First Book of Samuel, it is handy to have some context. According to the Bible, the twelve tribes of Israel did not live in harmony with one another. They fought each other as well as the non-Israelite tribes in the land of Canaan. God was sovereign over all the tribes, but the leaders on the ground, so to speak, included prophets, judges and priests. The Israelites were not happy with this arrangement and asked to have a King, like other nations. The prophet Samuel chose Saul to be their king. Saul disobeyed God and failed as a king, so God asked Samuel to choose another king. This time, God left nothing to chance and led Samuel to David, a most unlikely candidate. He was a shepherd, the youngest son of Jesse, of whom little is known other than that he was from Bethlehem. The Flemish tapestry, dating probably from the mid-16th century, provided the impetus for this exhibit and shows several scenes from David’s early life. The Shepherd/Young David As a teenager, David was the shepherd of his father Jesse’s flock. On the right we have a depiction of David fighting a lion and a bear. This story does not appear in the narrative of 1 Samuel, only in David’s boasting about his accomplishments and his ability to slay the giant Goliath. 
1 Samuel 17:36 states, “Thy servant slew both the lion and the bear: and this uncircumcised Philistine shall be as one of them.” On the left, David is anointed by Samuel, making him God’s chosen one, to be King of Israel. The anointed one in Hebrew is messiah (מָשִיחַ). Based on this theology, all the kings of Israel must rule with the blessing of God, much like the divine right of kings in Europe. As a result, the lineage of the Messiah is thought by the Jews to travel through David’s bloodline, making the messiah both king and shepherd. Christians trace Jesus’s genealogy back to David to justify his legitimacy as messiah. Some traditions hold that David was between 12 and 15 years old when he donned Saul’s armor and marched to the Valley of Elah to face Goliath. He is almost always depicted as a young boy with a sling; the most famous portrayal is probably Michelangelo’s David. Although he had just received the armor of Saul, he is usually pictured without armor in order to emphasize the sheer stature of Goliath and the minuscule profile of David. The only exception occurs when David is cutting the head off of the Philistine. The cultural impact of this story resonates with every “underdog” narrative that is told, even if David was more appropriately equipped and a canny fighter. On the left, Goliath and a fellow Philistine soldier are clothed in Roman military garb; the young David hardly looks prepared except for his sling. On the right, the beheading of the giant is off-center in the engraving; however, it is the center of the battle scene, surrounded by horses and soldiers. Although wielding a sword, David is still depicted armor-less. One of Saul’s sons, Jonathan, had an especially close relationship with David that has been at the center of controversy for scholars and theologians. The story of David and Jonathan is set as a parallel with the relationship between David and Saul’s daughter, Michal. 
The siblings both loved David and both sided with David against their father. However, it would appear that David loved her brother Jonathan more. When David gained popularity with the people, King Saul became furious and attempted to kill him. Because of their love, Jonathan made a covenant with David and vowed to save him. On the right, Saul is told that David is missing while David and Jonathan make a covenant in the wilderness. In the Middle, David and Jonathan embrace and cry after Saul has ordered David’s death, for Jonathan “loved him as he loved his own soul.” (1 Sam. 20:17) Jonathan promised that if his father’s hate had abated and David could return, Jonathan would shoot arrows for a boy to fetch; however, if Saul meant to kill David still, Jonathan would fire an arrow far above the boy’s head. On the left, Jonathan fires an arrow above the boy’s head, while David looks from a nearby cave. Beneath the tapestry: David and Michal When David refused to marry Saul’s older daughter Merab, Saul gave his daughter Michal to David in marriage. According to 1 Samuel 18:20, Michal loved David, which pleased her father. Although David does not appear to have loved Michal like he loved her brother Jonathan, she still cared for him. Much as Jonathan helped David escape Saul’s wrath, Michal puts a dummy in David’s bed to throw off Saul’s soldiers who come looking to kill him and then she lowers him out the window. The left depicts the guards storming the room with the mannequin barely visible in the bed, while Michal lowers David down. On the right, there is a similar depiction of the same scene but in color. Saul essentially failed as the first king of all Israel; however, his death should not be seen as the sign of his failure. His death was but the culmination of events that began with Saul disappointing the Lord. The theological narrative makes Saul a negative character which ends in his suicide in order to pave the way for David. 
Saul falls on his sword only after the Philistines had overtaken his army, killed his sons, and shot him with arrows. Suicide was the only way to keep from being taken prisoner and tortured by the Philistines. On the right, we progress through the story in four scenes: Saul dies, having fallen on his sword; his head is taken by the Philistines to their temple; his crown is brought to David; and David mourns his death and that of his dear friend, Jonathan. On the left is the text of David’s lament on the deaths of Saul and Jonathan, from 2 Samuel 1:17-27. Probably the most infamous story of David’s reign is that of his seduction of Bathsheba. The story is typically titled David’s sin or David’s crime. Looking out of the window in his palace, David catches sight of Bathsheba bathing and decides he must have her. Lovely Bathsheba bathing has always been a popular image in art. Some scholars argue that this scene is meant to show the sin and guilt, not only of Bathsheba and David but also the reader, arguing that the inclusion of this scene encouraged a sort of guilty voyeurism. As a result of this focus on Bathsheba’s nudity, readers often shift the blame from David’s lust to Bathsheba’s exhibitionism. Because cultural norms about sexuality have changed over time, Bathsheba is portrayed differently, from fully nude (on the left) to only bathing her feet (on the right). In both, David is seen in the upper left watching the bathing. One even shows him playing his harp as if to woo her with music. Bathsheba becomes pregnant by David, so David plots several times to get rid of her husband, Uriah, so that he can marry her. At last David sends him to the front lines of battle where he is likely to be killed. Bathsheba is not in a position to refuse David and becomes one of his wives. For this, the Lord punishes David by killing his firstborn son from his union with Bathsheba. 
On the left, David hands his commander, Joab, the letter condemning Uriah to death by placing him on the front lines. To the right, the prophet Nathan, sent by God, upbraids David condemning his actions. David’s family was not unlike any royal family, with sparring, rivalry and intrigue. He had difficult relationships with his children, especially his oldest son Absalom. Absalom was a favorite, handsome, and known for his beautiful hair. However, when Absalom’s brother Amnon raped their sister Tamar, Absalom committed fratricide and was exiled by his father. Eventually, he was welcomed back to Jerusalem by David. But he grew restless and sought to rule in his father’s place, causing civil war. When the people favored Absalom, it was David who fled Jerusalem with his supporters, only to return after Absalom’s death. Death came to Absalom when, riding a mule in battle, he was caught in a tree by his long, beautiful hair. Although David had commanded that no one hurt him, David’s commander Joab killed him, as seen on the left. On the right, David mourns the death of his rebellious son. As David neared the end of his life, it seemed obvious that his oldest living son, Adonijah, should succeed him. Adonijah even preemptively proclaimed himself king; however, the prophet Nathan conspired with Bathsheba to convince David to make Solomon his heir. Before he died, David imparted his knowledge to Solomon and then David “slept with his fathers.” Solomon would come to be known as the second greatest king of Israel, builder of the temple, poet, lover, and wise sage. On the left, we see Bathsheba pleading with David to choose Solomon as his successor. On the right, David is seen giving Solomon instruction with the temple plans in his left hand. The Psalms are associated with David, and many of the superscriptions (like the one above Psalm 7) refer to him or to events in his life. 
Although many psalters are titled The Psalms of David, there is no clear evidence that he authored many or even any of them. The Psalms were used in both public and private worship. They were printed in breviaries (liturgical texts), books of hours (prayers and passages for the use of laypeople) and books used by priests as they visited the sick and gave comfort. Reformers like Luther and Calvin were keen on congregational singing as a way to teach the psalms. In addition, Calvin wrote that the emotional connection forged by singing the psalms was much stronger than merely reading or hearing them. Looking at the contents of these psalms adapted for singing, it is also possible to see a number of different types of psalm: lamentations, in which, faced with grief, sickness, or enemies, the psalmist asks God for help, assuring God of his strong faith and promising to render thanks and praise; hymns of thanksgiving and praise; psalms to be sung on a pilgrimage (or ascent) to the Temple; and psalms to celebrate royal events such as marriages, coronations, and births. Psalters and Hymns: Faithful translations of the psalms rarely turned out to be singable, and many writers and composers turned their attention to putting the psalms into poetic meter and providing tunes and harmonies. The tunes of Clement Marot, Louis Bourgeois, Isaac Watts and Charles Wesley survive today in most hymnals. Isaac Watts and Charles Wesley both adapted the Psalms for use in Christian worship by inserting references to Christ and salvation. A distinguished poet and author of Jersey Rain brings to life one of the most important and complex figures of the Old Testament, the poet, warrior, and king David, describing his eventful life, his triumphs and failures, his divine destiny, and his influence. The story of David is the greatest single narrative representation in antiquity of a human life evolving by slow stages through time. 
In its main character it provides the first full-length portrait of a Machiavellian prince in Western literature. In King David’s Court: Musician, Poet, Warrior, Seducer, and Murderer, by David Mandel. Joel Baden, a leading expert on the Old Testament, offers a controversial look at the history of King David, the founder of the nation of Israel whose bloodline leads to Jesus, challenging prevailing popular beliefs about his legend in The Historical David. Baden makes clear that the biblical account of David is an attempt to shape the events of his life politically and theologically. Going beyond the biblical bias, he explores the events that lie behind the David story, events that are grounded in the context of the ancient Near East and continue to inform modern Israel. The Historical David exposes an ambitious, ruthless, flesh-and-blood man who achieved power by any means necessary, including murder, theft, bribery, sex, deceit, and treason. As Baden makes clear, the historical David stands in opposition not only to the virtuous and heroic legends, but to our very own self-definition as David’s national and religious descendants. Provocative and enlightening, The Historical David provides the lost truth about David and poses a challenge to us: how do we come to terms with the reality of a celebrated hero who was, in fact, similar to the ambitious power-players of his day? "Of all the figures in the Bible, David arguably stands out as the most perplexing and enigmatic. He was many things: a warrior who subdued Goliath and the Philistines; a king who united a nation; a poet who created beautiful, sensitive verse; a loyal servant of God who proposed the great Temple and founded the Messianic line; a schemer, deceiver, and adulterer who freely indulged his very human appetites. David Wolpe ... takes a fresh look at biblical David in an attempt to find coherence in his seemingly contradictory actions and impulses. 
The author questions why David holds such an exalted place in history and legend, and then proceeds to unravel his complex character based on information found in the book of Samuel and later literature. What emerges is a fascinating portrait of an exceptional human being who, despite his many flaws, was truly beloved by God."--Publisher's web site. There has been an explosion of recent discoveries in biblical archaeology. These finds have shed powerful light on figures and stories from the Bible -- and completely changed what we know about some of its most famous characters. The reputations of the first great kings, David and Solomon, evolved over hundreds of years. In David and Solomon, leading archaeologists Israel Finkelstein and Neil Asher Silberman focus on the two great leaders as a window into the entire biblical era. David and Solomon covers one thousand years of ancient civilization, separating fact from legend and proving that the roots of the western tradition lie very deep. Biblical tradition portrays King David as an exceptional man and a paragon of godly devotion. But was he? Some scholars deny that he existed at all. Did he? This challenging book critically examines the textual and archaeological evidence in an effort to paint an accurate picture of one of the Bible's central figures. A leading scholar of biblical history and the ancient Near East, Baruch Halpern traces the development of the David tradition, showing how the image of David grew over time. According to Halpern, David was the founder of a dynasty that progressively exaggerated his accomplishments. Halpern's clear portrait of the historical David reveals his true humanity and shows him to be above all a politician who operated in a rough-and-tumble environment in which competitors were ready literally to slit throats. William Brown introduces a new method of exegesis, particularly for biblical poetry, that attends to the metaphorical contours of the Psalms. 
His method as proposed and demonstrated in this book supplements traditional ways of interpreting the Psalms and results in a fresh understanding of their original context and contemporary significance. This volume offers one of the best available introductions to the psalms literature of the Bible. Specially designed for use in a wide range of educational settings, James Crenshaw's new book will help beginning students read the psalms with understanding and appreciation. Part 1 examines the composition and major features of the book of Psalms. Comparisons to other biblical psalms and to deutero- and noncanonical psalms are also made. Part 2 surveys the various approaches to the Psalter, illustrating with great clarity the various modes of interpreting the book. Crenshaw looks in particular at the types of psalms, their social settings, and the historical reconstruction of the Israelite experience, with special attention to ancient Near Eastern iconography. Artistic design and theological editing are also discussed. In Part 3 Crenshaw offers in-depth exegesis of four notable psalms -- 24, 71, 73, and 115 -- to show how one might fruitfully engage the text. Given its range of discussion and highly accessible style, The Psalms: An Introduction will quickly become a standard text for classroom use.
0.65
medium
5
3,650
[ "domain basics" ]
[ "expert knowledge" ]
[ "science", "technology" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.55, "creativity": 0.35 }
31239454-5323-4d07-aad0-a903268e23de
Sweet special, Valentine’s Day
technology
code_implementation
Sweet and special, Valentine’s Day offers the prepared teacher the ideal opportunity for teaching kids about God’s love and loving others. From television shows to poor in-home examples, kids see many wrong or, at best, shallow ideas about what love truly is. Bring kids back to the truth by presenting easy-to-remember love-related scriptures they will treasure all their lives. It’s okay for kids to have some fun while learning, so I’ve provided you with two Valentine’s Day games and activities for the kids in your class or ministry. As always, you can “tweak” them to suit your specific needs just by replacing the verse I’ve suggested. Show your kids how much you love them by pointing them toward a big God of Love. Musical Hearts: Before class begins, cut out a large heart from red poster board. The ideal paper heart for this game is large enough to be held with both hands. In big letters, write John 3:16 on one side of the heart, and write the verse out on the other side. This way, kids will see the verse and John 3:16 as the heart is passed around. As in musical chairs, the group stands in a circle, but facing one another. Once the music begins, the heart is passed with both hands to the person on the right, who accepts it with both hands. The heart continues around the circle, stopping when the music stops. When the music stops, the person to the left of the one holding the heart is out. Continue to play this Valentine’s Day game until you have one person left. Hide the Hearts: Before class, cut out 10 or more paper hearts, at least 3 inches wide. It’s good to have one heart per child. Hide them around the room, then begin your regular lesson. Throughout the lesson, stop and send a child to search for a heart. You can give clues by saying “You’re hot,” when the child is close to a heart or “You’re cold,” when a child is far away from one. Place a verse of scripture on each heart to make them a special, personal valentine to take home. 
Another good idea is to put one word on each heart, kind of like a puzzle. When all the hearts are found, the class can assemble them together to read the message. The most important mission of the day is to show kids unconditional love and loving guidance. Anyone can have fun with a class but it takes a special teacher to bring kids closer to understanding the true nature of God. Read Mimi’s latest book, “The Prophet’s Code,” a resource for children and teens interested in Biblical prophecy.
0.65
low
4
559
[ "programming fundamentals", "logic" ]
[ "system design" ]
[ "language_arts", "arts_and_creativity" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.35, "creativity": 0.35 }
e5eabb5b-4721-4158-b826-f8e7766cc6b6
Main Cards Casino Mechanical
science
code_implementation
Main / Cards & Casino / Mechanical engineering glossary pdf Mechanical engineering glossary pdf Name: Mechanical engineering glossary pdf File size: 878 MB Common Mechanical Engineering Terms. Ball and Detent. (n) A simple mechanical arrangement used to hold a moving part in a temporarily fixed position. … or by any means, electronic, mechanical, photocopying, recording or otherwise … has been endorsed by IREB as the standard glossary of terms for the CPRE. "Technical English – Mechanical Engineering" is aimed at all teachers and students of the field. Module 1: Basic Technical Vocabulary. Mechanical engineering [electronic resource]: materials for year 1 / espace-akwaba.com - Project … new vocabulary this way. environmental engineering: The use of science and engineering principles to … mechanical advantage: The use of simple machines to multiply the output force. 15 Feb: This Dictionary/Glossary of Engineering terms has been compiled to complement the work … Mechanical engineering applies the principles of mechanics and energy to … PDF: Portable Document Format from Adobe Systems. 21 Nov: Department of Aerospace and Mechanical Engineering students learn something in class A, and this information is used with different terminology or in a … If your formula contains a difference of terms, determine what … Council of the Institution of Mechanical Engineers, London … Mechanical Engineers, London, UK … a comprehensive glossary of important technical terms. Handbook of Mechanical Engineering Terms - Free ebook download as PDF File (.pdf), Text File (.txt) or read book online for free. Department of Mechanical Engineering, Auburn University, Auburn, Alabama. San Diego, San Francisco, New York, Boston. Terminology and Notation. Most of the terms listed in Wikipedia glossaries are already defined and explained within Wikipedia itself. However, glossaries like this one are useful for looking up, comparing and reviewing large numbers of terms together.
0.65
high
5
398
[ "introductory science", "algebra" ]
[ "research methodology" ]
[ "technology", "language_arts" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.45, "creativity": 0.45 }
048ec251-e550-4380-afc8-c699a25dfaf3
• Story alerts
arts_and_creativity
creative_writing
You often hear the word “Americana” thrown around when music shows the influence of blues, folk, country, and old-time rock ’n’ roll. Those are all great ingredients, but why stop there? Why is jazz not “Americana”? Why isn’t hip-hop or hardcore punk? In the song “American Music,” the Violent Femmes sang, “Every time I look at that ugly lake, it reminds me of me.” That’s what Americana should be — something that reminds us of ourselves, even if it is sometimes ugly. The War on Drugs combines country-ish guitars and freight-train rhythms with atmospheric indie-rock weirdness. The band came together in Philadelphia about six years ago after Oakland resident Adam Granduciel moved to the East Coast and met another songwriter named Kurt Vile. The two reportedly bonded over a shared love of Bob Dylan, whose influence shows up primarily in their often-cynical lyrics and their shared habit of sneering through their vocals. After Vile left for a rewarding solo career in 2008, Granduciel began playing up the electronics and hazy effects in his music. Today, critics twist themselves in knots trying to decide whether the War on Drugs sounds more like Bruce Springsteen or the Velvet Underground. It doesn’t much resemble either one. Nor does it sound a lot like Wilco, a band that works with similar influences. The War on Drugs just sounds like the work of someone re-creating American music in his own image, and doing it in a way that’s graceful. Purling Hiss and Carter Tanton also perform. THE WAR ON DRUGS: Soda Bar, Friday, October 21, 8:30 p.m. 619-255-7224. $8–$10.
0.75
medium
4
476
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "science", "social_studies", "life_skills" ]
{ "clarity": 0.7, "accuracy": 0.5, "pedagogy": 0.6, "engagement": 0.5, "depth": 0.35, "creativity": 0.4 }
87655af9-9759-49d6-8675-e6917500473c
MINUTES TEXAS BOARD PROFESSIONAL
interdisciplinary
historical_context
MINUTES TEXAS BOARD OF PROFESSIONAL LAND SURVEYING 12100 Park 35 Circle, Bldg. A, Rm. 173 Austin, Texas December 12, 2014, 9:30 a.m. Call to Order, Establish Quorum, Introductions, and Comments from the Public Chairman Jon Hodde called the meeting to order at 9:32 a.m. Present were Board members Jim Childress, Mary Chruszczak, Nedra Foster, Jerry Garcia, Paul Kwan, Bill Merten, Bill O'Hara and Bob Price. Also in attendance were Executive Director Marcelino A. Estrada, Assistant Attorney General Harold J. Liller, Board Investigator Larry Billingsley, and the Board office staff. The Chair invited the public in attendance to introduce themselves. There were no comments from the public. 1. Approval of the October 16, 2014 Minutes The Chair offered the minutes of the October 16, 2014 Board meeting for approval, whereupon, on motion duly made, seconded and unanimously approved, the minutes were adopted. Before proceeding to the Director's Report, the Chair stated that a presentation would be given on photogrammetrist licensing, which was item 5b on the agenda. Michael Zoltek, who is with the National Photogrammetrist Association, offered a presentation on state licensing of photogrammetrists. At the conclusion of his presentation, Member Chruszczak thanked Mr. Zoltek for his presentation, as it had given the Board something to think about. She also recognized his knowledge and insight into the relationship surveyors have to photogrammetrists. 2. Director's Report The Chair then returned to the items on the agenda, beginning with the Director's Report. a. TBPLS Budget, Year to Date Mr. Estrada provided the Board members with an expense statement showing year-to-date figures demonstrating the beginning amount provided by General Revenue less expenses to date. b. LBB Appropriations Growth Mr. Estrada provided Board members with information he received from a meeting he attended recently.
The Legislative Budget Board approved appropriations growth of $94,267,654,158 for the 2016-17 biennium. This is $10 billion over the appropriations for 2014-15. c. License Renewals Mr. Estrada reminded Board members and attendees that there were 20 days left to renew their licenses. Approximately 1,500 licensees have not renewed. Mr. Estrada also explained that when licensees pay online, they are going through three agencies: TBPLS, the Health Professions Council and Texas.gov. Mr. Estrada also provided tips to help make the process smoother. d. Publication of Proposed Rules Mr. Estrada noted that a copy of the proposed rules as published on December 5, 2014, was provided in the members' workbooks. Mr. O'Hara said he had attended a TSPS Governmental Affairs Committee meeting where the proposed rules were discussed. A question arose regarding Rule 661.53. Mr. O'Hara wondered if the new language was in the appropriate place or if it should be a new paragraph. Ms. Chruszczak thought that making the language a new paragraph would make things clearer. The Chair agreed and said this was a change that could be made during this period. 3. Complaints Mr. Estrada provided Board members with information on whether other regulatory boards had a limitation of action on complaints. He found that the range varied from 2 years to 7 years, except for the Plumbing Board, which had no limitation on improper installations. He noted that they would even pursue complaints against retirees. The Board took no action. a. Discussion of closed cases Mr. Billingsley discussed two complaints that were dismissed. Complaint 14-02 alleged the subject surveyor failed to provide the complainant with the survey for which she paid. The complainant provided copies of cancelled checks, a copy of a letter sent to the surveyor and copies of two letters from her attorney addressing the issue with the surveyor. Upon receipt of the complaint, the subject surveyor contacted the complainant.
Personal and medical circumstances, along with a move to another city, caused the delay in completing the survey. The subject surveyor was unaware that the complainant was attempting to contact him, as none of the letters reached his new address. The surveyor completed the survey and delivered it to the complainant. The complainant asked that the complaint be withdrawn. The complaint was dismissed. Complaint 14-19 alleged that the subject surveyor trespassed on the complainant's property and staked a drainage easement that did not exist. After contacting a title company, the complainant was informed the easement did exist but its description was too general and vague to be of value. The subject surveyor had been hired by TxDOT in August of 2010 to establish a right of way for a highway adjacent to the complainant's property. The work included locating fence posts believed to be encroaching in the right of way and staking an easement purchased in 1954 by the State of Texas, acting through the State Highway Commission, from the property owner at the time. The Board's investigator found that the subject surveyor did the field work necessary to establish the right of way line. TxDOT right of way markers were found both north and south of the complainant's property and utilized in the boundary work. Sufficient information was found to enable the surveyor to establish stationing along the highway centerline, or baseline. The surveyor used the stationing to help locate the easement in question. There were no rule violations and the complaint was dismissed. b. Discussion of open cases There were no open cases discussed. c. Informal Settlement Conferences / State Office of Administrative Hearings (SOAH) There were no ISC/SOAH complaints to report. 4. Committee Reports a. Executive Committee Mr. Hodde reported that the Executive Committee had nothing to report. b. Rules Committee Ms. Chruszczak reported that she and Mr. 
Kwan had reviewed licensing requirements for educators wanting to become RPLS but who do not have the experience to qualify. She presented a draft rule, along with compliance verification for experience. Educators would still have to take the SIT exam, but the Board could consider giving them credit towards the RPLS exam for experience, such as in field accuracy and tolerance, along with nine months to one year of teaching experience. No credit could be given towards the office experience portion of the RPLS requirement. For educators who have SIT certification, the Board would need proof of a Ph.D. from an accredited institution and at least one year of experience as an instructor. The Chair expressed concern over the fact that if these individuals were licensed, they would be able to practice. Ms. Foster stated that she would prefer the educators to be licensed and involved if they were going to be teaching our licensees. Mr. Kwan mentioned that the Engineering Board gave an exemption to educators, but they had to be teaching, not working in research and development. Ms. Chruszczak said the intent was to acknowledge the situation and assist them, not simply hand the license to them. These educators have been creative, working nights and weekends to gain experience. Their experience should be evaluated differently. Mr. Kwan said that educators would need to submit a detailed resume on what they teach, what they research, and for how long. Mr. Price pointed out that the Board of Professional Engineers had similar concerns when amending its rules to license educators. Since the adoption of those rules, few have taken advantage of their license and performed engineering services. Educators will be training our future professionals; we need to take care not to create a divide between the education of future professionals and the professionals in the current industry. Mr. 
Merten asked if the recommended nine months of experience was the maximum that the educators could receive. Ms. Chruszczak replied that the Board could choose any time period, but in her opinion that was the most anyone could receive. Mr. Merten agreed. Mr. O'Hara asked if the educators would still be required to take the exam. Mr. Kwan noted that this was required by statute. Mr. O'Hara then stated that the issue was in the educators acquiring experience. He noted that Patti Williams at Tyler Junior College and Dr. Jeffress at Texas A&M-Corpus Christi were RPLSs who could serve as mentors. Mr. O'Hara mentioned that the rules state a surveyor providing services must be competent, and an educator who becomes an RPLS would not be competent to perform those services. He believes that this is good direction for the Board. The Chair thanked Ms. Chruszczak and Mr. Kwan for their work and asked that they bring a recommendation to the Board at its next meeting. c. RPLS/SIT Examination Committee – Jon Hodde, Chair Mr. Hodde noted that eight passed the SIT exam and 40 passed the RPLS exam. Mr. Kwan offered a motion to certify the SITs. The motion was seconded and passed unanimously. Mr. O'Hara asked if there were statistics on the trend for SITs. Ms. Jackson reported that the trend is dropping; she estimated that 40 people had sat for the exam and 8 passed. Mr. O'Hara asked if there were similar statistics for the RPLS exam. Mr. Hodde stated that there were more reciprocal examinees because of our economy. Mr. O'Hara asked if there were statistics on licensees who were leaving the profession and not renewing their license or putting their license in inactive status. Ms. Foster offered a motion to certify the new registrants. The motion was seconded and passed unanimously. The Chair called for a 10-minute break at 10:50 a.m. because there would be another presentation via telephone. The meeting was reconvened at 11:06 a.m. 
Jack Warner, Psychometrician, addressed the Board concerning the recent rule change requiring an examinee to retake the entire exam rather than repeating only the part they failed. Mr. Warner stated that the Board currently offers two different exams. His concern was that if the Board required an individual to retake an exam that they had already passed, the exam could be challenged. Mr. Kwan offered a driving test as an example, saying that if you failed a part of the driving test, you had to take the whole exam over. Mr. Warner rebutted that the Colorado driving test consists of a written exam and a performance exam: you had to pass both but only retake the one you missed. He felt this was akin to the two separate exams offered by the Board. Mr. Warner was concerned with the situation where an individual passed the legal part but was required to retake it because they failed the analytical part. He felt that the NCEES exam did not have the same requirement that this Board was suggesting. He felt that the Board's current system should not be changed. Mr. Warner went on to offer his thoughts on providing an examinee with an analysis of his test results. He suggested the Board: (1) consider delineating the content and scope of what is being tested in the analytical exam, and (2) then go through each item in the item bank and classify all the questions according to the test blueprint. This would also help ensure that the breadth of the profession is being covered, and any areas not covered could be addressed. Mr. Warner thought it would be beneficial to have the Item Writing Committee and the Cut-off Score Committee address this in a workshop. Ms. Foster felt that by treating our exam as one exam, we are aligning with other Boards and the NCEES PS exam. Mr. Childress asked if there were any data supporting the concerns raised by Mr. Warner. Mr. Estrada stated that, as Mr. Warner had said, there were no data; his concern was theoretical. d. 
LSLS Examination Committee – Bill O'Hara, Chair Mr. O'Hara reported that the next exam would be in April 2015. There are two applicants whose reports are being reviewed. This concluded Mr. O'Hara's report. e. Continuing Education Committee – Paul Kwan, Chair i. Approval of Courses Mr. Kwan offered his recommendations to the Board. Mr. Kwan recommended approval of courses offered by Halff Associates, TSPS Ch. 6, TSPS, R-Delta Engineers, Jon Hoelbelheinrich, and David Hunt. Mr. Kwan also recommended approval of an individual course submitted by Robert Hysmith, with the exception that he receive eight hours of continuing education credit. Mr. Kwan recommended rejecting the individual course requests from Stephen Horvath, Edward Prince and Robert Anguaino because the course was TxDOT-specific. Ms. Foster offered a motion to accept Mr. Kwan's recommendations. The motion was seconded and unanimously approved. f. Oil Well Issues Committee – Bill O'Hara, Chair Mr. O'Hara reported that the price of oil was on the decline. The U.S. benchmark price fell below $60 for the first time in five years. The price of natural gas is holding steady, though it has dropped this past year. The impact of the oil and gas industry on Texas is tremendous. Mr. O'Hara said there was still a lot of drilling activity. He also noted that a new Land Commissioner would be taking office next month. They will be examining revenues produced from the permanent school fund mineral interests, including the effect of the decline in oil prices. This concluded Mr. O'Hara's report. g. Legislative Needs Committee – Bill Merten, Chair Mr. Merten reported that his committee has been keeping an eye on the upcoming legislative session. He did want to bring to the members' attention a proposed bill that would be filed by the TSPS LSLS Committee. The bill concerns the custody of county surveyors' records when the county surveyor office is abolished. 
There has been a serious problem with LSLSs being able to file required documents and surveys. Many files are lost and many are in the hands of private individuals when they should not be. This concluded Mr. Merten's report. 5. Old Business a. Discussion on firm contract labor and Board concerns Mr. Merten spoke on behalf of his committee, which included Mr. Price and Mr. O'Hara, and referred to a draft comment contained in the members' workbooks. He reported that there were several questions regarding the definition of "independent contractors". The Texas Workforce Commission and the IRS have definitions that are in conflict with the Board's rules. Their definitions read that this is someone who provides a service for a fee and who, rather than the hiring entity, has complete control over the service. If an RPLS hires a contractor, he needs to be in control through the whole process. Mr. Merten said that the Board cannot control who people hire and whether they are contractors or not. This is a matter to be clarified by the Texas Workforce Commission and the IRS, not TBPLS. The recommended statement, in response to the questions concerning this issue, is that it is paramount that the RPLS in responsible charge retain complete control of the final product. A secondary question dealt with firms outside of Texas soliciting work in Texas or hiring an RPLS to do the work in Texas. Such a firm would be in violation of Board rules if it is not a registered firm. An outside firm hiring an RPLS is allowed so long as the work is done on the letterhead of the RPLS. This concluded Mr. Merten's report. b. Update on licensing of photogrammetry by TBPLS - Mary Chruszczak This topic was covered earlier. c. Licensing requirements for educators – Paul Kwan This was addressed earlier except for the following: i. Request from Nicolas Marina, Lone Star Community College Mr. Estrada reported that Mr. Marina was hired by the community college but is not licensed in Texas. 
He was asking for a waiver of the educational requirement and to be allowed to take the exam. Mr. Estrada felt that Mr. Marina would have to have his degree evaluated and referred members to documentation submitted by Mr. Marina. Mr. Kwan said that his education from Puerto Rico would have to be compared to a similar U.S. degree. Mr. Estrada will let Mr. Marina know. d. Investigation of complaints regarding surveys older than 10 years Mr. Estrada directed the Board members' attention to a chart within their workbook where he provided a comparison of other Texas regulatory boards and how they deal with complaints over 10 years of age. The Funeral Commission has a two-year statutory limitation on complaints; Optometry has four years. Plumbing has no limitation on improper installations, which would apply to retirees depending on prior complaints. Engineering treats these complaints on a case-by-case basis. Mr. Estrada asked the Board for guidance and stated he felt that a case-by-case basis might be an approach for the Board to take. Mr. O'Hara asked if the Board did not have a 10-year statute of limitations. Mr. Hodde explained that applied to civil matters and not Board investigations. Mr. Price pointed out that researching an old complaint would depend on the documentation available and the enforcement arm being able to make a finding. e. Digital signatures Mr. Merten reported on behalf of Mr. O'Hara and Mr. Price. He reported that the committee studied many definitions of digital signature and provided a copy of a description that the committee thought was excellent. A digital signature is a "fingerprint" created by a program separate from the document you are working on and can be provided by a service or an individual that owns the program. An electronic signature is something like writing your name on an email. The committee looked to the Engineering Board because it has enacted rules regarding electronic seals and signatures. Mr. 
Merten said the committee had three recommendations: 1) A document signed and sealed with a digital signature from a digital signature program, or by a company that provides that service, is acceptable. The surveyor shall retain digitally signed/sealed originals and a hand-signed/sealed original in his/her permanent files. 2) Any electronic submittal that is an unalterable copy (i.e., PDF or similar format) of an original that includes a signature and seal should be considered a copy no different than a copy of an original from a copy machine, and therefore acceptable. The surveyor shall retain the signed/sealed original in his/her permanent files. 3) In a digital graphics program such as AutoCAD, MicroStation or other similar platforms, where it is possible to add a digitized (not digital) signature and seal into the drawing as a separate entity, this shall not be allowed outside of the control of the surveyor, and transmittal of such shall not be acceptable. This concluded Mr. Merten's report. Mr. O'Hara asked what the surveyor's responsibility is when sending a CAD file to his client. Mr. Merten said it would not have the signature or seal within the document because the signature could easily be removed from the drawing and placed in another drawing. Mr. O'Hara asked about sending the file without a signature. Mr. Merten said that would be considered a preliminary and not a problem. Mr. O'Hara then asked what happens when a project is complete and the client wants the CAD file. Is the only option to apply a digital signature? Mr. Merten said he would not consider the CAD file a final copy since the original hardcopy was also delivered. Mr. O'Hara pointed out that this was likely being done every day and wanted surveyors to understand their responsibility. Ms. Foster asked how this would protect the public. Mr. O'Hara said it would prevent someone from stealing the signature and seal of the surveyor. Ms. 
Foster thought it was more of a business decision between the surveyor and the client as to how they want to transfer information. The Board has a rule saying surveyors have to protect their seal, and this seems to be pushing the Board over the line, forcing surveyors to protect their license. Mr. Merten stated that these are recommendations on what would be acceptable under the rules and the Act, since questions had been received. Mr. Hodde asked if there were questions or if the members wanted the committee to consider the matter further and bring suggestions to the Board. Committee members declined. 6. New Business a. Request for reinstatement of expired license – Joseph E. Guerra Mr. Estrada informed the Board that a letter had been received from Mr. Guerra, whose license had been expired since 2004. A letter of support was also included. Mr. Kwan asked why his license was expired, and Mr. Estrada stated that Mr. Guerra had not obtained the required continuing education and his license had expired. Mr. Kwan said that Mr. Guerra would have to start over and offered a motion to deny the request. The motion was seconded and passed unanimously. b. Oil field plats – Mark Paulson Mr. Paulson addressed the Board concerning unit plats. He wanted to know how, as surveyors, they can turn in a product that has boundary lines with no bearing, no distance and no way to reconstruct the boundary line. How can this be considered to protect the public? Mr. Paulson would like the Board to issue a statement saying that surveyors have to do this. Ms. Foster asked if the examples he presented had certification, and Mr. Paulson said they did. Mr. O'Hara asked if this could be discussed at this time. Mr. Garcia asked Mr. Paulson to explain what information Mr. Paulson would like to see on the documents. Mr. Hodde stated that Mr. O'Hara was on the Oil and Gas Committee and asked that he look into this matter. Mr. Hodde will assist, and they will bring another member into this committee. Mr. 
O'Hara stated that these types of drawings are acceptable to the Texas Railroad Commission but are substandard to the Board's minimum requirements. Somehow, the Railroad Commission rules override the Board's rules, but it is something that needs to be examined again. Ms. Chruszczak asked Assistant Attorney General Harold Liller if he would assist with this concern. 7. Future Agenda Items – Select next meeting date The Board chose March 6, 2015 at 9:00 a.m. for the next Board meeting. 8. Comments from the Public One public member stated that he agreed with Mr. Kwan regarding testing. He did not understand why the Board allowed separation of the tests. He also commented on educator licensing and voiced concern that if individuals did not have field experience, they cannot do this type of work. He noted that other regulated professions did not have licensed professionals teaching, and being licensed should not be a requirement for them to teach. An alternative might be a certificate for certified teachers, which means that the individual is certified to teach. He felt it would be better to have a surveyor take two years to obtain a Ph.D. and become an educator than to take an educator and make them a surveyor. This would prevent a non-surveyor from getting a license and the worry that they might perform surveying. Phil Payne commented on oil and gas plats and agreed with Mr. Paulson's presentation. He felt that the Board should send a letter to every surveyor when the Board arrives at a conclusion regarding oil and gas plats so that everyone will know what the rule is. There should be a minimum of three state plane locations so that anyone can recreate the boundary. Regarding educators, he wondered whether, if an institution of higher education would accept a professional degree in lieu of a Masters, there would be plenty of individuals who might be willing to become educators. 
Jim Gillis, TSPS President, commented on the educator issue and wanted to speak for the land surveying community in general. He believes that the vast majority do not feel an educator should receive credit towards land surveying experience time from their education. There is a difference between surveying and land surveying. Educators teach how to measure; land surveying is about boundaries and law, and the educators do not get any experience in boundaries and law. In the current Act and Rules, we do not have a requirement for actual field time before a person becomes an RPLS. It is not just two years; it should be two years in the field measuring and learning how to establish a boundary. On the oil field plats, he mentioned he had brought some oil and gas well plats to the attention of the former Director, Sandy Smith, and it was determined that the plats had violations. However, the Board did not enforce the rules. What is being done now is nothing more than a cartoon. Is this proper? Mark Paulson commented that the understanding is that the Railroad Commission would accept anything regarding oil and gas well plats, but this thinking has progressed too far. Surveyors performing oil and gas well plats say that how those plats are prepared does not matter because it's for the Railroad Commission. The Board, not the Railroad Commission, has control over the surveyors, and the Board needs to enforce its rules. Another audience member commented that he agreed with the previous comments. The Board needs to take control of the surveyors and how surveying is done. He felt that if a surveyor is going to put their seal and signature on a plat, the document should meet the Board's standards. Regarding the educator issue, there is a difference between practitioners and academics. If we want academics to have a surveyor's license, then we need to make certain they have the proficiency to practice land surveying. 
The next audience member mentioned that the Southern Association of Colleges and Universities requires, for accreditation purposes, that instructors have education specific to the field they are teaching and that they also have a professional license. Paul commented on the proposed legislation for county records that are no longer in the possession of the county. He suggested that the records, or an index of the records in the possession of surveyors, should be given to the county clerks, and hoped the Board could help facilitate this. Ken Gold commented on the exam committees. Dr. Warner suggested that we look at our test blueprint, but it is so well used that Mr. Gold felt the blueprint was a living blueprint and re-examining it would be an exercise in futility. Dr. Warner said he would not support the Board in combining parts of the exam. Mr. Gold hoped that we would weigh this carefully. Mr. Gold went on to say that the exam committees have had a problem with the raw (passing) score, allowing examinees to pass with below a 70. This is putting minimally qualified people in the profession, and they are staying minimally qualified. He hopes the Board will make a careful study of whether to continue with Dr. Warner or not. Charlie Gutierrez of El Paso asked how item writing and cut-off score committee members are selected. Mr. Hodde stated that people have volunteered to serve on the committees and they are chosen on an as-needed basis. Bill Masey commented that Dr. Warner seemed not to know we had a blueprint. The analytical exam has eight categories, and examinees know how they did in those eight categories. John Barnard commented that it would be futile to have the item writers re-categorize the items that are already created. The question is how we can inform the applicants who have not passed what their deficiencies may have been. 
If the items are properly categorized, the QA/QC or cut-off score committee could, as a double check, ask if a question is in the correct category. To hold a workshop as Dr. Warner suggested would be a waste of time. Marty Costa asked if the Board was trying to put the legal and analytical parts together or take them apart. When he took the exam there were four parts, and you only retook the part(s) you failed. He wondered how many passed the exam the first time. He believes that not many would be able to pass the exams if they had to be passed the first time. 9. Adjourn There being no further business before the Board, the meeting was adjourned at 12:55 p.m.
0.75
medium
6
6,295
[ "intermediate understanding" ]
[ "research" ]
[ "science", "technology", "language_arts" ]
{ "clarity": 0.7, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.65, "creativity": 0.35 }
443c89b5-a5d1-4b1b-a46e-6d94d180bf55
Awe probably emotion many
interdisciplinary
research_summary
Awe is probably not an emotion many of us experience on a regular basis; a childlike sense of wonder is hard to achieve when you’re busy avoiding giant slush puddles on your way to work. But perhaps we should seek out the feeling more often, as psychologists keep finding new ways that it benefits us. Research has indicated that awe seems to encourage collaboration, for one; it also appears to slow down our perception of the passage of time. And the latest study, published recently in the journal Emotion, suggests that feeling awe may promote good health. Specifically, as lead study author Jennifer E. Stellar explained in a phone interview, people who reported feeling awe on a regular basis tended to have lower markers of inflammation, which has been correlated with ailments like heart disease and cancer. In two experiments, Stellar analyzed cheek swabs from more than 200 healthy study volunteers, looking for a particular inflammatory protein; the volunteers also completed questionnaires assessing the positive and negative emotions they’d felt during the previous month. Overall, Stellar found that those who reported feeling more positive emotions also tended to have fewer of the inflammatory markers — but she found that awe produced the strongest correlation. In her paper, Stellar explains the correlation this way: One reason is that proinflammatory cytokines encourage social withdrawal and reduce exploration, which would serve the adaptive purpose of helping an individual recover from injury or sickness. … [A]we is associated with curiosity and a desire to explore, suggesting antithetical behavioral responses to those found during inflammation. It’s speculative still, as it’s very early in the research process, but Stellar’s work and others suggest there may be some measurable benefits to something as intangible and hard to explain as the experience of awe.
0.6
medium
6
359
[ "intermediate understanding" ]
[ "research" ]
[ "science", "technology", "life_skills" ]
{ "clarity": 0.4, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.35, "creativity": 0.4 }
d3992a4d-c34e-465b-93d3-99912f623583
US-based Arxan Technologies specialises
science
practical_application
US-based Arxan Technologies specialises in application security. Its solutions are used to protect a number of applications across a range of industries, including automotive. With an increasing number of cars fitted with wireless connectivity, Matt Clemens, security solutions architect at Arxan Technologies explains the security risks and what a driver can do to stay safe.
0.5
high
5
70
[ "introductory science", "algebra" ]
[ "research methodology" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.3, "engagement": 0.3, "depth": 0.35, "creativity": 0.4 }
3e87b3c1-b187-4771-ab0b-209d46c1ffb5
You've found serious security
interdisciplinary
ethical_analysis
You've found a serious security problem in your company's web application—one that puts your customers at risk of identity theft. Despite your protests, the problem is given no attention and persists for several weeks. Would posting an anonymous message to a public mailing list alerting your customers to the problem be an ethical thing to do? If your employer finds you out and fires you, is that a principled act or a dastardly one? Ethics is the branch of philosophy concerned with morality—good and evil, vice and virtue. How can we evaluate an act as being right or wrong? When faced with an ethical dilemma, how can we make the best choice? While people may proclaim their own system of morality to be the only correct one, all systems of ethics have deficiencies and criticisms. In the end, each of us is left to decide our code of ethics for ourselves. Philosophers have grappled with this problem for millennia, and three main threads of thought have emerged. The first is teleological ethics , where "right" is defined as what leads to the best consequences. This encompasses theories such as utilitarianism, which holds that one must pursue "the greatest good for the greatest number of people." The second is deontological ethics , where "right" is defined by duties and rules, such as "It is wrong to lie." Here, we find the divinely ordained moral codes common to various religions, as well as the idea of the social contract—a set of rules by which people who unite into a society agree to abide. Finally, we have virtue ethics , which takes the question "Is this act right?" and turns it on its head. Instead, it asks, "What would a virtuous person do?" At first brush, this seems like a very circular definition of ethics, and it has been duly criticized as such. However, when faced with a moral dilemma, the answer to the question "What kind of person do I want to be?" can provide penetrating insight into the merits of one's choices. 
While they employ different means of argument to get there, these three schools of ethical reasoning have a considerable amount of overlap in the acts and precepts they deem acceptable. Most notable amongst these is "The Golden Rule," frequently stated as "Do unto others as you would have them do unto you." This formulation often comes under fire as being too facile, and, indeed, if you look at it on a superficial level, you will find superficial problems. What if people like to be treated differently than you do? Could a thief not argue that since a judge wouldn't want to be sent to jail, the judge shouldn't send him to jail either? This thief, however, would probably prefer that anyone stealing from him be duly punished by the judge. Thus, a less pithy but more comprehensive way of expressing the Golden Rule might be "Treat people the way you would like to be treated if you were in their shoes." What does this have to do with testing software? Nowadays, software has a profound impact on people's lives. Software's proper functioning, or lack thereof, dictates whether or not people get correct utility bills, a mortgage from their bank, or particular attention from law enforcement. In the case of medical or embedded software especially, software malfunctions may result in physical injury and death. As software professionals, then, what are our responsibilities? Surely we have many of the same duties we would in any workplace. We have obligations to our peers and customers, such as dealing with them respectfully and with
0.65
medium
4
720
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "science", "technology", "philosophy_and_ethics" ]
{ "clarity": 0.5, "accuracy": 0.6, "pedagogy": 0.4, "engagement": 0.45, "depth": 0.55, "creativity": 0.35 }
8d26ca61-842a-40c6-8268-2cbc8ceac767
why hard boiled egg
life_skills
practical_application
Why will a hard-boiled egg spin, but a raw egg won't, if made to spin on a flat surface? When you spin the shell of a raw egg, the torque is not transmitted to the liquid inside. You can see this by rotating a glass of water with tea leaves in it: the leaves take a long time to start spinning when you spin the glass. So the torque is not transferred to the liquid. OK, then is the reason it does not rotate the imbalance of torque outside and inside? For a raw egg the torque is transmitted slowly to the liquid inside, and since you usually spin an egg just once around, there is not much angular momentum that can be imparted to the egg so quickly. If you do keep on rotating it, after a few turns the yolk will pick up speed and the egg will go ahead and spin like a cooked egg. Conversely, a standard method of determining whether an egg is cooked is to get it spinning and then suddenly bring it to a halt, then release it. For an uncooked egg the yolk will not immediately stop; it will keep on going inside even after the shell is stopped, so when you release the egg the spinning motion will resume.
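The slow transfer of torque to the liquid interior can be sketched as a toy two-rotor model: a rigid shell and a liquid interior coupled only by a weak viscous drag proportional to their relative spin. All the numbers here are illustrative, not measured egg properties.

```python
# Toy model of a raw egg: a rigid shell and a liquid interior,
# coupled only by weak viscous drag (all values illustrative).
I_SHELL, I_YOLK = 1.0, 3.0   # moments of inertia, arbitrary units
K = 0.2                      # viscous coupling coefficient
DT = 0.01                    # integration time step

def step(w_shell, w_yolk):
    """Advance both angular velocities by one time step."""
    drag = K * (w_shell - w_yolk)        # torque from relative slip
    return (w_shell - drag / I_SHELL * DT,
            w_yolk + drag / I_YOLK * DT)

# 1) Flick the shell: the interior lags far behind at first,
#    so the raw egg as a whole carries little angular momentum.
w_s, w_y = 10.0, 0.0
for _ in range(100):
    w_s, w_y = step(w_s, w_y)
print(f"after the flick: shell={w_s:.2f}, interior={w_y:.2f}")

# 2) Pinch the shell to a stop and release: the still-spinning
#    interior drags the shell back into rotation, the classic
#    raw-egg test described above.
w_s = 0.0
for _ in range(400):
    w_s, w_y = step(w_s, w_y)
print(f"after release: shell={w_s:.2f}, interior={w_y:.2f}")
```

With a boiled egg the coupling is effectively rigid (one body, one angular velocity), which is why it spins and stops as a unit.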
0.6
low
3
255
[ "foundational knowledge" ]
[ "advanced concepts" ]
[]
{ "clarity": 0.5, "accuracy": 0.5, "pedagogy": 0.4, "engagement": 0.4, "depth": 0.25, "creativity": 0.3 }
31f14c48-8220-49cf-a2fa-58538ad68392
Food law regulates content
life_skills
historical_context
Food law regulates the content, labelling and promotion of food products, including food supplements. There is no requirement for food supplements to be licensed or registered with the UK Government. However, all foods sold within the UK must comply with all relevant food law. The Food Safety Act deals with all stages of food production and marketing, from farming, hygiene and preparation through to consumption. Food is defined (in line with the General Food Law Regulation) as “any substance or product, whether processed, partially processed or unprocessed, intended to be, or reasonably expected to be ingested by humans”, which includes food supplements. The Food Safety Act establishes requirements for food safety and places the responsibility for this on the businesses that grow, produce, process, store, distribute and sell food. The main responsibilities of food businesses under the Act are to: - Ensure that nothing is included in or removed from food, or used to treat food, which would be damaging to the health of the people who will eat it. - Ensure that food served or sold is of the nature, substance and quality demanded by consumers. - Ensure that all food is labelled, advertised and presented in a way that is not misleading. The three main offences under the Act are rendering food injurious to health; selling, to the purchaser’s prejudice, food which is not of the nature or substance or quality demanded; and falsely or misleadingly describing or presenting food. The Food Supplements Directive lists the vitamins and minerals which are permitted for use in food supplements. The lists have been amended several times, and the safety of the substances on the lists has been assessed and approved by the European Food Safety Authority (EFSA). The amendments to the lists of substances permitted for use in food supplements can be found on the Commission website. 
The European Commission has not yet begun this work and currently the UK industry works to the safe upper levels (SULs) established by the 2003 report by the Expert Group on Vitamins and Minerals and the EU Nutrient Reference Values (NRV) of vitamins and minerals. The Food Information for Consumers Regulation regulates labelling requirements and other information which must be made available to consumers at point of purchase. It applies to all foods, including food supplements. The Nutrition and Health Claims Regulation (EC/1924/2006) was implemented to improve consumer protection in labelling claims. All foods which make claims, including food supplements, sold within the EU must comply with this Regulation.
0.65
medium
4
505
[ "intermediate knowledge" ]
[ "specialized knowledge" ]
[ "technology", "social_studies" ]
{ "clarity": 0.6, "accuracy": 0.5, "pedagogy": 0.5, "engagement": 0.5, "depth": 0.25, "creativity": 0.3 }
7d23d804-732a-4f49-afef-8f05eabfd713
Comments on: OTRW1361: Lone
technology
technical_documentation
Comments on: OTRW1361: Lone Ranger – The Knife (10-10-41) http://www.otrwesterns.com/2012/11/12/otrw1361-lone-ranger-the-knife-10-10-41/ First of its kind to bring you Old Time Radio Westerns Daily. Westerns that include The Lone Ranger, Cisco Kid, Challenge of the Yukon, Have Gun Will Travel, The Six Shooter, Tales of the Texas Rangers, Gunsmoke, Hopalong Cassidy, and many many more.
0.4
high
6
182
[ "algorithms", "software design" ]
[ "distributed systems" ]
[]
{ "clarity": 0.4, "accuracy": 0.5, "pedagogy": 0.3, "engagement": 0.3, "depth": 0.35, "creativity": 0.4 }
b88bd277-e48f-47fc-97a0-d0e47b6d798c
Programming Tools: Eclipse 3.0.1
technology
practical_application
Programming Tools: Eclipse 3.0.1 Eclipse has set a new standard in IDEs and component-based development. Most of us get to know it as an IDE, but it can be much more. It also can be used as the basis for developing applications. Other important aspects of Eclipse are: It is one of the first major open-source packages developed mainly by a commercial entity. IBM started the project and continues to support it. The quality and the scope of Eclipse set a new standard in programming environments and raise the expectations for other open-source packages. It will be tough to follow, especially for the normal one- or two-person developer teams found in the Open Source community. Eclipse has been designed from the ground up to be a feature-based system. Its design uses the idea of plug-in features to define what the user sees. For a programmer, this means an IDE. For a normal business user, this means applications that run on top of the Eclipse platform. Eclipse's modular design is based on its feature sets. A feature set is made up of one or more plugins. Plugins are made up of one or more code components. Features are added to the Eclipse platform using either its built-in installer or a more conventional external installer. Eclipse's many systems and APIs provide the tools necessary to develop applications or to build programming tools. In the Open Source community, Eclipse is the first to do it on a comprehensive scale. Perhaps only Microsoft's .NET concept on Windows comes close. The advantage of Eclipse is that it is platform-independent. Eclipse is written in Java, but it does not come with a built-in Java Runtime Environment (JRE). Thus, you need to have Java installed. To work with Eclipse 3.0+, the JRE must be version 1.4 or higher. Installing Eclipse is easy. Simply unzip the downloaded file and the Eclipse system resides in the eclipse subdirectory. Eclipse's update process puts each new version in its own subdirectory. 
The subdirectory includes the version number in its name. This allows multiple versions to be resident at the same time, without cross-pollution among versions. Eclipse's installer allows implementers to mark features as optional. Users then can elect whether to include them in their Eclipse environments. Such optional features can be installed later, if the occasion calls for doing so. Finally, Eclipse has a built-in update feature that you can run at any time. You can use it to update both Eclipse and any of its feature sets. A useful set of wizards in the Eclipse package can create many project types. The Help -> Cheat Sheet mechanism worked quite well for me. With it, I could create skeleton Java applications, Java applets, plugins, CVS tasks and SWT applications. As shown in the two screen snapshots below, Eclipse uses multipaned tabbed windows. The views shown depend on the action being taken. For example, the default view for files is an editor window geared to the type of file. When building Eclipse's form of a build file, a build view is shown. Although Eclipse was written in Java and has a well-developed Java IDE, I was curious to see how it would work with languages other than Java and C++. A feature currently in beta testing, pydev, provides a Python IDE within the Eclipse platform. Given the beta nature of pydev, incorporating it into the Eclipse platform went quite well. I tested pydev on some projects I am developing, and it worked adequately. The promise of Eclipse and its rich set of features makes it a viable contender for a Python IDE. During my evaluation of Eclipse, I noticed both some problems and some neat features. None of these are showstoppers, but you might as well know what I ran into. The Help viewer did not allow me to change the font size. On my notebook, this made reading the help files difficult. The Help viewer did not tell me which files were being viewed. That is, I could not see the path names. 
This made it difficult to use alternative viewers. (See the previous point.) Tool tips often appeared to the right of the item over which the cursor was hovering. This made the tool tip unreadable when the item was at the right edge of the screen, because the tip was truncated and the cursor covered the rest; for example, the maximize button on the Help viewer. The default text editor did not support a word-wrap option. This meant I could not write this column using the default text editor. Also, I tried to define the default TXT editor as KDE's kate, but that had no effect. In fact, I could no longer edit any TXT file in the IDE. Opening an external file went into the bit bucket. I did recover, but it was a trip. The Search -> File facility was limited to workspaces. I needed to go to the command line to search for a string in any file. The Go Forward and Go Back buttons on the Help viewer did not work as expected. I needed to click on the Content Tree to get to the next topic. When I started to create a new project, I found that the creation dialog box was modal, and I was unable to move around the Help system. The Help system should be independent of the modality of the rest of Eclipse. CVS currently is the only built-in source code control system. I found some missing help files for the SWT examples. Some of the Cheat Sheets were missing steps. Also, only a limited number of Cheat Sheets are included. Icons on tabs are useful. For example, the left icon gives the file's full path when editing a text file. The right icon allows you to close that view. Use of tasks and markers is neat. Select any line in a resource, and you can create a task associated with that line or resource. The history log is a poor man's simple source code control system. It doesn't really replace CVS, but it is fine for a series of changes that you have saved but not yet checked into CVS. A rich set of build files and options is generated for the ANT build system. 
0.7
medium
4
1,493
[ "programming fundamentals", "logic" ]
[ "system design" ]
[ "science", "social_studies", "arts_and_creativity" ]
{ "clarity": 0.6, "accuracy": 0.6, "pedagogy": 0.5, "engagement": 0.55, "depth": 0.55, "creativity": 0.35 }
0250569a-7348-4b18-b9e8-5c34270a8a30
The Posthuman Rosi Braidotti
interdisciplinary
concept_introduction
The Posthuman Rosi Braidotti. Related lectures and talks: Rosi Braidotti, "Posthuman Knowledge"; Rosi Braidotti's "The Posthuman"; Prof. Rosi Braidotti, Keynote Lecture, Posthumanism and Society Conference, New York, 9 May 2015; Keynote Lecture 4, The Posthuman Condition and the Critical Posthumanities; Rosi Braidotti: "Revolution is a fascist concept"; Dictionary of Now #12 | Rosi Braidotti: Post-Humanimals; Rosi Braidotti: Thinking as a Nomadic Subject; Posthumanism Explained - Nietzsche, Deleuze, Stiegler, Haraway; David Roden: pt. 1, Disconnection and Unbounded Posthumanism | Speculative and Unbounded Posthumanism; 'Posthuman, All Too Human' - Professor Rosi Braidotti; Rosi Braidotti - Necropolitics and Ways of Dying; Rosi Braidotti, "Aspirations of a Posthumanist"; Race and Gender Issues Condemned as 'Tools of the Left'; Keynote: Philosophical Posthumanism (Dr. Ferrando, NYU); PostHuman: An Introduction to Transhumanism; 2. What is TRANSHUMANISM? Dr. Ferrando (NYU) - Course "The Posthuman" Lesson n. 2; TRANSHUMANISM AND POSTHUMANISM: The Philosophical Roots of Posthumanism and Transhumanism - Dr. Ferrando (NYU); Queer Theory and Gender Performativity; Slavoj Žižek: Post-humanism is Soviet utopia; There's No "I" in Human: Toward a Posthuman Ethics | Michael Shirzadian | TEDxOhioStateUniversity; 3. 
What is POSTHUMANISM? Dr. Ferrando (NYU) - Course "The Posthuman" Lesson n. 3; Rosi Braidotti, "Memoirs of a Posthumanist"; Dictionary of Now #12 | Discussion with Rosi Braidotti, Philippe Descola; Rosi Braidotti, Posthuman Feminism; Rosi Braidotti: What is the Human in the Humanities Today?; 1. What does "POSTHUMAN" mean? Dr. Ferrando (NYU) - Course "The Posthuman" Lesson n. 1; Panel debate with Rosi Braidotti in the Futures Lecture Series 02; Inhuman Symposium – Rosi Braidotti; IMPACT20 – Planetary Alliances - Symposium Day 1: Lectures w/ Rosi Braidotti & Johannes Paul Raether. The Posthuman Rosi Braidotti THE POSTHUMAN is a rather startling work that requires heavy concentration on the part of the reader to follow the brilliant thinking of the author, Rosi Braidotti, a contemporary philosopher and feminist theoretician `who makes a case for an alternative view on subjectivity, ethics and emancipation and pitches diversity against the postmodernist risk of cultural relativism while also standing against the tenets of liberal individualism.' Amazon.com: The Posthuman (9780745641584): Braidotti, Rosi ... The Posthuman starts by exploring the extent to which a post-humanist move displaces the traditional humanistic unity of the subject. Rather than perceiving this situation as a loss of cognitive and moral self-mastery, Braidotti argues that the posthuman helps us make sense of our flexible and multiple identities. The Posthuman | Rosi Braidotti The Posthuman - Kindle edition by Braidotti, Rosi ... 
The Posthuman by Rosi Braidotti is an impressive display of intellectual virtuosity, but I suspect it may be an exercise in futility as well. The author subtly interrogates a vast range of works that purport to be post-humanist or zoocentric, from deep ecology to ecofeminism, concluding that they are ultimately tied to anthropocentric and humanistic paradigms. The Posthuman by Rosi Braidotti - Goodreads posthuman than some of the well-meaning and progressive neo-humanist opponents of this system. I will return in the next chapter to the opportunist brand of the posthuman developed in the contemporary market economy. Critical Posthumanism The third strand of posthuman thought, my own variation, shows no conceptual or normative ambivalence towards The Posthuman - Theory Tuesdays For people interested in Critical Posthumanism, Rosi Braidotti's The Posthuman is probably a good place to start. Summary of Rosi Braidotti's The Posthuman (Part 1) | by ... Braidotti concludes the chapter by noting that in her focus on these processes of humanity's posthuman becoming, she does not mean to undersell the different aspects of humanity or treat all of ... Summary of Rosi Braidotti's The Posthuman (Part 2) | by ... In Posthuman Knowledge, Rosi Braidotti takes a closer look at the impact of these developments on three major areas: the constitution of our subjectivity, the general production of knowledge and the practice of the academic humanities. Posthuman Knowledge | Rosi Braidotti The Posthuman. Rosi Braidotti. The Posthuman offers both an introduction and major contribution to contemporary debates on the posthuman. Digital 'second life', genetically modified food, advanced prosthetics, robotics and reproductive technologies are familiar facets of our globally linked and technologically mediated societies. 
The Posthuman | Rosi Braidotti | download The Posthuman - Rosi Braidotti - Google Books Braidotti then analyzes the escalating effects of post-anthropocentric thought, which encompass not only other species, but also the sustainability of our planet as a whole. Rosi Braidotti - Wikipedia Philosopher Rosi Braidotti talks about the post-human ethic, the devastating effects of neoliberal capitalism, and her proposal for affirmative resistance. B... Rosi Braidotti: "Revolution is a fascist concept" - YouTube subjectivity (Braidotti, 1994, 2011a, 2011b) and to expose power both as entrapment (potestas) and as empowerment (potentia). One field of immediate cartographic relevance to the posthuman is biopolitical scholarship, which grew from Foucault's seminal work. Theory, Culture & Society, The Author(s) 2018 ... The Posthuman | Wiley "With Posthuman Glossary, editors Rosi Braidotti and Maria Hlavajova bring together a comprehensive and diverse range of entries that make an emphatic intervention in posthuman scholarship, offering neat summaries, exploring new applications and challenges and suggesting intriguing conceptual networks. 
The product of significant, collective intellectual and administrative labour, this ensemble piece will be a catalyst for research, activism and the formation of new ethical communities." Posthuman Glossary (Theory in the New Humanities) Rosi ... Rosi Braidotti has 58 books on Goodreads with 5289 ratings. Rosi Braidotti's most popular book is The Posthuman. Books by Rosi Braidotti (Author of The Posthuman) Braidotti outlines new forms of cosmopolitan neo-humanism that emerge from the spectrum of post-colonial and race studies, as well as gender analysis and environmentalism. The challenge of the posthuman condition consists in seizing the opportunities for new social bonding and community building, while pursuing sustainability and empowerment. The Posthuman : Rosi Braidotti : 9780745641584
0.7
medium
5
2,217
[ "domain basics" ]
[ "expert knowledge" ]
[ "science", "technology", "social_studies" ]
{ "clarity": 0.7, "accuracy": 0.5, "pedagogy": 0.6, "engagement": 0.5, "depth": 0.55, "creativity": 0.3 }