Daniel Goldston
Born: January 4, 1954 (age 68)
Nationality: American
Alma mater: University of California, Berkeley
Known for: GPY theorem in number theory
Awards: Cole Prize (2014)
Scientific career
Fields: Mathematics
Institutions: San Jose State University
Thesis: Large differences between consecutive prime numbers (1981)
Influenced: Yitang Zhang
Daniel Alan Goldston (born January 4, 1954 in Oakland, California) is an American mathematician who specializes in number theory. He is currently a professor of mathematics at San Jose State University.
Early life and education
Daniel Alan Goldston was born on January 4, 1954 in Oakland, California. In 1972, he matriculated to the University of California, Berkeley, where he earned his bachelor's degree and, in 1981, a Ph.D. in mathematics. His doctoral advisor at Berkeley was Russell Sherman Lehman; his dissertation was entitled "Large Differences between Consecutive Prime Numbers".[1]
Career
After earning his doctorate, Goldston worked at the University of Minnesota Duluth and then spent the next academic year (1982–83) at the Institute for Advanced Study (IAS) in Princeton. He has worked at San Jose State University since 1983, save for stints at the IAS (1990), the University of Toronto (1994), and the Mathematical Sciences Research Institute in Berkeley (1999).
Research
Goldston is best known for the following result that he, János Pintz, and Cem Yıldırım proved in 2005:[2]
${\displaystyle \liminf _{n\to \infty }{\frac {p_{n+1}-p_{n}}{\log p_{n}}}=0}$
where ${\displaystyle p_{n}}$ denotes the nth prime number. In other words, for every ${\displaystyle c>0}$, there exist infinitely many pairs of consecutive primes ${\displaystyle p_{n}}$ and ${\displaystyle p_{n+1}}$ which are closer to each other than the average distance between consecutive primes by a factor of ${\displaystyle c}$, i.e., ${\displaystyle p_{n+1}-p_{n}<c\log p_{n}}$.
This result was originally reported in 2003 by Goldston and Yıldırım but was later retracted.[3][4] Then Pintz joined the team and they completed the proof in 2005.
In fact, if they assume the Elliott–Halberstam conjecture, then they can also show that primes within 16 of each other occur infinitely often, which is related to the twin prime conjecture.
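A short numerical aside (an illustrative sketch, not part of the original article): one can tabulate the normalized gaps $(p_{n+1}-p_n)/\log p_n$ for small primes and see how small their minimum already is.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

ps = primes_up_to(100_000)
# Normalized gap (p_{n+1} - p_n) / log p_n; the GPY theorem states that the
# liminf of this quantity is 0, i.e. it dips below any c > 0 infinitely often.
gaps = [(q - p) / math.log(p) for p, q in zip(ps, ps[1:])]
print(min(gaps))  # a twin prime pair high in the range gives about 2/log p
```

Each twin prime pair $(p, p+2)$ contributes a normalized gap of $2/\log p$, which would already tend to 0 if there are infinitely many twins; the strength of the GPY theorem is that it needs no such unproved hypothesis.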
Recognition
In 2014, Goldston won the Cole Prize, shared with Yitang Zhang and his collaborators Cem Yıldırım and János Pintz, for his contributions to number theory.[1] Goldston was also named to the 2021 class of fellows of the American Mathematical Society "for contributions to analytic number theory".[5]
4. ^ "Archived copy". Archived from the original on 2009-02-20. Retrieved 2009-03-31.
---
mersenneforum.org On relations file format
2012-08-18, 16:25 #1
Dubslow
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts
On relations file format
Quote:
Originally Posted by fivemack
Yes provided you can read base-sixteen (so in some senses no); if you look at the last few lines of the file, it will read something like
Code:
22617993,32185531:4f7c7bad,25f9,3ea88d,288611,a5a09,f8b7,8425,6b252c3:7a526e39,57d8af69,67563a1,1b0f047,3ce17,55d9
70093203,74087957:7e890e81,ff64977d,9ef,455,f0a43,a58b,6997,5e75,6b252c3:574e3e89,679,1de69b7,1163ae9,366137,1e707,5071
119968003,87443153:ee91c059,886dd67,c83,4cf4afd,3addd07,668cdb,6b252c3:48ebe3c1,9fb4baa9,13f1d0ef,1ec1,ef9,2669bf,109dd1
Notice that each of these has the same number 6b252c3 before the colon; that's the last special-Q value, which in hex turns out to be 112349891. So the 112300000-112400000 chunk is just under half-done. Some versions of the siever write incrementally to the output file 1234.t, in which case you look at the last line of that, but I think Serge's current compilation doesn't. A 1e6 chunk is a lot of work - probably about two weeks on your fast machines, possibly three weeks.
Wouldn't the file be quite a bit smaller if the rels were recorded in (say) base 36?
Or would that make compression harder (and thus larger)?
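Purely as a back-of-the-envelope sketch (this recoder is hypothetical, not from any siever), one can re-encode the hex factors of the sample relation line in base 36 and compare raw and gzipped sizes:

```python
import gzip

DIGITS36 = "0123456789abcdefghijklmnopqrstuvwxyz"

def to_base36(n):
    """Encode a non-negative integer in lowercase base 36."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 36)
        out.append(DIGITS36[r])
    return "".join(reversed(out))

# One relation line from fivemack's example: decimal "a,b", then two
# colon-separated groups of hex factors.
hex_line = ("22617993,32185531:4f7c7bad,25f9,3ea88d,288611,a5a09,f8b7,8425,"
            "6b252c3:7a526e39,57d8af69,67563a1,1b0f047,3ce17,55d9")

def recode(line):
    """Re-encode every hex factor after the first colon in base 36."""
    ab, _, rest = line.partition(":")
    groups = [",".join(to_base36(int(f, 16)) for f in grp.split(","))
              for grp in rest.split(":")]
    return ab + ":" + ":".join(groups)

b36_line = recode(hex_line)
print(len(hex_line), len(b36_line))                  # raw line lengths
print(len(gzip.compress((hex_line + "\n").encode() * 1000)),
      len(gzip.compress((b36_line + "\n").encode() * 1000)))
```

The raw saving is only a character or so per factor, and gzip narrows the difference further.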
2012-08-18, 18:30 #2
Batalov
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
10010110001000₂ Posts

Bill, it doesn't matter much for this size of a problem. Every relation takes, say, 0.7s to generate; writing it and downloading a chunk takes negligible time. GMP can read and write numbers in any base. (For compression it is best that only lowercase letters were used for output [or only upper] but not both.)

Tom, have you thought about using the 16e siever and a much smaller sieving area? At q~268M, the 16e siever is only 10% slower and produces 3x* >2x more relations (so the sieving area would be way shorter). It does take 1.5G of memory compared to 1.1G for the 15e.
____________
*waited for more output, while typing. 3x was a local anomaly.
EDIT2: With my 1.5G 16e test processes, I effectively starved the 15e's (the box I am torturing has only 4G of memory), and they shed the fat and revealed that each of them needs only 0.66G resident (still 1.1G virtual). So, 15e might be a better bet for this project, because you can easily recruit 4G 4-CPU boxes; there are more of them in existence than the 8G+ ones (the latter must be quite new, born after the dramatic drop in memory prices per GB).
Last fiddled with by Batalov on 2012-08-18 at 18:49 Reason: my kbord is loosng lettrs
2012-08-18, 18:56 #3
Dubslow
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3·29·83 Posts
Quote:
Originally Posted by Batalov Bill, it doesn't matter much for this size of a problem. Every relation takes, say, 0.7s to generate; writing it and downloading a chunk takes negligible time. GMP can read and write numbers in any base. (For compression it is best that only lowercase letters were used for output [or only upper] but not both.)
Well no, but I meant for large collections of relations, e.g. preparing to do LA by downloading rels from NFS@Home/RSALS (especially w.r.t. the former server space problems). If the compression ratio is the same, that could have been some significant space savings for RSALS.
2012-08-18, 19:15 #4
Batalov
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
9608₁₀ Posts

You could recode a reasonable chunk (or a full dataset that you may have somewhere), gzip, compare and share the results. There probably will be a small benefit, no doubt. It has to be substantial - to change the de facto standard file format [makes air quotes and winks with both eyes]. There were also proposals to bzip, 7zip, etc. (they pack better, but are much slower)

Here's another idea: (context: the factors less than 1000 are already not reported in this file and are reconstituted on the fly by msieve) drop factors less than 10000.
A radical version of the same idea: keep only the a and b values in the file.
A less radical variant: keep a few of the largest factors and only the changed (w.r.t. the previous line) q0.

These were all tried - for storage and/or for transmission. A modified version of msieve can read any of these file dialects (it is best to recode the "a,b:" file into a standard file once, before real use - because msieve re-reads the file many times).
2012-08-18, 19:23 #5
smh
"Sander"
Oct 2002
52.345322,5.52471
29·41 Posts
Quote:
Originally Posted by Batalov (For compression it is best that only lowercase letters were used for output [or only upper] but not both.)
Don't know if the relations in a later SVN are already the same case, but to convert the relation files
Code:
cat file | tr '[A-Z]' '[a-z]' > file.out
2012-08-18, 19:37 #6
Batalov
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2³×1,201 Posts

Only the very old legacy sievers were using both. That was one of the very first patches: to unify case.

In retrospect, I think uppercase should have been preferred: dc balks when I use it to convert by click'n'pasting (I have a script though)
Code:
echo "16i 6B252C3 p" | dc                       ==> 112349891
echo "16i 6b252c3 p" | dc                       ==> error
echo "16i 6b252c3 p" | tr '[a-f]' '[A-F]' | dc  ==> fine, but awkward
dc is available on any computer ;-)

Nevermind though. Not a good reason to change.
Last fiddled with by Batalov on 2012-08-18 at 21:21 Reason: forgot to uc the ^V
2012-08-18, 19:47 #7
Dubslow
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
3×29×83 Posts
Quote:
Originally Posted by Batalov
Only the very old legacy sievers were using both. That was one of the very first patches - to unify case. In retrospect, I think uppercase should have been preferred: dc balks when I use it to convert by click'n'pasting (I have a script though)
Code:
echo "16i 6B252C3 p" | dc ==> 112349891
echo "16i 6b252c3 p" | dc ==> error
echo "16i 6b252c3 p" | tr '[a-f]' '[A-F]' | dc ==> fine, but awkward
dc is available on any computer ;-) Nevermind though. Not a good reason to change.
I've always been of the opinion that upper case looks more like a number: the numerals 0-9 are about the same height as upper-case letters, and much taller than lower-case ones. The words we read are mostly lower case, while (base 10) numbers are effectively "upper case"; hex should therefore use upper case A-F as well.
2012-08-18, 19:57 #8
Batalov
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2588₁₆ Posts

A totally new (binary) file format would probably be way better. It should be able to store values in a variable number of bytes (a byte or even a bit stream with interjected tags). Primes could be converted into their ordinal (or at the very least they could be stored as (p div 30) << 3 + mod-30 class; 2 of course not stored, nor all the small primes). Of course, most likely this will not beat a gzip pass.

"Futile, futile, said Ecclesiastes, futile. This is all vanity of the heart."
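The mod-30 packing alluded to in that post can be read as follows (an illustrative sketch, not code from the thread): a prime above 5 falls into one of eight residue classes mod 30, so it can be stored as its quotient by 30 shifted left three bits, plus a 3-bit class index.

```python
# The eight residues mod 30 that a prime > 5 can take (the "mod-30 wheel").
WHEEL = (1, 7, 11, 13, 17, 19, 23, 29)
CLASS = {r: i for i, r in enumerate(WHEEL)}

def pack(p):
    """Pack a prime > 5 as (p // 30) << 3 | residue-class index."""
    return (p // 30) << 3 | CLASS[p % 30]

def unpack(code):
    """Invert pack()."""
    return (code >> 3) * 30 + WHEEL[code & 7]

q = 0x6B252C3                      # the special-q value from the thread
assert unpack(pack(q)) == q
print(q.bit_length(), pack(q).bit_length())  # about log2(30/8) ~ 1.9 bits saved
```

This shaves roughly two bits per stored factor before any entropy coding; as the post itself notes, a plain gzip pass may still match it.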
2012-08-18, 21:54 #9
jasonp
Tribal Bullet
Oct 2004
DD7₁₆ Posts

Latter-day msieve versions read the original data file 2-3 times during the filtering, once for the linear algebra, and once for each dependency in the square root. I have plans to convert the relation text into a Berkeley DB database.

Factoring relations on the fly would be very nice, but we really need a subsystem that is tuned for finding small factors in large numbers (CADO has one, and Alex's dissertation describes a more powerful one).
2012-08-18, 22:33 #10
Batalov
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2³×1,201 Posts
iirc
Quote:
Originally Posted by gmp Function: size_t mpz_out_str (FILE *stream, int base, mpz_t op) Output op on stdio stream stream, as a string of digits in base base. The base may vary from 2 to 36.
It is either undocumented or documented elsewhere, that base = -16 will produce hex-in-uppercase.
EDIT: indeed, mpz/out_str.c:
Code:
if (base >= 0)
  {
    num_to_text = "0123456789abcdefghijklmnopqrstuvwxyz";
    if (base == 0)
      base = 10;
    else if (base > 36)
      {
        num_to_text = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
        if (base > 62)
          return 0;
      }
  }
else
  {
    base = -base;
    num_to_text = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
  }
Last fiddled with by Batalov on 2012-08-18 at 22:39
2012-08-18, 23:20 #11
Batalov
"Serge"
Mar 2008
Phi(4,2^7658614+1)/2
2588₁₆ Posts

I once dealt with bona fide CWI relations. (AFAIR, the format strips more small factors and is a bit different.) First, I built msieve so that it worked straight with them. After a few hours, I couldn't bear to look at the (crawl) speed of the file import (it was dominated by trial factoring), killed it, wrote a recoder (msieve-based!), then remdups'd the resulting file, and everything was smooth after that.
---
Hardcover | $55.00 Short | £37.95 | ISBN: 9780262162173 | 592 pp. | 6 x 9 in | 116 illus. | December 2003
Paperback | $30.00 Short | £20.95 | ISBN: 9780262661973 | 592 pp. | 6 x 9 in | 116 illus. | January 2006
# Seeing and Visualizing
It's Not What You Think
## Overview
In Seeing and Visualizing, Zenon Pylyshyn argues that seeing is different from thinking and that to see is not, as it may seem intuitively, to create an inner replica of the world. Pylyshyn examines how we see and how we visualize and why the scientific account does not align with the way these processes seem to us "from the inside." In doing so, he addresses issues in vision science, cognitive psychology, philosophy of mind, and cognitive neuroscience.
First, Pylyshyn argues that there is a core stage of vision independent from the influence of our prior beliefs and examines how vision can be intelligent and yet essentially knowledge-free. He then proposes that a mechanism within the vision module, called a visual index (or FINST), provides a direct preconceptual connection between parts of visual representations and things in the world, and he presents various experiments that illustrate the operation of this mechanism. He argues that such a deictic reference mechanism is needed to account for many properties of vision, including how mental images attain their apparent spatial character without themselves being laid out in space in our brains.
The final section of the book examines the "picture theory" of mental imagery, including recent neuroscience evidence, and asks whether any current evidence speaks to the issue of the format of mental images. This analysis of mental imagery brings together many of the themes raised throughout the book and provides a framework for considering such issues as the distinction between the form and the content of representations, the role of vision in thought, and the relation between behavioral, neuroscientific, and phenomenological evidence regarding mental representations.
Zenon W. Pylyshyn is Board of Governors Professor of Cognitive Science at the Rutgers Center for Cognitive Science. He is the author of Seeing and Visualizing: It's Not What You Think (2003) and Computation and Cognition: Toward a Foundation for Cognitive Science (1984), both published by The MIT Press, as well as over a hundred scientific papers on perception, attention, and the computational theory of mind.
## Reviews
"Pylyshyn's book is to be commended as a thorough and persuasive defense of the information-processing approach to vision and visualizing. It should be essential reading for psychologists, cognitive scientists, and philosophers."
—Paul Coates, Metapsychology
## Endorsements
"Pylyshyn's book is an impressive achievement and a refreshing approach to vision science. Written with characteristic flair and erudition, the book provides a comprehensive synthesis of research and theory in the field. Pylyshyn combines masterful exposition with incisive critical evaluations, including his own significant experimental contributions and theoretical analyses. Sensitive to key philosophical and methodological issues, Pylyshyn offers a radical critique of received views and dispels deeply entrenched misconceptions to which much theorizing about vision has fallen victim."
—Peter Slezak, Program in Cognitive Science, University of New South Wales
"Over the past thirty years, Zenon Pylyshyn has played a leading role in developing theories of high-level visual cognition. In this book, he brings together his long-standing interests in the modularity of visual processing, the relations between visual attention, spatial indexing, and 'seeing,' and the relationship between imagery and vision. The work not only summarizes his influential views, but also raises important questions for future research. It will be of considerable relevance to all interested in high-level vision, from psychologists to computer scientists and philosophers."
—Glyn Humphreys, University of Birmingham
"Pylyshyn's work is cognitive science at its best, bringing detailed experimental results to bear upon a number of important theoretical questions, all in amiably clear prose. Anyone interested not only in the nature of vision, but in imagistic experience, attention, or demonstrative reference, will find here a wealth of relevant data and argument. Although Pylyshyn is largely concerned to defend his own theory of 'visual indices,' he is quite sensitive to competing views, as well as to the (sometimes misleading) deliverances of introspection. I emphatically recommend the book not only to the specialist, but to anyone wanting an up-to-date text on the topic generally."
—Georges Rey, Professor of Philosophy, University of Maryland
"Seeing and Visualizing offers a persuasive account of why visual perception and visual imagery do not depend on internal pictorial representations, and puts forward the deeply counterintuitive notion that the machinery of visual thinking does not use mental pictures at all. Pylyshyn's masterful defense of this idea is a 'must-read' not only for committed Fodorians but also for those who believe that mental representations resemble the things they depict. The book is challenging and provocative—and even occasionally infuriating—but always thoughtful and immensely readable. I recommend it to anyone who has ever wondered about how we see and visualize the world."
—Mel Goodale, Canada Research Professor in Visual Neuroscience, University of Western Ontario
---
Announcing Timescale Cloud: The first fully-managed time-series database service that runs on AWS, GCP, and Azure
Today we are thrilled to announce the general availability of Timescale Cloud, the first fully-managed, multi-cloud time-series database service that runs on Amazon Web Services, Microsoft Azure, and Google Cloud Platform.
Timescale Cloud gives developers the freedom to deploy and migrate time-series workloads across a variety of regions around the world -- on the cloud provider of their choice -- with just a few clicks.
Timescale Cloud delivers fully-managed instances of TimescaleDB (powered by PostgreSQL) with a clean & simple experience for your time-series workloads. It’s fully production ready with built-in high-availability, automated monitoring, automated backups, and point-in-time restore.
Feature highlights include:
• Starts at $3.86/day on your cloud provider of choice (AWS, GCP, Azure) -- that's less than an SF cup of coffee.
• Store and analyze time-series data using TimescaleDB's advanced time-series analytical functions and automated data management capabilities with SQL (eg interpolation, data retention policies, automated aggregates).
• Visualize your data with fully-managed Grafana instances, or connect with any of our compatible third-party viz tools (eg Tableau, PowerBI, Superset).
• Seamlessly scale up, scale down, or even scale out via read replicas or database forks (eg useful to populate development or data science environments).
• (Scroll to the bottom of this post to see even more features, or register for the upcoming webinar on Timescale Cloud.)

SIGN UP HERE to get started today! When you do, you'll receive $300 in trial credits (no credit card required, available for a limited time).
There’s a lot you can build (The sky’s the limit when you’re in the cloud)
When you combine the power of SQL with time-series, there’s a lot you can build:
Thank you to our Timescale Cloud Beta users
We’d like to take a moment to thank our users who joined us for the Timescale Cloud Beta & Preview these past several months and provided valuable feedback:
“The Lavoro integrated platform solution required a database capable of easily scaling to support our growing data volumes and historical and/or real-time analytics. After evaluating a number of database solutions, we selected TimescaleDB, which operates at the edge, and Timescale Cloud. The flexibility of SQL and its broad ecosystem allowed Lavoro to easily integrate TimescaleDB into our platform. Timescale continues to refine its advanced support for time-series data management, which is a major benefit for Lavoro customers today and into the future.” - Brandon Davis, Chief Technology Officer, Lavoro Technologies
“We are consistently capturing and analyzing a lot of network data, WiFi sessions, and DNS logs, which by nature have a time component. We were very excited to find TimescaleDB because it provides advanced time-series functions that can be used with SQL and is compatible with our systems. Having TimescaleDB available as a managed cloud service makes operations much simpler.” - Keegan McCallum, Head of Engineering, Colony Networks
“We started exploring TimescaleDB a year ago and have been using it ever since. The ease of installation and time-series features have simplified our collection and storage of real-time information. Having to deal with millions of objects every minute at Yottaa can get complicated; Timescale's flexible Hypertable architecture simplifies controlling data retention automatically and creating continuous aggregations. Timescale Cloud can serve to further reduce our operational overhead.” - Bob Buffone, CTO, Yottaa
More cost-effective than alternatives
With Timescale Cloud we are not just building the easiest time-series experience; we have also made the service more cost-effective than other managed time-series offerings.
Take Amazon Timestream, announced just last year. At first glance, Timestream pricing may seem reasonable. But if you apply its pricing against even modest real-world time-series workloads, you’ll see Timescale Cloud is much more cost-effective.
For example, let's say you have a workload with the following characteristics: 1,000 writes per second, a 30-day retention policy, and 60 queries a minute (eg if there are multiple dashboards and applications pointed at the data). Then, depending on your query patterns, you'll spend anywhere between $113/day (for shallow queries only scanning 30 minutes of data at a time) and $1,357/day (for deep queries scanning 6 hours of data at a time). The equivalent Timescale Cloud machine type deployed on AWS would only cost $13.90/day (I/O optimized, 512GB storage, 15GB RAM, 2 vCPUs). In other words, AWS Timestream is 10x-100x more expensive than Timescale Cloud.

(For full transparency, we've documented our calculations in this public spreadsheet. Please feel free to double-check our math, or even copy this spreadsheet and change the parameters to suit your own workloads.)

Open-source is the future of software development and cloud is the future of software delivery

While we are excited to announce Timescale Cloud, we are equally excited to share why this launch is particularly important given broader trends. We live in a fascinating time in the software industry. We are witness to the rise of open-source at the expense of proprietary software. We see the cloud delivery model, whether it be IaaS, PaaS, or SaaS, become so dominant that AWS has been able to build a $30-billion-a-year business. We also have seen friction between the public clouds and open-source companies play out in the form of alternative licensing models.
Through all of this, one thing is clear: open-source is the future of software development and cloud is the future of software delivery.
Open-source software is just better for developers and enterprises: it is free, it is developed out in the open (which leads to better code quality), it provides deployment and code flexibility, and it matures and is hardened at a scale that no proprietary software or service can match. The heart of its success is that open source builds communities -- and collaborative communities don’t just produce fewer bugs, but also better things.
A managed cloud service makes open-source software even better: it relieves the burden of software management and operations (eg uptime, backups, updates) and frees developers to focus on their core applications.
Open-source software enables developers to build applications that are cloud portable and avoid vendor lock-in. But as more workloads move to the cloud, and the public cloud companies become more dominant, how do software developers avoid building an application that only runs on a single cloud, which is just another form of vendor lock-in?
Freedom of deployment choice empowers software developers
At Timescale, we have recognized these dynamics for a while now. Our mission is to build the easiest, fastest, most-reliable, and most cost-effective place to store and analyze time-series data. When we first launched over two years ago, “easiest” primarily meant “full SQL” -- we saw no point in forcing developers to teach themselves and their colleagues a new query language. We also recognized that building on PostgreSQL meant that the operational experience can be something that is tried-and-true vs. having to learn to manage something unproven and new, or even forcing one to manage a complex polyglot system of relational metadata and time series data in different siloed systems. We realized that by building on top of PostgreSQL, we could offer simplicity at scale.
Today, millions of downloads later, we are extending that principle of simplicity by eliminating the hassle of managing and operating a database, all while giving users the freedom to work with the cloud service provider of their choice. Timescale Cloud amplifies the power of TimescaleDB by making it even easier to use.
We believe the cloud freedom that Timescale Cloud provides is a necessary step in the right direction to empower the developer.
With Timescale Cloud, some of you may ask: are the public clouds now your vendor, partner, or competitor?
Today’s complicated reality means that to the public clouds we are sometimes a customer (eg Timescale Cloud deployed on AWS, Azure, or GCP), a partner (eg TimescaleDB with Azure Database for PostgreSQL), and a competitor (eg Timescale Cloud versus public cloud time-series offerings).
At the end of the day, our goal is simple: to provide the best time-series experience to software developers and their organizations. Software developers want optionality, and we are here to serve their time-series needs however and wherever they need them.
Building this optionality also means empowering others in our community: for example, public cloud providers offering managed TimescaleDB OSS (eg Azure Database for PostgreSQL, Alibaba Cloud, Digital Ocean), monitoring tools (eg Grafana, Zabbix, Sensu), and others (eg Seeq, PostgREST, SeveralNines).
Want an open-source or free version? Sure. Want to pay for the benefits in Enterprise? Sure. On premise? Check. Edge? Check. Public Clouds? Check. Fully-managed cloud service? Check.
Get started with Timescale Cloud today
Getting started with Timescale Cloud is incredibly easy.
Timescale Cloud is available today to customers across the globe. By signing up, you'll automatically receive access to the features and capabilities in TimescaleDB Enterprise. Plus we'll give you $300 in free trial credits to start (no credit card required, available for a limited time), which can be applied to any service plan configuration (from Dev to Pro). Once you exceed those free credits, Timescale Cloud offers a pay-as-you-go model with the flexibility to cancel at any time.

SIGN UP HERE to get started today!

For even more information, register for our upcoming webinar on Timescale Cloud. And if you have any other questions, please feel free to ping us on Community Slack or Contact Us. We're here to help.

Expanded Timescale Cloud Feature List

Time-Series Analytics
• Full SQL for time-series and relational data, including JOINs, window functions, CTEs
• Advanced time-series analytical functions (eg gap filling, LOCF, interpolation)
• Automatic continuous aggregates
• Automatic data retention policies
• Ability to handle infinitely large cardinalities
• Inserts that scale to 1-2M metrics/second
• The reliability and ecosystem of PostgreSQL
• Geo-spatial data support with PostGIS integration
• Compatibility with Prometheus, Grafana, and Telegraf for ingest and visualization

Operations
• High-availability with automated monitoring
• Automated backups and point-in-time restore
• Database forking and read replicas

Deployment & Management
• Freedom to choose among a variety of regions on AWS, Azure, and GCP
• One-click cross-cloud and cross-region migrations
• One-click upgrades and automated maintenance

Security
• VPC peering
• Network IP access control
• SSL-enabled database connections
• Data encryption for cloud instances and backups

Visualization
• Rich integrated visualization with fully-managed Grafana instances

Pricing & Support
• Flexibility to scale from 20GB to 10TB in storage, 2-64 CPUs, 4-488GB RAM
• Transparent pricing, pay-as-you-go model with the flexibility to cancel at any time
• Free Community Support
• No data transfer costs between TimescaleDB and other services within the same cloud and region (eg with other AWS services within the same AWS region)
• Plans starting as low as $2.64/day
• Free $300 in trial credits when you sign up (no credit card required, available for a limited time)
---
# International Conference on Technology and Instrumentation in Particle Physics 2017 (TIPP2017)
21-26 May 2017
Beijing International Convention Center
Asia/Shanghai timezone
# Contribution oral
Beijing International Convention Center - Room 305A
Neutrino Detectors
# CUPID-0: a cryogenic calorimeter with particle identification for double beta decay search.
## Speakers
• Laura CARDANI
## Content
With their excellent energy resolution, efficiency, and intrinsic radio-purity, cryogenic calorimeters are well suited to the search for neutrino-less double beta decay (0nDBD). The sensitivity of these devices could be further increased by discriminating the dominant alpha background from the expected beta-like signal. The CUPID-0 collaboration aims at demonstrating that measuring the scintillation light produced by the absorber crystals allows for particle identification and, thus, for a complete rejection of the alpha background. The CUPID-0 detector, assembled in 2016 and now in commissioning, consists of 26 Zn$^{82}$Se scintillating calorimeters containing about 2x10$^{25}$ 0nDBD emitters. In this contribution we present the preliminary results obtained with the detector and the perspectives for a next-generation project.
## Summary
Neutrinoless double beta decay (0nDBD) is a hypothesized nuclear transition that violates the conservation of total lepton number. Its prized observation would have important implications for explaining the matter/anti-matter asymmetry, and it would demonstrate that neutrinos have a Majorana mass component.
The CUORE collaboration is now completing the commissioning of a ton-scale detector based on cryogenic calorimeters, which is expected to become soon one of the most sensitive detectors searching for 0nDBD.
Next-generation projects aim at increasing the sensitivity to 0nDBD by at least an order of magnitude with respect to CUORE. The sensitivity of that experiment is limited by an intrinsic background due to alpha particles produced by contaminations of the materials that constitute the detector itself.
We present an upgrade of the calorimetric technique, based on the simultaneous read-out of heat and scintillation light, that allows particle identification, disentangling electrons (the possible signal) from the dominant alpha background. We assembled a first medium-scale prototype of this technology, which is now being commissioned in the underground Laboratori Nazionali del Gran Sasso (Italy). Given the high number of 0nDBD emitters and the low expected background, this prototype also has interesting physics potential in the search for 0nDBD.
In this contribution we describe the detector, we present our preliminary results, and we discuss the perspectives in view of a next generation experiment.
|
{}
|
# In the figure, OM ⊥ AB and AM = 6 cm where ‘O’ is the centre of the circle, then AB =
In the figure, OM ⊥ AB and AM = 6 cm where ‘O’ is the centre of the circle, then AB =
A) 6 cm
B) 12 cm
C) 3 cm
D) 6 √6 cm
Correct option is (B) 12 cm
We know that perpendicular from centre of the circle on any chord bisects the chord.
Since, OM $\bot$ AB
$\therefore$ OM bisects chord AB
$\Rightarrow$ M is mid-point of chord AB
$\Rightarrow$ AB = 2AM
$=2\times6$ $(\because$ AM = 6 cm)
= 12 cm
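The bisection argument can be checked with coordinates. A minimal sketch (the radius 10 and distance OM = 8 are illustrative choices, not given in the problem; they are picked only so that AM = √(10² − 8²) = 6 cm as in the question):

```python
import math

# Centre O at the origin, chord AB horizontal, foot of perpendicular M on AB.
r, d = 10.0, 8.0                       # radius and distance OM (assumed values)
half = math.sqrt(r**2 - d**2)          # = AM = 6 exactly for these choices
A, B, M = (-half, d), (half, d), (0.0, d)

dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])

print(dist(A, M))                      # 6.0  (= AM)
print(dist(A, B))                      # 12.0 (= AB = 2 * AM)
```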
|
{}
|
Outlook: Synovus Financial Corp. Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D Liquidation Preference \$25.00 per Share is assigned a short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy : Wait until speculative trend diminishes
Time series to forecast n: 19 Jan 2023 for (n+3 month)
Methodology : Modular Neural Network (Social Media Sentiment Analysis)
## Abstract
Synovus Financial Corp. Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D Liquidation Preference \$25.00 per Share prediction model is evaluated with Modular Neural Network (Social Media Sentiment Analysis) and Multiple Regression [1,2,3,4] and it is concluded that the SNV^D stock is predictable in the short/long term. According to price forecasts for the (n+3 month) period, the dominant strategy among neural networks is: Wait until speculative trend diminishes
## Key Points
1. What statistical methods are used to analyze data?
2. Can machine learning predict?
3. Can we predict stock market using machine learning?
## SNV^D Target Price Prediction Modeling Methodology
We consider the Synovus Financial Corp. Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D Liquidation Preference \$25.00 per Share decision process with Modular Neural Network (Social Media Sentiment Analysis), where S is the set of discrete states, A is the set of discrete actions of SNV^D stock holders, P : S × A × S → R is the transition probability distribution, R : S × A → R is the reward function, and γ ∈ [0, 1] is a discount factor for expectation. [1,2,3,4]
F(Multiple Regression) [5,6,7] = $\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{an} \\ \vdots & & & \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots & & & \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots & & & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix} \times$ R(Modular Neural Network (Social Media Sentiment Analysis)) $\times$ S(n): → (n+3 month), $\vec{S}=(s_{1},s_{2},s_{3})$
n:Time series to forecast
p:Price signals of SNV^D stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information on how our model works, we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
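The linked article is not reproduced here, and the description above is too vague to implement as stated, so nothing below is *the* AC Investment method. Purely as an illustrative sketch of the two named ingredients — a social-media sentiment feature feeding a multiple regression — with entirely synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 days of returns plus a daily sentiment score in (-1, 1).
# Both series are synthetic stand-ins for real price and sentiment inputs.
n = 200
sentiment = np.tanh(rng.normal(size=n))
returns = 0.02 * sentiment + 0.01 * rng.normal(size=n)

# Multiple regression: predict tomorrow's return from today's return and
# today's sentiment (the sentiment "module" is a hand-crafted feature here,
# not a trained sub-network).
X = np.column_stack([np.ones(n - 1), returns[:-1], sentiment[:-1]])
y = returns[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

forecast = X[-1] @ beta    # fitted prediction for the most recent day
print(beta.shape, float(forecast))
```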
## SNV^D Stock Forecast (Buy or Sell) for (n+3 month)
Sample Set: Neural Network
Stock/Index: SNV^D Synovus Financial Corp. Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D Liquidation Preference \$25.00 per Share
Time series to forecast n: 19 Jan 2023 for (n+3 month)
According to price forecasts for the (n+3 month) period, the dominant strategy among neural networks is: Wait until speculative trend diminishes
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
## IFRS Reconciliation Adjustments for Synovus Financial Corp. Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D Liquidation Preference \$25.00 per Share
1. For a discontinued hedging relationship, when the interest rate benchmark on which the hedged future cash flows had been based is changed as required by interest rate benchmark reform, for the purpose of applying paragraph 6.5.12 in order to determine whether the hedged future cash flows are expected to occur, the amount accumulated in the cash flow hedge reserve for that hedging relationship shall be deemed to be based on the alternative benchmark rate on which the hedged future cash flows will be based.
2. Annual Improvements to IFRS Standards 2018–2020, issued in May 2020, added paragraphs 7.2.35 and B3.3.6A and amended paragraph B3.3.6. An entity shall apply that amendment for annual reporting periods beginning on or after 1 January 2022. Earlier application is permitted. If an entity applies the amendment for an earlier period, it shall disclose that fact.
3. The accounting for the forward element of forward contracts in accordance with paragraph 6.5.16 applies only to the extent that the forward element relates to the hedged item (aligned forward element). The forward element of a forward contract relates to the hedged item if the critical terms of the forward contract (such as the nominal amount, life and underlying) are aligned with the hedged item. Hence, if the critical terms of the forward contract and the hedged item are not fully aligned, an entity shall determine the aligned forward element, ie how much of the forward element included in the forward contract (actual forward element) relates to the hedged item (and therefore should be treated in accordance with paragraph 6.5.16). An entity determines the aligned forward element using the valuation of the forward contract that would have critical terms that perfectly match the hedged item.
4. An example of a fair value hedge is a hedge of exposure to changes in the fair value of a fixed-rate debt instrument arising from changes in interest rates. Such a hedge could be entered into by the issuer or by the holder.
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
## Conclusions
Synovus Financial Corp. Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D Liquidation Preference \$25.00 per Share is assigned a short-term Ba1 & long-term Ba1 estimated rating. Its prediction model is evaluated with Modular Neural Network (Social Media Sentiment Analysis) and Multiple Regression [1,2,3,4] and it is concluded that the SNV^D stock is predictable in the short/long term. According to price forecasts for the (n+3 month) period, the dominant strategy among neural networks is: Wait until speculative trend diminishes
### SNV^D Synovus Financial Corp. Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D Liquidation Preference \$25.00 per Share Financial Analysis*
| Rating | Short-Term | Long-Term Senior |
| --- | --- | --- |
| Outlook* | Ba1 | Ba1 |
| Income Statement | C | C |
| Balance Sheet | Baa2 | Ba3 |
| Leverage Ratios | Caa2 | Baa2 |
| Cash Flow | Ba3 | C |
| Rates of Return and Profitability | Baa2 | C |
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
### Prediction Confidence Score
Trust metric by Neural Network: 84 out of 100 with 638 signals.
## References
1. F. A. Oliehoek, M. T. J. Spaan, and N. A. Vlassis. Optimal and approximate q-value functions for decentralized pomdps. J. Artif. Intell. Res. (JAIR), 32:289–353, 2008
2. Bottomley, P. and R. Fildes (1998), "The role of prices in models of innovation diffusion," Journal of Forecasting, 17, 539–555.
3. Abadir, K. M., K. Hadri and E. Tzavalis (1999), "The influence of VAR dimensions on estimator biases," Econometrica, 67, 163–181.
4. F. A. Oliehoek and C. Amato. A Concise Introduction to Decentralized POMDPs. SpringerBriefs in Intelligent Systems. Springer, 2016
5. M. Benaim, J. Hofbauer, and S. Sorin. Stochastic approximations and differential inclusions, Part II: Appli- cations. Mathematics of Operations Research, 31(4):673–695, 2006
6. Barkan O. 2016. Bayesian neural word embedding. arXiv:1603.06571 [math.ST]
7. Athey S, Bayati M, Imbens G, Zhaonan Q. 2019. Ensemble methods for causal effects in panel data settings. NBER Work. Pap. 25675
Frequently Asked Questions
Q: What is the prediction methodology for SNV^D stock?
A: SNV^D stock prediction methodology: We evaluate the prediction models Modular Neural Network (Social Media Sentiment Analysis) and Multiple Regression.
Q: Is SNV^D stock a buy or sell?
A: The dominant strategy among neural networks for SNV^D stock is to Wait until speculative trend diminishes.
Q: Is Synovus Financial Corp. Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D Liquidation Preference \$25.00 per Share stock a good investment?
A: The consensus rating for Synovus Financial Corp. Fixed-to-Floating Rate Non-Cumulative Perpetual Preferred Stock Series D Liquidation Preference \$25.00 per Share is Wait until speculative trend diminishes, and it is assigned a short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of SNV^D stock?
A: The consensus rating for SNV^D is Wait until speculative trend diminishes.
Q: What is the prediction period for SNV^D stock?
A: The prediction period for SNV^D is (n+3 month)
|
{}
|
# Homework Help: Taylor Series Problem!
1. Apr 23, 2012
### jsewell94
1. The problem statement, all variables and given/known data
The first three terms of the Taylor series centered about 1 for $ln(x)$ are given by:
$\frac{x^{3}}{3}$ - $\frac{3x^{2}}{2}$ + $3x$ - $\frac{11}{6}$
and that
$\int{ln(x)dx}$ = $xlnx - x + c$
Show that an approximation of $ln(x)$ is given by:
$\frac{x^3}{12}$ - $\frac{x^2}{2}$ + $\frac{3x}{2}$ - $\frac{5}{6}$ - $\frac{1}{4x}$
2. The attempt at a solution
I have tried this problem a few times, but it is becoming clear that I am missing some crucial step/idea. Basically, what I have tried is setting lnx equal to the Taylor Series, integrating both sides, and solving for lnx. However, when I do this, I manage to get all of the necessary terms EXCEPT for the 1/4x. Where does that come from, exactly? If someone could help, that'd be awesome! :D
Thanks!
2. Apr 23, 2012
### micromass
I think you likely forgot the +C. C is a constant and should equal some number. What do you think that number is?
3. Apr 23, 2012
### jsewell94
Does c = 1 because the center is at 1?
4. Apr 23, 2012
### jsewell94
Even if that is the case, I'm not sure how I would get 1/4x from that.
5. Apr 23, 2012
### micromass
No.
What do you get after you substitute ln(x) with the polynomial and integrate it??
6. Apr 23, 2012
### jsewell94
Please don't judge my slowness :( I'm really not bad at math. I just haven't done any of my homework, lol.
7. Apr 23, 2012
### jsewell94
After you integrate the polynomial, you get:
$\frac{x^4}{12}-\frac{x^3}{2}+3x^2-\frac{11x}{6} + c$
Which is what I did, and then set it equal to $xln(x)-x+C$
But every time I do that, I get the wrong thing :( (meaning, I don't get the 1/4x)
I feel like I'm misunderstanding the nudges that you are giving me, lol.
8. Apr 23, 2012
### micromass
OK, that's good (although the middle term should be $\frac{3x^2}{2}$)
What you should get is
$$\frac{x^4}{12}-\frac{x^3}{2}+\frac{3x^2}{2}-\frac{11x}{6} + c = x ln(x)-x+C$$
Yes, the constant of integration c and the other constant C are distinct in general, so you can't eliminate them!! This is likely your mistake.
The above is equivalent to:
$$\frac{x^4}{12}-\frac{x^3}{2}+\frac{3x^2}{2}-\frac{11x}{6} = x ln(x)-x+(C-c)$$
Can you figure out what number C-c is?? (we're not interested in the specific values of C and c here, we just want a number instead of constants).
9. Apr 23, 2012
### jsewell94
That's what I actually got, I just mistyped it :/
I know :D That's why I denoted one as c and the other as C.
This is where I am confused..Like, I'm not trying to sound annoyingly difficult, but I honestly have no idea what to do with the C-c. :( I understand that it is just some number, but I have absolutely no clue what that number is or how to get it.
My confidence in my math skills is quickly plummeting :(
10. Apr 23, 2012
### jsewell94
I mean, I assume it is 1/4, because that is what I am trying to prove. But what is the reasoning behind that?
11. Apr 23, 2012
### micromass
Maybe you can substitute in some value for x and see what you get??
For example, if I want to determine a constant C such that
$$\sin(x)=x^2+x+C$$
Then I can substitute in x=0 and get
$0=C$.
Can you do something like that to determine C-c?
12. Apr 23, 2012
### jsewell94
Wow..Oh my freaking gosh, I think I am going to go into a corner and cry. I am so stupid :(
Thanks for the help :( I got it now :(
13. Apr 23, 2012
### micromass
Don't worry about it. Yes, it is a trivial problem, but it's one of these things you had to see before knowing you could do that.
Instead of feeling depressed, you should feel happy because you found a new technique!! I bet that next time (on a test perhaps), you won't forget how to do this!
14. Apr 23, 2012
### jsewell94
That's not even a "new technique." That is basic, elementary calculus that everyone should know how to do. Plugging in a given value and solving for C is something that every Calc student learns the moment they learn integration. I am in calculus 2 and didn't even consider it.
AKA, I think my feelings of idiocy are justified :P
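For anyone who wants to sanity-check the thread's conclusion ($C - c = 1/4$, found by substituting $x = 1$), a quick numerical sketch:

```python
import math

# Degree-3 Taylor polynomial of ln(x) about x = 1, as given in the thread.
taylor = lambda x: x**3/3 - 3*x**2/2 + 3*x - 11/6

# Integrating it, equating with x*ln(x) - x + C, and substituting x = 1
# fixes the combined constant C - c = 1/4.  Dividing through by x then
# gives the claimed approximation of ln(x):
approx = lambda x: x**3/12 - x**2/2 + 3*x/2 - 5/6 - 1/(4*x)

print(approx(1.0))                  # ~0, matching ln(1) = 0
print(approx(1.1), math.log(1.1))   # agree to several decimal places
```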
|
{}
|
# Triangulation of Torus
I was asked to find the simplicial homology groups of the torus $T=S^1\times{}S^1$ embedded in $R^3$. I triangulated the torus like this:
Here the $0$-simplices are $\{v_0\}$. $1$-simplices are $\{a,b,c\}$ and the $2$-simplices are $\{D_1,D_2\}$. And I found out the homology groups : $H_0(T)=\mathbb{Z}, H_1(T)=\mathbb{Z}^2,H_2(T)=\mathbb{Z}$
But my teacher said it was wrong, because the triangulation is not correct. According to her it should be:
I don't understand what is wrong with my triangulation of the torus $T$. Can someone please clarify this to me? Thanks! (Excuse the crude pictures!)
• Your triangulation is actually a pseudo-triangulation. Namely, you identify vertices of the same triangle. So you're not dealing with a simplicial complex at the end, but with a CW-complex (which is fine, by the way, if you know about cellular homology).
– Pece
Oct 1 '14 at 5:19
• @Pece So I cannot have a $1$-simplex attached to the same $0$-simplex? i.e. "loops" are not allowed in simplicial triangulation? Oct 1 '14 at 6:05
• It depends on what you call a triangulation. Judging by the answer of your teacher, it's probably a simplicial complex, in which case all the faces of a simplex are required to be different (simply because if a simplex is embedded in a real vector space, all its faces are different). Oct 1 '14 at 6:27
In a simplicial complex, every simplex is required to be embedded, but none of your 1-simplices $a$, $b$, and $c$ are embedded, because each has its two endpoints attached to the same $0$-simplex $v_0$.
Also, the intersection of any pair of simplices is required to be a simplex, but your 2-simplices $D_1,D_2$ intersect in $a \cup b \cup c$.
• @ChesterX: Each $k$-simplex in a simplicial complex $X$ may be regarded as the image of a continuous map $\Delta^k \to X$, where $\Delta^k$ denotes the "standard" $k$-simplex in $\mathbb{R}^{k+1}$, and this continuous map is required to be an injection, i.e. an embedding. Notice that in the concept of a "pseudo triangulation" mentioned in the comment of Pece, the requirement of injection is weakened. Oct 1 '14 at 16:52
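As a cross-check, the asker's groups are in fact the correct ones for the torus: treating the pictured decomposition as a CW complex (one 0-cell $v_0$, three loops $a, b, c$, and two 2-cells, each glued along $a + b - c$ — an assumed orientation consistent with the standard square-with-diagonal picture), cellular homology gives $H_0=\mathbb{Z}$, $H_1=\mathbb{Z}^2$, $H_2=\mathbb{Z}$. A minimal numpy sketch of the Betti numbers:

```python
import numpy as np

# Cellular chain complex of the torus:
#   C2 = Z^2 (D1, D2),  C1 = Z^3 (a, b, c),  C0 = Z (v0).
# Every 1-cell is a loop at v0, so d1 = 0; both 2-cells are attached
# along a + b - c (assumed orientations), so d2 has identical columns.
d2 = np.array([[ 1,  1],
               [ 1,  1],
               [-1, -1]])            # columns = boundaries of D1, D2
d1 = np.zeros((1, 3))                # boundaries of a, b, c at v0

r2 = np.linalg.matrix_rank(d2)       # = 1
r1 = np.linalg.matrix_rank(d1)       # = 0

b0 = 1 - r1                          # dim ker d0 - rank d1
b1 = (3 - r1) - r2                   # dim ker d1 - rank d2
b2 = 2 - r2                          # dim ker d2
print(b0, b1, b2)                    # 1 2 1
```

Since the image of $\partial_2$ is generated by the primitive vector $(1,1,-1)$, there is no torsion, so the Betti numbers determine the groups.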
|
{}
|
# Efficient estimation and local identification in latent class analysis
## Author
Listed:
• Richard McHugh
## Abstract
No abstract is available for this item.
## Suggested Citation
• Richard McHugh, 1956. "Efficient estimation and local identification in latent class analysis," Psychometrika, Springer;The Psychometric Society, vol. 21(4), pages 331-347, December.
• Handle: RePEc:spr:psycho:v:21:y:1956:i:4:p:331-347
DOI: 10.1007/BF02296300
File URL: http://hdl.handle.net/10.1007/BF02296300
As the access to this document is restricted, you may want to search for a different version of it.
## References listed on IDEAS
1. T. Anderson, 1954. "On estimation of parameters in latent structure analysis," Psychometrika, Springer;The Psychometric Society, vol. 19(1), pages 1-10, March.
Full references (including those not matched with items on IDEAS)
## Citations
Cited by:
1. Frank Rijmen & Paul Boeck & Han Maas, 2005. "An IRT Model with a Parameter-Driven Process for Change," Psychometrika, Springer;The Psychometric Society, vol. 70(4), pages 651-669, December.
2. Perez-Mayo, Jesus, 2003. "Measuring deprivation in Spain," IRISS Working Paper Series 2003-09, IRISS at CEPS/INSTEAD.
3. F. Bartolucci & A. Farcomeni & F. Pennoni, 2014. "Latent Markov models: a review of a general framework for the analysis of longitudinal data with covariates," TEST: An Official Journal of the Spanish Society of Statistics and Operations Research, Springer;Sociedad de Estadística e Investigación Operativa, vol. 23(3), pages 433-465, September.
4. Henk Kelderman, 1989. "Item bias detection using loglinear irt," Psychometrika, Springer;The Psychometric Society, vol. 54(4), pages 681-697, September.
5. Anton K. Formann, 2003. "Latent Class Model Diagnosis from a Frequentist Point of View," Biometrics, The International Biometric Society, vol. 59(1), pages 189-196, March.
6. Paul Westers & Henk Kelderman, 1992. "Examining differential item functioning due to item difficulty and alternative attractiveness," Psychometrika, Springer;The Psychometric Society, vol. 57(1), pages 107-118, March.
7. Gongjun Xu & Stephanie Zhang, 2016. "Identifiability of Diagnostic Classification Models," Psychometrika, Springer;The Psychometric Society, vol. 81(3), pages 625-649, September.
8. Francesco Bartolucci & Fulvia Pennoni, 2007. "A Class of Latent Markov Models for Capture–Recapture Data Allowing for Time, Heterogeneity, and Behavior Effects," Biometrics, The International Biometric Society, vol. 63(2), pages 568-578, June.
9. K. Humphreys & D. Titterington, 2003. "Variational approximations for categorical causal modeling with latent variables," Psychometrika, Springer;The Psychometric Society, vol. 68(3), pages 391-412, September.
10. Dereje W. Gudicha & Fetene B. Tekle & Jeroen K. Vermunt, 2016. "Power and Sample Size Computation for Wald Tests in Latent Class Models," Journal of Classification, Springer;The Classification Society, vol. 33(1), pages 30-51, April.
11. A. Felipe & P. Miranda & L. Pardo, 2015. "Minimum $\phi$-Divergence Estimation in Constrained Latent Class Models for Binary Data," Psychometrika, Springer;The Psychometric Society, vol. 80(4), pages 1020-1042, December.
12. Guan-Hua Huang & Karen Bandeen-Roche, 2004. "Building an identifiable latent class model with covariate effects on underlying and measured variables," Psychometrika, Springer;The Psychometric Society, vol. 69(1), pages 5-32, March.
13. Anton Formann & Ivo Ponocny, 2002. "Latent change classes in dichotomous data," Psychometrika, Springer;The Psychometric Society, vol. 67(3), pages 437-457, September.
14. Robert Mislevy & Mark Wilson, 1996. "Marginal maximum likelihood estimation for a psychometric model of discontinuous development," Psychometrika, Springer;The Psychometric Society, vol. 61(1), pages 41-71, March.
|
{}
|
1. May 6, 2008
### shigg927
1. The problem statement, all variables and given/known data
Thorium (with half-life T1/2 = 1.913 yr. and atomic mass 228.028715 u) undergoes alpha decay and produces radium (atomic mass 224.020186 u) as a daughter nucleus. (Assume the alpha particle has atomic mass 4.002603 u.)
What percent of thorium is left after 266 days?
2. Relevant equations
X --> Y + He
N=No*(1/2)^n
n= t/T(half)
T(half) = 0.693/$\lambda$
3. The attempt at a solution
I found that lambda=4.14x10^-5 hrs^-1 (the problem asks for it in hours, dumb, I know.)
I then found the number of half-lives to be 266 days, or 6384 hours divided by 16757.88 hours, to be .381 half-lives. I multiplied this by Thorium's atomic mass to get 36% but this keeps turning up incorrect for my online homework.
2. May 7, 2008
### kamerling
Why don't you just apply N=No*(1/2)^n ?
Since 266 days is shorter than the half-life, more than 50% should be left.
3. May 7, 2008
### shigg927
Ahhhhh for some reason I thought I needed to know the number of nuclei, did NOT know I could just use the atomic mass. I got it, thank you!
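kamerling's hint works out numerically as follows — a quick sketch (using a 365-day year, matching the numbers in post #1):

```python
import math

half_life_hr = 1.913 * 365 * 24      # Th-228 half-life in hours, ~16758
lam = math.log(2) / half_life_hr     # decay constant, ~4.14e-5 per hour

t_hr = 266 * 24                      # elapsed time: 266 days = 6384 hours
n = t_hr / half_life_hr              # ~0.381 half-lives, as in post #1

percent_left = 100 * 0.5 ** n        # N/N0 * 100
print(round(percent_left, 1))        # ~76.8 % of the thorium remains
```

As expected, more than 50% is left, since less than one half-life has elapsed.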
|
{}
|
nLab SVect
The category $S Vect$ of super vector spaces is the symmetric monoidal category which, as a monoidal category, is the ordinary monoidal category of $\mathbb{Z}_2$-graded vector spaces, for which
$(V \otimes W)^{ev} := V^{ev}\otimes W^{ev} \oplus V^{odd} \otimes W^{odd}$
and
$(V \otimes W)^{odd} := V^{ev}\otimes W^{odd} \oplus V^{odd} \otimes W^{ev}$
but equipped with the unique non-trivial symmetric monoidal structure
$V \otimes W \stackrel{\sigma_{V,W}}{\to} W \otimes V$
that is given on homogeneously graded elements $v,w$ of degree $|v|, |w| \in \mathbb{Z}_2$ as
$v \otimes w \mapsto (-1)^{|v| |w|} w \otimes v \,.$
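The sign rule can be made concrete in a few lines. A minimal sketch (an ad hoc encoding, not any standard library: a homogeneous simple tensor $v \otimes w$ is modeled as a triple of its coefficient and the two degrees $|v|, |w|$):

```python
def braid(t):
    """sigma_{V,W}: v (x) w |-> (-1)^{|v||w|} w (x) v on a simple tensor,
    encoded as (coefficient, |v|, |w|) with degrees in {0, 1}."""
    c, dv, dw = t
    return ((-1) ** (dv * dw) * c, dw, dv)

# Only odd (x) odd picks up the sign (-1)^{|v||w|} = -1:
print(braid((1, 1, 1)))   # (-1, 1, 1)
print(braid((1, 0, 1)))   # (1, 1, 0)

# sigma_{W,V} o sigma_{V,W} = id, so this is indeed a *symmetric* structure:
assert braid(braid((5, 1, 1))) == (5, 1, 1)
```

The degree of the tensor itself is $(|v| + |w|) \bmod 2$, matching the even/odd decomposition of $V \otimes W$ displayed above.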
Revised on June 7, 2015 04:31:13 by Harrison Smith? (71.229.175.86)
|
{}
|
# A possible proof of the Borsuk Ulam theorem without “Homology-Cohomology”
Assume that $n>1$.
The configuration space of $S^n$ is defined as follows: $$M_n=\{(x,y)\in S^n\times S^n\mid x \neq y\}$$
We have two questions:
1.Is there a continuous function $f:M_n \to S^{n-1}$ with $f(y,x)=-f(x,y)$, for all $x,y \in$S^{n}$? 2.Is there a continuos function$h: M_n \to \mathbb{R}^n$such that$h(x,y)=-h(y,x) $and$h(x,-x)\neq 0$for all$x,y \in S^n$If the answer to either of these two questions is "affirmative ", then we can provide an alternative proof for the Borsuk Ulam theorem, inductively. Because an equivalent formulation of the Borsuk Ulam theorem is that: There is no an odd continuous function$g:S^{n+1}\to S^n$Assuming that the answer to either of the above two questions is affirmative, we give a proof for the above equivalent formulation of the Borsuk Ulam theorem. The proof is as follows: Assume that$g:S^{n+1}\to S^n$is an odd continuous function. then$f(g(x),g(-x))$( or$h(g(x),g(-x))$) is an odd continuous function from$S^{n+1}$to$S^{n-1}$( or to$\mathbb{R}^n \setminus \{0\}$). This obviously gives a contradiction by induction. Because this situation leads to existence of an odd continuous function from$S^n$to$S^{n-1}$. Now we apply the induction argument. • @NoahSchweber Yes We want$n>1$as i wrote in the first line of the question. – Ali Taghavi Aug 14 '17 at 6:28 ## 1 Answer Suppose such a function$M_n \to S^{n-1}$existed. Consider the composition$S^n \to M_n \to S^{n-1}$, where the first map sends$x$to the pair$(x,-x)$. That composition is an odd continuous function$S^n \to S^{n-1}\$, hence can not exist by Borsuk-Ulam.
• @ThomasRot Yes this was the aim of the question, provided the answer would be affirmative. But, as the answer shows, it is not the case. – Ali Taghavi Aug 14 '17 at 8:11
• Sorry i need to stop commenting on my phone – Thomas Rot Aug 14 '17 at 10:14
|
{}
|
# 5.3: Activities and Answer Keys
Difficulty Level: At Grade | Created by: CK-12
## Activity 4-1: Cell Division - Double or Nothing
### PLAN
Summary Students simulate the process of mitosis using pipe cleaners to represent chromosomes. They compare the cell before and after division to learn that no genetic information is lost during cell division, and that each new cell has the same number of chromosomes.
Objectives
Students:
✓ simulate each stage of mitosis using pipe cleaners to represent chromosomes.
✓ identify and explain the sequence of events in mitosis.
✓ determine that no genetic information is lost during cell division and each new cell has the same number of chromosomes.
Student Materials
• Activity Report
• Crayons or colored pens or pencils (same colors as pipe cleaners if possible)
• 2 large paper plates
• 8 pipe cleaners (2 long of color A, 2 long of color B, 2 short of color A, and 2 short of color B)
Teacher Materials
• Activity Report Answer Key
• Additional student supplies
Collect student materials. Prepare Enrichment 4-1 materials if you plan on extending this activity.
Estimated Time One class period
Interdisciplinary Connection
Art Students can illustrate the processes of mitosis on a poster or in a collage.
Prerequisites and Background
Students need to be familiar with the parts of the cell and the process of mitotic cell division.
• Check student knowledge after each simulated stage of mitosis.
• The first teams to demonstrate the correct sequence of mitotic stages can act as “Teacher Assistants” to help other teams.
### IMPLEMENT
Steps 1-8 Have students work in pairs. Give one set of student materials to each pair. However, each student should complete his or her own Activity Report.
Monitor student progress to check students' knowledge after each simulated stage of mitosis.
### ASSESS
Use the completion of the activity and written responses on the Activity Report to assess if students can
✓ simulate each stage of mitosis.
✓ identify the sequence of events in mitosis.
✓ determine that no genetic information is lost during cell division and each new cell has the same number of chromosomes.
## Activity 4-1: Cell Division - Double or Nothing Activity Report Answer Key
• Sample answers to these questions will be provided upon request. Please send an email to teachers-requests@ck12.org to request sample answers.
1. Compare the chromosome number of the parent cell with that of each of the two daughter cells.
2. Compare the genetic information of the parent cell with that of each of the two daughter cells with single chromosomes.
3. What is the importance of mitosis to the organism?
4. You have 46 chromosomes in each of your somatic cells. If you cut your arm, how many chromosomes would be in each newly formed skin cell?
5. Pretend that you are a double chromosome in the nucleus of a finger cell. Describe in a paragraph your experience going through cell division to become a new finger cell. Draw diagrams as you did on your Activity Report.
A suggested response will be provided upon request. Please send an email to teachers-requests@ck12.org.
Which parent determines the sex of the child?
## Activity 4-2: Meiosis and Fertilization
### PLAN
Summary Students model each stage of meiosis using pipe cleaners to represent chromosomes. They compare the chromosomes of the parent cell with gametes to learn that the number of chromosomes is reduced by half. They use their chromosome models to simulate fertilization.
Objectives
Students:
✓ simulate the process of meiosis using pipe cleaners to represent chromosomes.
✓ observe and record the movements and positions of chromosomes during meiosis.
✓ recognize that the number of chromosomes is reduced by half.
✓ compare and contrast the process of mitosis with the process of meiosis.
Student Materials
• Data Sheet
• Activity Report
• Crayons or colored pens or pencils (same colors as pipe cleaners if possible)
• 4 large paper plates
• Pipe cleaner set (4 double pairs):
• 2 long pairs (each member of a pair is made up of 2 long pipe cleaners, for a total of 4 pipe cleaners per pair): 1 pair of color A, 1 pair of color B
• 2 short pairs (each member of a pair is made up of 2 short pipe cleaners, for a total of 4 pipe cleaners per pair): 1 pair of color A, 1 pair of color B
Teacher Materials
• Activity Report Answer Key
• Extra pipe cleaners
Gather student materials. Arrange for video equipment if you plan to tape the activity. Prepare the pipe cleaner sets for each team.
Estimated Time One class period
Interdisciplinary Connection
Art Summarize the process of meiosis on a poster or a collage.
Prerequisites and Background Information
Students should have knowledge of the parts of the cell, especially the nucleus. Knowledge of mitotic and meiotic cell division is necessary.
Nondisjunction results when members of a chromosome pair fail to separate in normal meiosis and move into the same cell. Nondisjunction is more common in the production of gametes in females than in males.
The process of crossing-over is addressed in Enrichment 4-2
### IMPLEMENT
Have students work in pairs. Give one set of student materials to each pair.
Step 1 Demonstrate the first step of procedure before students continue with Step 2. Students will discuss the questions from the Activity Report in pairs, but each student should turn in his/her own Activity Report.
Steps 2-11 Consider videotaping student simulations of the process of meiosis. Show these videotapes to the class at a later time, to other classes, at back-to-school nights, or to students who missed the class.
### ASSESS
Use the completion of the activity and the written answers on the Activity Report to assess if students can
✓ simulate and describe the movements and positions of chromosomes during meiosis.
✓ recognize that the number of chromosomes is reduced by half during meiosis.
✓ explain the significance of producing haploid gamete cells.
✓ compare and contrast the process of meiosis with the process of mitosis.
• Check student knowledge of meiosis, especially after completing Steps 5 and 7.
• You may want to do procedure B (Fertilization) the following day.
## Activity 4-2: Meiosis and Fertilization Activity Report Answer Key
• Sample answers to these questions will be provided upon request. Please send an email to teachers-requests@ck12.org to request sample answers.
1. Compare the chromosome number of the parent cell with that of each of the four gamete cells.
2. Compare the genetic information of the parent cell with that of each of the four gamete cells.
3. How are the chromosomes of the offspring (Data Sheet #2) similar to the chromosomes of the parents (Data Sheet #1)?
4. Given your response to the question above, do you think the offspring looks different from its parents? Explain.
5. What is the importance of meiosis and fertilization in sexual reproduction?
6. You have 46 chromosomes in each of your body cells. How many chromosomes are in each gamete cell? Are the gamete cells produced by mitosis or meiosis?
7. Compare and contrast the process of mitosis with the process of meiosis including a) chromosome number, b) degree of genetic variation, c) purpose, and d) where it occurs.
|  | mitosis | meiosis |
| --- | --- | --- |
| a. chromosome number |  |  |
| b. degree of genetic variation |  |  |
| c. purpose |  |  |
| d. where it occurs |  |  |
## Activity 4-2: Data Sheets 1 and 2 Answer Key Meiosis and Fertilization
A suggested response will be provided upon request. Please send an email to teachers-requests@ck12.org.
What is the relationship between DNA replication and the fact that chromosomes are doubled when they begin meiosis?
What Do You Think?
In the models you made of the processes of meiosis and fertilization, you used pipe cleaners in place of chromosomes. You followed only two chromosomes, but imagine how the chromosomes in humans sort and recombine when there are 46 chromosomes. In what other ways were your models of meiosis and fertilization different from the real thing?
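The scale of that sorting can be made concrete with a back-of-the-envelope calculation. Counting only the random assortment of whole chromosomes (ignoring crossing over), each of the 23 homologous pairs independently sends one member into a gamete. A short Python sketch (our illustration, not part of the activity):

```python
# Independent assortment alone: each of the 23 homologous pairs
# contributes one of its two members to a gamete.
PAIRS = 23
gametes_per_parent = 2 ** PAIRS                    # distinct gametes one parent can make
# Fertilization combines one gamete from each parent:
offspring_combinations = gametes_per_parent ** 2   # 2^46 possible chromosome combinations

print(gametes_per_parent)        # 8388608
print(offspring_combinations)    # 70368744177664
```

Even before crossing over adds further variation, over 8 million genetically distinct gametes are possible from each parent.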
Make a list of the characteristics that make you, you. They can be both characteristics you see and characteristics in personality or choice of activities. Now separate those characteristics into two groups: those you think you cannot control and are part of your genetic self (nature), and those characteristics you have developed and you think you can change (nurture). How much of who you are is truly genetic, and how much of who you are is a product of how you were raised?
• Sample answers to these questions will be provided upon request. Please send an email to teachers-requests@ck12.org to request sample answers.
1. What is the difference between mitosis and meiosis?
2. Describe the process of meiosis.
3. Why is meiosis important to the continuity of a species?
4. Why are models important tools for geneticists?
## Enrichment 4-2: Crossing Over in Meiosis Activity Report Answer Key
• Sample answers to these questions will be provided upon request. Please send an email to teachers-requests@ck12.org to request sample answers.
1. Why is modeling clay better than pipe cleaners for illustrating crossing over? What other materials can you suggest?
2. What is crossing over?
3. What is the value of crossing over to the long-term survival of a species?
## Activity 4-1: Report Cell Division - Double or Nothing (Student Reproducible)
1. Compare the chromosome number of the parent cell with that of each of the two daughter cells.
2. Compare the genetic information of the parent cell with that of each of the two daughter cells with single chromosomes.
3. What is the importance of mitosis to the organism?
4. You have 46 chromosomes in each of your somatic cells. If you cut your arm, how many chromosomes would be in each newly formed skin cell?
5. Pretend that you are a double chromosome in the nucleus of a finger cell. Describe in a paragraph your experience going through cell division to become a new finger cell. Create diagrams as you did on your Activity Report.
## Activity 4-2 Report: Meiosis and Fertilization (Student Reproducible)
1. Compare the chromosome number of the parent cell with that of each of the four gamete cells.
2. Compare the genetic information of the parent cell with that of each of the four gamete cells.
3. How are the chromosomes of the offspring (Data Sheet #2) similar to the chromosomes of the parents (Data Sheet #1)?
4. Given your response to the question above, do you think the offspring looks different from its parents? Explain.
5. What is the importance of meiosis and fertilization in sexual reproduction?
6. You have 46 chromosomes in each of your body cells. How many chromosomes are in each gamete cell? Are the gamete cells produced by mitosis or meiosis?
7. Compare and contrast the process of mitosis with the process of meiosis including a) chromosome number, b) degree of genetic variation, c) purpose, and d) where it occurs.
|  | mitosis | meiosis |
| --- | --- | --- |
| a. chromosome number |  |  |
| b. degree of genetic variation |  |  |
| c. purpose |  |  |
| d. where it occurs |  |  |
## Enrichment 4-1 Activity Guide: Chromosome Cards (Student Reproducible)
Introduction
How much do you know about mitosis? Can you demonstrate your knowledge of mitosis to a friend? Using the instructions below and the chromosome cards, work with a partner to simulate the sequence of events that occur during mitosis. Be sure that you demonstrate events before and after replication of DNA, the sorting of chromosomes, and the cell division resulting in two daughter cells.
Materials
• Resource (chromosome cards)
• Scissors
• Tape
• Large piece of butcher paper to represent the cell
Procedure
Step 1 Put your initials on the back of each card in your deck of 46 cards. This deck represents the diploid number of chromosomes.
Step 2 Place your chromosomes in numerical sequence from autosomal chromosome #1 through #22 and then the sex chromosomes. Note the characteristic differences in size and position of the centromere and banding patterns among the chromosomes. This double set represents the number of chromosomes you have in each of your body (somatic) cells.
Step 3 Work with a partner who has a different colored set of cards.
Step 4 To represent DNA replication, take sticky tape and place your partner's set of chromosomes onto each of your chromosomes, pairing each homologous pair. For example, you will have green chromosome #1 linked to yellow chromosome #1. The tape represents the centromere. That is the amount of genetic material that you have in a somatic cell just after DNA replication, but before cell division. The difference is that each chromosome is joined at the centromere so that you have two sister chromatids linked together.
Step 5 Line up your chromosomes in the center of the large sheet of butcher paper along the same plane (in single file order). Separate each of the sister chromatids that are taped together. Send one of each pair to each end of the large piece of butcher paper. You can see that each half of the butcher paper has a complete set of chromosomes.
Step 6 Cut the butcher paper in the middle. Now you have two cells, each with a complete set of 46 single chromosomes. What do you conclude about the overall outcome of mitosis in terms of chromosome content? Is each daughter cell the same?
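The card procedure above can also be sketched as a small program. This is an illustrative model only (the labels and variable names are ours, not part of the activity): it builds the 46-card deck, "replicates" each chromosome into taped-together sister chromatids, then separates them into two daughter cells.

```python
# Model each chromosome by a label: two homologs of autosomes 1-22,
# plus a pair of sex chromosomes, for 46 in the diploid parent cell.
parent = [f"{n}{copy}" for n in range(1, 23) for copy in ("a", "b")] + ["Xa", "Xb"]
assert len(parent) == 46

# DNA replication (the taping step): each chromosome becomes
# two identical sister chromatids joined at the centromere.
replicated = [(chrom, chrom) for chrom in parent]

# The separation step: one chromatid of each pair goes to each daughter cell.
daughter1 = [pair[0] for pair in replicated]
daughter2 = [pair[1] for pair in replicated]

print(len(daughter1), len(daughter2))  # 46 46
print(daughter1 == daughter2)          # True - the daughters are genetically identical
```

As in Step 6, each daughter cell ends up with a complete set of 46 single chromosomes, identical to the parent cell's.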
## Enrichment 4-2 Activity Guide: Crossing Over in Meiosis (Student Reproducible)
In this activity, you use modeling clay to simulate an event that occurs during meiosis and increases the possibilities for genetic variation in offspring.
As you complete this activity, think about why it is important for a species to produce a lot of offspring having many genetic variations.
Materials
• Modeling clay (two different colors)
• Activity Report
Procedure
Step 1 Use one color of modeling clay to make one double chromosome. Use the other color of modeling clay to make a second double chromosome. Each double chromosome should be the same length, but a different color.
Step 2 Place the two double chromosomes side-by-side. Remember that this pairing up of duplicated chromosomes is unique to meiosis and occurs early during the process. This process can occur with all of the double chromosome pairs at the beginning of meiosis. You will use only one pair of double chromosomes to represent the many found in the cell.
Step 3 Use your fingers to grasp the inner half of each double chromosome and twist until the chromosome ends break off. Next, take each broken end and join it onto the chromosome of the opposite color, as shown in the diagram below.
This process that you have just simulated occurs during meiosis and is called crossing over.
## Enrichment 4-2 Resource: Crossing Over in Meiosis (Student Reproducible)
Meiosis is a special type of cell division that results in sperm and egg cells having half the normal number of chromosomes. In humans, each gamete cell (sperm and egg) contains only 23 chromosomes.
The chromosomes of the sperm and egg combine through fertilization to create offspring that are different from the parents.
An important difference between meiotic and mitotic cell division lies in the resulting cells. In mitosis the new cells have DNA identical to that of the parent cell. Cells produced by meiosis have half the amount of parental DNA and recombine through fertilization to produce genetic combinations of greater variety than that of the parents. But then how can two parents have several biological children, each of whom looks quite different from the others? If the children received genes only from their parents, what happens genetically to make each child a unique individual?
One way each child is unique is through the pattern of dominant and recessive genes he or she receives from each parent. Let's use eye color as an example. Although several genes are thought to be involved, we will ignore the complications and note that the allele for brown eyes is dominant over the allele for blue eyes. If both parents are heterozygous for brown eyes, that is Bb, then there is a one in four chance that they could produce an offspring with blue eyes. Do the cross to see. Every time these parents produce gamete cells, half of the gametes carry the dominant brown allele (B) and half carry the recessive blue allele (b) on the chromosome that carries the trait for eye color. The expressed eye color (phenotype) and the genetic makeup for eye color of that child depend on which gamete cell from the mother combines with which gamete cell from the father. Children of these parents genetically could be BB, Bb, or bb, but only the child that is genetically bb will have blue eyes.
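The Bb x Bb cross described above can be verified by enumerating the Punnett square directly. A quick sketch (an illustration only; the variable names are ours):

```python
from itertools import product

# Each heterozygous (Bb) parent makes gametes carrying B or b with equal chance.
mother_gametes = ["B", "b"]
father_gametes = ["B", "b"]

# The Punnett square: every pairing of one gamete from each parent.
# sorted() normalizes "bB" to "Bb" so genotypes are written consistently.
offspring = ["".join(sorted(m + f)) for m, f in product(mother_gametes, father_gametes)]
print(offspring)   # ['BB', 'Bb', 'Bb', 'bb']

# Brown (B) is dominant, so only bb children have blue eyes: 1 in 4.
blue_eyed = offspring.count("bb")
print(blue_eyed / len(offspring))  # 0.25
```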
Beyond one single trait, think for a moment and imagine how chromosomes could arrange themselves to create many more different combinations of genes than are possible by the simple reassortment of whole chromosomes. Crossing over is a process in which genetic information is exchanged between the two members of a chromosome pair. When the chromosomes separate and are divided into two separate cells, the resulting gamete cell contains a unique combination of genes, different from the parent cell. Normally genes are not lost, but are switched between similar chromosomes. The new combination of genes gives rise to new variations of traits, different from those in the parents, which over long periods of time may allow the species to survive and reproduce more effectively.
## Enrichment 4-2 Activity Report: Crossing Over in Meiosis (Student Reproducible)
1. Why is modeling clay better than pipe cleaners to illustrate crossing over? What other materials can you suggest?
2. What is crossing over?
3. What is the value of crossing over to the long-term survival of a species?
# Time for action – directly using special characters
In a TeX document, we would like to use the German name of a street. It contains diacritics, so called umlauts. Let's check out how to make it work right.
1. Start a new document. Within a small parbox, write the text:
\documentclass{article}
\begin{document}
\parbox{3cm}{Meeting point: K\"onigsstra\ss e (King's Street)}
\end{document}
2. Typeset and have a first look:
3. Try babel instead! It provides shortcuts for German umlauts if you state the ngerman option. Use "u for ü, "a for ä, and "s for ß:
\usepackage[ngerman]{babel}
…
\parbox{3cm}{Meeting point: K"onigsstra"se (King's Street)}
4. The output would be the ...
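Putting steps 1 and 3 together, a complete document might look like this (a sketch; it assumes the ngerman babel module is installed, and compiles with any standard LaTeX engine):

```latex
\documentclass{article}
\usepackage[ngerman]{babel}
\begin{document}
% With the ngerman option, babel's shorthand "o produces an o-umlaut
% and "s produces the eszett, so no backslash escapes are needed:
\parbox{3cm}{Meeting point: K"onigsstra"se (King's Street)}
\end{document}
```

The shorthand form is noticeably easier to type and read than the `\"o` and `\ss` escapes of step 1.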
zbMATH — the first resource for mathematics
Coefficients of Maass forms and the Siegel zero. Appendix: An effective zero-free region, by Dorian Goldfeld, Jeffrey Hoffstein and Daniel Lieman. (English) Zbl 0814.11032
Let $$f$$ be a Maaß form, which is a newform for $$\Gamma_ 0(N)$$ with eigenvalue $$\lambda$$ and Dirichlet character $$\chi \bmod N$$, normalized such that the Petersson inner product is 1. Let $$F$$ denote the adjoint square lift of $$f$$ to $$\text{GL}(3)$$. It is known that the size of the first Fourier coefficients $$\rho(1)$$ of $$f$$ is closely related to the behavior of the $$L$$-series $$L(s,F)$$ near $$s = 1$$.
In Theorem 1 the authors show that $| \rho(1)|^ 2 \leq c \log (\lambda N + 1)$ with an effective constant $$c$$ provided that no Siegel zero occurs, i.e. $$L(s,F) \neq 0$$ in a sufficiently large neighborhood of 1. In the appendix the same estimate is derived for all $$f$$ which are not lifts from $$\text{GL}(1)$$, i.e. the $$L$$-series of $$f$$ is not equal to a Hecke $$L$$-series of a quadratic field. A weaker estimate for lifts from $$\text{GL}(1)$$ is also included.
The paper goes on with the inequality $$L(1,F) \geq c(\varepsilon) (\lambda N)^{-\varepsilon}$$ for all $$\varepsilon > 0$$ with an effective constant $$c(\varepsilon)$$ and for all $$F$$ with one possible exception. This in turn leads to $$| \rho(1)| \ll_ \varepsilon (\lambda N)^ \varepsilon$$.
Reviewer: A.Krieg (Aachen)
MSC:
11F66 Langlands $$L$$-functions; one variable Dirichlet series and functional equations
11F37 Forms of half-integer weight; nonholomorphic modular forms
## A random walk through sub-Riemannian geometry
Series
Analysis Seminar
Time
Wednesday, October 9, 2019 - 1:55pm for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Masha Gordina – University of Connecticut – maria.gordina@uconn.edu
Organizer
Galyna Livshyts
A sub-Riemannian manifold M is a connected smooth manifold such that the only smooth curves in M which are admissible are those whose tangent vectors at any point are restricted to a particular subset of all possible tangent vectors. Such spaces have several applications in physics and engineering, as well as in the study of hypo-elliptic operators. We will construct a random walk on M which converges to a process whose infinitesimal generator is one of the natural sub-elliptic Laplacian operators. We will also describe these Laplacians geometrically and discuss the difficulty of defining one which is canonical. Examples will be provided. This is a joint work with Tom Laetsch.
# Using commands and environments
Using commands and environments
We have already seen many examples of commands and environments, but now we will formalize the concepts.
### Commands
In LaTeX there are seven single-character commands: #, $, %, &, ~, _, and ^. The octothorpe (#) is used in creating new commands and environments. We have already seen the dollar sign ($) (used for entering inline math mode), the ampersand (&) (used for vertical alignment), the underscore (_) (used for subscripts), and the caret (^) (used for superscripts). The percent sign (%) is used for making comments in your LaTeX document: the compiler ignores everything on a line after the percent sign. Comments are extremely helpful for both the writer and other readers of LaTeX input files. Anyone wishing to make changes to a document, especially long after it was originally written, will find the comments invaluable. Finally, the tilde (~) creates an unbreakable space: if you want a space to appear in your document without allowing a line break at that space, use the tilde.
The rest of the LaTeX commands consist of a backslash (\) followed by one or more letters or a single special character. Some examples of commands we have already seen are \sin, \varphi, and \cdots. It should be noted that many commands can only be used in math mode. You can produce the TeX and LaTeX logos with the commands \TeX and \LaTeX. It is important to note that LaTeX is case sensitive, so \latex is not the same as \LaTeX.
Commands often take arguments, which come in two types: mandatory and optional. As the names imply, you must always include mandatory arguments, while optional arguments may be omitted. Mandatory arguments are enclosed in curly braces ({}) and optional arguments in square brackets ([]). One example of a command with an optional argument is \sqrt. The result of adding an optional argument to \sqrt can be seen by changing the line $\sqrt{x^{5}} + y_{n}$ to $\sqrt[3]{x^{5}} + y_{n}$ in our document.
### Environments
Environments have the form
\begin{EnvironmentName}
...
\end{EnvironmentName}
We have seen many examples of environments already, such as the document environment, the eqnarray environment, and the equation environment.
### New commands and environments
As noted above, LaTeX allows you to create your own commands and environments. The ability to create your own commands is part of what makes LaTeX such a powerful tool: it lets you make a simple change in one place in your document and have it affect the entire document. Below is an example of an inner product command and an environment with strange tab-sets.
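The example itself is cut off in this excerpt; a minimal sketch of what such definitions might look like (the names \inner and myquote are our own choices, not from the original):

```latex
% A sketch of an inner product command: \inner{u}{v} typesets as
% <u, v> with delimiters that scale to their contents.
\newcommand{\inner}[2]{\left\langle #1, #2 \right\rangle}
% Usage, in math mode: $\inner{u}{v} = \sum_{i} u_{i} v_{i}$

% A custom environment is defined similarly, with code to execute at
% \begin{...} and at \end{...}:
\newenvironment{myquote}{\begin{quote}\itshape}{\end{quote}}
```

Defining such commands once means a later notational change (say, switching the inner product delimiters) requires editing only the definition, not every occurrence.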
# News How much for a nuclear power plant?
1. Sep 14, 2010
### ensabah6
http://counterpunch.com/wasserman09142010.html [Broken]
Why Atomic Energy Can't Compete
Is the Nuclear Renaissance Dead Yet?
By HARVEY WASSERMAN
Soaring costs at Vogtle, the US's one active new reactor project, have stuck Georgia ratepayers with $108 million in unplanned overcharges. Currently calculated to cost a sure-to-soar $14.5 billion, the Vogtle project got $8.33 billion in federal loan guarantees from Obama in February. Citizen/taxpayer groups have since sued to see the details, which the administration is keeping secret. Georgia Power, which is building Vogtle, has already asked for another $1 billion rate increase.
Is this a factually accurate accounting of the cost of this new nuclear plant?
In the US, liability is capped at around $11 billion, even though the financial damage from a full-scale catastrophe could easily soar into the trillions. Minimum estimates from the 1986 Chernobyl disaster, which occurred in a remote, impoverished area, have exceeded $500 billion. By recent estimates the death toll is 985,000 and still counting.
Last edited by a moderator: May 4, 2017
2. Sep 15, 2010
### CRGreathouse
I find slightly lower costs online, but it's basically accurate. Amortized over 30 years, that's 2.4 cents per kilowatt-hour (using a total production of 2.3 GWe). Adding in the cost of refined uranium at maybe 1.1 cents per kilowatt-hour, that still gives the project a lot of headroom for personnel, repairs, insurance, and profit, since current prices are perhaps 10-15 cents per kilowatt hour -- and prices are likely to rise, even after inflation.
So expensive, yes, but not unreasonable.
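The 2.4-cents figure above can be reproduced with a quick calculation. This sketch makes the same simplifying assumptions as the post: $14.5 billion amortized flat over 30 years, a constant 2.3 GWe output, and full-time production.

```python
cost_usd = 14.5e9    # total project cost, as quoted in the thread
years = 30           # amortization period
power_gwe = 2.3      # assumed total electrical output in gigawatts

hours = years * 365 * 24          # hours of production over the period
kwh = power_gwe * 1e6 * hours     # GW -> kW, times hours of operation
cents_per_kwh = cost_usd / kwh * 100

print(round(cents_per_kwh, 1))    # 2.4
```

A real plant's capacity factor is below 100%, so the true capital cost per kilowatt-hour would be somewhat higher, but the order of magnitude stands.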
3. Sep 15, 2010
### Jack21222
Does the author of your source realize that a "full-scale catastrophe" like Chernobyl is impossible in modern nuclear plants?
Additionally, a $0.1 billion cost overrun is peanuts compared to a $14.5 billion budget. They were off by less than 1%.
Last edited by a moderator: May 4, 2017
4. Sep 15, 2010
### xxChrisxx
Nuclear power plant cost?
Tree fiddy
More BS from someone who doesn't know what they are talking about. Chernobyl comes up so often because it's pretty much the only horrendous incident in the history of nuclear power.
Since the 60s, and discounting Chernobyl (are we really going to base our view of a technology on a Soviet-era construction? Let's face it, they weren't exactly known for pushing safety), you can count the direct number of deaths caused in nuclear reactor incidents on your fingers.
People like the author of that article wind me up. It's like pointing to the de Havilland Comet in the 50s and declaring all plane travel unsafe, disregarding the umpteen thousand incident-free flight-years.
Also, France has managed to pull off nuclear power with no problems at all.
5. Sep 15, 2010
### ensabah6
He's antinuke
"Will this finally kill the much hyped "renaissance" of a Dark Age technology defined by quadruple failures in human health, global ecology, sound finance and increasingly shaky performance?"
6. Sep 15, 2010
### loseyourname
Staff Emeritus
I wonder how many people have been killed in hydroelectric dam disasters.
7. Sep 15, 2010
### Office_Shredder
Staff Emeritus
8. Sep 15, 2010
### FlexGunship
Not to be overly utilitarian, but if you're discussing the dangers of power generation (which is not the purpose of the thread) shouldn't you count it in terms of watt-hours per death. Obviously, higher would be better! More power with fewer deaths.
Since I invented the unit, I'd like to call it the "toasty" (symbol is the Jesus fish, ichthys).
-Wind is pretty bad at 6.66 teratoasties.
-Rooftop solar is horrible at 2.27 teratoasties.
-Hydro is okay if you ignore Banqiao (the Chernobyl of hydroelectric) at 10 teratoasties, but a crappy 0.71 teratoasties if you include it.
-Nuclear has the best ratio at 25 teratoasties if you INCLUDE Chernobyl. If you don't include Chernobyl then it has a rating of 1875 teratoasties. That's 1.875 petatoasties!!!! (That number includes a single death that was attributed to radiological exposure of a plant worker. There is still debate over that.)
For comparison, coal is only 0.006 teratoasties, and oil is 0.028 teratoasties.
Banqiao was responsible for 26,000 deaths directly, and 150,000 from famine and disease after. Chernobyl was responsible for 56 deaths directly and 19 more later were attributed to it. I vote we stop talking about Chernobyl entirely, forever, in the context of nuclear safety. It essentially works out to a rounding error for coal or oil.
EDIT: source: http://nextbigfuture.com/2008/03/deaths-per-twh-for-all-energy-sources.html
Last edited: Sep 15, 2010
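The "toasty" figures above are just reciprocals of the deaths-per-TWh statistics in the linked source (1 teratoasty = 1 TWh generated per death). A sketch of the conversion; the deaths-per-TWh inputs below are our reading of figures that reproduce the post's numbers, not values quoted verbatim from it.

```python
# deaths per TWh -> "teratoasties" (TWh generated per death)
deaths_per_twh = {
    "coal": 161,
    "oil": 36,
    "wind": 0.15,
    "rooftop solar": 0.44,
    "hydro (excl. Banqiao)": 0.10,
    "nuclear (incl. Chernobyl)": 0.04,
}

teratoasties = {src: 1 / d for src, d in deaths_per_twh.items()}
for src, tt in sorted(teratoasties.items(), key=lambda kv: -kv[1]):
    print(f"{src}: {tt:.2f} TT")
# nuclear (incl. Chernobyl): 25.00 TT
# hydro (excl. Banqiao): 10.00 TT
# wind: 6.67 TT
# rooftop solar: 2.27 TT
# oil: 0.03 TT
# coal: 0.01 TT
```

Inverting the metric makes the post's point visible at a glance: higher is safer, and fossil fuels sit orders of magnitude below the rest.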
9. Sep 15, 2010
### Office_Shredder
Staff Emeritus
You say potato, I say petoastie
So we've determined that nuclear power is not as evil and not as expensive as the OP makes it out to sound. Is there anything left to talk about?
10. Sep 15, 2010
### CRGreathouse
Well, there's the issue of liability. As a free-market enthusiast, I agree with the OP on this issue: let the nuclear plants cover their costs. We don't want another BP Gulf spill...
If private insurers won't cover them, in a pinch, the US government can sell a policy -- but I'd prefer that it be provided by private insurance companies (or even directly by reinsurers).
11. Sep 15, 2010
### Gokul43201
Staff Emeritus
There seem to be some variances that need resolving...
... like the difference between 56 (or 75) and 985,000. I have no idea where the larger number comes from, but that difference can take 25 teratoasties and shrink it down to a mere 2 gigatoasties.
12. Sep 15, 2010
### CRGreathouse
FWIW, Wikipedia (citing [1]) claims as many as 4000 deaths once indirect cancer deaths are included, but doesn't even total as many as 75 direct deaths.
This would give a figure of 0.47 TT, putting it below the other renewables but still well above coal and oil. Of course the non-Chernobyl number makes more sense to me, considering that even at the time that style of reactor was considered unsafe and wasn't really used anywhere but the USSR; they're certainly not being proposed today.
[1] Elisabeth Rosenthal (International Herald Tribune) (6 September 2005). "Experts Find Reduced Effects of Chernobyl". New York Times. Retrieved 11 September 2010.
13. Sep 15, 2010
### Ivan Seeking
Staff Emeritus
What is the worst case scenario given the assumption that terrorists take control of a reactor, who have all of the equipment, training, and knowledge needed to cause the most destructive event possible, using the number of deaths as a metric?
When we talk about safety, we have to include the potential for damage if someone is out to defeat the system.
There is also talk about limiting the size of dams. The idea that the failure of any constructed system could cause the death of millions, is called into question wrt more than just nuclear power. Frankly, I tend to think the Chinese are taking a big chance with the Three-Gorges Dam.
Last edited: Sep 15, 2010
14. Sep 15, 2010
### Jack21222
I'm no nuclear engineer, but I'm under the impression that modern reactor designs have built-in fail-safes that simply cannot be overridden by human operators.
http://en.wikipedia.org/wiki/Passive_nuclear_safety has a list of such fail-safes, but I admit I lack the nuclear engineering knowledge to understand much of it.
15. Sep 15, 2010
### CRGreathouse
Surely "worst-case" is the wrong metric to use here. In that case, it could be tens of billions for just about any scenario: the Earth's population grows, then experiences total existence failure.
If we (reasonably) want to exclude the Banqiao Dam incident, caused by a '1 in 2000 year' flood, maybe we should consider events which have a 1/1000 chance of happening in a given year.
This doesn't detract from your suggestion -- even with the current number of nuclear power plants I could see 1 or 2 such events happening in a dozen centuries, and more as the world moves away from fossil fuels. But I thought it important to draw this distinction early in the discussion.
16. Sep 15, 2010
### CRGreathouse
That's essentially right. (I don't do nuclear power, but I *do* work in radiation safety... if that counts for anything.)
But that's more like 'resistance to meltdown' and less like 'resistance to terrorists'.
17. Sep 15, 2010
### Gokul43201
Staff Emeritus
Does it make sense to not include Chernobyl fatality numbers but still count pre-Chernobyl cumulative TWH? Also, accidents tend to be stochastic. It can't be good science to simply exclude specific accidents (from an already small sample) on the grounds that those particular accidents can no longer occur in modern systems.
18. Sep 15, 2010
### Staff: Mentor
Based on that description, I'd say the "worst case" would be that they brought with them a 10 megaton nuclear bomb and detonated it at the nuclear plant. That's about the "most destructive event possible" by humans today.
Not sure if it is very realistic to consider such a possibility, though.
Realistically, if they hijack the plant for a few days and have the expertise, I suppose the most immediate risk is of a major power failure if they do it in the summer. Long term, they could probably cause enough damage to necessitate closing the plant, which would cost several dozen people their jobs and badly hurt the shareholders of the company that owns the plant.
Backup for the above opinion:
http://www.world-nuclear.org/info/inf06.html
In other words, barring a meteorite strike or terrorists trucking in a ridiculously large quantity of explosives (and that may not be enough: you may actually need a nuclear bomb), TMI represents about the worst possible failure of a western nuclear reactor.
I suppose there is another possibility: they could steal the fuel and truck it into the nearest city along with a conventional bomb. That would require holding off the Marines for a few days, though, which is pretty unlikely.
Last edited: Sep 15, 2010
19. Sep 15, 2010
### CRGreathouse
Certainly I wouldn't want to count the production from any RBMK plant.
Agreed. When I wrote 1/1000 chance per year, I was specifically envisioning that as lambda = 0.001 in a Poisson distribution.
On one hand, I agree that naive extrapolation from a small sample is a bad idea; in particular, in this case, it inflates the "TT" rating.* But on the other hand, that Soviet design was known to be unsafe even at the time, and there's no expectation that more of that style will ever be built. So actually yes, I would say that good science allows -- even requires -- excluding that design.
* I have a good example in mind, where a group of extremely low-risk drivers (?) were insured [with very low premiums] under the terrible assumption that their lack of accidents was due to their skill, where actually chance played a greater part. Unfortunately I can't think of the name of the group! Our example doesn't have the same selection bias, but it's still similar in concept.
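The "lambda = 0.001 in a Poisson distribution" reading above has a simple consequence worth making explicit: the chance of at least one event over a horizon of T years is 1 - e^(-0.001 T). A quick sketch of the arithmetic:

```python
from math import exp

lam = 0.001   # expected events per year, as in the post

def p_at_least_one(years):
    # Poisson: P(N >= 1) = 1 - P(N = 0) = 1 - exp(-lam * years)
    return 1 - exp(-lam * years)

print(f"{p_at_least_one(1):.6f}")      # 0.001000 - essentially the 1/1000 annual chance
print(round(p_at_least_one(1000), 3))  # 0.632 - a roughly 63% chance over a millennium
```

So "1 in 2000 year" events are not at all implausible on century-to-millennium timescales, which is the distinction the post is drawing.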
20. Sep 15, 2010
### CRGreathouse
This might cause thousands of deaths if there's a heat wave -- but of course this risk applies to all plants, not just nuke plants. (They're more vulnerable in that they tend to provide more power and thus service more people, but less vulnerable in that their security tends to be tighter. I'll call it a wash unless someone wants to crunch numbers here.)
Worst-case loss in that scenario: perhaps $10 to $20 billion. At $10 million each, that could cost up to 2000 lives, in the sense that the money (which will eventually come from somewhere) could have been used to save roughly that many lives.
I think this is what Ivan was referring to.* It would be interesting to do some Fermi estimates on the chances and potential damage caused. I would tend to think of that as major property damage (semi-permanent evacuation of a whole city!) but unlikely to actually kill many people. But I freely admit that's entirely speculation. Thoughts?
* He may also have been referring to the possibility that they would create a nuclear weapon from the fuel. This isn't a risk today. I actually think this could become a major issue in the future, but not during my lifetime: the technology required is too difficult and tightly controlled. If a terrorist gets a nuclear weapon in the (not-even-that) near future it will be stolen (or bought), not created by the terrorist group.
Lemma 7.32.3. Let $\mathcal{C}$ be a site. Let $p = u : \mathcal{C} \to \textit{Sets}$ be a functor. There are functorial isomorphisms $(h_ U)_ p = u(U)$ for $U \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$.
Proof. An element of $(h_ U)_ p$ is given by a triple $(V, y, f)$, where $V \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{C})$, $y\in u(V)$ and $f \in h_ U(V) = \mathop{Mor}\nolimits _\mathcal {C}(V, U)$. Two such $(V, y, f)$, $(V', y', f')$ determine the same object if there exists a morphism $\phi : V \to V'$ such that $u(\phi )(y) = y'$ and $f' \circ \phi = f$, and in general you have to take chains of identities like this to get the correct equivalence relation. In any case, every $(V, y, f)$ is equivalent to the element $(U, u(f)(y), \text{id}_ U)$. If $\phi$ exists as above, then the triples $(V, y, f)$, $(V', y', f')$ determine the same triple $(U, u(f)(y), \text{id}_ U) = (U, u(f')(y'), \text{id}_ U)$. This proves that the map $u(U) \to (h_ U)_ p$, $x \mapsto \text{class of }(U, x, \text{id}_ U)$ is bijective. $\square$
# How do you find the center and radius of X^2 + Y^2 = 13?
Feb 4, 2016
The center is (0,0) and the radius is $\sqrt{13}$
#### Explanation:
The equation of a circle is
${\left(x - h\right)}^{2} + {\left(y - k\right)}^{2} = {r}^{2}$
where h is the x-coordinate of the center, k is the y-coordinate of the center, and r is the radius.
${\left(x\right)}^{2} + {\left(y\right)}^{2} = 13$
is
${\left(x - 0\right)}^{2} + {\left(y - 0\right)}^{2} = 13$
h = 0
k = 0
r = $\sqrt{13}$
(0,0), radius is $\sqrt{13}$
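As a quick numeric check (a Python sketch, not part of the original answer), every point at distance $\sqrt{13}$ from the origin should satisfy the equation:

```python
import math

# Read the center and radius off x^2 + y^2 = 13, i.e.
# (x - h)^2 + (y - k)^2 = r^2 with h = 0, k = 0, r^2 = 13.
h, k = 0.0, 0.0
r = math.sqrt(13)

# Points at distance r from (h, k) satisfy x^2 + y^2 = 13.
for deg in range(0, 360, 30):
    x = h + r * math.cos(math.radians(deg))
    y = k + r * math.sin(math.radians(deg))
    assert abs(x**2 + y**2 - 13) < 1e-9
```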
|
{}
|
# Baire's Theorem with locally compact Hausdorff space
Statement of the theorem:
If $$S$$ is either
a) a complete metric space, or
b) a locally compact Hausdorff space,
then the intersection of every countable collection of dense open subsets of $$S$$ is dense in $$S$$.
The idea of the proof is to show that every open set $$B$$ intersects the countable intersection of the given dense open subsets. Specifically, if $$\left\{ V_i \right\}_{i \in \mathbb{N}}$$ is such a collection and $$B$$ is an arbitrary open set, the following recursion is defined
$$\begin{array}{l} B_0 = B \\ \bar{B}_{n} \subset V_n \cap B_{n-1} \end{array}$$
Later we define
$$K = \bigcap_{n=1}^{\infty} \bar{B}_n$$
The author at this point states that $$K$$ isn't empty by compactness. I cannot really understand why.
From wikipedia:
Let $$X$$ be a topological space. Most commonly $$X$$ is called locally compact, if every point $$x$$ of $$X$$ has a compact neighbourhood, i.e., there exists an open set $$U$$ and a compact set $$K$$, such that $${\displaystyle x\in U\subseteq K}$$
I guess this is the definition that we're trying to apply, but I can't figure how exactly we apply it.
• There are some crucial facts missing here. With what you have quoted it is not possible to show that $K$ is not empty. Please look at the entire proof. – Kavi Rama Murthy Jul 14 '20 at 12:36
• The only bit I missed is that, according to Rudin, $\bar{B}_n$ can be chosen to be compact; is this what I'm missing? – user8469759 Jul 14 '20 at 12:45
• Yes, you missed the most important assumption. – Kavi Rama Murthy Jul 14 '20 at 13:02
• Another case that works for Baire category theorem: locally countably compact regular space. – GEdgar Jul 14 '20 at 13:15
$$(\overline {B_n})$$ is a decreasing sequence of nonempty compact sets, and hence its intersection is not empty: if it were empty, the complements of $$\overline {B_n},\ n = 2, 3, \ldots$$ would cover the compact set $$\overline {B_1}$$. Hence there would be a finite sub-cover. But this means $$\cap_{n=1}^{N} \overline {B_n}\ (=\overline {B_N})$$ is empty for some $$N$$, a contradiction.
|
{}
|
J. Korean Ceram. Soc. > Volume 37(3); 2000 > Article
Journal of the Korean Ceramic Society 2000;37(3): 201.
$80Al_2O_3-20Al$ 복합재료의 내열충격성: 실험과 유한요소 해석
김일수, 신병철 (Department of Advanced Materials Engineering, Dong-eui University)
Thermal Shock Resistance of $80Al_2O_3-20Al$ Composites: Experiments and Finite Element Analysis
ABSTRACT The thermal shock resistance of an 80Al2O3-20Al composite and of monolithic alumina ceramics was compared. Fracture strength was measured using a 4-point bending test after quenching. Thermal stresses in the ceramics and the ceramic-metal composites were calculated using a finite element analysis. The bending strength of the Al2O3 ceramics decreased catastrophically after quenching through a temperature difference of $200^{\circ}C$. The bending strength of the composite also decreased after quenching from $200{\sim}225^{\circ}C$, but the strength reduction was much smaller than for Al2O3. The maximum thermal stress in the monolithic alumina ceramics exposed to a temperature difference of $200^{\circ}C$ was 0.758 GPa. The same stress occurred in the Al2O3-Al composite when a temperature difference of $205^{\circ}C$ was used.
Key words: Thermal shock resistance, Finite element analysis, $Al_2O_3-20Al$ composite, Monolithic alumina
|
{}
|
# How do you factorise x^2 − 7 x + 12?
Mar 25, 2017
(x-3)(x-4)
#### Explanation:
It is done by splitting the middle term so that the two parts add up to -7 while their product is +12.
In the present case -7x can be split as -3x and -4x: the sum of -3 and -4 is -7, and their product is +12. Accordingly,
${x}^{2} - 7 x + 12 \to$ ${x}^{2} - 3 x - 4 x + 12$
Now pair up the terms: $({x}^{2} - 3 x) + (- 4 x + 12)$
Extract the common factors: $x(x-3) - 4(x-3)$ $\to$ $(x-3)(x-4)$
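A quick brute-force check of the factorisation (a Python sketch, not part of the original answer):

```python
# The split of the middle term: the two numbers add to -7 and multiply to +12.
assert (-3) + (-4) == -7
assert (-3) * (-4) == 12

# Verify x^2 - 7x + 12 == (x - 3)(x - 4) over a range of integer inputs.
for x in range(-100, 101):
    assert x**2 - 7*x + 12 == (x - 3) * (x - 4)
```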
|
{}
|
$\dfrac { 5+2i }{ 1-i } +\dfrac { 5-2i }{ 1+i }$
#### Solution
$\dfrac { 5+2i }{ 1-i } +\dfrac { 5-2i }{ 1+i } =a+ib$
$\Rightarrow \dfrac { 5+5i+2i-2+5-5i-2i-2 }{ 1+1 } =a+ib$
$\Rightarrow \dfrac{6}{2}=a+bi$
$\Rightarrow 3=a+bi$
$\Rightarrow a=3,$ $b=0$
Hence, the answer is $a=3, b=0.$
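The result is easy to confirm with machine arithmetic (Python's built-in complex type; a sketch, not part of the original solution):

```python
# Evaluate the expression directly; j denotes the imaginary unit in Python.
z = (5 + 2j) / (1 - 1j) + (5 - 2j) / (1 + 1j)

a, b = z.real, z.imag
assert abs(a - 3) < 1e-12  # a = 3
assert abs(b) < 1e-12      # b = 0
```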
|
{}
|
# multi-body collision
## Recommended Posts
I have a multi-body collision question. I know how to deal with collision detection and response when only two objects are involved, but when there are many objects, if we deal with them one by one (sequentially), then objects that have already been resolved may later be affected by objects that have not yet been processed, especially when object speeds are relatively high.
I first thought about this problem when I did cloth simulation; people use a method called "zones" to group the colliding area together and treat it as a whole.
But I am wondering whether we can use some systematic method: put everything in a matrix and then solve it?
##### Share on other sites
If I understand your post correctly, I think your problem is that you're advancing some of your objects to the end of the frame before you even check the rest of your objects for collisions.
What I do is search through ALL the objects and find the pair that collides first. If that collision happens after the end of the frame, then you're done. You just advance all your objects to the end of the frame. If the collision happens during the frame, you advance ALL your objects to the time of the collision, apply the collision response to those two objects, and then repeat the whole process.
Of course, when I did this, it was for bouncing balls. I'm not familiar with cloth simulation, but I assume you can use the same idea.
##### Share on other sites
I just realized it should not be called multi-body collision
You are right, that is a method to solve this problem. You mean: find the pair with the earliest potential collision, apply a response, then find the potential collision pairs again... until no collision remains, right?
I am wondering whether there is a method to find all potential collisions in one iteration.
For instance, in the bouncing-ball example you gave: if in the first iteration we only find that A and B will collide and apply a collision response, B may then go on to collide with C because B's velocity has changed.
Can we have some method to directly find all collisions — both A-B and B-C — simultaneously, using some optimization method?
##### Share on other sites
Well, calculating the collision between B and C relies on information that you only have after you've processed the collision response from A and B, so I'm not sure how you can calculate them simultaneously.
There is some room for improvement in the method I describe though. You don't have to recalculate ALL the potential collisions from scratch each iteration. You only have to recalculate collisions that involve the two balls that change in response to the collision. For example, if there's another ball D that doesn't collide with anything in the first iteration, then after you process the A-B collision, you have to recheck A-B, A-C, A-D, B-C, and B-D, but not C-D. It's extra work to keep track of all this, but in a more realistic simulation with lots of objects, it might be worth it.
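The earliest-collision-first loop described above can be sketched for equal-mass balls on a line (Python; the 1-D setting, the `[position, velocity]` ball representation, and the velocity-swap response are my simplifications for illustration, not anything from this thread):

```python
def pair_collision_time(x1, v1, x2, v2, r):
    """Time until two equal-radius balls on a line touch, or None."""
    if x1 > x2:
        x1, v1, x2, v2 = x2, v2, x1, v1
    gap = (x2 - x1) - 2 * r          # free space between the surfaces
    closing = v1 - v2                # > 0 means they are approaching
    if gap < 0 or closing <= 0:
        return None
    return gap / closing

def step(balls, dt, r=1.0):
    """Advance every ball through one frame of length dt, always resolving
    the earliest collision first (equal masses: swap velocities), then
    rescanning -- so a ball deflected by one impact can go on to hit another."""
    remaining = dt
    while True:
        earliest = None
        for i in range(len(balls)):
            for j in range(i + 1, len(balls)):
                t = pair_collision_time(balls[i][0], balls[i][1],
                                        balls[j][0], balls[j][1], r)
                if t is not None and t <= remaining:
                    if earliest is None or t < earliest[0]:
                        earliest = (t, i, j)
        if earliest is None:                    # frame ends collision-free
            for b in balls:
                b[0] += b[1] * remaining
            return balls
        t, i, j = earliest
        for b in balls:                         # advance ALL balls to impact
            b[0] += b[1] * t
        balls[i][1], balls[j][1] = balls[j][1], balls[i][1]
        remaining -= t

# Head-on pair: they meet at t = 4 and swap velocities.
assert step([[0.0, 1.0], [10.0, -1.0]], 5.0) == [[3.0, -1.0], [7.0, 1.0]]
```

Note how the rescan after each resolution is exactly what lets a second collision (B hitting C only after A has deflected B) be found within the same frame.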
##### Share on other sites
Unfortunately I don't know anything about the "zone" method.
In Ian Millington's book "Game Physics Engine Development" he proposes an iterative approximation.
He starts by finding the worst contact point, the one with the greatest penetration depth, and solves that one first in a single time step. He then updates all other contact points that reference the same two colliding rigid bodies with the new calculated positions & orientation, which in turn will either decrease or increase their penetration depth. He then repeats this until all penetration depths are gone or the max amount of iterations has been reached.
So the contact point from A & B will update/adjust the contact point from B & C, and in this way you account for all relevant collisions at once. Not 100% accurate since the proper order of events aren't observed but is a simple enough concept to implement.
It's in Chapter 14 of the book.
Send me a message if you're interested in reading it and don't have access to the book.
Pseudo Code:
//big multiplier => more robust & longer to solve
unsigned MaxIterations = NumCurContacts * 4;
ContactIter itrWorstPt, itrRelatedContact;
Vector3 LinearChange[2];
Vector3 AngularChange[2];
Vector3 deltaPosition;

for(unsigned numIter=0; numIter<MaxIterations; ++numIter)
{
    itrWorstPt = max_element(Contacts.begin(), Contacts.end(), CompPenDepth);
    if(itrWorstPt->PenDepth < 0.01f)
        break; //they are all small enough to stop

    //this will update the two rigid bodies
    //update the new PenDepth, which should be ~ 0
    //also cache the linear and angular change
    itrWorstPt->ApplyResolution();
    itrWorstPt->GetLinAngChange(LinearChange, AngularChange);

    //for each rigid body involved with the worst contact point
    for(unsigned i=0; i<2; ++i)
    {
        //find relevant contact points
        ContactList & GuysToCheck = AssociatedContacts[ itrWorstPt->RigidBody[i]->GetID() ];

        //update their penetration depths
        for(itrRelatedContact = GuysToCheck.begin(); itrRelatedContact != GuysToCheck.end(); ++itrRelatedContact)
        {
            if(itrRelatedContact->RigidBody[0] == itrWorstPt->RigidBody[i])
            {
                //note RelativeContactPosition = ContactLoc - RigidBody->Center
                deltaPosition = LinearChange[i] + Cross(AngularChange[i], itrRelatedContact->RelativeContactPosition[0]);
                itrRelatedContact->PenDepth -= Dot(deltaPosition, itrRelatedContact->ConNormal);
            }
            else
            {
                deltaPosition = LinearChange[i] + Cross(AngularChange[i], itrRelatedContact->RelativeContactPosition[1]);
                itrRelatedContact->PenDepth += Dot(deltaPosition, itrRelatedContact->ConNormal);
            }
        } //end relevant contacts loop
    } //end two rigid body loop
} //end MaxIterations loop
##### Share on other sites
thank you very much, guys;
Some people told me that it doesn't work to find and correct all collisions in a single iteration, since that would not be a static model (I have no idea what that means).
|
{}
|
# Why are the monoid objects in Mon(C) the commutative monoids?
Let $(C, \otimes, 1, \alpha, l, r)$ be a symmetric monoidal category. Can someone explain to me why the monoids in the category of monoids Mon(C) are exactly the commutative monoids?
• What do you mean by coincides? – Chickenmancer Jun 6 '18 at 22:50
• I mean that the monoids in Mon(C) are exactly the commutative monoids. – Crystal Jun 6 '18 at 22:54
• You mentioned the Eckmann-Hilton argument in the original version of the question. Do you understand the Eckmann-Hilton argument as applied to normal (set-theoretic) monoids? If so, then all that's happening is that monoids and the Eckmann-Hilton argument can be formulated in the doctrine of monoidal categories. In other words, nothing about the Eckmann-Hilton argument depends on the monoids being set-theoretic monoids. – Derek Elkins Jun 6 '18 at 23:51
• @Derek Elkins First, thanks for your comment. My problem is that I don´t know how to formulate the Eckmann Hilton argument in the doctrine of monoidal categories. Could someone show this to me ? In books and writings I only find the Eckmann-Hilton argument for set-theoretic monoids. – Crystal Jun 7 '18 at 0:03
It is often clarifying to explicitly indicate the variable context of terms and equations between terms. I like to use the notation $x:A,y:B,z:C\vdash s = t$ to indicate that $x$ is a variable of sort $A$, $y$ of sort $B$, and $z$ of sort $C$ and these are the only free variables that may occur in the terms $s$ and $t$. Using this convention, we can write the laws defining monoids as follows: $$x:M,y:M,z:M\vdash x*(y*z) = (x*y)*z$$ $$x:M\vdash x*1 = x \qquad x:M\vdash 1*x=x$$
Because each side of each of these equations uses all of the variables in the variable context exactly once and in the order they are listed, these equations are in the doctrine of monoidal categories. (For contrast, the law for groups $x:G\vdash x*x^{-1}=1$ duplicates the variable on the left and ignores it on the right, and so this equation is not within the doctrine of monoidal categories. This is why we need cartesian products and not just monoidal products to talk about group objects.) In this view $*:M\otimes M\to M$ and $1 : I \to M$. The doctrine of symmetric monoidal categories additionally allows equations that use the variables in any order, though still exactly once each.
A term in a given variable context corresponds to an arrow out of a monoidal product. For example, the terms of the monoid associative law corresponds to the arrows $*\circ(id_M\otimes *):M\otimes M\otimes M \to M$ and $*\circ(*\otimes id_M):M\otimes M\otimes M\to M$. These are not actually compatible unless the monoidal structure is strict, but the laws of a (symmetric) monoidal category guarantee that however you do choose to associate $M\otimes M\otimes M$ and insert associators to make these two arrows compatible, it makes no difference.
In the doctrine of symmetric monoidal categories, you can simply take the series of equations as presented, e.g., on Wikipedia and interpret them as above. It's apparent that all of them use the relevant free variables exactly once. If you like, you can translate the element-wise equations into equations between arrows as above which will require inserting associators, unitors, and symmetries. The naturality of these will be important in showing the equalities. And again, how you decide to insert these will not affect the result. Essentially, given two object expressions involving $\otimes$ and $I$ built from the same multi-set of objects, every way of building an arrow between them using the associator, symmetry, unitors, identities, and $\otimes$ produces the same result (which will thus necessarily be an isomorphism) modulo permuting duplicate objects. If we further are given a desired permutation for the duplicate objects, then all the ways of realizing that permutation are equal. For the monoid object in $\mathsf{Mon}(\mathcal C)$ view, the exchange assumption, i.e. that $x:M,y:M,z:M,w:M\vdash(x*y)\star(z*w)=(x\star z)*(y\star w)$, is the statement that $\star$ is a monoid homomorphism with respect to $*$, i.e. $\star$ is an arrow in $\mathsf{Mon}(\mathcal C)$.
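To make this concrete, here is one way to write out the Eckmann–Hilton argument element-wise; each line uses every variable exactly once, so every step is valid in the doctrine of symmetric monoidal categories. Let $1$ and $1'$ be the units of $*$ and $\star$ respectively, and assume the exchange law $(x*y)\star(z*w)=(x\star z)*(y\star w)$:

```latex
\begin{align*}
% The units coincide: exchange with x=1', y=1, z=1, w=1'.
1 &= 1*1 = (1'\star 1)*(1\star 1') = (1'*1)\star(1*1') = 1'\star 1' = 1' \\
% With the shared unit, the operations coincide: x=a, y=1, z=1, w=b.
a\star b &= (a*1)\star(1*b) = (a\star 1)*(1\star b) = a*b \\
% And they are commutative: x=1, y=a, z=b, w=1.
a\star b &= (1*a)\star(b*1) = (1\star b)*(a\star 1) = b*a
\end{align*}
```

Hence a monoid object $(M,\star)$ in $\mathsf{Mon}(\mathcal C)$ forces $\star = *$, and the resulting single operation is commutative; conversely, any commutative monoid object gives such an object by taking $\star = *$.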
|
{}
|
# Timer1 remap cause debug crash on STM32F103
I build a Keil project for my STM32F103 MCU with STM32CubeMx.
My goal is to use TIM1 channel 2 to control a buzzer: I want to generate a 4 kHz PWM signal on TIM1 channel 2. Using STM32CubeMx, I configured TIM1_Channel2 as "PWM Generator CH2".
The problem concerns the debug session. In particular, when I start a debug session and the MCU executes the macro "__HAL_AFIO_REMAP_TIM1_ENABLE()", the debug session crashes.
This is the timer initialisation code generated by STM32CubeMx:
void HAL_TIM_MspPostInit(TIM_HandleTypeDef* htim)
{
GPIO_InitTypeDef GPIO_InitStruct;
if(htim->Instance==TIM1)
{
/* USER CODE BEGIN TIM1_MspPostInit 0 */
/* USER CODE END TIM1_MspPostInit 0 */
/**TIM1 GPIO Configuration
PE11 ------> TIM1_CH2
*/
GPIO_InitStruct.Pin = GPIO_PIN_11;
GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
HAL_GPIO_Init(GPIOE, &GPIO_InitStruct);
__HAL_AFIO_REMAP_TIM1_ENABLE();
/* USER CODE BEGIN TIM1_MspPostInit 1 */
/* USER CODE END TIM1_MspPostInit 1 */
}
}
This issue is related to Timer1 pin full remap.
Anyone have the same issue?
Thanks!
=== UPDATE ===
Finally I found some time to test the solution proposed by @SamGibson and it works! Thanks also to @Rafiq Rahman for his code!
This is the code that I used to remap the TIM1 and maintain the ability to generate the code with the Stm32CubeMX.
if(htim->Instance==TIM1)
{
/* USER CODE BEGIN TIM1_MspPostInit 0 */
#undef __HAL_AFIO_REMAP_TIM1_ENABLE
#define __HAL_AFIO_REMAP_TIM1_ENABLE() (0)
/* USER CODE END TIM1_MspPostInit 0 */
/**TIM1 GPIO Configuration
PE11 ------> TIM1_CH2
*/
GPIO_InitStruct.Pin = GPIO_PIN_11;
GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_LOW;
HAL_GPIO_Init(GPIOE, &GPIO_InitStruct);
__HAL_AFIO_REMAP_TIM1_ENABLE();
/* USER CODE BEGIN TIM1_MspPostInit 1 */
/* Make a copy of AFIO register */
volatile uint32_t afioRegisterCopy = AFIO->MAPR;
/* Clear the Timer1 remap bits and the JTAG/SWD bits */
afioRegisterCopy &= ~((7 << 24) + (3 << 6));
/* To perform a full remap Timer1, bit 6-7 of
AFIO->MAPR must be set. Mask is 3 (11b) */
afioRegisterCopy |= (3 << 6);
/* Apply the new register configuration*/
AFIO->MAPR = afioRegisterCopy;
/* USER CODE END TIM1_MspPostInit 1 */
}
STM32F103 [...] STM32CubeMx [...] when I start the debug session and when the MCU executes the macro "__HAL_AFIO_REMAP_TIM1_ENABLE()" the debug session crash.
I don't use the STM32CubeMX HAL, but I can explain the issue and the workaround.
The problem is that the STM32F1 series has one register AFIO_MAPR which contains the settings for remapping various peripherals and for enabling/disabling the JTAG/SWD connection to your debugger. And to make this more complicated, the bits in that register which enable/disable the JTAG/SWD settings (bits 24-26) are write-only so their existing state cannot be read.
See this extract from the STM32F1 Reference Manual:
This means that any attempt to change the settings of the various "peripheral remap" bits, by doing a read-modify-write sequence to this register, could read different values instead of the real current values in the JTAG/SWD bits. Then, when the write to the register is done, your debugger access stops because whatever was read from those JTAG/SWD bits, is written back to them. (Other effects have also been reported, but I won't go into that now).
From what I could find without installing the HAL, the macros used are:
#define __HAL_AFIO_REMAP_TIM1_ENABLE() MODIFY_REG(AFIO->MAPR, AFIO_MAPR_TIM1_REMAP, AFIO_MAPR_TIM1_REMAP_FULLREMAP)
and MODIFY_REG is:
#define MODIFY_REG(REG, CLEARMASK, SETMASK) WRITE_REG((REG), (((READ_REG(REG)) & (~(CLEARMASK))) | (SETMASK)))
So as you see, MODIFY_REG is doing a read-modify-write and you don't know what values it will read from JTAG/SWD bits 24-26 and hence what values it will write back to there! The values read from those bits are "undefined" (to quote ST) and I know I have read different values from the same STM32F1 at different times.
The "fix" I have used with the SPL, is to change any remapping code to specifically set the JTAG/SWD bits which you want, whenever you write to the AFIO_MAPR register. You will need to figure out how you want to do the same with the HAL code. One way is to use a temporary variable so, from memory, the sequence becomes:
• Read AFIO_MAPR register into temp variable
• Change desired peripheral remap bits in the temp variable
• Mask out bits 24-26 in the temp variable
• Set bits 24-26 in the temp variable to whatever I wanted (therefore ignoring whatever their, likely incorrect, "read" value was)
• Write temp variable to AFIO_MAPR
Thankfully ST changed to a better register arrangement in later STM32 models (e.g. STM32F4).
• Thanks for your reply. Great explanation! Unfortunately my work has shifted to a high-priority project. When that is done I will switch back to the Timer1 problem. Thanks again – Federico Oct 6 '16 at 7:32
Here is a working piece of code to illustrate the steps proposed by @SamGibson. It works for me like a charm. First thing to do is to comment out the __HAL_AFIO_REMAP_TIM1_ENABLE(); in stm32f1xx_hal_msp.c and fill in with the remap code as follow:
//__HAL_AFIO_REMAP_TIM1_ENABLE();
/* USER CODE BEGIN TIM1_MspPostInit 1 */
volatile uint32_t map_copy = AFIO->MAPR;
map_copy &= ~((7 << 24) + (3 << 6)); // Clear desired bitfields + debug bits
// 5(101b) shifted left 24 for CoreSight SW-DP (What Keil Ulink2 and St-LinkV2
//use for debugging in either Keil IDE or SW4STM32 IDE)
// The (3 << 6) is me wanting to fully remap the TIM1 AF pins for
//Complementary PWM Generation
map_copy |= (5 << 24) + (3 << 6);
AFIO->MAPR = map_copy;
/* USER CODE END TIM1_MspPostInit 1 */
Just be on the lookout for other MspPostInit calls...
Cheers.
|
{}
|
# Effect of His-tag on enzyme activity
In my biochemistry laboratory class, I designed an experiment to study the effect of a His-tag on enzyme activity. First, I measured the activity of the His-tagged enzyme. Then, I cut off the His-tag (using enterokinase) and measured the activity again.
After data analysis, I was able to calculate the $$K_\mathrm{M}$$ values and $$k_\mathrm{cat}$$ values for both enzymes. How should I interpret the data? I hypothesize that the His-tag could interfere with kinetics. But, how do I use the data to show this? Should I compare $$K_\mathrm{M}$$ values or $$k_\mathrm{cat}$$ values?
• People would usually compare three numbers: $K_M$, $k_{cat}$, and $k_{cat}/K_M$. If any of those numbers are significantly different for the wild-type enzyme compared to the His-tagged enzyme, then you could conclude that the His tag interferes, at least somewhat. – Curt F. Apr 28 '16 at 15:58
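Organising that comparison is straightforward in code (a Python sketch with made-up illustrative parameters, not your measured values):

```python
# Hypothetical kinetic parameters; substitute your fitted values.
# Km in mol/L, kcat in 1/s.
his_tagged = {"Km": 5.0e-4, "kcat": 12.0}
tag_removed = {"Km": 2.0e-4, "kcat": 15.0}

def efficiency(params):
    """Catalytic efficiency kcat/Km, in M^-1 s^-1."""
    return params["kcat"] / params["Km"]

for name, p in (("His-tagged", his_tagged), ("tag removed", tag_removed)):
    print(f'{name}: Km={p["Km"]:.1e} M, kcat={p["kcat"]:.1f} 1/s, '
          f'kcat/Km={efficiency(p):.2e} M^-1 s^-1')

# With these illustrative numbers, removing the tag raises kcat/Km ~3-fold,
# which would suggest the tag hinders binding and/or turnover.
fold_change = efficiency(tag_removed) / efficiency(his_tagged)
```

A higher Km after tagging would point at weakened substrate binding, a lower kcat at slowed turnover; kcat/Km summarises both.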
|
{}
|
# AMS131, Fall 2014, Section 01: Homeworks
We will refer to the textbook as DG & S. Problem numbers are taken from the 4th Edition. The homework is a substantial part of the course; you are expected to try the relevant questions after each lecture.
Solutions to the even-numbered questions are given in parentheses.
• Homework 1
Section 1.4: Exercises 6, 7, 8
Section 1.5: Exercises 3, 4 (0.6), 7, 9, 11, 12
Section 1.6: Exercises 1, 3, 4 (1/7), 6 (1/4)
Section 1.7: Exercises 2 (9000), 3, 5, 7, 9
Section 1.8: Exercises 1, 4 (1/10616), 6 (2/n), 7, 9, 11, 14, 17
Section 1.9: Exercises 1, 3, 7, 9
Section 1.10: Exercises 1, 2, (0.85) 7, 10 (1/2)
• Homework 2
Section 2.1: 1, 2, 3, 5, 7, 9 (Answer: 2) P(A|B)=0)
Section 2.2: 1, 2, 4, 5, 7, 10, 13, 15, 23 (Answer: 4) (1/6)^3)
Section 2.3: 4, 5, 7, 8(a), 9 (Answer: 8(a) (0, 0.5870,0.3478,0.0652,0 for coin 1,2,3,4,5))
• Homework 3
Section 3.1: 2(1/15), 3, 4, 7, Section 5.2: 3, Section 5.3: 1, section 5.4: 3, Section 5.5:1
Section 3.2: 3, 4, 5, 8 , 9, 10, 13 (Answers: 4 a) 0.6458, b) 0.5625, c) 0.5597, 8a) c=2, b) e^(-2)-e(-4), 10 a)c=1/2, b)1-(1/2)^(1/2)
Section 3.3: 3, 4, 5, 6, 8, 9 (Answers: 4 a) 0.1 b)0.1 c)0.2 d)0 e)0.6 f)0.4 g)0.7 h) 0 i)0 j)0 k)0 l)0.2, 6) exp(x-3) for x>3, 0 o.w.)
Section 3.4: 2, 3, 5, 7, 9 (Answers 2 a) 0.27 b)0.53 c) 0.69 d)0.3 e) 0.25)
Section 3.5: 1, 2, 3, 6, 7, 8, 10 , 15 (Answers: 2a) 1/15(2x+3), x=0,1,2 1/10(1+y) y=0,1,2,3 b) No; 6) 9/64x^2y^2, b) 0 c)1/2 d) 1/1280; 8) No 10a) 2/pi (-x^2)^(1/2), -1<=x<=1, same for y by symmetry) b) No
Section 3.6: 1, 4, 5, 7, 8 ,9 (Answers: 4 a) f(x|y)=(x+y^2)/(1/2 +y^2) b) 1/3 8) a) 0.264 b) 0.284 c)0.314)
Section 3.7: 1, 2, 3 (Solutions: problem 2: (a) 6, (b) 0.3 if (x2,x3) in {(0,0),(1,1)} and 0.2 if (x2,x3) in {(1,0),(0,1)} (c) 20x1^3(1-x1) if 0< x1< 1 and 0 o.w)
Section 3.8: 1, 3, 4, 5, 7, 8, 13, (Answers: 4) 1/6(4-y)^(1/3), -4<y<4 8) 2y e^(-y^2) )
Section 3.9: 1, 2, 3, 4, 5, 7, 8, 12, 13, 19 (Answers: problem 4: g(y)=2(1-y), 0<y<1; problem 8: n>=299)
Section 3.10: 1, 2, 3, 4, 5, 7, 9, 12, 19 (Answers: problem 2: (a) 0.4, (b) 0.49 (c) 0.936; problem 4: (a) 0.4669, (b) 0.4662; problem 12: part (b): it is most likely that C will have the ball at time n+2)
• Homework 4
Section 4.1: 1, 3, 4, 6, 7, 8, 11 (Answers: problem 4: 3.75; problem 6: 2; problem 8: 1/2)
Section 4.2: 2, 3, 7, 8, 9 (Answers: problem 2: -4; problem 8: -8/5)
Section 4.3: 1, 3, 4, 5, 6, 7
Section 4.4: 1, 3, 6, 7, 8, 10 (answers: 6: [exp(tb)-exp(ta)]/t(b-a) prob 8: 3, 2 prob 10: exp(13t^2+t))
Section 4.6: 3, 5, 7, 9, 10, 12, 13 (Answers: problem 12: 245/81)
Section 4.7: 6, 7, 14 (only E(Y|X)) (Answers: problem 6: 0; problem 14: negatively correlated)
Section 5.6: 2, 3, 5, 7, 11, 16 (Solutions: problem 2(a) 0.8413 (b) 0.4013 (c) 0 (d) 0.2858 (e) 0.6915 (f) 0.2426 (g) 0.6247 (h) 0.4599; problem 16: 0.8186).
Section 6.2: 5
Section 6.3: 2, 3, 5, 8, 10 (Answers: problem 2: 0.0227; problem 8: 0.0013; problem 10: (a) n>= 1600 (b) n>=106 )
|
{}
|
The decimal numeral system is the most usual way of writing numbers: it is the standard system for denoting integer and non-integer numbers, and is also called the base-10 (or denary) system. It uses ten symbols, the digits 0 through 9, and it is a positional numeral system: the contribution of each digit to the value of a number depends on its position in the numeral, each place being worth ten times the place to its right. In the number 25, for instance, the 2 is worth ten times as much as the 5. Other bases work the same way with a different number of digits: the binary system (base 2) uses only the two digits 0 and 1, conveniently matching the "on" and "off" states of computer hardware, while the hexadecimal system (base 16) uses sixteen symbols. (Some languages build numerals differently in words even while using base 10; in Hungarian, 11 is "tizenegy", literally "one on ten", and 23 is "huszonhárom", "three on twenty".)
A decimal separator divides the integer part of a numeral from its fractional part. Great Britain and the United States are among the few places that use a period ("."), as in 25.9703; many other countries use a comma (","), as in 3,1415. In the decimal number 17.48, the whole-number part is 17 and the decimal part is 48. The first place after the separator is the tenths place (one tenth, 1/10); the second is the hundredths place (1/100, i.e. 0.01); the third is the thousandths place (1/1000). Tenths can be pictured on a number line by partitioning the distance between consecutive whole numbers into 10 equal parts. When the integer part is zero it is sometimes omitted, typically in computing (.1234 instead of 0.1234), and extra zeros may be appended to indicate precision. A dot written above a digit marks a recurring decimal: 10/3 = 3.3 recurring, i.e. 3.333...
The numbers expressible with finitely many digits after the separator are the decimal fractions, rationals of the form a/10^n. Expressed as fully reduced fractions, they are exactly those whose denominator is a product of a power of 2 and a power of 5. (A fraction whose numerator is less than its denominator is called a proper fraction; a number between 0 and -1 is a negative fraction.)
Every real number x has an infinite decimal expansion. For an integer n >= 0, let [x]_n denote the finite decimal expansion of the greatest number not greater than x that has exactly n digits after the decimal mark, and let d_n be its last digit. Then [x]_n is obtained by appending d_n to the right of [x]_(n-1); the difference between [x]_(n-1) and [x]_n is either 0, if d_n = 0, or gets arbitrarily small as n tends to infinity; and x is the limit of [x]_n as n tends to infinity, so the (infinite) expression [x]_0.d_1 d_2 ... d_n ... is an infinite decimal expansion of x. An expansion that is eventually repeating represents a rational number, and conversely: the recurring part is an infinite geometric series, which sums to a rational. A trailing run of 9s can be rewritten by incrementing the last digit that is not a 9 and replacing all subsequent 9s by 0s (as in 0.999... = 1).
Decimal numerals thus do not allow an exact finite representation of all real numbers (pi = 3.14159..., for example), and decimals obtained as results of measurement carry an implied precision: a reported 0.080 could correspond to a true value of 0.0803 or 0.0796 (see significant figures). Most modern computer hardware and software uses a binary representation internally (although early machines such as the ENIAC and the IBM 650 used decimal internally). Decimal arithmetic is nonetheless supported, for instance via binary-coded decimal or the decimal floating-point formats of newer revisions of the IEEE 754 standard, and is especially important for financial calculations, which require results in integer multiples of the smallest currency unit for bookkeeping purposes. Decimal arithmetic also uses complements: the 9's complement of a decimal number is obtained by subtracting it from a same-length string of 9s — for example, the 9's complement of 2005 is 9999 - 2005 = 7994 — and the 10's complement is the 9's complement plus 1 in the least significant digit.
Historically, many decimal systems — the Egyptian hieratic numerals, the Greek and Hebrew alphabet numerals, the Roman numerals, the Chinese numerals, and the early Indian Brahmi numerals — were non-positional and required large numbers of symbols, such as separate symbols for 10, 20 to 90, 100, 200 to 900, 1000, and so on. Some non-mathematical ancient texts such as the Vedas, dating back to 1900-1700 BCE, already make use of decimals and decimal fractions. Archimedes (c. 287-212 BCE) invented a positional system based on 10^8 in his Sand Reckoner, which later led Gauss to lament what heights science might already have reached had Archimedes fully realized the potential of his discovery. The symbol 0 was invented in India and spread through Arab trade. Positional decimal fractions appear for the first time in a 10th-century book by the Arab mathematician Abu'l-Hasan al-Uqlidisi; Qin Jiushao used them in his Mathematical Treatise in Nine Sections (1247), denoting, for example, 0.96644; and al-Kashi, who claimed to have discovered decimal fractions himself, wrote fractions with the numerator above the denominator without a horizontal bar.
Place ) number four places to the definition of decimal addition is an integer, Thai... Of another positional digit right of [ x ] n may be on! An adjective, decimal means something related to this numbering system the work i provide is guaranteed to plagiarism. In summary, every real number that is not a decimal is multiplied a. The second place after the decimal point is counted in tenths has 10. 10 digits ( 0 – 9 ) uses to represent the numerical is 20 and so forth the places! Numbers with base 10 is also called the decimal system number by 100 ; it is as. Limit of [ x ] n−1 for exact calculations 1 or greater himself... You may need to append zeroes in the decimal point and 1s the standard system for representing integers Jamshīd... Infinite decimal expansion of a rational number 19 is 20 and so forth the idea was to. Where the first digit after the decimal number system of measurement ) were also strictly decimal ask your Child work. Meaning tenth, from the Latin word decimus, meaning tenth, from the root decem! System, therefore, has 10 as its base and is sometimes called base-10. Represent decimal fractions himself in the decimal part some non-integer numbers, and n is a whole number from decimal! Psychologists suggest irregularities of the decimal system is the decimal point shows any part than! 30 ] the way of denoting the decimal system are the decimal separator sometimes! Zeroes in the what is a decimal number called century on this page, we will Answer three questions for you What! Four places to the left and make the right of the decimal ] [ ]. 'S say we counted up thirteen packs and two single bars to the. Which consists of clever symbol called the whole number, a decimal is got by dividing the number by ;... 9S by 0s ( see 0.999... ) point ) to represent a number of digits is the most way... These schemes have a number two places to the left of the decimal point shows any less. System which consists of knowledge of multi digit addition to 10. [ ]! 
[ x ] n−1 or did, use other bases of numbers 3,1415 ) whole numbers,.! These schemes have a number line an integer, and Chinese numerals,.... Is divided into a number two places to the left of the decimal point are whole numbers, as! Limit of [ x ] n when n tends to infinity, sometimes argued to! Widely used in digit grouping dN to the left of the English names numerals... Dn, by dN + 1, and decades in computer applications the radix character widely used digit! For a non-negative integer shows any part less than denominator in any fraction, the decimal point a is integer. As O.6495, have four digits after the decimal numeral system is widely used computer... The last digit of [ x ] i symbol called the thousandths place let di denote the digit... Lets us write numbers of all types and sizes, using a set of ten symbols emerged in India two. Computer hardware and software also use internal representations which are effectively decimal for decimal! World 's earliest positional decimal system, decimal numbers translation, English dictionary definition of a decimal got! The place before the decimal point is called the whole number and a fraction “ Off ” 1. On the preceding powers of 10. [ 36 ], English dictionary of... The root word decem, or hexadecimal number system that we use our... Conditions “ on ” and “ Off ” i.e 1 and 0 10 ] for external use computer... An example of how the fractional part to the left of the decimal numerals with a finite number equal. Number in the world 's earliest positional decimal system by 10. [ 37.... Was last edited on 14 January 2021, at 15:18 a decimal numeral system we use our... Used for indicating the visual models to gain understanding of decimal numbers affects! Numbers with base 10 in the related octal or what is a decimal number called systems thirteen packs and two single bars base.It is called... Into decimals value is divided by 10. [ 3 ] 20 are formed regularly ( e.g 20, natural! 
By another 9 nines is less than denominator in any fraction, the division may continue.! One pack\ '' is also called the base ten or denary what is a decimal number called system Greek numerals, numerals! Value in decimal number according to the left of the English names of may! Regularly ( e.g all what is a decimal number called and sizes, using a clever symbol called the decimal point, or.! Introduced by Simon Stevin in the related octal or hexadecimal number system is often referred to as notation!: dN, by dN + 1, and Chinese numerals when an object is divided by 10 [... Britain and the idea was carried to the right of [ x n−1... Specific power of the decimal what is a decimal number called is also used instead of the decimal numeral of... Number between zero and -1 is called the decimal separator are sometimes called a system! As, 0,1,2,3,4,5,6,7,8,9 in expanded form and in words to human hands typically having ten fingers/digits can be both... Splashlearn™ & Springboard™ are Trademarks of StudyPad, Inc 20 are formed (! An integer, and decades straightforward to see that [ x ] when... Than 1 subsequent 9s by 0s ( see 0.999... ) number 2005 is 9999 - =... Value in decimal number system before the … decimal, measurement results are often given with a number! System that we use in our day-to-day life is the most usual way of denoting numbers in number. Context, the decimal point is called a fraction adding decimals ( half a apples. By 1000 ; it is the decimal system, has 10 as its base is. Exact representation for all real numbers, called decimal fractions were positional period to indicate error. Extend knowledge of multi digit addition to adding decimals ( up to hundredths.... Do not allow an exact representation for all real numbers, such O.6495. Digit in the decimal part 5 ], a forerunner of modern decimal... Numbers with base 10 in the decimal number, a forerunner of modern European decimal notation was introduced by Stevin... 
Of the decimal point 19 is the most usual way of denoting numbers in 16th! Finite number of digits is the limit of [ x ] n when what is a decimal number called tends to infinity a. The related octal or hexadecimal number system is often referred to as notation! System which consists of which 17 is the extension to non-integer numbers, called decimal or... A forerunner of modern European decimal notation. [ 3 ] last digit of [ x n−1. Some psychologists suggest irregularities of the decimal number according to its value, this is the. Number, a forerunner of modern European decimal notation. [ 3 ] is the extension to non-integer numbers non-integer. Digit after the decimal separator ( usually . '' [ 30 However! The Complete K-5 Math Learning Program Built for your Child to compute price., or hexadecimal systems your mouse cursor over the decimal point are whole numbers, for representing a number... Representation for all real numbers, and Chinese numerals + 1, and natural numbers Arbitrary-precision arithmetic exact. Is divided into a number of non-zero digits after the decimal … decimal comes from the decimal point a. Its base and is sometimes presented in the 15th century each part called! It is also used instead of the Hindu–Arabic numeral system for representing.. Often given with a decimal number system is often referred to as decimal notation was introduced by Simon Stevin the.
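The approximation property stated above — for every real number x and positive integer n there are decimals L and u with at most n digits after the decimal mark such that L ≤ x ≤ u and u − L = 10^−n — is easy to demonstrate with exact rational arithmetic. A minimal sketch in Python (the function name is my own, for illustration only):

```python
from fractions import Fraction
import math

def truncation_bounds(x, n):
    """Return decimals L and u with at most n digits after the
    decimal mark such that L <= x <= u and u - L = 10**-n."""
    scale = 10 ** n
    L = Fraction(math.floor(x * scale), scale)  # n-digit truncation [x]_n
    u = L + Fraction(1, scale)
    return L, u

# Bound x = 1/3 with 3 digits after the decimal mark: L = 0.333, u = 0.334
L, u = truncation_bounds(Fraction(1, 3), 3)
assert L <= Fraction(1, 3) <= u
assert u - L == Fraction(1, 1000)
```

Letting n grow shrinks the gap u − L by a factor of 10 per step, which is exactly the limit statement above.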
|
{}
|
# Lagrangian Mechanics & Derivatives
I don't really know whether to put this in Physics forums since it is relating to Mechanics, or Math since the question is actually about the math being done. Don't criticize me over it.
So for the question: I was doing some review problems on Lagrange's equations, KE+PE, and I found this document: http://wwwf.imperial.ac.uk/~pavl/ASHEET2.PDF
In the first question's solution, the writer differentiates without explaining the step. They have these:
$$\begin{cases} x = r \sin(\theta) \cos(\phi)\\[5 pt] y = r \sin(\theta) \sin(\phi)\\[5 pt] z = r \cos(\theta) \end{cases}$$
and this:
$$T = {m\over 2}(\dot x^2 +\dot y^2 +\dot z^2)$$
I never really studied the spherical coordinate system much, and obviously never thought about the derivatives of the conversion into Cartesian. Can someone find or explain the process of taking the derivatives of the first three equations, plugging into the equation for Kinetic Energy, and simplifying? There is probably a different calculus method for the coordinate system, which I don't know. Thanks!
EDIT: While taking the derivatives, was the method used actually a separate form of calculus beyond Calculus I and II, or was it normal first-order differentiation? If so, how?
• "Don't criticize me over it"? I mean... if your question was in the wrong place, wouldn't you want us to tell you? – user856 Sep 3 '18 at 5:49
I think this question belongs to PSE! But whatever, here's your answer: you have to remember that $\dot x$ is a complete derivative of $x$ with respect to time. Going to a new representation of $x$ in a new system, like in your case $x(r,\theta,\phi)$ for the spherical coordinates, where, and this is important, all the coordinates are functions of time $$r\equiv r(t)\\ \theta \equiv \theta(t) \\\phi\equiv\phi(t)$$ transforms the complete time derivative in this manner
$$\dot x = \frac{\partial x}{\partial r}\frac{\mathrm d r}{\mathrm d t}+\frac{\partial x}{\partial \theta}\frac{\mathrm d \theta}{\mathrm d t}+\frac{\partial x}{\partial \phi}\frac{\mathrm d \phi}{\mathrm d t} \\ \dot x = \frac{\partial x}{\partial r}\dot r+\frac{\partial x}{\partial \theta}\dot\theta+\frac{\partial x}{\partial \phi}\dot\phi \\ \dot x = (\sin\theta\cos\phi)\dot r + (r\cos\theta\cos\phi)\dot\theta - (r\sin\theta\sin\phi)\dot\phi$$
where the last equation was evaluated from the definition of $x=r\sin\theta\cos\phi$. Now the same goes for the other variables, which gets you
$$\dot y = (\sin\theta\sin\phi)\dot r+(r\cos\theta\sin\phi)\dot\theta+(r\sin\theta\cos\phi)\dot\phi \\ \dot z = (\cos\theta)\dot r-(r\sin\theta)\dot\theta$$
From these three equations, it's just a matter of squaring them all, summing them and seeing what you get! Tedious work, but it has to be done sometimes:
$$\dot x^2 =\sin^2\theta\cos^2\phi\dot r^2+r^2\cos^2\theta\cos^2\phi\dot\theta^2 +r^2\sin^2\theta\sin^2\phi\dot\phi^2+\\ +2r\sin\theta\cos\theta\cos^2\phi\dot r\dot\theta\color{blue}{-2r\sin^2\theta\cos\phi\sin\phi\dot r\dot\phi}\color{red}{-2r^2\cos\theta\sin\theta\cos\phi\sin\phi\dot\theta\dot\phi}\\[10 pt] \dot y^2 = \sin^2\theta\sin^2\phi\dot r^2+r^2\cos^2\theta\sin^2\phi\dot\theta^2+r^2\sin^2\theta\cos^2\phi\dot\phi^2+\\+2r\cos\theta\sin\theta\sin^2\phi\dot r\dot\theta \color{blue}{+ 2r\sin^2\theta\cos\phi\sin\phi\dot r\dot\phi }\color{red}{+2r^2\cos\theta\sin\theta\cos\phi\sin\phi\dot\theta\dot\phi}\\[10 pt] \dot z^2 = \cos^2\theta\dot r^2+r^2\sin^2\theta\dot\theta^2-2r\cos\theta\sin\theta\dot r\dot\theta$$
Let's evaluate the sum keeping in mind that the coloured parts, clearly, add up to zero with one another (we'll see that other parts add up to zero but not so easily):
\begin{align} (\dot x^2+\dot y^2+\dot z^2) &= \dot r^2 (\sin^2\theta\cos^2\phi+\sin^2\theta\sin^2\phi+\cos^2\theta)\tag1\\ &+{}r^2\dot\theta^2(\cos^2\theta\cos^2\phi+\cos^2\theta\sin^2\phi+\sin^2\theta)\tag2\\ &+{}r^2\dot\phi^2(\sin^2\theta\sin^2\phi+\sin^2\theta\cos^2\phi)\tag3\\ &+{}2r\dot r\dot\theta(\sin\theta\cos\theta\cos^2\phi+\cos\theta\sin\theta\sin^2\phi-\cos\theta\sin\theta)\tag4 \end{align}
Now it probably seems all wrong! But, keeping in mind the formula $$\cos^2\theta+\sin^2\theta=1$$ we can do lots of things:
Formula $(1)$ $$\color{red}{\sin^2\theta}\cos^2\phi+\color{red}{\sin^2\theta}\sin^2\phi+\cos^2\theta = \color{red}{\sin^2\theta}\underbrace{(\cos^2\phi+\sin^2\phi)}_{\text{is one}}+\color{red}{\cos^2\theta} \\[5 pt] = \sin^2\theta+\cos^2\theta = 1$$
Formula $(2)$ $$\color{red}{\cos^2\theta}\cos^2\phi+\color{red}{\cos^2\theta}\sin^2\phi+\sin^2\theta = \cos^2\theta(\cos^2\phi+\sin^2\phi)+\sin^2\theta = \\ =\cos^2\theta+\sin^2\theta = 1$$
Formula $(3)$ $$\color{red}{\sin^2\theta}\sin^2\phi+\color{red}{\sin^2\theta}\cos^2\phi= \sin^2\theta(\sin^2\phi+\cos^2\phi)=\sin^2\theta$$
Formula $(4)$ $$\color{red}{\sin\theta\cos\theta}\cos^2\phi+\color{red}{\cos\theta\sin\theta}\sin^2\phi-\cos\theta\sin\theta = \sin\theta\cos\theta(\cos^2\phi+\sin^2\phi)-\cos\theta\sin\theta = \\ = \sin\theta\cos\theta-\sin\theta\cos\theta=0$$
Finally, plugging it all back into the sum of the derivatives squared what we get is
$$(\dot x^2+\dot y^2+\dot z^2) =\dot r^2+r^2\dot\theta^2+r^2\sin^2\theta\dot\phi^2$$
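A quick numerical cross-check of this identity (a sketch, not part of the original answer): plug random values for the coordinates and their time derivatives into the three velocity formulas derived above and compare against the spherical form.

```python
import math, random

def cartesian_speed2(r, th, ph, rd, thd, phd):
    # The velocity components derived above via the chain rule
    xd = rd*math.sin(th)*math.cos(ph) + r*thd*math.cos(th)*math.cos(ph) - r*phd*math.sin(th)*math.sin(ph)
    yd = rd*math.sin(th)*math.sin(ph) + r*thd*math.cos(th)*math.sin(ph) + r*phd*math.sin(th)*math.cos(ph)
    zd = rd*math.cos(th) - r*thd*math.sin(th)
    return xd*xd + yd*yd + zd*zd

def spherical_speed2(r, th, ph, rd, thd, phd):
    # The simplified result: r'^2 + r^2 th'^2 + r^2 sin^2(th) ph'^2
    return rd**2 + r**2*thd**2 + r**2*math.sin(th)**2*phd**2

random.seed(0)
for _ in range(1000):
    args = [random.uniform(-2, 2) for _ in range(6)]
    assert abs(cartesian_speed2(*args) - spherical_speed2(*args)) < 1e-9
```

Of course this only spot-checks the algebra; the derivation above is the actual proof.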
• Sorry for the long post and for taking so long! I wanted to write down every step so that it would be as useful as possible! All this derivation could have been done in the physicist's way, by simple geometrical arguments! But this way is more rigorous and, probably, an overkill! But who cares, right? – Davide Morgante Sep 2 '18 at 21:41
• +1, endorsed! Special thanks for doing all that algebra! Cheers! 😉 – Robert Lewis Sep 3 '18 at 0:57
• Thanks! Just what I needed! Probably doesn't help I pulled this off an MIT site while I'm still in high school though! – Shadow Sniper Sep 3 '18 at 11:58
• @ShadowSniper I think that with a high school education you could understand this! The only "out of the reach" concept could be the chain rule with partial differentiation! All the other calculations are simple algebraic manipulations and the use of the famous trigonometric identity! It's just a little bit tedious – Davide Morgante Sep 3 '18 at 13:22
This is a (relatively tedious) application of the chain and product rule.
$$z=r \cos \theta$$
$$\frac{dz}{dt}=\frac{d}{dt} \left( r \cos \theta \right)$$
Applying the product rule,
$$=\frac{dr}{dt} \cos \theta+ \frac{d \cos \theta}{dt} r$$
Applying the chain rule,
$$=\dot r \cos \theta+\frac{d \cos \theta}{d \theta} \frac{d \theta}{dt} r$$
$$=\dot r \cos \theta-r \dot \theta \sin \theta$$
It is a similar exercise to differentiate $r\sin \theta$ with respect to time.
$$y=r \sin \theta \sin \phi$$
$$\dot y=\sin \phi \frac{d}{dt} (r \sin \theta)+r \sin \theta\frac{d}{dt} \sin \phi$$
$$= \sin \phi \frac{d}{dt} (r \sin \theta)+r \sin \theta\frac{d \sin \phi}{d\phi} \frac{d\phi}{dt}$$
$$=\sin \phi \left( \dot r \sin \theta+\dot \theta r \cos \theta \right)+r \dot \phi \sin \theta \cos \phi$$
$$x=r \sin \theta \cos \phi$$
$$\dot x=\cos \phi \frac{d}{dt}\left(r \sin \theta \right)+r \sin \theta \frac{d}{dt} \cos \phi$$
$$=\cos \phi \left(\dot r \sin \theta+\dot \theta r \cos \theta \right)-r \dot \phi \sin \theta \sin \phi$$
In order to calculate $\dot x^2+\dot y^2$ without too much trouble, make the substitution $u= \dot r \sin \theta+\dot \theta r \cos \theta$ and $v=r \dot \phi \sin \theta$. Then we wish to calculate,
$$(u \sin \phi +v \cos \phi)^2+(u \cos \phi-v \sin \phi)^2$$
$$=u^2+v^2$$
$$=(\dot r \sin \theta+\dot \theta r \cos \theta )^2+r^2 \dot \phi^2 \sin^2 \theta$$
Next, to calculate, $\dot x^2+\dot y^2+\dot z^2$ note:
$$(\dot r \sin \theta+\dot \theta r \cos \theta )^2+\left(\dot r \cos \theta-r \dot \theta \sin \theta \right)^2$$
$$=\dot r^2+r^2 \dot \theta^2$$
So,
$$\dot x^2+\dot y^2+\dot z^2= \dot r^2+r^2 \dot \theta^2+ r^2 \dot \phi^2 \sin^2 \theta$$
As expected.
To convert the cartesian expression for kinetic energy,
$T = \dfrac{m}{2}(\dot x^2 + \dot y^2 + \dot z^2) \tag 1$
into spherical coordinates $r,\phi, \theta$ such that
$x = r \sin \theta \cos \phi, \tag 2$
$y = r\sin \theta \sin \phi, \tag 3$
$z = r\cos \theta, \tag{4}$
we merely need employ two standard results from elementary calculus, namely, the Leibniz product rule and the chain rule; the calculations are all in the realm of basic first-order differentiation using these two principles. I will start by illustrating how these concepts apply to $z$ (4), since it is the simplest of the three expressions (2)-(4); from (4), by the product rule, where I use $\dot{}$ and ${}'$ both to represent the $t$-derivative,
$\dot z = \dot r \cos \theta + r (\cos \theta)'; \tag 5$
we apply the chain rule to (5):
$(\cos \theta)' = \left (\dfrac{d\cos \theta}{d\theta} \right ) \dot \theta = -\dot \theta \sin \theta; \tag 6$
thus (5) becomes
$\dot z = \dot r \cos \theta - r \dot \theta \sin \theta; \tag 7$
we similarly handle $x$ as in (2): again, the product rule yields
$\dot x = \dot r \sin \theta \cos \phi + r(\sin \theta)'\cos \phi + r\sin \theta (\cos \phi)', \tag 8$
and again we apply the chain rule, this time twice:
$(\sin \theta)' = \dfrac{d\sin \theta}{d\theta} \dot \theta = \dot \theta \cos \theta, \tag 9$
$(\cos \phi)' = \dfrac{d\cos \phi}{d \phi} \dot \phi = -\dot \phi \sin \phi; \tag{10}$
assembling (8)-(10) together:
$\dot x = \dot r \sin \theta \cos \phi + r\dot \theta \cos \theta \cos \phi - r\dot \phi \sin \theta \sin \phi; \tag{11}$
By a parallel procedure, again using the Leibniz and chain rules, we also have
$\dot y = \dot r \sin \theta \sin \phi + r \dot \theta \cos \theta \sin \phi + r \dot \phi \sin \theta \cos \phi; \tag{12}$
with (7), (11)-(12) at hand, calculating $\dot x^2 + \dot y^2 + \dot z^2$ in sphericals involves no more than a good slug o' algebra; but there is really nothing to see in it that hasn't been very nicely and more than adequately presented by our colleagues Davide Morgante and Ahmed S. Ataalla.
So I think I'll leave off now. My main point and interest here has been to point out how the Leibniz and chain rules, both results of basic calculus, are used to effect the transformation of the velocities, which then leads to the expression for $T$ in spherical coordinates, as others have shown.
|
{}
|
### Position sensitive device (4038 views - Mechanical Engineering)
A Position Sensitive Device and/or Position Sensitive Detector (PSD) is an optical position sensor (OPS), that can measure a position of a light spot in one or two-dimensions on a sensor surface.
Go to Article
## Position sensitive device
### Position sensitive device
A Position Sensitive Device and/or Position Sensitive Detector (PSD) is an optical position sensor (OPS), that can measure a position of a light spot in one or two-dimensions on a sensor surface.
## Principles
PSDs can be divided into two classes which work according to different principles: in the first class, the sensors have an isotropic sensor surface that supplies continuous position data. The second class has discrete sensors in a raster-like structure on the sensor surface that supply local discrete data.
### Isotropic Sensors
The technical term PSD was first used in a 1957 publication by J.T. Wallmark for the lateral photoelectric effect applied to local measurements. On a laminar semiconductor, a so-called PIN diode is exposed to a tiny spot of light. This exposure causes a change in local resistance, and thus in the electron flow to the four electrodes. From the currents ${\displaystyle I_{a}}$, ${\displaystyle I_{b}}$, ${\displaystyle I_{c}}$ and ${\displaystyle I_{d}}$ in the electrodes, the location of the light spot is computed using the following equations.
${\displaystyle x=k_{x}\cdot {\frac {I_{b}-I_{d}}{I_{b}+I_{d}}}}$
and
${\displaystyle y=k_{y}\cdot {\frac {I_{a}-I_{c}}{I_{a}+I_{c}}}}$
The ${\displaystyle k_{x}}$ and ${\displaystyle k_{y}}$ are simple scaling factors, which permit transformation into coordinates.
An advantage of this process is the continuous measurement of the light spot position, with measuring rates of up to over 100 kHz. Disadvantages are the dependence of the measurement on the form and size of the light spot, as well as the nonlinear response; these can be partly compensated by special electrode shapes.
#### 2-D tetra-lateral Position Sensitive Device (PSD)
A 2-D tetra-lateral PSD is capable of providing continuous position measurement of the incident light spot in 2-D. It consists of a single square PIN diode with a resistive layer. When there is an incident light on the active area of the sensor, photocurrents are generated and collected from four electrodes placed along each side of the square near the boundary. The incident light position can be estimated based on currents collected from the electrodes:
${\displaystyle x=k_{x}\cdot {\frac {I_{4}-I_{3}}{I_{4}+I_{3}}}}$
and
${\displaystyle y=k_{y}\cdot {\frac {I_{2}-I_{1}}{I_{2}+I_{1}}}}$
The 2-D tetra-lateral PSD has the advantages of fast response, much lower dark current, easy bias application and lower fabrication cost. Its measurement accuracy and resolution are independent of the spot shape and size, unlike those of the quadrant detector, which are easily affected by air turbulence. However, it suffers from a nonlinearity problem: while the position estimate is approximately linear with respect to the real position when the spot is in the center area of the PSD, the relationship becomes nonlinear as the light spot moves away from the center. This seriously limits its applications, and there is strong demand for linearity improvement in many applications.
To reduce the nonlinearity of 2-D PSD, a new set of formulae have been proposed to estimate the incident light position (Song Cui, Yeng Chai Soh:Linearity indices and linearity improvement of 2-D tetra-lateral position sensitive detector. IEEE Transactions on Electron Devices, Vol. 57, No. 9, pp. 2310-2316, 2010):
${\displaystyle x=k_{x1}\cdot {\frac {I_{4}-I_{3}}{I_{0}-1.02(I_{2}-I_{1})}}\cdot {\frac {0.7(I_{2}+I_{1})+I_{0}}{I_{0}+1.02(I_{2}-I_{1})}}}$
and
${\displaystyle y=k_{y1}\cdot {\frac {I_{2}-I_{1}}{I_{0}-1.02(I_{4}-I_{3})}}\cdot {\frac {0.7(I_{4}+I_{3})+I_{0}}{I_{0}+1.02(I_{4}-I_{3})}}}$
where :${\displaystyle I_{0}=I_{1}+I_{2}+I_{3}+I_{4}}$, and :${\displaystyle k_{x1},k_{y1}}$ are new scale factors.
The position estimation results obtained by this set of formulae are simulated below. We assume the light spot is moving in steps in both directions and we plot position estimates on a 2-D plane. Thus a regular grid pattern should be obtained if the estimated position is perfectly linear with the true position. The performance is much better than the previous formulae. Detailed simulations and experiment results can be found in S. Cui's paper.
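As a concrete illustration, both tetra-lateral estimates above can be written as small functions. This is a sketch of the formulas as stated, not code from a specific device datasheet; the function names, default scale factors, and current ordering (I1..I4 as in the equations) are placeholders:

```python
def psd_position(i1, i2, i3, i4, kx=1.0, ky=1.0):
    """Standard 2-D tetra-lateral PSD position estimate from the
    four electrode photocurrents."""
    x = kx * (i4 - i3) / (i4 + i3)
    y = ky * (i2 - i1) / (i2 + i1)
    return x, y

def psd_position_improved(i1, i2, i3, i4, kx1=1.0, ky1=1.0):
    """Linearity-improved estimate (Cui & Soh, 2010), as given above."""
    i0 = i1 + i2 + i3 + i4
    x = kx1 * (i4 - i3) / (i0 - 1.02 * (i2 - i1)) \
            * (0.7 * (i2 + i1) + i0) / (i0 + 1.02 * (i2 - i1))
    y = ky1 * (i2 - i1) / (i0 - 1.02 * (i4 - i3)) \
            * (0.7 * (i4 + i3) + i0) / (i0 + 1.02 * (i4 - i3))
    return x, y

# A spot in the center produces equal currents and a (0, 0) estimate:
print(psd_position(1.0, 1.0, 1.0, 1.0))           # (0.0, 0.0)
print(psd_position_improved(1.0, 1.0, 1.0, 1.0))  # (0.0, 0.0)
```

Either function maps raw photocurrents to a position estimate; the improved variant trades a few extra operations for better linearity away from the center.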
### Discrete Sensors
#### Serial Processing
The most common sensor applications with a sampling rate of less than 1000 Hz are CCD or CMOS cameras. The sensor is partitioned into individual pixels whose exposure value can be read out sequentially. The position of the light spot can be computed with the methods of photogrammetry directly from the brightness distribution.
#### Parallel Processing
For faster applications, matrix sensors with parallel processing were developed. Row by row and column by column, the light intensity at each pixel is compared with a global threshold value, and the comparison results are combined along each row and column with logical OR links. The position of the light spot is then computed as the average of the coordinates of all rows and columns containing an element brighter than the threshold.
|
{}
|
What symmetries are in the following action:
1. Jun 25, 2014
bagherihan
$$S=\int d^4x\left(\frac{m}{12}A_\mu \varepsilon^{\mu\nu\rho\sigma} H_{\nu\rho\sigma} + \frac{1}{8} m^2A^\mu A_\mu\right)$$
Where
$$H_{\nu\rho\sigma} = \partial_\nu B_{\rho\sigma} + \partial_\rho B_{\sigma\nu} + \partial_\sigma B_{\nu\rho}$$
And $B^{μ \nu}$ is an antisymmetric tensor.
What are the global symmetries and what are the local symmetries?
p.s how many degrees of freedom does it have?
Thank you!
2. Jun 25, 2014
ChrisVer
Has $A_{\mu}$ anything to do with the $B_{\mu \nu}$?
And what do you mean by its dofs?
The Action is a (real) scalar quantity, so it has 1 dof.
if $A_{\mu}$ is a massive bosonic field, it should have 3 dofs.
and about $B^{\mu \nu}$ just by being an antisymmetric tensor (in Lorentz repr it is a 4x4 in your case matrix) will have:
$\frac{D^{2}}{2}-D = \frac{D(D-1)}{2}$
free parameters. So for D=4, you have 6 dofs...
3. Jun 25, 2014
bagherihan
Thanks ChrisVer,
$A^\mu$ has nothing to do with $B_{\mu \nu}$
I meant the number of dofs of the theory.
$H_{\nu\rho\sigma}$ is antisymmetric, so it has only $\binom{4}{3}=4$ dof, doesn't it? Thus in total it's 3×4=12 dof, isn't it?
And more important for me is to know the action symmetries, both the global and the local ones.
thanks.
4. Jun 25, 2014
ChrisVer
For the symmetries you should apply the Noether's procedure ...
A global symmetry which I can see before hand is the Lorentz Symmetry (since you don't have any free indices flowing around)
5. Jun 25, 2014
ChrisVer
Also I don't think you need the dofs of the strength field tensor anywhere, do you?
It gives the kinetic term of your field $B_{\mu \nu}$
I am not sure though about the dofs now...you might be right.
6. Jun 25, 2014
ChrisVer
For the H you were right.
$H$ is a p=3-form, and a general p-form in n dimensions has:
$\frac{n!}{(n-p)!p!}$ ind. components.
7. Jun 26, 2014
bagherihan
You're probably right, it's the 6 dof of B that matters.
But apparently B has a gauge symmetry, so only 3 dof left.
|
{}
|
# Attempting to derive a formula for logistic map (part 1)
The logistic map comes from iterated compositions of the function $f(x) = \lambda x(1-x)$, starting at $x=\frac12$, where $\lambda$ is a parameter within the domain $[0,4]$. We'll begin by defining a function $f_n (x)$ as $n$ instances of $f(x)$ composed together, observing the fractional form of each incrementation of $n$: $f_1 \left(\frac12\right) = \lambda \frac12 \left(1-\frac12\right) = \frac{\lambda}4$ $f_2 \left(\frac12\right) = \lambda \frac{\lambda}4 \left(1-\frac{\lambda}4\right) = \frac{\lambda^2 (4-\lambda)}{16}$ $f_3 \left(\frac12\right) = \lambda \frac{\lambda^2 (4-\lambda)}{16} \left(1-\frac{\lambda^2 (4-\lambda)}{16} \right) = \frac{\lambda^3(4-\lambda)(16-\lambda^2(4-\lambda))}{256}$ The denominator of each iteration is the square of the previous denominator. Using the fact that $4$ is $2^{2^1}$, we can rewrite $f_n \left(\frac12\right) = \frac{f_n^{'}\left(\frac12\right)}{2^{2^n}}$ (here $f_n^{'}(x)$ just denotes the numerator of $f_n (x)$, not a derivative.)
Note that the leading power of $\lambda$ in each term cannot be factored out in this state, due to $f_n$ being composed into $f_{n+1}$. However, this option may be feasible within sums of $\lambda^{n-k}$. To derive bounds for each power, we compute the first few numerators: $f_1^{'} \left(\frac12\right) = \lambda$ $f_2^{'} \left(\frac12\right) = \lambda^2 (4-\lambda) = 4\lambda^2-\lambda^3$ $f_3^{'} \left(\frac12\right) =\lambda(4\lambda^2-\lambda^3)(16-4\lambda^2+\lambda^3)= 64\lambda^3-16\lambda^5+4\lambda^6-16\lambda^4+4\lambda^6-\lambda^7$$= 64\lambda^3-16\lambda^4-16\lambda^5+8\lambda^6-\lambda^7$ The lowest order $k$ of $\lambda^k$ is $n$, since each step multiplies a constant times $\lambda$ by $\lambda^{n-1}$, and the highest order is $2^n-1$, since $\left(\lambda^{2^{n-1}-1}\right)^2$ times $\lambda$ produces $\lambda^{2(2^{n-1}-1)+1}=\lambda^{2^n-1}$. Given the coefficients $c_{n,k}$, we can express $f_n^{'}\left(\frac12\right)$ as $\sum_{k=0}^{2^n-n-1} c_{n,k} \lambda^{k+n}$ More iterations of $n$ will be needed to figure out a general formula for these coefficients.
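The expansions above can be cross-checked by iterating the map symbolically with exact rational arithmetic; a sketch in plain Python (the polynomial representation and function names are my own):

```python
from fractions import Fraction

def pmul(a, b):
    """Multiply two polynomials in lambda, given as coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def psub(a, b):
    """Subtract polynomial b from a."""
    n = max(len(a), len(b))
    a = a + [Fraction(0)] * (n - len(a))
    b = b + [Fraction(0)] * (n - len(b))
    return [x - y for x, y in zip(a, b)]

def logistic_iterate(n):
    """Coefficients of f_n(1/2) as a polynomial in lambda."""
    p = [Fraction(0), Fraction(1, 4)]  # f_1(1/2) = lambda/4
    for _ in range(n - 1):
        # f_{k+1}(1/2) = lambda * p * (1 - p)
        p = pmul([Fraction(0), Fraction(1)], pmul(p, psub([Fraction(1)], p)))
    return p

# Numerator of f_3(1/2) over 2^(2^3) = 256 matches the hand expansion:
# 64 l^3 - 16 l^4 - 16 l^5 + 8 l^6 - l^7
assert [int(c * 256) for c in logistic_iterate(3)] == [0, 0, 0, 64, -16, -16, 8, -1]
```

The assertion confirms both the denominator $2^{2^n}$ and the coefficients computed by hand for $n=3$.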
Note by William Crabbe
2 years, 2 months ago
# Revision history
For the friction params, do you have something like this? (${prefix}_caster_wheel is a link in one of my models.)

<gazebo reference="${prefix}_caster_wheel">
<collision>
<surface>
<friction>
<ode>
<mu>0.0</mu>
<mu2>0.0</mu2>
<slip1>1.0</slip1>
<slip2>1.0</slip2>
</ode>
</friction>
</surface>
</collision>
</gazebo>
## Results (1-50 of 124 matches)
Label Dim $A$ Field CM RM Traces ($a_{2}$, $a_{3}$, $a_{5}$, $a_{7}$) Fricke sign $q$-expansion
1248.1.b.a $4$ $0.623$ $$\Q(\zeta_{8})$$ $$\Q(\sqrt{-39})$$ None $$0$$ $$0$$ $$0$$ $$0$$ $$q-\zeta_{8}^{2}q^{3}+(-\zeta_{8}-\zeta_{8}^{3})q^{5}-q^{9}+\cdots$$
1248.1.l.a $4$ $0.623$ $$\Q(\zeta_{8})$$ None $$\Q(\sqrt{39})$$ $$0$$ $$0$$ $$0$$ $$0$$ $$q+\zeta_{8}^{2}q^{3}+(\zeta_{8}-\zeta_{8}^{3})q^{5}+(\zeta_{8}+\zeta_{8}^{3}+\cdots)q^{7}+\cdots$$
1248.1.bs.a $8$ $0.623$ $$\Q(\zeta_{24})$$ None None $$0$$ $$0$$ $$0$$ $$0$$ $$q-\zeta_{24}^{5}q^{3}-\zeta_{24}^{6}q^{5}+(\zeta_{24}+\zeta_{24}^{7}+\cdots)q^{7}+\cdots$$
1248.1.cm.a $16$ $0.623$ $$\Q(\zeta_{32})$$ $$\Q(\sqrt{-39})$$ None $$0$$ $$0$$ $$0$$ $$0$$ $$q-\zeta_{32}^{3}q^{2}+\zeta_{32}^{10}q^{3}+\zeta_{32}^{6}q^{4}+\cdots$$
1248.2.a.a $1$ $9.965$ $$\Q$$ None None $$0$$ $$-1$$ $$-2$$ $$-2$$ $+$ $$q-q^{3}-2q^{5}-2q^{7}+q^{9}+6q^{11}+\cdots$$
1248.2.a.b $1$ $9.965$ $$\Q$$ None None $$0$$ $$-1$$ $$-2$$ $$0$$ $+$ $$q-q^{3}-2q^{5}+q^{9}+4q^{11}+q^{13}+\cdots$$
1248.2.a.c $1$ $9.965$ $$\Q$$ None None $$0$$ $$-1$$ $$0$$ $$-2$$ $+$ $$q-q^{3}-2q^{7}+q^{9}+q^{13}+2q^{17}+\cdots$$
1248.2.a.d $1$ $9.965$ $$\Q$$ None None $$0$$ $$-1$$ $$0$$ $$2$$ $-$ $$q-q^{3}+2q^{7}+q^{9}+4q^{11}+q^{13}+\cdots$$
1248.2.a.e $1$ $9.965$ $$\Q$$ None None $$0$$ $$-1$$ $$2$$ $$-2$$ $-$ $$q-q^{3}+2q^{5}-2q^{7}+q^{9}+2q^{11}+\cdots$$
1248.2.a.f $1$ $9.965$ $$\Q$$ None None $$0$$ $$1$$ $$-2$$ $$0$$ $+$ $$q+q^{3}-2q^{5}+q^{9}-4q^{11}+q^{13}+\cdots$$
1248.2.a.g $1$ $9.965$ $$\Q$$ None None $$0$$ $$1$$ $$-2$$ $$2$$ $+$ $$q+q^{3}-2q^{5}+2q^{7}+q^{9}-6q^{11}+\cdots$$
1248.2.a.h $1$ $9.965$ $$\Q$$ None None $$0$$ $$1$$ $$0$$ $$-2$$ $+$ $$q+q^{3}-2q^{7}+q^{9}-4q^{11}+q^{13}+\cdots$$
1248.2.a.i $1$ $9.965$ $$\Q$$ None None $$0$$ $$1$$ $$0$$ $$2$$ $-$ $$q+q^{3}+2q^{7}+q^{9}+q^{13}+2q^{17}+\cdots$$
1248.2.a.j $1$ $9.965$ $$\Q$$ None None $$0$$ $$1$$ $$2$$ $$2$$ $-$ $$q+q^{3}+2q^{5}+2q^{7}+q^{9}-2q^{11}+\cdots$$
1248.2.a.k $2$ $9.965$ $$\Q(\sqrt{5})$$ None None $$0$$ $$-2$$ $$-2$$ $$6$$ $-$ $$q-q^{3}+(-1-\beta )q^{5}+(3-\beta )q^{7}+q^{9}+\cdots$$
1248.2.a.l $2$ $9.965$ $$\Q(\sqrt{5})$$ None None $$0$$ $$-2$$ $$2$$ $$-2$$ $+$ $$q-q^{3}+(1+\beta )q^{5}+(-1-\beta )q^{7}+q^{9}+\cdots$$
1248.2.a.m $2$ $9.965$ $$\Q(\sqrt{5})$$ None None $$0$$ $$2$$ $$-2$$ $$-6$$ $+$ $$q+q^{3}+(-1-\beta )q^{5}+(-3+\beta )q^{7}+\cdots$$
1248.2.a.n $2$ $9.965$ $$\Q(\sqrt{5})$$ None None $$0$$ $$2$$ $$2$$ $$2$$ $-$ $$q+q^{3}+(1+\beta )q^{5}+(1+\beta )q^{7}+q^{9}+\cdots$$
1248.2.a.o $3$ $9.965$ 3.3.148.1 None None $$0$$ $$-3$$ $$2$$ $$0$$ $-$ $$q-q^{3}+(1+\beta _{1})q^{5}+\beta _{2}q^{7}+q^{9}+(-1+\cdots)q^{11}+\cdots$$
1248.2.a.p $3$ $9.965$ 3.3.148.1 None None $$0$$ $$3$$ $$2$$ $$0$$ $-$ $$q+q^{3}+(1+\beta _{1})q^{5}-\beta _{2}q^{7}+q^{9}+(1+\cdots)q^{11}+\cdots$$
1248.2.c.a $6$ $9.965$ 6.0.153664.1 None None $$0$$ $$-6$$ $$0$$ $$0$$ $$q-q^{3}+\beta _{1}q^{5}+(-\beta _{1}+\beta _{2})q^{7}+q^{9}+\cdots$$
1248.2.c.b $6$ $9.965$ 6.0.153664.1 None None $$0$$ $$6$$ $$0$$ $$0$$ $$q+q^{3}+\beta _{1}q^{5}+(\beta _{1}-\beta _{2})q^{7}+q^{9}+\cdots$$
1248.2.c.c $8$ $9.965$ 8.0.134560000.4 None None $$0$$ $$-8$$ $$0$$ $$0$$ $$q-q^{3}+\beta _{1}q^{5}-\beta _{6}q^{7}+q^{9}+(-\beta _{1}+\cdots)q^{11}+\cdots$$
1248.2.c.d $8$ $9.965$ 8.0.134560000.4 None None $$0$$ $$8$$ $$0$$ $$0$$ $$q+q^{3}+\beta _{1}q^{5}+\beta _{6}q^{7}+q^{9}+(\beta _{1}+\beta _{3}+\cdots)q^{11}+\cdots$$
1248.2.d.a $4$ $9.965$ $$\Q(\zeta_{8})$$ None None $$0$$ $$0$$ $$0$$ $$0$$ $$q+\zeta_{8}^{2}q^{3}+(\zeta_{8}-\zeta_{8}^{2})q^{7}+(1+\zeta_{8}^{3})q^{9}+\cdots$$
1248.2.d.b $8$ $9.965$ $$\Q(\zeta_{24})$$ None None $$0$$ $$0$$ $$0$$ $$0$$ $$q-\zeta_{24}^{6}q^{3}+\zeta_{24}^{2}q^{5}+\zeta_{24}^{5}q^{7}+\cdots$$
1248.2.d.c $16$ $9.965$ $$\mathbb{Q}[x]/(x^{16} - \cdots)$$ None None $$0$$ $$0$$ $$0$$ $$0$$ $$q+\beta _{5}q^{3}-\beta _{4}q^{5}-\beta _{2}q^{7}+(-1-\beta _{4}+\cdots)q^{9}+\cdots$$
1248.2.d.d $20$ $9.965$ $$\mathbb{Q}[x]/(x^{20} - \cdots)$$ None None $$0$$ $$0$$ $$0$$ $$0$$ $$q-\beta _{4}q^{3}+\beta _{15}q^{5}+\beta _{7}q^{7}-\beta _{16}q^{9}+\cdots$$
1248.2.g.a $8$ $9.965$ $$\Q(\zeta_{20})$$ None None $$0$$ $$0$$ $$0$$ $$-4$$ $$q+\zeta_{20}q^{3}+(-\zeta_{20}+\zeta_{20}^{2})q^{5}+(-1+\cdots)q^{7}+\cdots$$
1248.2.g.b $16$ $9.965$ $$\mathbb{Q}[x]/(x^{16} - \cdots)$$ None None $$0$$ $$0$$ $$0$$ $$-4$$ $$q+\beta _{6}q^{3}+\beta _{12}q^{5}-\beta _{5}q^{7}-q^{9}+\beta _{10}q^{11}+\cdots$$
1248.2.h.a $8$ $9.965$ 8.0.$$\cdots$$.21 $$\Q(\sqrt{-39})$$ None $$0$$ $$0$$ $$0$$ $$0$$ $$q-\beta _{1}q^{3}-\beta _{4}q^{5}+3q^{9}+(-\beta _{3}+\beta _{6}+\cdots)q^{11}+\cdots$$
1248.2.h.b $12$ $9.965$ $$\mathbb{Q}[x]/(x^{12} - \cdots)$$ $$\Q(\sqrt{-26})$$ None $$0$$ $$0$$ $$0$$ $$0$$ $$q-\beta _{5}q^{3}+\beta _{9}q^{5}+(-\beta _{1}+\beta _{11})q^{7}+\cdots$$
1248.2.h.c $32$ $9.965$ None None $$0$$ $$4$$ $$0$$ $$0$$
1248.2.j.a $48$ $9.965$ None None $$0$$ $$0$$ $$0$$ $$0$$
1248.2.m.a $2$ $9.965$ $$\Q(\sqrt{-1})$$ None None $$0$$ $$0$$ $$-4$$ $$0$$ $$q-iq^{3}-2q^{5}-q^{9}+4q^{11}+(3-2i)q^{13}+\cdots$$
1248.2.m.b $2$ $9.965$ $$\Q(\sqrt{-1})$$ None None $$0$$ $$0$$ $$4$$ $$0$$ $$q+iq^{3}+2q^{5}-q^{9}-4q^{11}+(-3+\cdots)q^{13}+\cdots$$
1248.2.m.c $24$ $9.965$ None None $$0$$ $$0$$ $$0$$ $$0$$
1248.2.n.a $4$ $9.965$ $$\Q(\zeta_{8})$$ None None $$0$$ $$-4$$ $$-8$$ $$8$$ $$q+(-1-\zeta_{8}^{2})q^{3}+(-2+\zeta_{8}^{3})q^{5}+\cdots$$
1248.2.n.b $4$ $9.965$ $$\Q(\zeta_{8})$$ None None $$0$$ $$-4$$ $$8$$ $$-8$$ $$q+(-1+\zeta_{8}^{2})q^{3}+(2+\zeta_{8}^{3})q^{5}+(-2+\cdots)q^{7}+\cdots$$
1248.2.n.c $4$ $9.965$ $$\Q(\zeta_{8})$$ None None $$0$$ $$4$$ $$-8$$ $$-8$$ $$q+(1+\zeta_{8}^{2})q^{3}+(-2+\zeta_{8}^{3})q^{5}+(-2+\cdots)q^{7}+\cdots$$
1248.2.n.d $4$ $9.965$ $$\Q(\zeta_{8})$$ None None $$0$$ $$4$$ $$8$$ $$8$$ $$q+(1-\zeta_{8}^{2})q^{3}+(2-\zeta_{8}^{3})q^{5}+(2-\zeta_{8}^{3})q^{7}+\cdots$$
1248.2.n.e $40$ $9.965$ None None $$0$$ $$0$$ $$0$$ $$0$$
1248.2.q.a $2$ $9.965$ $$\Q(\sqrt{-3})$$ None None $$0$$ $$-1$$ $$0$$ $$1$$ $$q+(-1+\zeta_{6})q^{3}+\zeta_{6}q^{7}-\zeta_{6}q^{9}+(6+\cdots)q^{11}+\cdots$$
1248.2.q.b $2$ $9.965$ $$\Q(\sqrt{-3})$$ None None $$0$$ $$-1$$ $$6$$ $$2$$ $$q+(-1+\zeta_{6})q^{3}+3q^{5}+2\zeta_{6}q^{7}-\zeta_{6}q^{9}+\cdots$$
1248.2.q.c $2$ $9.965$ $$\Q(\sqrt{-3})$$ None None $$0$$ $$-1$$ $$8$$ $$-3$$ $$q+(-1+\zeta_{6})q^{3}+4q^{5}-3\zeta_{6}q^{7}-\zeta_{6}q^{9}+\cdots$$
1248.2.q.d $2$ $9.965$ $$\Q(\sqrt{-3})$$ None None $$0$$ $$1$$ $$0$$ $$-1$$ $$q+(1-\zeta_{6})q^{3}-\zeta_{6}q^{7}-\zeta_{6}q^{9}+(-6+\cdots)q^{11}+\cdots$$
1248.2.q.e $2$ $9.965$ $$\Q(\sqrt{-3})$$ None None $$0$$ $$1$$ $$6$$ $$-2$$ $$q+(1-\zeta_{6})q^{3}+3q^{5}-2\zeta_{6}q^{7}-\zeta_{6}q^{9}+\cdots$$
1248.2.q.f $2$ $9.965$ $$\Q(\sqrt{-3})$$ None None $$0$$ $$1$$ $$8$$ $$3$$ $$q+(1-\zeta_{6})q^{3}+4q^{5}+3\zeta_{6}q^{7}-\zeta_{6}q^{9}+\cdots$$
1248.2.q.g $4$ $9.965$ $$\Q(\sqrt{2}, \sqrt{-3})$$ None None $$0$$ $$-2$$ $$-4$$ $$0$$ $$q+(-1-\beta _{2})q^{3}+(-1+\beta _{3})q^{5}+(\beta _{1}+\cdots)q^{7}+\cdots$$
1248.2.q.h $4$ $9.965$ $$\Q(\sqrt{-3}, \sqrt{17})$$ None None $$0$$ $$-2$$ $$-2$$ $$-5$$ $$q+(-1+\beta _{2})q^{3}+\beta _{3}q^{5}+(\beta _{1}-3\beta _{2}+\cdots)q^{7}+\cdots$$
E-book

# Conceptual Roots of Mathematics
7.540 kr.
The Conceptual Roots of Mathematics is a comprehensive study of the foundation of mathematics. J.R. Lucas, one of the most distinguished Oxford scholars, covers a vast amount of ground in the philosophy of mathematics, showing us that it is actually at the heart of the study of epistemology and metaphysics.
• Year of publication: 2002
• Publisher: Taylor and Francis
• SKU: G9781134622269
ISBN: G9781134622269
Saturday, July 3, 2010
Recalculation in the Manufacturing Labor Market
A new NY Times story about how manufacturing companies are having difficulty hiring because of a skills shortage.
Some random thoughts:
• This seems to me to be a failure of education.
• Eventually companies in the US will have to increase and fund their own education efforts (beyond just specific skills training), because it seems that our K-12 educational establishment has failed to teach a significant portion of our labor force.
• This all fits Arnold Kling's "recalculation" story.
## Photon's puzzle.
Discussions on classical and modern physics, quantum mechanics, particle physics, thermodynamics, general and special relativity, etc.
### Photon's puzzle.
Photon's puzzle.
==============
1) When photon travel '' with absolute constant velocity its wavelength is infinite.''
https://www.pa.msu.edu/courses/1997spri ... otons.html
2) Photon can have short wavelengths ''as photons with higher energy'' and
photon also can have long wavelengths.
https://www.pa.msu.edu/courses/1997spri ... otons.html
Can somebody explain how a photon can change its infinite wavelength (!)
to another wavelength, short or long?
Thanks
====================
socrat44
Member
Posts: 239
Joined: 12 Dec 2015
### Re: Photon's puzzle.
Sorry, but nowhere in the page you linked can I find the phrase "with absolute constant velocity its wavelength is infinite," nor is there anything even close to that on that page.
I do see where it says that "the factor g may approach infinity as the velocity approaches c."
Here the g stands in for $\gamma$, which is the gamma factor or
$\frac{1}{\sqrt{1- \frac{v^2}{c^2}}}$
I also noted that in another page of this site they give wavelength as lambda or $\lambda$, which looks a little like an inverted gamma. Did you confuse the two?
When you put something in within quote marks and then give a link, people are going to assume that you are giving a direct quote from the link.
JMP1958
Member
Posts: 63
Joined: 02 Jul 2016
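For concreteness, the gamma factor JMP quotes can be evaluated numerically. A minimal sketch (the function name is mine) showing that $\gamma$ is 1 at rest and grows without bound as $v \to c$, which is all the linked page's "factor g may approach infinity" remark claims:

```python
import math

def gamma(v, c=299_792_458.0):
    """Lorentz factor 1/sqrt(1 - v^2/c^2): 1 at rest, divergent as v -> c."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

c = 299_792_458.0
print(gamma(0.0))        # 1.0
print(gamma(0.6 * c))    # 1.25
print(gamma(0.999 * c))  # ~22.4, and growing without bound as v -> c
```

Note this factor is $\gamma$ (gamma), not the wavelength $\lambda$ (lambda), which supports the suggestion above that the two symbols were confused.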
### Re: Photon's puzzle.
JMP1958 » August 26th, 2017, 10:33 am wrote:Sorry, but nowhere in the page you linked can I find the phrase
"with absolute constant velocity its wavelength is infinite,"
nor is there anything even close to that on that page.
I do see where it says that "the factor g may approach infinity as the velocity approaches c."
Here the g stands in for $\gamma$, which is the gamma factor or
$\frac{1}{\sqrt{1- \frac{v^2}{c^2}}}$
I also noted that in another page of this site they give wavelength as
lambda or $\lambda$, which looks a little like an inverted gamma.
Did you confuse the two?
When you put something in within quote marks and then give a link,
people are going to assume that you are giving a direct quote from the link.
a) when quantum of light travel with absolute constant velocity its wavelength is infinite.
b) when the factor g, that stands for . . . which is the gamma factor or . . . .
- then the story is different
c) how can the mathematical factor (g . . . . . that stands for . . . which is
the gamma factor or . . . .) change physical situation?
===========================================
socrat44
Member
Posts: 239
Joined: 12 Dec 2015
### Re: Photon's puzzle.
socrat44 » 26 Aug 2017, 19:58 wrote:a) when quantum of light travel with absolute constant velocity its wavelength is infinite.
b) when the factor g, that stands for . . . which is the gamma factor or . . . .
- then the story is different
c) how can the mathematical factor (g . . . . . that stands for . . . which is
the gamma factor or . . . .) change physical situation?
===========================================
Photons always travel at c relative to every inertial frame. And their wavelengths depend on their energies relative to the frame of the emitter - it cannot be infinite, for that will mean zero energy and hence no photon, since $E = hc/\lambda$.
The observed wavelength then further depends on the relative speed between emitter and observer through the relativistic gamma factor, as JMP has pointed out.
BurtJordaan
Forum Moderator
Posts: 2589
Joined: 17 Oct 2009
Location: South Africa
Blog: View Blog (9)
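Burt's point that an infinite wavelength would mean zero energy follows directly from $E = hc/\lambda$. A quick numeric sketch (constants are the exact SI values; the function name is mine):

```python
H = 6.626_070_15e-34  # Planck constant, J*s (exact in the 2019 SI)
C = 299_792_458.0     # speed of light, m/s (exact)

def photon_energy(wavelength_m):
    """E = h*c/lambda: energy falls toward zero as wavelength grows."""
    return H * C / wavelength_m

print(photon_energy(500e-9))  # ~4.0e-19 J (green light)
print(photon_energy(1.0))     # ~2.0e-25 J (1 m radio wave)
```

Energy only reaches zero in the limit $\lambda \to \infty$, i.e. no photon at all, which is the argument made above.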
### Re: Photon's puzzle.
BurtJordaan » August 27th, 2017, 3:33 am wrote:
socrat44 » 26 Aug 2017, 19:58 wrote:a) when quantum of light travel with absolute constant velocity its wavelength is infinite.
b) when the factor g, that stands for . . . which is the gamma factor or . . . .
- then the story is different
c) how can the mathematical factor (g . . . . . that stands for . . . which is
the gamma factor or . . . .) change physical situation?
===========================================
Photons always travel at c relative to every inertial frame.
And their wavelengths depend on their energies relative to the frame of the emitter
- it cannot be infinite, for that will mean zero energy
and hence no photon, since $E = hc/\lambda$.
The observed wavelength then further depends on the relative speed
between emitter and observer through the relativistic gamma factor,
as JMP has pointed out.
Light with constant speed c DOESN'T DEPEND on the emitting body.
The speed of light c is a constant and INDEPENDENT of the relative motion of the source and observer.
Light in vacuum propagates with the speed c , REGARDLESS of the state of motion of the light source.
=============================
socrat44
Member
Posts: 239
Joined: 12 Dec 2015
### Re: Photon's puzzle.
Hi socrat44,
What you said is true but what Jorrie said was:
BurtJordaan wrote:Photons always travel at c relative to every inertial frame.
Meaning if you shot a beam of light from the rear of your spaceship to the front of your spaceship and measured the travel time, it would always be c (the speed of light). While it is true that the beam is moving slower in the forward direction of travel.. as you point out.. relative to your moving but inertial spaceship.. your clocks will also be dilated by a specific amount, dependent on your velocity. Thus.. the speed of light will always measure to take the same proper time within the spaceship, regardless of your velocity, in the forward direction.
The catch is that the measured speed of light from front to back will be much faster.. obviously. Speed c is speed c relative to the Universe and doesn't give a frack about the speed of the light source or your ship. But.. again.. there is no way to accurately measure the speed of light in one-direction due to issues of simultaneity.. or actual timing the transit time in either direction. Best you can do is measure the two way speed of light by bouncing the beam off a mirror and measuring the round trip time inside the spaceship.
The issue now is that the round trip time would be a constant, as the Universe would see it, thus with your clocks being Dilated, that round trip time should be shorter the faster you are traveling.. as it's a true Universe Time constant and not a Proper Time constant.
But...
Since we have already done this type of experiment.. and such an observation has not shown a change in round-trip time due to velocity of the experiment.. there must be more to this picture.
So now it gets even more complicated because all the Electronics involved are operating in Dilated Proper Time. Add to that.. that a reflection is not actually a simple reflection. It's absorption and re-transmission in the interaction of the Photon and the Electron shells of the material of the Mirror, dilated to Proper Time.
End results is that the two-way round-trip speed of light Time always measures the same, regardless of the velocity of the experiment.
As Jorrie has said several times on several posts, the Universe operates in such a manner as to hide our Velocity through it. Believe me when I say that I've been searching for a flaw in such for years.. to no avail. I even jumped on the dispersion and falloff of radiant light inside a spaceship.. but discovered that the aberration of light is such as to compensate for velocity. Another dead end.
The only means of checking our Absolute Velocity may be the Red/Blue shift in the CMB relative to our direction of travel. But even that only measures our velocity relative to the CMB and may not be that accurate for an Absolute velocity through the Fabric of Space-Time itself.
Regards,
Dave :^)
Resident Member
Posts: 3230
Joined: 08 Sep 2010
Location: Tucson, Arizona
Blog: View Blog (2)
### Re: Photon's puzzle.
Dave_Oblad » August 27th, 2017, 5:54 am wrote:
The only means of checking our Absolute Velocity may be the Red/Blue shift
in the CMB relative to our direction of travel. But even that only measures our
velocity relative to the CMB and may not be that accurate for an Absolute velocity
through the Fabric of Space-Time itself.
Regards,
Dave :^)
Can you explain what physical parameters ''the Fabric of Space-Time itself'' has?
I ask this question because, from your email, it seems that without
''the Fabric of Space-Time itself'' it will be hard to understand what light is.
Thanks.
=============
socrat44
Member
Posts: 239
Joined: 12 Dec 2015
### Re: Photon's puzzle.
Hi socrat44,
By "Email", I presume you meant "Post"...? Since I have not addressed you privately yet.
As per your last question, that is a very large answer. Best (if you have the time) to look below my Picture in the upper right corner and see below that.. my "View Blog (3)" link. Click it. My primary blog is a list of posts about various subjects. The answer you want is under the top(?) post called "The Mathematical Universe".
By answer.. my posted thread is just my personal hypothesis in describing the nature of our Reality. I do it this way so I don't have to repeat my position for every new person that passes through.
Note: I am not any sort of authority or expert. By trade.. I'm an Electronics and Software Design Engineer. But I am mostly self taught from the Internet and other more knowledgeable people that frequent this site. Jorrie is at the top of my list for helping me to understand Relativity... but not as deeply as himself. My Math skills pretty much suck...lol.
Best wishes,
Dave :^)
Resident Member
Posts: 3230
Joined: 08 Sep 2010
Location: Tucson, Arizona
Blog: View Blog (2)
### Re: Photon's puzzle.
Dave_Oblad » August 27th, 2017, 10:26 pm wrote:
Hi socrat44,
see below that.. my "View Blog (3)" link. Click it.
Best wishes,
Dave :^)
The Mathematical Universe
post by Dave_Oblad on June 15th, 2016, 11:35 pm
=====================
Discovery vs. Invention:
. . . .
This would apply to all Math.
All Math Existed at the very beginning of Time, waiting to be discovered.
And if every Equation existed since time began, then so did all their
respective solutions.
The Equations and Solutions are Mathematical Truths.. and are therefore Timeless.
My conclusion:
Math existed before the beginning of the universe; more correctly,
Math as Consciousness existed before the beginning of the universe.
The Absolute Void:
. . .
Now that I have an absolute Nothing.. can we get a Universe from it?
YES!
Because the one thing we can't exclude is Truth,
because Truth has no Physical Properties.
As stated in my opening..
Mathematical Truths are Timeless and Truths don't occupy any Space.
They simply Exist. And that's all we really need.
My opinion:
You explain what ''absolute void'' is like a rabbi (or priest) lecturing in a
synagogue (or church).
Why not say that the absolute void is a cold continuum with temperature T=0K?
Time:
My opinion.
A world without masses, without electrons, without an
electromagnetic field is a void world without time.
But if masses appear, if charged particles appear,
if an electromagnetic field appears then time appears too.
For example:
we live in gravity-time which was created by masses of Earth.
we live in gravity-space which was created by masses of Earth.
everybody lives some period of time until he can produce EM field.
Complexity:
My opinion.
Mathematical Universes is based on a set of rules.
This set of rules is very limited, because if you violate a small rule
you destroy an atom, a cell . . . . the existence.
Wrap Up:
There is no arbitrary limit on their complexity.
Any intelligent life that forms in such a Universe would notice that
everything is stepped, because it is made of Cells in all directions.
My opinion.
There is limit on our physical complexity, which is made of cells
and there are limits of the universe' s creation.
The universe was created from simple to complex, and from simple rules / formulas
to complex equations. Everybody can understand the simple rules and formulas;
complex mathematics and physics is the kingdom of very educated professionals.
=================
socrat44
Member
Posts: 239
Joined: 12 Dec 2015
### Re: Photon's puzzle.
Is a single photon also a Maxwellian wave?
==============
Look into Wave-particle duality.
It is a major part of Quantum Mechanics which answers your question.
A quick summary: light is not just a wave, not just a particle.
In some situations it behaves like a wave; in others it behaves like a particle.
/ Cort Ammon /
https://physics.stackexchange.com/quest ... llian-wave
============================
Yeah, I can understand:
A photon is like a cow: sometimes it gives milk and sometimes beef.
/ socratus /
=======================
socrat44
Member
Posts: 239
Joined: 12 Dec 2015
### Re: Photon's puzzle.
socrat44 » August 26th, 2017, 5:16 am wrote: Photon's puzzle.
==============
1) When photon travel '' with absolute constant velocity its wavelength is infinite.''
The proper time of a photon between a signal and receiver is essentially instant so a photon has no wavelength or, as you said, “Its wavelength is infinite.” All time intervals at c are instant so a wavelength and a velocity of c are mutually exclusive. This is one of many reasons why c is a spacetime dimensional constant and not a speed. You can’t separate time from space in spacetime. The constant c is a ratio giving us the amount of time found in any interval of space and it is independent of all speeds because it is a constant space/time ratio and not a speed.
The formation of light waves has been demonstrated with the Thomas Young double slit experiment. When a laser is fired through the double slit one photon at a time, each photon strikes the detector in a single spot and a single spot is not a wave. The wave pattern develops over a period of time and begins to emerge after many photons have been fired so the wavelike nature of light is not the property of a single photon. It takes many photons and a length of time to form the pattern of a light wave.
I like to think of light waves as similar to the waves we see in sand at the bottom of a stream. The moving water picks up grains of sand and deposits them one at a time in wavelike patterns, and the patterns differ with the size and weight of the grains: moving water lays down sand particles in waves, but it is not the sand particles that are waving. Light has waves because the spacetime environment is wavelike, and this same environment determines which electron at the source can emit a quantum of energy (photon) and which electron in the receiver can receive a quantum of energy before the exchange takes place.
When two electrons are able to share a common harmonic connection and occupy the same light cone, a “transaction” occurs in which a quantum of energy is exchanged. Light emission and absorption are not random events.
socrat44 » August 26th, 2017, 5:16 am wrote: Photon's puzzle.
2) Photon can have short wavelengths ''as photons with higher energy'' and
photon also can have long wavelengths.
In John Cramer’s “Transactional Interpretation,” a quantum of energy (photon) does not, and can not, leave an electron at a signal source until it has established a two-way, wave-like connection with an electron at the receiver. Higher energy quanta are able to arrive at closer spaced intervals than lower energy quanta so they lay down a pattern of shorter wavelengths than do lower energy quanta. The wavelength of light is determined by the wavelike nature of the spacetime environment and not by the individual photon. A single photon does not have a wave.
https://en.wikipedia.org/wiki/Transacti ... rpretation
http://www.informationphilosopher.com/s ... ts/cramer/
bangstrom
Member
Posts: 480
Joined: 18 Sep 2014
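bangstrom's description of the double-slit pattern emerging photon by photon can be illustrated with a toy simulation: each detection is a single point drawn from a fringe-shaped probability, and the wave pattern only shows up in the aggregate. The cos² intensity profile and the fringe period below are illustrative assumptions, not a full diffraction calculation:

```python
import math, random

random.seed(1)  # reproducible toy run

def detect_photons(n, fringe_period=1.0, screen_width=4.0):
    """Rejection-sample n photon hit positions from a cos^2 fringe intensity."""
    hits = []
    while len(hits) < n:
        x = random.uniform(-screen_width / 2, screen_width / 2)
        if random.random() < math.cos(math.pi * x / fringe_period) ** 2:
            hits.append(x)
    return hits

one = detect_photons(1)        # a single spot, no visible wave
many = detect_photons(20_000)  # fringes emerge in the histogram
near_max = sum(1 for x in many if abs(x - round(x)) < 0.1)  # bright fringes
near_min = sum(1 for x in many if abs(x - round(x)) > 0.4)  # dark fringes
print(len(one), near_max, near_min)
```

One photon gives one dot; only after many detections do counts pile up near the bright-fringe positions and thin out near the dark ones, which is the point made above about the wave pattern developing over time.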
### Re: Photon's puzzle.
bangstrom » 29 Aug 2017, 19:56 wrote:The constant c is a ratio giving us the amount of time found in any interval of space and it is independent of all speeds because it is a constant space/time ratio and not a speed.
This is a very questionable definition of 'c', because what you stated is a matter of definition or convention, not physics. The physics of spacetime is defined by events and the spacetime intervals between events. What you described roughly coincides with light-like intervals, but it is not true for time-like or space-like intervals.
The better definition of 'c' is: the maximum local speed at which all conventional matter and hence all known forms of information in the universe can travel through free space. And speed is the locally observed distance of travel divided by the locally observed time of travel over that distance.
E.g. from https://en.wikipedia.org/wiki/Physical_constant#Natural_units:
Physical constants can take many dimensional forms: the speed of light signifies a maximum speed limit of the Universe and is expressed dimensionally as length divided by time...
BurtJordaan
Forum Moderator
Posts: 2589
Joined: 17 Oct 2009
Location: South Africa
Blog: View Blog (9)
### Re: Photon's puzzle.
Wow, there are personal theories all over this physics thread. Now there's no question I'm being trolled. My thread gets booted and there wasn't 1 single personal theory expressed in it.
Last edited by ralfcis on August 29th, 2017, 5:41 pm, edited 1 time in total.
ralfcis
Member
Posts: 991
Joined: 19 Jun 2013
### Re: Photon's puzzle.
ralfcis » 29 Aug 2017, 23:09 wrote:Wow, there are personal theories all over this physics thread. Now I know I'm being trolled. My thread gets booted and there wasn't 1 single personal theory expressed in it.
Ralf, you have also been given considerable leeway before being "booted" to the personal theories section.
If the guys here attack established science, they might go the same route. As I view it presently, they are just somewhat confused...
BurtJordaan
Forum Moderator
Posts: 2589
Joined: 17 Oct 2009
Location: South Africa
Blog: View Blog (9)
### Re: Photon's puzzle.
I was given that leeway for a previous thread which I agreed was over the line. It remained in physics nonetheless. There was no personal theory involved in the thread I was booted for. There was a question that went unanswered and an attempt to find the answer mathematically for myself. You did not understand my answer and I was booted summarily for no reason. Also, I got nowhere with complaining to the forum administration. If I was in the wrong they would have said something. Instead I got silence.
ralfcis
Member
Posts: 991
Joined: 19 Jun 2013
### Re: Photon's puzzle.
And speaking of leeway I'm wondering why so many people on here are allowed to say the one-way speed of light is indeterminable. I know you've written many past threads in support of that until Don Lincoln showed up and said no; you just move 2 atomic clocks slowly apart so the movement introduces negligible relativistic effect, fire a laser, compare the clocks and voila, the one-way speed of light. No more anisotropy and 2 clock vs 1 clock arguments. They are clearly wrong and I'm tired of them perpetuating the same misconceptions (and there are many others) through thread after thread without being stopped by a moderator.
ralfcis
Member
Posts: 991
Joined: 19 Jun 2013
### Re: Photon's puzzle.
ralfcis » 30 Aug 2017, 02:02 wrote:I know you've written many past threads in support of that until Don Lincoln showed up and said no; you just move 2 atomic clocks slowly apart so the movement introduces negligible relativistic effect, fire a laser, compare the clocks and voila, the one-way speed of light.
Don and I agreed that his 'cables method' is "good enough for all practical purposes". Purists say that in principle Don still uses two synchronized clocks. Or more scientifically, he establishes a definition of simultaneity for the 2 points.
The "slow-moving clock" is just an approximation, because the relative movement causes a small error in the measurement. One can compensate for this using SR, but then it means using the constancy of c in the test to measure its value, so there is some circularity in the argument.
Measuring the two-way time delay with one clock and halving the delay to get the one-way delay is the only method without any ifs and buts, especially after using it in multiple random directions.
BurtJordaan
Forum Moderator
Posts: 2589
Joined: 17 Oct 2009
Location: South Africa
Blog: View Blog (9)
### Re: Photon's puzzle.
The slower you move the clocks apart, the smaller the error which means one-way c measured in this way tends to a limiting value that matches the 2-way c within acceptable experimental error. It doesn't mean it blows up into a completely unknown value. Purists use this to block any further discussion about relativity, "Well you can't make a judgement on this or that because there's no way to measure the 1-way speed of light". I say enough of that false, nit-picky, discussion-ending argument.
Did you ever see the movie, "The Paper Chase"? In it there was this annoying elitist student that kept calling his team mates "robot pimps". I never understood what that meant until this forum. It means guys who pimp ideas they don't understand to make themselves look smarter but they're really nothing but robot pimps. Now I'm that guy.
ralfcis
Member
Posts: 991
Joined: 19 Jun 2013
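The limiting argument above, that slower clock transport means a smaller synchronization error, can be put in numbers with SR time dilation. A sketch under illustrative assumptions (the 1 km separation and the speeds are arbitrary; the function name is mine):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def transport_offset(d, v):
    """Extra time (s) by which a clock carried distance d at constant
    speed v lags a clock left behind, from SR time dilation.
    Written as x/(1+sqrt(1-x)), algebraically equal to 1-sqrt(1-x),
    to stay numerically accurate for tiny v."""
    x = (v / C) ** 2
    return (d / v) * x / (1.0 + math.sqrt(1.0 - x))

d = 1000.0  # separate the clocks by 1 km
for v in (100.0, 10.0, 1.0, 0.1):  # slower and slower transport
    print(v, transport_offset(d, v))  # offset ~ d*v/(2c^2), shrinking with v
```

The desynchronization vanishes linearly in v, which is the sense in which slowly transported clocks give a one-way measurement that tends to the two-way value rather than "blowing up into a completely unknown value".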
### Re: Photon's puzzle.
BurtJordaan » August 29th, 2017, 3:56 pm wrote:
bangstrom » 29 Aug 2017, 19:56 wrote:The constant c is a ratio giving us the amount of time found in any interval of space and it is independent of all speeds because it is a constant space/time ratio and not a speed.
This is a very questionable definition of 'c', because what you stated is a matter of definition or convention, not physics. The physics of spacetime is defined by events and the spacetime intervals between events. What you described roughly coincides with light-like intervals, but it is not true for time-like or space-like intervals.
The better definition of 'c' is: the maximum local speed at which all conventional matter and hence all known forms of information in the universe can travel through free space. And speed is the locally observed distance of travel divided by the locally observed time of travel over that distance.
E.g. from https://en.wikipedia.org/wiki/Physical_constant#Natural_units:
Physical constants can take many dimensional forms: the speed of light signifies a maximum speed limit of the Universe and is expressed dimensionally as length divided by time...
The “speed” I described is for light-like intervals and it is not for time-like or space-like intervals. The only thing all three have in common is that they are all considered as involving ‘speed’ but one of these is not like the others. Two are variables and one is not etc., etc.. The value of c is much more like the ‘speed’ of a computer than ‘speed’ as it applies to horses or bullets where an observable object is moving through space.
The constant c functions as a true spacetime dimensional constant in relativity and nothing like a speed in the classical sense as having a velocity. Velocities can be added or subtracted from space-like or time-like intervals but not to c which suggests that c is not a speed despite being called ‘the speed of light.’ And the speed of an object is never measured by its departure and arrival times unless its speed can be observed to be constant over the distance. Light can’t be observed between signal and receiver so even the notion that light ‘travels through space’ is conjecture not supported by observation.
Space-like and time-like intervals both have an element of duration but light-like intervals do not. For light, emission and absorption are simultaneous events happening on the same light cone with no duration in between. The view that I find questionable is the notion that the ‘speed’ of c is in any way similar to a ballistic speed or the speed of a photon particle traveling through space. This equating of the two ‘speeds’ as similar is the source of much confusion and paradoxical "puzzling" conclusions.
bangstrom
Member
Posts: 480
Joined: 18 Sep 2014
### Re: Photon's puzzle.
ralfcis » August 30th, 2017, 1:29 am wrote:The slower you move the clocks apart, the smaller the error which means one-way c measured in this way tends to a limiting value that matches the 2-way c within acceptable experimental error. It doesn't mean it blows up into a completely unknown value.
In SR, the observed time delay between two otherwise simultaneous events is one second for every 300,000 km of separation, without exception, so I don’t buy the idea that moving both clocks slower or faster or one slow and the other fast will make any difference.
The relative distances (amount of spacetime) between observers and events is the only thing that matters to the observed timing.
Our standard units of length, time, and the value of c are all mutually defined so there is circularity in all our measurements of the ‘speed’ of light. A second is a fraction of a year and a meter is a fraction of a light year so table top measurements are small scale attempts to measure the speed of light over the distance of a light year. This is an impossibility since the measurements can only return the value for c that was used in the original determinations for length and time.
bangstrom
Member
Posts: 480
Joined: 18 Sep 2014
### Re: Photon's puzzle.
bangstrom » August 29th, 2017, 1:56 pm wrote:
socrat44 » August 26th, 2017, 5:16 am wrote: Photon's puzzle.
==============
1) When photon travel '' with absolute constant velocity its wavelength is infinite.''
The proper time of a photon between a signal and receiver is essentially instant
so a photon has no wavelength or, as you said, “Its wavelength is infinite.”
You are right.
A photon at speed c has no wavelength.
I used the wrong term.
So, a photon travels with constant speed c, without a wavelength, as a pure particle.
Question: '' When does wavelength of photon appear ?''
Another aspect of photon.
===
One postulate of SRT says: the speed of quantum of light in a Vacuum
is a constant ( c = 299,792,458 m/sec ) / Michelson-Morley experiment /
In this movement the quantum of light DOESN'T have TIME.
Time is stopped for it.
But this is possible only if its reference frame - the vacuum - also doesn't have time.
It means that the reference frame - the VACUUM - is a TIMELESS continuum.
==================================.
socrat44
Member
Posts: 239
Joined: 12 Dec 2015
### Re: Photon's puzzle.
There are 3 things that can cause clocks that were perfectly sync'd together to read differently when apart. Relativistic age difference occurs as a result of the speed of separation between the clocks. It is the twin paradox effect. Then there's the relativity of simultaneity or sync offset difference strictly due to the distance between them. And finally the speed of light delay time difference. All 3 must be accounted for to get a true measure of the one way speed of light between 2 separated clocks.
I don't really care that relativity says they must be accounted for in order for the findings to agree with relativity. Relativity needs to grow a spine and stick by what it states and not always give the namby-pamby response that we don't really know for sure. How can anything be built on such a soft foundation?
I also don't care that over a light-year distance these effects become significantly measurable in our reference frame, because once you account for relativity, their effects are nullified just as they're nullified at the small-scale/atomic-clock reference frame. If you fear there are monsters lurking, then you have no faith that atomic clocks are like a time microscope that brings relativistic effects into our reference frame at everyday speeds. Do you discount that bacteria exist because you have to use (and follow the rules of use of) a (space) microscope to see them?
ralfcis
Member
Posts: 991
Joined: 19 Jun 2013
### Re: Photon's puzzle.
bangstrom » 30 Aug 2017, 10:53 wrote:Space-like and time-like intervals both have an element of duration but light-like intervals do not. For light, emission and absorption are simultaneous events happening on the same light cone with no duration in between.
Bang, you are making little sense in most of your reply. I have singled out the above, because from a relativistic p.o.v., the underlined part is the most blatantly false. In every inertial frame, a light-like spacetime interval has definite and measurable spatial (dx) and temporal (dt) components. The only thing that is special for light-like intervals is that dx = c·dt. And two events separated by a light-like interval cannot be simultaneous in any inertial frame!
I reiterate, the 'c' that we are talking about in relativity is the observable local propagation speed of light in free space. Since there are no "photons" (single or multiple) in relativity, we do know that light propagates between points in spacetime and is not only there when detected. Our choice of units does not influence the fact that space and time are separate entities with separate definitions - we would not have a useful concept like 'spacetime' if there were no difference.
Finally, if you want to make up your own definitions of what 'c' is, you are in the private theory territory that Ralf complained about for this thread. Or perhaps it is just a philosophical view, but it does not go in standard physics.
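The dx = c·dt property of light-like intervals, and the claim that the two events cannot be simultaneous in any inertial frame, can be checked numerically with the standard SR formulas (a quick illustrative sketch; the particular numbers are arbitrary, not from any post above):

```python
import math

c = 299_792_458.0  # speed of light in free space, m/s

# Emission at (t=0, x=0) and absorption at (t=1 s, x=c*t): a light-like pair.
t, x = 1.0, c * 1.0
interval_sq = (c * t) ** 2 - x ** 2  # Minkowski interval squared
print(interval_sq)  # 0.0: the interval is null, yet dt = 1 s and dx = c*dt in this frame

# Lorentz-boost the time separation into a frame moving at v = 0.6c along x:
v = 0.6 * c
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
dt_prime = gamma * (t - v * x / c ** 2)
print(dt_prime)  # ~0.5 s: still nonzero, so the events are not simultaneous here either
```

The boost only rescales the time separation of a light-like pair (by the relativistic Doppler factor); it can never bring it to zero for finite v < c.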
BurtJordaan
Forum Moderator
Posts: 2589
Joined: 17 Oct 2009
Location: South Africa
Blog: View Blog (9)
### Re: Photon's puzzle.
ralfcis » 30 Aug 2017, 12:35 wrote:I don't really care that relativity says they must be accounted for in order for the findings to agree with relativity. Relativity needs to grow a spine and stick by what it states and not always give the namby-pamby response that we don't really know for sure. How can anything be built on such a soft foundation.
A good example of the sort of grumbling that causes Ralf's posts to be removed from scientific discussions. In the five+ years that I have tried, he seems to be no closer to understanding the theory that he grumbles about. Mea culpa! :)
BurtJordaan
Forum Moderator
Posts: 2589
Joined: 17 Oct 2009
Location: South Africa
Blog: View Blog (9)
### Re: Photon's puzzle.
BurtJordaan » August 30th, 2017, 7:21 am wrote:
bangstrom » 30 Aug 2017, 10:53 wrote:Space-like and time-like intervals both have an element of duration but light-like intervals do not. For light, emission and absorption are simultaneous events happening on the same light cone with no duration in between.
Bang, you are making little sense in most of your reply. I have singled out the above, because from a relativistic p.o.v., the underlined part is the most blatantly false. In every inertial frame, a light-like spacetime interval has definite and measurable spatial (dx) and temporal (dt) components. The only thing that is special for light-like intervals is that dx = c·dt. And two events separated by a light-like interval cannot be simultaneous in any inertial frame!
I reiterate, the 'c' that we are talking about in relativity is the observable local propagation speed of light in free space. Since there are no "photons" (single or multiple) in relativity, we do know that light propagates between points in spacetime and is not only there when detected. Our choice of units does not influence the fact that space and time are separate entities with separate definitions - we would not have a useful concept like 'spacetime' if there were no difference.
Finally, if you want to make up your own definitions of what 'c' is, you are in the private theory territory that Ralf complained about for this thread. Or perhaps it is just a philosophical view, but it does not go in standard physics.
Could you clarify some of your statements? You said, “And two events separated by a light-like interval cannot be simultaneous in any inertial frame!” That is true, but I don’t see how it applies to the light cone itself. Are you saying that, in theory, an object traveling at the speed of light experiences space and time?
And what do you mean by “local propagation speed”? I have two understandings of “local.”
One is ‘local’ from the perspective of a specific but remote observer and the other ‘local’ is from the perspective of a light emission itself. That is, the proper time of light or the view from the imagined perspective of a photon. Is (at the speed of light) not a local perspective where all ‘speeds’ become simultaneous in theory?
You say “there are no "photons" (single or multiple) in relativity” but you also say “we do know that light propagates between points in spacetime and is not only there when detected.” I find both the photon and the existence of light between signal and receiver (the traveling through part) to be conjecture lacking in physical evidence. What confidence do you have that light exists between signal and sink? And, if light has a speed, what is speeding?
I consider c to be a dimensional constant but not a speed and you claim it is a speed because it is expressed in units of distance over time. Do you consider c to be both a dimensional constant and a speed or just a speed? Also, a ratio of distance over time can be two kinds of speed. One is the familiar analog speed with an object traveling through space and the other is digital speed like the speed we see on a computer monitor where motion is from pixel to pixel with no ‘traveling through’ the space between. In the case of light, this motion would be from electron to electron. So do you consider the speed of light to be analog or digital and how can we tell the difference?
bangstrom
Member
Posts: 480
Joined: 18 Sep 2014
### Re: Photon's puzzle.
bangstrom » 31 Aug 2017, 07:42 wrote:Are you saying that, in theory, an object traveling at the speed of light experiences space and time?
No, because no object can travel at the speed of light relative to any frame of reference. You can have muons traveling at very, very close to the speed of light in some frame, but one can still set up another inertial frame in which they are stationary - and in such frame any observer experiences space and time. And in that frame, light still propagates at c. But one cannot set up an inertial frame for light...
And what you mean by “local propagation speed?”
The relativistic understanding is local to the observer, meaning in his immediate vicinity where he can set up an experiment to measure the propagation speed of light (or anything else) locally.
You say “there are no "photons" (single or multiple) in relativity” but you also say “we do know that light propagates between points in spacetime and is not only there when detected.”
In relativity, light is a propagating electromagnetic wave and one can easily detect the progress of any wave, because you do not destroy the wave by observing its passing through whatever suitable instruments.
One is the familiar analog speed with an object traveling through space and the other is digital speed like the speed we see on a computer monitor where motion is from pixel to pixel with no ‘traveling through’ the space between.
The context is fairly obviously the former, because we measure the speed of a wave propagating through (local) space, meaning the distance it travels along your frame's spatial axis in a time interval on your clock.
In the case of light, this motion would be from electron to electron. So do you consider the speed of light to be analog or digital and how can we tell the difference?
In relativity it is from source to detector, or from one detector to the next, whatever they are made of. And it is not 'digital', whatever that may mean in the context of SR, because distance is infinitely divisible in free space.
If you go to light propagation inside a physical medium, you may need quantum field theory (QFT) to understand what is going on, but the common understanding is that the light still propagates at 'c' from atom to atom (not electron to electron, AFAIK). But, my knowledge of QFT is limited, so I do not want to go deeper into that.
BurtJordaan
Forum Moderator
Posts: 2589
Joined: 17 Oct 2009
Location: South Africa
Blog: View Blog (9)
### Re: Photon's puzzle.
BurtJordaan » August 30th, 2017, 7:21 am wrote:
bangstrom » 30 Aug 2017, 10:53 wrote:Space-like and time-like intervals both have an element of duration but light-like intervals do not. For light, emission and absorption are simultaneous events happening on the same light cone with no duration in between.
Bang, you are making little sense in most of your reply. I have singled out the above, because from a relativistic p.o.v., the underlined part is the most blatantly false. In every inertial frame, a light-like spacetime interval has definite and measurable spatial (dx) and temporal (dt) components. The only thing that is special for light-like intervals is that dx = c·dt. And two events separated by a light-like interval cannot be simultaneous in any inertial frame!
How can my statement, “For light, emission and absorption are simultaneous events happening on the same light cone with no duration in between.” be false in light of your statement that, “But one cannot set up an inertial frame for light…” I agree that light is not in an inertial frame and then you demonstrate my statement as false by placing light in an inertial frame. I don’t follow.
The world has changed since 1905 so I am not necessarily trying to be consistent with SR.
bangstrom
Member
Posts: 480
Joined: 18 Sep 2014
### Re: Photon's puzzle.
BurtJordaan » August 31st, 2017, 2:37 am wrote:No, because no object can travel at the speed of light relative to any frame of reference. You can have muons traveling at very, very close to the speed of light in some frame, but one can still set up another inertial frame in which they are stationary - and in such frame any observer experiences space and time. And in that frame, light still propagates at c. But one cannot set up an inertial frame for light...
I was including light itself as an “object” so, to narrow the question, does light experience space and time?
BurtJordaan » August 31st, 2017, 2:37 am wrote:In relativity, light is a propagating electromagnetic wave and one can easily detect the progress of any wave, because you do not destroy the wave by observing its passing through whatever suitable instruments.
This is one point where I disagree, so could you explain how to observe the passing of a light wave without destroying it?
BurtJordaan » August 31st, 2017, 2:37 am wrote:The context is fairly obviously the former, because we measure the speed of a wave propagating through (local) space, meaning the distance it travels along your frame's spatial axis in a time interval on your clock.
How is measuring motion as d/t for analog motion not also true for digital motion like the speed of the moving letters on an electric signboard where there is no ‘motion through space’ but there is d/t? What observation could distinguish analog from digital motion?
BurtJordaan » August 31st, 2017, 2:37 am wrote:
In relativity it is from source to detector, or from one detector to the next, whatever they are made of. And it is not 'digital', whatever that may mean in the context of SR, because distance is infinitely divisible in free space.
Outside of SR, would you consider space to be more likely quantized or a continuum?
bangstrom
Member
Posts: 480
Joined: 18 Sep 2014
### Re: Photon's puzzle.
bangstrom » 01 Sep 2017, 07:52 wrote:How can my statement, “For light, emission and absorption are simultaneous events happening on the same light cone with no duration in between.” be false in light of your statement that, “But one cannot set up an inertial frame for light…” I agree that light is not in an inertial frame and then you demonstrate my statement as false by placing light in an inertial frame.
I think you misunderstand the meaning of "one cannot set up an inertial frame for light". If such a frame could exist, light must be stationary in it, but also propagate at c in it - an obvious contradiction. Light is always observed (and propagating) in some inertial frame, just not in its own frame, because such does not exist. In every frame, there are always both spatial and time intervals observed between emission and absorption events of light, and they are equal.
I would be interested to hear why you think it unwise to follow SR after 1905...
BurtJordaan
Forum Moderator
Posts: 2589
Joined: 17 Oct 2009
Location: South Africa
Blog: View Blog (9)
### Re: Photon's puzzle.
bangstrom » 01 Sep 2017, 08:23 wrote:
This is one point where I disagree so could you explain how to observe the passing of a light wave without destroying it.
Well, one way is to put any number of partially silvered mirrors along the path of a light flash's propagation and so detect the passing of the light from the flash. There are many more ways of detecting electromagnetic waves passing.
How is measuring motion as d/t for analog motion not also true for digital motion like the speed of the moving letters on an electric signboard where there is no ‘motion through space’ but there is d/t?
Objects and light propagate through space at a speed of delta_d/delta_t <= c. In the case of your digital letters, nothing propagates, so it can have a speed delta_d/delta_t > c, in fact anything up to infinity. But it is comparing apples and bananas.
Outside of SR, would you consider space to be more likely quantized or a continuum?
Continuum. We have neither a theory nor any observation that says space is quantized. We think energy and time are quantized, but not space. Because of the time-part, I think it is fair to think that spacetime is quantized.
Planck length is how far light can propagate in one Planck time, but that's not a lower limit, because particles propagate less than a Planck length in one Planck time. But, as I said before, it is outside of my field of expertise, so do not take this too seriously. The uncertainty principle probably makes the view questionable anyway.
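The Planck-length/Planck-time relation mentioned above follows directly from the defining formulas; here is a back-of-envelope check using standard constant values (an illustration only, not part of the original post):

```python
import math

hbar = 1.054_571_817e-34  # reduced Planck constant, J*s
G = 6.674_30e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 299_792_458.0         # speed of light, m/s

l_p = math.sqrt(hbar * G / c ** 3)  # Planck length, ~1.616e-35 m
t_p = math.sqrt(hbar * G / c ** 5)  # Planck time,  ~5.39e-44 s

# Light covers exactly one Planck length per Planck time: l_p = c * t_p.
print(abs(c * t_p - l_p) / l_p)  # essentially zero (floating-point noise only)
```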
BurtJordaan
Forum Moderator
Posts: 2589
Joined: 17 Oct 2009
Location: South Africa
Blog: View Blog (9)
Next
Time Limit : sec, Memory Limit : KB
### Anchored Balloon
A balloon placed on the ground is connected to one or more anchors on the ground with ropes. Each rope is long enough to connect the balloon and the anchor. No two ropes cross each other. Figure E-1 shows such a situation.
Figure E-1: A balloon and ropes on the ground
Now the balloon takes off, and your task is to find how high the balloon can go up with keeping the rope connections. The positions of the anchors are fixed. The lengths of the ropes and the positions of the anchors are given. You may assume that these ropes have no weight and thus can be straightened up when pulled to whichever directions. Figure E-2 shows the highest position of the balloon for the situation shown in Figure E-1.
Figure E-2: The highest position of the balloon
### Input
The input consists of multiple datasets, each in the following format.
n
x1 y1 l1
...
xn yn ln
The first line of a dataset contains an integer n (1 ≤ n ≤ 10) representing the number of the ropes. Each of the following n lines contains three integers, xi, yi, and li, separated by a single space. Pi = (xi, yi) represents the position of the anchor connecting the i-th rope, and li represents the length of the rope. You can assume that −100 ≤ xi ≤ 100, −100 ≤ yi ≤ 100, and 1 ≤ li ≤ 300. The balloon is initially placed at (0, 0) on the ground. You can ignore the size of the balloon and the anchors.
You can assume that Pi and Pj represent different positions if i ≠ j. You can also assume that the distance between Pi and (0, 0) is less than or equal to li − 1. This means that the balloon can go up at least 1 unit high.
Figures E-1 and E-2 correspond to the first dataset of Sample Input below.
The end of the input is indicated by a line containing a zero.
### Output
For each dataset, output a single line containing the maximum height that the balloon can go up. The error of the value should be no greater than 0.00001. No extra characters should appear in the output.
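One workable approach (my own sketch, not an official reference solution; function and variable names are illustrative): for a ground position (x, y) of the balloon, rope i permits a squared height of li² − (x − xi)² − (y − yi)². Each such expression is concave in (x, y), so their minimum over all ropes is also concave, and the maximum can be found by nested ternary search over the coordinate range of the anchors:

```python
import math

def max_height(anchors):
    # anchors: list of (x, y, l) triples for the rope endpoints and lengths.
    # For a ground position (x, y), rope (ax, ay, l) allows a squared
    # height of l^2 - (x-ax)^2 - (y-ay)^2; the binding rope is the minimum.
    def sq_height(x, y):
        return min(l * l - (x - ax) ** 2 - (y - ay) ** 2 for ax, ay, l in anchors)

    # sq_height is concave in y for fixed x, so ternary search applies.
    def best_for_x(x):
        lo, hi = -100.0, 100.0
        for _ in range(100):
            m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
            if sq_height(x, m1) < sq_height(x, m2):
                lo = m1
            else:
                hi = m2
        return sq_height(x, (lo + hi) / 2)

    # ... and max over y is concave in x, so ternary search again.
    lo, hi = -100.0, 100.0
    for _ in range(100):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if best_for_x(m1) < best_for_x(m2):
            lo = m1
        else:
            hi = m2
    return math.sqrt(max(best_for_x((lo + hi) / 2), 0.0))

# Example: the first sample dataset.
print(f"{max_height([(10, 10, 20), (10, -10, 20), (-10, 10, 120)]):.7f}")
```

Printing each answer with seven decimals (as in the sample output) keeps the error well inside the 0.00001 tolerance.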
### Sample Input
3
10 10 20
10 -10 20
-10 10 120
1
10 10 16
2
10 10 20
10 -10 20
2
100 0 101
-90 0 91
2
0 0 53
30 40 102
3
10 10 20
10 -10 20
-10 -10 20
3
1 5 13
5 -3 13
-3 -3 13
3
98 97 168
-82 -80 193
-99 -96 211
4
90 -100 160
-80 -80 150
90 80 150
80 80 245
4
85 -90 290
-80 -80 220
-85 90 145
85 90 170
5
0 0 4
3 0 5
-3 0 5
0 3 5
0 -3 5
10
95 -93 260
-86 96 211
91 90 177
-81 -80 124
-91 91 144
97 94 165
-90 -86 194
89 85 167
-93 -80 222
92 -84 218
0
### Output for the Sample Input
17.3205081
16.0000000
17.3205081
13.8011200
53.0000000
14.1421356
12.0000000
128.3928757
94.1879092
131.1240816
4.0000000
72.2251798
# Chazizzle77's Gallery ( - |Updated| - |26/02/09| - )
## Recommended Posts
Welcome all to the Chazizzle77 gallery!
I've been using Paint.NET for some time now, years even, but just recently I became a member, and started posting all of my artwork.
And now, after many 'hmm's' and 'should I . . .?'s', I've finally decided to make my own gallery
So here it is, most, not all of my work (most of it sucks ), presented in my first ever gallery! (and only, according to the rules )
.:Highlight text:.
Hidden Content:
I have taken a leaf out of Bloopers book and am posting this. Comment on my images, please! I need to know if what I'm doing I should keep doing, or if I should move in a new direction.
. . . NEW . . .
Hidden Content:
EDIT (01/03/09 18:37) My new desktop background . . .
EDIT (01/03/09 18:37) Mountain . . .
EDIT (01/03/09 18:37) Umm . . .
EDIT (26/02/09 16:20) Explosive sunset . . .
EDIT (23/02/09 21:00) Something . . .
EDIT (23/02/09 21:00) A sand in time . . .
EDIT (23/02/09 21:00) Some texture . . .
EDIT (21/02/09 10:12) Chaz . . .
EDIT (16/02/09 22:56) I got the Metroid fever from Hyrule . . .
EDIT (16/02/09 22:56) Metroid again . . .
EDIT (16/02/09 18:11) Metroid Prime . . .
EDIT (15/02/09 21:25) Smokey . . .
EDIT (14/02/09 20:29) Render testing . . .
EDIT (13/02/09 22:34) Another elegant combo . . .
EDIT (13/02/09 16:38) 100 Posts . . .
EDIT (12/02/09 23:03) Pixelated . . .
EDIT (12/02/09 22:13) LOLcats in love . . .
EDIT (12/02/09 18:11) Elegant combo . . .
EDIT (11/02/09 22:23) ... . . .
EDIT (11/02/09 20:43) Im a guy . . . but cherry blossoms are still freaking prettyful . . .
EDIT (11/02/09 19:40) Blood . . .
EDIT (10/02/09 22:34) Simple . . .
EDIT (09/02/09 23:56) Finally got smudge to work . . .
EDIT (09/02/09 18:59) Something new . . .
EDIT (08/02/09 23:29) I'm tired of explaining . . .
EDIT (08/02/09 23:29) Just a little simple something . . .
EDIT (08/02/09 10:48) Crooked . . .
EDIT (08/02/09 09:55) Scratched Grunge . . .
EDIT (07/02/09 22:55) Teh awesome-o Ironman . . .
EDIT (07/02/09 18:47) More grunge . . .
EDIT (07/02/09 15:06) Abstract-ish Grunge . . .
EDIT (07/02/09 14:42) Blue Abstract - again . . .
EDIT (07/02/09 14:18) Blue Abstract . . .
EDIT (07/02/09 10:46) Bloody Grunge . . .
EDIT (06/02/09 19:28) Simple . . .
EDIT (05/02/09 21:49) An early valentine . . .
EDIT (05/02/09 21:30) I love this game . . .
EDIT (04/02/09 22:49) Just a little thingo . . .
EDIT (04/02/09 18:05) A variation of my sig . . .
I'm really gonna milk this style of sig for all it's worth :wink:
EDIT (02/02/09 19:06): My current new sig . . .
_______________________________________________________________________________________________________
.:My First Ever Paint.Net image:.
_______________________________________________________________________________________________________
Hidden Content:
Drawn using no effects, just line tool/paint bucket etc . . .
_______________________________________________________________________________________________________
.:Signatures and Avatars:.
_______________________________________________________________________________________________________
Hidden Content:
______________________________________________________________________________________________________
.:Backgrounds:.
______________________________________________________________________________________________________
Hidden Content:
______________________________________________________________________________________________________
:.Photo Manipulations.:
______________________________________________________________________________________________________
Hidden Content:
______________________________________________________________________________________________________
Well, that's it for now, I guess I'll be adding my new stuff now and then.
Criticism is a . . . please?
----------------. . . Smile . . . tomorrow will be worse . . .----------------
---------------------. . . I'm a guy of simple tastes . . .----------------------
-------------------.:My Forever Maturing Gallery:.--------------------
##### Share on other sites
I can't believe no one commented in your gallery yet. I wanted to yesterday, but my computer was working s l o w l y. I see that you use different effects, which is a GOOD thing. I especially like the first image (the one with no effects). Your sigs are neat, too. I really, really like the third one under your backgrounds; the pastel-looking one. Nice job!!!
Don't spit into the well, you might drink from it later. -----Yiddish Proverb
Glossy Galaxy Ball---How to Make Foliage
My Gallery
##### Share on other sites
But i like of 2 ...
Good work
I'm Portuguese
and
My English isn't perfect
##### Share on other sites
@ HELEN: Thanks! , I really like messing around with all the different effects, it keeps me discovering new things!
@ linkz: Thanks for commenting :wink: , could you tell me which two you liked? I wanna know which places I could focus more on so I can get you liking all of them .
----------------. . . Smile . . . tomorrow will be worse . . .----------------
---------------------. . . I'm a guy of simple tastes . . .----------------------
-------------------.:My Forever Maturing Gallery:.--------------------
##### Share on other sites
Woah, definitely have some talent there man. Looks pretty damn good.
Hi, welcome to the forum! Nice neat new gallery. I love the little octopus one and the sig you made for your brother's girlfriend.
I'll be back to see what else you add.
ciao have fun
OMA
@K_I_N_G: Thanks man, can't wait 'till I'm up there with the big guys :wink:
@oma: Have fun? Always do , I'll make sure I add lots of new stuff, just to give you a reason to keep commenting :wink:
----------------. . . Smile . . . tomorrow will be worse . . .----------------
---------------------. . . I'm a guy of simple tastes . . .----------------------
-------------------.:My Forever Maturing Gallery:.--------------------
Updated: New sigs (from newer to older)
----------------. . . Smile . . . tomorrow will be worse . . .----------------
---------------------. . . I'm a guy of simple tastes . . .----------------------
-------------------.:My Forever Maturing Gallery:.--------------------
Updated.
C'mon guys, over 100 views but not even 10 comments?
----------------. . . Smile . . . tomorrow will be worse . . .----------------
---------------------. . . I'm a guy of simple tastes . . .----------------------
-------------------.:My Forever Maturing Gallery:.--------------------
OMMFG (Oh my mother **** gosh) that is amazing! I believe that you have one of the biggest galleries on the forums!
When did you start paint.net? Because from January 1st, that is a freaking amount of gewd art! They're all amazing...
I can't decide which is my fav... It's a shame I never saw it sooner!
We should all be... Alive...
Wow! Thanks!
I've been using PDN for about three years, but for two of them I only used it as an alternative to programs like MS Paint.
But then I found "Teh moast orsumest 4rum ever", read through almost everything (as a guest), and then started to seriously delve into the secrets within Paint.NET.
Then I joined the forum and here we are.
So I've been doing this sort of stuff for about . . . half a year.
Updated: New sigs Up top.
C'mon guys, I need criticism!
Tell me what's good and bad!
Your signatures are something to look at. I suspect you like the color red? Those cherry blossoms are really neat. It reminds me of a recurring dream I had when I was about 7 or 8. The only difference was that the tree had big fruit. Nice job!!!
I think you've come a long way since you joined; your material tends to have a more polished and professional feel to it recently. I really love the cherry blossom sig. Lovely colours on it.
@HELEN: Thanks! The cherry blossom sig was just a stock I came across, then I made it more PDN'y-ish. Oh, and I love red!
@survulus: I really have come quite far, especially with such great tuts and such great artists on this forum. (Hehe . . . Polished and professional . . . sounds nothin like me! )
Keep the criticism coming!
UPDATE: See top of page one.
Tell me what ya think!
1. ## Polar form
Let z = 2exp(-iPi/6) and w = sqrt(2)exp(iPi/4)
Find zw in polar form and hence express tan(Pi/12) in surd form
So I managed to find zw in polar form, which is 2sqrt(2)exp(iPi/12), but I don't know how to express tan(Pi/12) in surd form. Please help me, thanks.
2. Originally Posted by Rine198
Let z = 2exp(-iPi/6) and w = sqrt(2)exp(iPi/4)
Find zw in polar form and hence express tan(Pi/12) in surd form
Do you know a formula for $\displaystyle \tan\left(\frac{\theta}{2}\right)~?$
Note that $\displaystyle \frac{\pi}{12}=\frac{1}{2}\frac{\pi}{6}$.
3. Do you mean the t-formula?
4. Hello, Rine198!
$\displaystyle \text{Express }\tan\frac{\pi}{12}\text{ in surd form.}$
I'm not sure about the "hence".
We can get the answer with some old-fashioned trig.
$\displaystyle \tan\frac{\pi}{12} \;=\;\tan\left(\frac{\pi}{4}-\frac{\pi}{6}\right) \;=\;\dfrac{\tan\frac{\pi}{4} - \tan\frac{\pi}{6}}{1 + \tan\frac{\pi}{4}\tan\frac{\pi}{6}} \;=\;\dfrac{1-\frac{1}{\sqrt{3}}} {1 + \frac{1}{\sqrt{3}}}$
$\displaystyle \text{Multiply by }\frac{\sqrt{3}}{\sqrt{3}}\!: \;\;\dfrac{\sqrt{3} - 1}{\sqrt{3} + 1}$
$\displaystyle \text{Rationalize: }\;\dfrac{\sqrt{3} - 1}{\sqrt{3} + 1}\cdot \dfrac{\sqrt{3} - 1}{\sqrt{3} - 1} \;=\;\dfrac{3 - 2\sqrt{3} + 1}{3-1}$
. . . . . . . . $\displaystyle =\;\dfrac{4 - 2\sqrt{3}}{2} \;=\;2 - \sqrt{3}$
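The whole computation is easy to check numerically. A short Python sketch (reading the problem's exponents as imaginary, i.e. $\displaystyle z = 2e^{-i\pi/6}$, which is the usual polar-form shorthand):

```python
import cmath
import math

# The two complex numbers from the problem, in polar form
z = 2 * cmath.exp(-1j * math.pi / 6)
w = math.sqrt(2) * cmath.exp(1j * math.pi / 4)

# Product: modulus should be 2*sqrt(2), argument should be pi/12
zw = z * w
print(abs(zw), cmath.phase(zw))

# tan(pi/12) should match the surd form 2 - sqrt(3)
print(math.tan(math.pi / 12), 2 - math.sqrt(3))
```

Both printed pairs agree to floating-point precision, confirming the derivation above.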
# Topological isomorphism
1. Oct 4, 2008
### dirk_mec1
1. The problem statement, all variables and given/known data
2. Relevant equations
I think this is relevant:
3. The attempt at a solution
A topological isomorphism implies that T and T^{-1} are bounded, and it is given that all Cauchy sequences in E are convergent.
But if that's the case, then by boundedness all convergent sequences are bounded in the norm of F and are thus also Cauchy sequences. Am I thinking in the right direction?
2. Oct 4, 2008
### morphism
You have to prove that every Cauchy sequence in F converges. How is what you're saying doing that?
3. Oct 4, 2008
### dirk_mec1
Okay, so that's wrong, but suppose I have a Cauchy sequence in E: $$|x_n-x_m|_E < \epsilon\ , \forall n,m\geq N$$; how can I then prove that F is also Banach?
4. Oct 4, 2008
### morphism
Why are you taking a Cauchy sequence in E? You're supposed to prove that F is a Banach space.
5. Oct 5, 2008
### dirk_mec1
Because it is given that E is Banach, which implies that every Cauchy sequence in E converges.
6. Oct 5, 2008
### morphism
What I meant is that it makes more sense to start with a Cauchy sequence in F rather than in E, because you want to prove that every Cauchy sequence in F converges.
7. Oct 6, 2008
### dirk_mec1
Can someone give me a hint? I've started with a Cauchy sequence in F, but I honestly do not see what to do next.
8. Oct 6, 2008
### HallsofIvy
Staff Emeritus
Let $$\{a_n\}$$ be a sequence in F. What can you say about $$\{T^{-1} a_n\}$$?
9. Oct 7, 2008
### dirk_mec1
Let $$\{ a_n \}$$ be a Cauchy sequence in F; then for all $$n,m \geq N$$ we have:
$$|| T^{-1} (a_n-a_m)||_E \leq c\cdot ||a_n-a_m||_F < c \cdot \epsilon$$
So $$T^{-1}a_n$$ is Cauchy in E.
But how do I get it to converge in F with limit a?
10. Oct 7, 2008
### morphism
Look, you want to show that F is a Banach space. So you take an arbitrary cauchy sequence in F and show that it converges in F. What do we have to work with here? We know that F is isomorphic to a Banach space. Use that isomorphism.
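Following that hint through, the completed argument can be sketched in a few lines (filling in the steps the thread points to, using only the boundedness of $$T$$ and $$T^{-1}$$ and the completeness of E):

```latex
\text{Let } (a_n) \text{ be Cauchy in } F. \text{ Since } T^{-1} \text{ is bounded,}\\
\|T^{-1}a_n - T^{-1}a_m\|_E \le \|T^{-1}\|\,\|a_n - a_m\|_F \to 0,\\
\text{so } (T^{-1}a_n) \text{ is Cauchy in } E.
\text{ As } E \text{ is Banach, } T^{-1}a_n \to x \text{ for some } x \in E.\\
\text{By continuity (boundedness) of } T:\quad a_n = T(T^{-1}a_n) \to Tx \in F.\\
\text{Hence every Cauchy sequence in } F \text{ converges, and } F \text{ is a Banach space.}
```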
Browse Questions
# A die marked 1, 2, 3 in red and 4, 5, 6 in green is tossed. Let A be the event of the number being even, and B be the event of the number being red. Are A and B independent?
$\begin{array}{1 1} \text{A and B are independent events} \\ \text{A and B are not independent events} \\ \end{array}$
Toolbox:
• If A and B are independent events, $P(A\cap\;B)=P(A)\;P(B)$
The sample space for a roll of the die can be expressed as follows: S = $\begin{Bmatrix} 1&2&3&4&5&6 \end{Bmatrix}$
Let A be the event where an even number appears.
A = $\begin{Bmatrix} 2&4&6 \\ \end{Bmatrix}$
Then P(A) = $\large \frac{3}{6} = \frac{1}{2}$
Let B be the event that a number in red appears on the die.
B = $\begin{Bmatrix}1&2&3 \\ \end{Bmatrix}$
Then P(B) = $\large\frac{3}{6} = \frac{1}{2}$
We can see that A $\cap$ B = $\begin{Bmatrix} 2\\ \end{Bmatrix}$
Therefore, P (A $\cap$ B) = $\large\frac{1}{6}$
We know that if A and B are independent events, $P(A\cap\;B)=P(A)\;P(B)$
$\Rightarrow P(A)\;P(B) = \large\frac{1}{2} \times \frac{1}{2} = \frac{1}{4} \neq$ P(A $\cap$ B).
Therefore A and B are NOT independent events.
edited Apr 21, 2016 by pady_1
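The result is easy to confirm by brute-force enumeration of the six equally likely faces (a small Python sketch; per the problem statement, faces 1–3 are red):

```python
from fractions import Fraction

faces = [1, 2, 3, 4, 5, 6]            # equally likely outcomes
A = {f for f in faces if f % 2 == 0}  # event: number is even -> {2, 4, 6}
B = {1, 2, 3}                         # event: number is red

def prob(event):
    """Exact probability of an event under the uniform distribution."""
    return Fraction(len(event), len(faces))

p_a, p_b, p_ab = prob(A), prob(B), prob(A & B)

print(p_a, p_b, p_ab)        # 1/2 1/2 1/6
print(p_ab == p_a * p_b)     # False -> A and B are not independent
```

Since P(A ∩ B) = 1/6 while P(A)P(B) = 1/4, the independence test fails, matching the worked answer.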
Epic Idiot - Creation, Evolution, and Intelligent Design
# The Big Bang
According to the Big Bang theory, the universe originated in an extremely dense and hot state (bottom). Since then, space itself has expanded with the passage of time, carrying the galaxies with it.
In physical cosmology, the Big Bang is the scientific theory that the universe emerged from an enormously dense and hot state about 13.7 billion years ago. Proponents of the Big Bang contend that it is a consequence of the observed Hubble's law velocities of distant galaxies that when taken together with the cosmological principle implies that space is expanding according to the Friedmann model of general relativity. Extrapolated into the past, these observations show that the universe has expanded from a primeval state, in which all the matter and energy in the universe was at an immense temperature and density. Physicists do not widely agree on what happened before this, although general relativity predicts a gravitational singularity.
While the theory is widely supported, critics of the theory contend that its predictions have been contradicted by observations in many significant ways.
The term Big Bang is used both in a narrow sense to refer to a point in time when the observed expansion of the universe (Hubble's law) began—calculated to be 13.7 billion (1.37 × 10^10) years ago—and in a more general sense to refer to the prevailing cosmological paradigm explaining the origin and expansion of the universe, as well as the composition of primordial matter through nucleosynthesis as predicted by the Alpher-Bethe-Gamow theory.
One consequence of the Big Bang is that the conditions of today's universe are different from the conditions in the past or in the future. From this model, George Gamow in 1948 was able to predict, at least qualitatively, the existence of cosmic microwave background radiation (CMB). The CMB was discovered in the 1960s and served as a confirmation of the Big Bang theory over its chief rival, the steady state theory.
## History
Main article: History of the Big Bang
The Big Bang theory developed from observations and theoretical considerations. Observationally, it was determined that most spiral nebulae were receding from Earth, but those who made the observation weren't aware of the cosmological implications, nor that the supposed nebulae were actually galaxies outside our own Milky Way. In 1927, the Belgian Catholic priest Georges Lemaître independently derived the Friedmann-Lemaître-Robertson-Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—what was later called the Big Bang.
In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. He discovered that, relative to the earth, the galaxies are receding in every direction at speeds directly proportional to their distance from the earth. This fact is now known as Hubble's law (see Edwin Hubble: Mariner of the Nebulae by Edward Christianson). Given the cosmological principle whereby the universe, when viewed on sufficiently large distance scales, has no preferred directions or preferred places, Hubble's law suggested that the universe was expanding.
This idea allowed for two opposing possibilities. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other possibility was Fred Hoyle's steady state model in which new matter would be created as the galaxies moved away from each other. In this model, the universe is roughly the same at any point in time. It was actually Hoyle who coined the name of Lemaître's theory, referring to it sarcastically as "this 'big bang' idea" during a 1949 BBC radio program, The Nature of Things, the text of which was published in 1950.
For a number of years the support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. Since the discovery of the cosmic microwave background radiation in 1965 it has been regarded as the best theory of the origin and evolution of the cosmos. Virtually all theoretical work in cosmology now involves extensions and refinements to the basic Big Bang theory. Much of the current work in cosmology includes understanding how galaxies form in the context of the Big Bang, understanding what happened at the Big Bang, and reconciling observations with the basic theory.
Huge advances in Big Bang cosmology were made in the late 1990s and the early 21st century as a result of major advances in telescope technology in combination with large amounts of satellite data such as that from COBE, the Hubble Space Telescope and WMAP. These data have allowed cosmologists to calculate many of the parameters of the Big Bang to a new level of precision and led to the unexpected discovery that the expansion of the universe appears to be accelerating. (See dark energy.)
## Overview
Based on measurements of the expansion of the universe using Type Ia supernovae, measurements of the lumpiness of the cosmic microwave background, and measurements of the correlation function of galaxies, the universe has a measured age of 13.7 ± 0.2 billion years. The agreement of these three independent measurements is considered strong evidence for the so-called Lambda-CDM model that describes the detailed nature of the contents of the universe.
According to the big bang theory, the early universe was filled homogeneously and isotropically with an incredibly high energy density and concomitantly huge temperatures and pressures. It expanded and cooled, going through phase transitions analogous to the condensation of steam or freezing of water as it cools, but related to elementary particles.
Approximately 10^-35 seconds after the Planck epoch a phase transition caused the universe to experience exponential growth during a period called cosmic inflation. After inflation stopped, the material components of the universe were in the form of a quark-gluon plasma (also including all other particles—and perhaps experimentally produced recently as a quark-gluon liquid [1]) in which the constituent particles were all moving relativistically. As the universe continued growing in size, the temperature dropped. At a certain temperature, by an as-yet-unknown transition called baryogenesis, the quarks and gluons combined into baryons such as protons and neutrons, somehow producing the observed asymmetry between matter and antimatter. Still lower temperatures led to further symmetry breaking phase transitions that put the forces of physics and elementary particles into their present form. Later, some protons and neutrons combined to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis. As the universe cooled, matter gradually stopped moving relativistically and its rest mass energy density came to gravitationally dominate that of radiation. After about 300,000 years the electrons and nuclei combined into atoms (mostly hydrogen); hence the radiation decoupled from matter and continued through space largely unimpeded. This relic radiation is the cosmic microwave background.
Critics of the theory, however, point out that there is no experimental evidence for any process that produces an asymmetry between matter and antimatter. In particle accelerators, matter and antimatter are always produced in exactly equal amounts. If such equal amounts of matter and antimatter existed at high density, they would have annihilated each other during the expansion, leaving behind a very dilute universe. Thus, critics contend, the big bang theory, combined with observed physical laws, produces a universe that is billions of times less dense than that observed.
According to the big bang theory, over time, the slightly denser regions of the nearly uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The three possible types are known as cold dark matter, hot dark matter, and baryonic matter. The best measurements available (from WMAP) show that the dominant form of matter in the universe is cold dark matter. The other two types of matter make up less than 20% of the matter in the universe.
Again the notion of dark matter has been sharply criticised by some physicists, who point out that laboratory searches for such dark matter particles have given only negative results for the past 25 years.
The universe today appears, in the view of big bang proponents, to be dominated by a mysterious form of energy known as dark energy. Approximately 70% of the total energy density of today's universe is in this form. This component of the universe's composition is revealed by its property of causing the expansion of the universe to deviate from a linear velocity-distance relationship by causing spacetime to expand faster than expected at very large distances. Dark energy in its simplest formulation takes the form of a cosmological constant term in Einstein's field equations of general relativity, but its composition is unknown and, more generally, the details of its equation of state and relationship with the standard model of particle physics continue to be investigated both observationally and theoretically.
The necessity for big bang theorists to introduce an unobserved type of matter (dark matter) and an unobserved type of energy (dark energy) to resolve contradictions between the big bang theory and observation has been compared by critics of the theory to the epicycles introduced by Ptolemy to resolve problems with the geocentric model of the solar system.
All these observations are encapsulated in the Lambda-CDM model of cosmology, which is a mathematical model of the big bang with six free parameters. Mysteries appear as one looks closer to the beginning, when particle energies were higher than can yet be studied by experiment. There is no compelling physical model for the first 10^-33 seconds of the universe, before the phase transition called for by grand unification theory. At the "first instant", Einstein's theory of gravity predicts a gravitational singularity where densities become infinite. To resolve this paradox, a theory of quantum gravity is needed. Understanding this period of the history of the universe is one of the greatest unsolved problems in physics.
## Theoretical underpinnings
As it stands today, the Big Bang is dependent on three assumptions:
1. The universality of physical laws
2. The cosmological principle
3. The Copernican principle
When first developed, these ideas were simply taken as postulates, but today there are efforts underway to test each of them. Tests of the universality of physical laws have found that the largest possible deviation of the fine structure constant over the age of the universe is of order 10^-5. The isotropy of the universe that defines the Cosmological Principle has been tested to a level of 10^-5 and the universe has been measured to be homogeneous on the largest scales to the 10% level. There are efforts underway to test the Copernican Principle by means of looking at the interaction of galaxy clusters with the CMB through the Sunyaev-Zeldovich effect to a level of 1% accuracy.
The Big Bang theory uses Weyl's postulate to unambiguously measure time at any point as the "time since the Planck epoch". Measurements in this system rely on conformal coordinates in which so-called comoving distances and conformal times remove the expansion of the universe, parameterized by the cosmological scale factor, from consideration of spacetime measurements. The comoving distances and conformal times are defined so that objects moving with the cosmological flow are always the same comoving distance apart and the particle horizon or observational limit of the local universe is set by the conformal time.
As the universe can be described by such coordinates, the Big Bang is not an explosion of matter moving outward to fill an empty universe; what is expanding is spacetime itself. It is this expansion that causes the physical distance between any two fixed points in our universe to increase. Objects that are bound together (for example, by gravity) do not expand with spacetime's expansion because the physical laws that govern them are assumed to be uniform and independent of the metric expansion. Moreover, the expansion of the universe on today's local scales is so small that any dependence of physical laws on the expansion is unmeasurable by current techniques.
## Observational evidence
It is generally stated that there are three observational pillars that support the Big Bang theory of cosmology. These are the Hubble-type expansion seen in the redshifts of galaxies, the detailed measurements of the cosmic microwave background, and the abundance of light elements. (See Big Bang nucleosynthesis.) Additionally, the observed correlation function of large-scale structure of the cosmos fits well with standard Big Bang theory.
### Hubble's law expansion
Main article: Hubble's law
Observations of distant galaxies and quasars show that these objects are redshifted, meaning that the light emitted from them has been shifted to longer wavelengths. This is seen by taking a frequency spectrum of the objects and then matching the spectroscopic pattern of emission lines or absorption lines corresponding to atoms of the chemical elements interacting with the light. From this analysis, a redshift corresponding to a Doppler shift for the radiation can be measured which is explained by a recessional velocity. When the recessional velocities are plotted against the distances to the objects, a linear relationship, known as Hubble's law, is observed:
$v = H_0 D \,$
where
v is the recessional velocity of the galaxy or other distant object
D is the distance to the object and
H0 is Hubble's constant, measured to be 71 ± 4 km/s/Mpc by the WMAP probe.
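As an illustrative sketch (not from the article; the unit conversions are standard values), Hubble's law can be evaluated directly, and the reciprocal of H0 gives a characteristic timescale, the "Hubble time", which comes out close to the quoted 13.7-billion-year age:

```python
# Hubble's law: v = H0 * D, with H0 = 71 km/s/Mpc as quoted above.
H0 = 71.0                    # km/s per megaparsec
KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for an object at the given distance."""
    return H0 * distance_mpc

print(recession_velocity(100))    # a galaxy 100 Mpc away recedes at 7100 km/s

# Hubble time: 1/H0 converted to billions of years
hubble_time_gyr = (KM_PER_MPC / H0) / SECONDS_PER_GYR
print(round(hubble_time_gyr, 1))  # ~13.8 Gyr, the same order as the quoted age
```

The near-coincidence of 1/H0 with the measured age is a property of the particular Lambda-CDM parameters, not an identity, but it shows how the expansion rate and the age estimate hang together.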
The Hubble's Law observation has several possible explanations. One is that we are at the center of an explosion of galaxies, a position which is untenable given the Copernican principle. The second explanation is that the universe is uniformly expanding everywhere as a unique property of spacetime. This type of universal expansion was developed mathematically in the context of general relativity well before Hubble made his analysis and observations, and it remains the cornerstone of the Big Bang theory as developed by Friedmann-Lemaître-Robertson-Walker. A third explanation is that some process causes light to lose energy as it travels.
The big bang theory predicts that surface brightness (brightness divided by apparent surface area) decreases as (1+z)^-3, where z is redshift; beyond a certain redshift, more distant objects should actually appear larger on the sky. But recent observations show that the surface brightness of galaxies up to a redshift of 6 is in fact constant, as predicted by a non-expanding universe and in sharp contradiction to the big bang. Efforts to explain this difference by evolution (early galaxies being different from those today) lead to predictions of galaxies that are impossibly bright and dense.
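To put rough numbers on the test described above (illustrative arithmetic only, not from the article): if surface brightness falls as (1+z)^-3 while a non-expanding model predicts no dimming at all, the predicted discrepancy grows quickly with redshift.

```python
# Dimming factor implied by the (1+z)^-3 surface-brightness scaling
# quoted above (illustrative arithmetic only).
def dimming_factor(z):
    return (1 + z) ** -3

for z in (1, 3, 6):
    print(z, dimming_factor(z))
# At z = 6 the predicted surface brightness is 1/343 of the local value,
# versus a factor of 1 in a non-expanding model; that gap is the scale of
# the discrepancy the quoted criticism turns on.
```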
Main article: Cosmic microwave background radiation
WMAP image of the cosmic microwave background radiation
The Big Bang theory predicted the existence of the cosmic microwave background radiation, or CMB, which is composed of photons emitted at the time of photon decoupling. Because the early universe was in thermal equilibrium, the temperature of the radiation and the plasma were equal until the plasma recombined. Before atoms formed, radiation was constantly absorbed and re-emitted in a process called Compton scattering: the early universe was opaque to light. However, cooling due to the expansion of the universe allowed the temperature to eventually fall below 3000 K, at which point electrons and nuclei combined to form atoms and the primordial plasma turned into a neutral gas. This is known as photon decoupling. A universe with only neutral atoms allows radiation to travel largely unimpeded.
Because the early universe was in thermal equilibrium, the radiation from this time had a blackbody spectrum and freely streamed through space until today, becoming redshifted because of the Hubble expansion. This reduces the high temperature of the blackbody spectrum. The radiation should be observable at every point in the universe to come from all directions of space.
In 1964, Arno Penzias and Robert Wilson, while conducting a series of diagnostic observations using a new microwave receiver owned by Bell Laboratories, discovered the cosmic background radiation. Their discovery provided substantial confirmation of the general CMB predictions—the radiation was found to be isotropic and consistent with a blackbody spectrum of about 3 K—and it tipped the balance of opinion in favor of the Big Bang hypothesis. Penzias and Wilson were awarded the Nobel Prize for their discovery.
In 1989, NASA launched the Cosmic Background Explorer satellite (COBE), and the initial findings, released in 1990, were consistent with the Big Bang's predictions regarding the CMB. COBE found a residual temperature of 2.726 K and determined that the CMB was isotropic to about one part in 10^5. During the 1990s, CMB anisotropies were further investigated by a large number of ground-based experiments and the universe was shown to be geometrically flat by measuring the typical angular size (the size on the sky) of the anisotropies. (See shape of the universe.)
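The two temperatures quoted above (roughly 3000 K at decoupling, 2.726 K measured by COBE) fix the redshift at which the CMB was emitted, since in the expanding-universe picture the radiation temperature scales as T(z) = T_now × (1 + z). A quick sketch of that arithmetic:

```python
# Radiation temperature scales as T(z) = T_now * (1 + z) in the expansion
# model, so the decoupling redshift follows from the two quoted temperatures.
T_DECOUPLING = 3000.0   # K, when electrons and nuclei combined into atoms
T_NOW = 2.726           # K, the residual temperature measured by COBE

z_decoupling = T_DECOUPLING / T_NOW - 1
print(round(z_decoupling))   # ~1100, the canonical redshift of the CMB
```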
In early 2003 the results of the Wilkinson Microwave Anisotropy Probe (WMAP) were released, yielding what were at the time the most accurate values for some of the cosmological parameters (see cosmic microwave background radiation experiments). This satellite also disproved several specific cosmic inflation models, but the results were consistent with the inflation theory in general, in the view of proponents of the theory. However, many observers pointed out that the anisotropies in the WMAP data were not random or Gaussian, as predicted by inflation. Instead they had strong alignments in the sky, for example with the Local Supercluster of galaxies. Such alignments of the CMB with local features in the universe contradicted the big bang explanation of the CMB.
In addition, in 2005 Richard Lieu and colleagues presented a study of the Sunyaev-Zel’dovich effect of 31 clusters of galaxies. In this effect, CBR from behind the clusters is slightly “shadowed” by hot electrons in the clusters. Lieu showed that the effect for these clusters was at most one quarter of that predicted, strongly implying that most of the CBR radiation originated closer to us than the clusters, as predicted by the plasma model, but in sharp contradiction to the big bang model, which assumes that all the CBR originates at extreme distances.
### Abundance of primordial elements
Main article: Big Bang nucleosynthesis
Using the Big Bang model it is possible to calculate the concentration of helium-4, helium-3, deuterium and lithium-7 in the universe as ratios to the amount of ordinary hydrogen, H. All the abundances depend on a single parameter, the ratio of photons to baryons. The ratios predicted (by mass, not by number) are about 0.25 for 4He/H, about 10^-3 for 2H/H, about 10^-4 for 3He/H and about 10^-9 for 7Li/H.
However, increasingly accurate measurements of these abundances point to values that are in contradiction with the values predicted by the big bang. In particular, lithium abundances are only one quarter of those predicted by big bang theory, a difference far larger than the uncertainties of the lithium measurements. Critics have pointed to this contradiction as another failure of the theory.
### Galactic evolution and distribution
Main article: Large-scale structure of the cosmos
Detailed observations of the morphology and distribution of galaxies and quasars provide strong evidence for the Big Bang. A combination of observations and theory suggest that the first quasars and galaxies formed about a billion years after the big bang, and since then larger structures have been forming, such as galaxy clusters and superclusters. Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions, and larger structures agree well with Big Bang simulations of the formation of structure in the universe and are helping to complete details of the theory.
## Features, issues and problems
A number of problems have arisen within the Big Bang theory throughout its history. Some of them are mainly of historical interest today, and have been avoided either through modifications to the theory or as the result of better observations. Other issues, such as the cuspy halo problem and the dwarf galaxy problem of cold dark matter, are not considered to be fatal as they can be addressed through refinements of the theory.
There are a small number of proponents of non-standard cosmologies who doubt that there was a Big Bang at all. They claim that solutions to standard problems in the Big Bang theory involve ad hoc modifications and addenda to the theory. Most often attacked are the parts of standard cosmology that include dark matter, dark energy, and cosmic inflation. However, while explanations for these features remain at the frontiers of inquiry in physics, together they are suggested by independent observations of big bang nucleosynthesis, the cosmic microwave background, large scale structure and Type Ia supernovae. The gravitational effects of these features are understood observationally and theoretically but they have not yet been successfully incorporated into the Standard Model of particle physics. Though some aspects of the theory remain inadequately explained by fundamental physics, most cosmologists continue to support the theory.
The following is a short list of Big Bang "problems" and puzzles:
### The horizon problem
The horizon problem results from the premise that information cannot travel faster than light, and hence two regions of space which are separated by a greater distance than the speed of light multiplied by the age of the universe cannot be in causal contact. The observed isotropy of the cosmic microwave background (CMB) is problematic in this regard, because the horizon size at the time of its emission corresponds to only about 2 degrees on the sky. If the universe has had the same expansion history since the Planck epoch, there is no mechanism to cause these regions to have the same temperature.
This apparent inconsistency is resolved by inflationary theory, in which a homogeneous and isotropic scalar energy field dominates the universe at a time 10⁻³⁵ seconds after the Planck epoch. During inflation, the universe undergoes exponential expansion, and regions in causal contact expand so as to be beyond each other's horizons. Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum fluctuations, which would be magnified to cosmic scale. These fluctuations serve as the seeds of all current structure in the universe. After inflation, the universe expands according to Hubble's law, and regions that were out of causal contact come back into the horizon. This explains the observed isotropy of the CMB. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been accurately confirmed by measurements of the CMB.
### Flatness
The flatness problem is an observational problem that results from considerations of the geometry associated with the Friedmann-Lemaître-Robertson-Walker metric. In general, the universe can have three different kinds of geometries: hyperbolic geometry, Euclidean geometry, or elliptic geometry. The geometry is determined by the total energy density of the universe (as measured by means of the stress-energy tensor): hyperbolic geometry results from a density less than the critical density, elliptic from a density greater than the critical density, and Euclidean from exactly the critical density. Measurements require the density of the universe to have been within one part in 10¹⁵ of the critical density in its earliest stages. Any greater deviation would have caused either a heat death or a Big Crunch, and the universe would not exist as it does today.
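The link between density and geometry can be made explicit. As a sketch using the standard Friedmann equation (assumed here; it is not derived in this article), with scale factor a, Hubble rate H, curvature constant k, and density ρ:

```latex
H^2 = \frac{8\pi G}{3}\,\rho - \frac{k}{a^2}
\qquad\Longrightarrow\qquad
\Omega - 1 = \frac{\rho}{\rho_{\mathrm{crit}}} - 1 = \frac{k}{a^2 H^2},
\qquad \rho_{\mathrm{crit}} \equiv \frac{3H^2}{8\pi G}.
```

During radiation domination a ∝ t^(1/2) and H ∝ 1/t, so a²H² ∝ 1/t and any deviation |Ω − 1| grows linearly with time; that is why the early universe must have been within roughly one part in 10¹⁵ of the critical density for it to be anywhere near flat today. Inflation's exponential growth of a drives k/(a²H²) toward zero.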
The resolution to this problem is again offered by inflationary theory. During the inflationary period, spacetime expanded to such an extent that any residual curvature associated with it would have been smoothed out to a high degree of precision. Thus, inflation drove the universe to be flat.
### Magnetic monopoles
The magnetic monopole objection was raised in the late 1970s. Grand unification theories predicted point defects in space that would manifest as magnetic monopoles with a density much higher than was consistent with observations, given that searches have never found any monopoles. This problem is also resolvable by cosmic inflation, which removes all point defects from the observable universe in the same way that it drives the geometry to flatness.
### Baryon asymmetry
It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot, it was in statistical equilibrium and contained equal numbers of baryons and anti-baryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of matter. An as-yet-unknown process, called baryogenesis, must have created the asymmetry. For baryogenesis to occur, the Sakharov conditions, which were laid out by Andrei Sakharov, must be satisfied. They require that baryon number not be conserved, that C-symmetry and CP-symmetry be violated, and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Big Bang, but the effect is not strong enough to explain the present baryon asymmetry. New developments in high energy particle physics are necessary to explain the baryon asymmetry.
### Globular cluster age
In the mid-1990s, observations of globular clusters appeared to be inconsistent with the Big Bang. Computer simulations that matched the observations of the stellar populations of globular clusters suggested that they were about 15 billion years old, which conflicted with the 13.7-billion-year age of the universe. This issue was generally resolved in the late 1990s when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters. There still remain some questions as to how accurately the ages of the clusters are measured, but it is clear that these objects are some of the oldest in the universe.
### Dark matter
During the 1970s and 1980s various observations (notably of galactic rotation curves) showed that there was not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is not normal or baryonic matter but rather dark matter. In addition, assuming that the universe was mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe is far less lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter was initially controversial, it is now a widely accepted part of standard cosmology due to observations of the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and x-ray measurements from galaxy clusters. Dark matter has only been detected through its gravitational signature; no particles that might make it up have yet been observed in laboratories. However, there are many particle physics candidates for dark matter, and several projects to detect them are underway.
### Dark energy
In the 1990s, detailed measurements of the mass density of the universe revealed a value that was 30% that of the critical density. Since the universe is flat, as is indicated by measurements of the cosmic microwave background, fully 70% of the energy density of the universe was left unaccounted for. This mystery now appears to be connected to another one: independent measurements of Type Ia supernovae have revealed that the expansion of the universe is undergoing a non-linear acceleration rather than following a strict Hubble law. To explain this acceleration, general relativity requires that much of the universe consist of an energy component with large negative pressure. This dark energy is now thought to make up the missing 70%. Its nature remains one of the great mysteries of the Big Bang. Possible candidates include a cosmological constant and quintessence (a dynamical scalar field). Observations to help understand this are ongoing.
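The acceleration claim can be checked with one line of arithmetic. A minimal sketch using the 30%/70% split quoted above: for a flat universe containing matter and a cosmological constant, the deceleration parameter is q₀ = Ω_m/2 − Ω_Λ, and a negative q₀ means accelerating expansion.

```python
# Density parameters quoted above (flat universe: they sum to 1).
omega_m = 0.3        # matter (including dark matter)
omega_lambda = 0.7   # dark energy as a cosmological constant

# Deceleration parameter for a flat matter + Lambda universe.
# q0 < 0 means the expansion is accelerating.
q0 = omega_m / 2 - omega_lambda

print(f"q0 = {q0:.2f}")  # q0 = -0.55: accelerating
```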
## The future according to the Big Bang theory
Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe is above the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state that was similar to that in which it started—a Big Crunch. Alternatively, if the density in the universe is equal to or below the critical density, the expansion would slow down, but never stop. Star formation would cease as the universe grows less dense. The average temperature of the universe would asymptotically approach absolute zero. Black holes would evaporate. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. Moreover, if proton decay exists, then hydrogen, the predominant form of baryonic matter in the universe today, would disappear, leaving only radiation.
Modern observations of accelerated expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The Lambda-CDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, would remain together, and they too would be subject to heat death, as the universe cools and expands. Other explanations of dark energy—so-called phantom energy theories—suggest that ultimately galaxy clusters and eventually galaxies themselves will be torn apart by the ever-increasing expansion in a so-called Big Rip.
## Speculative physics beyond the Big Bang
While the Big Bang model is well established in cosmology, it is likely to be refined in the future. Little is known about the earliest universe, when inflation is hypothesized to have occurred. There may also be parts of the universe well beyond what can be observed in principle. In the case of inflation this is required: exponential expansion has pushed large regions of space beyond our observable horizon. It may be possible to deduce what happened when we better understand physics at very high energy scales. Speculations about this often involve theories of quantum gravity.
Some proposals are:
• chaotic inflation
• brane cosmology models, including the ekpyrotic model in which the Big Bang is the result of a collision between branes
• an oscillatory universe in which the early universe's hot, dense state resulted from the Big Crunch of a universe similar to ours. The universe could have gone through an infinite number of big bangs and big crunches. The cyclic extension of the ekpyrotic model is a modern version of such a scenario.
• models including the Hartle-Hawking boundary condition in which the whole of space-time is finite.
Some of these scenarios are qualitatively compatible with one another. Each entails untested hypotheses.
## Philosophical and religious interpretations
There are a number of interpretations of the Big Bang theory that are extra-scientific. Some of these ideas purport to explain the cause of the Big Bang itself (first cause), and have been criticized by some naturalist philosophers as being modern creation myths. Some people believe that the Big Bang theory lends support to traditional views of creation as given in Genesis, for example, while others believe that the Big Bang theory is inconsistent with such views.
The Big Bang, as a scientific theory, is not based on any religion. While some religious interpretations conflict with the Big Bang story of the universe, there are many other interpretations that do not.
The following is a list of various religious interpretations of the Big Bang theory:
• A number of Christian churches, the Roman Catholic Church in particular, have accepted the Big Bang as a description of the origin of the universe, interpreting it to allow for a philosophical first cause. Pope Pius XII was an enthusiastic proponent of the Big Bang even before the theory was scientifically well established.
• Some students of Kabbalah, deism and other non-anthropomorphic faiths find concordance with the Big Bang theory, for example connecting it with the doctrine of "divine contraction" (tzimtzum) as expounded by the Kabbalist Isaac Luria.
• Some modern Islamic scholars believe that the Qur'an parallels the Big Bang in its account of creation, described as follows: "Do not the unbelievers see that the heavens and the earth were joined together as one unit of creation, before We clove them asunder?" (Ch:21,Ver:30). The claim has also been made that the Qur'an describes an expanding universe: "The heaven, We have built it with power. And verily, We are expanding it." (Ch:51,Ver:47). Parallels with the Big Crunch and an oscillating universe have also been suggested: "On the day when We will roll up the heavens like the rolling up of the scroll for writings, as We originated the first creation, (so) We shall reproduce it; a promise (binding on Us); surely We will bring it about." (Ch:21,Ver:104).
• Certain theistic branches of Hinduism, such as in Vaishnavism, conceive of a theory of creation with similarities to the theory of the Big Bang. The Hindu mythos, narrated for example in the third book of the Bhagavata Purana (primarily, chapters 10 and 26), describes a primordial state which bursts forth as the Great Vishnu glances over it, transforming into the active state of the sum-total of matter ("prakriti"). Other forms of Hinduism assert a universe without beginning or end.
• Buddhism has a concept of a universe that has no creation event. The Big Bang, however, is not seen to be in conflict with this since there are ways to conceive an eternal universe within the paradigm. A number of popular Zen philosophers were intrigued, in particular, by the concept of the oscillating universe.
### Big Bang overviews
For an annotated list of textbooks and monographs, see physical cosmology.
### Some primary sources
• G. Lemaître, "Un Univers homogène de masse constante et de rayon croissant rendant compte de la vitesse radiale des nébuleuses extragalactiques" (A homogeneous Universe of constant mass and growing radius accounting for the radial velocity of extragalactic nebulae), Annals of the Scientific Society of Brussels 47A (1927):41—General relativity implies the universe has to be expanding. Einstein brushed him off in the same year. Lemaître's note was translated in Monthly Notices of the Royal Astronomical Society 91 (1931): 483–490.
• G. Lemaître, Nature 128 (1931) suppl.: 704, with a reference to the primeval atom.
• R. A. Alpher, H. A. Bethe, G. Gamow, "The Origin of Chemical Elements," Physical Review 73 (1948), 803. The so-called αβγ paper, in which Alpher and Gamow suggested that the light elements were created by protons capturing neutrons in the hot, dense early universe. Bethe's name was added for symmetry.
• G. Gamow, "The Origin of Elements and the Separation of Galaxies," Physical Review 74 (1948), 505. These two 1948 papers of Gamow laid the foundation for our present understanding of big-bang nucleosynthesis.
• G. Gamow, Nature 162 (1948), 680.
• R. A. Alpher, "A Neutron-Capture Theory of the Formation and Relative Abundance of the Elements," Physical Review 74 (1948), 1737.
• R. A. Alpher and R. Herman, "On the Relative Abundance of the Elements," Physical Review 74 (1948), 1577. This paper contains the first estimate of the present temperature of the universe.
• R. A. Alpher, R. Herman, and G. Gamow Nature 162 (1948), 774.
• A. A. Penzias and R. W. Wilson, "A Measurement of Excess Antenna Temperature at 4080 Mc/s," Astrophysical Journal 142 (1965), 419. The paper describing the discovery of the cosmic microwave background.
• R. H. Dicke, P. J. E. Peebles, P. G. Roll and D. T. Wilkinson, "Cosmic Black-Body Radiation," Astrophysical Journal 142 (1965), 414. The theoretical interpretation of Penzias and Wilson's discovery.
• A. D. Sakharov, "Violation of CP invariance, C asymmetry and baryon asymmetry of the universe," Pisma Zh. Eksp. Teor. Fiz. 5, 32 (1967), translated in JETP Lett. 5, 24 (1967).
• R. A. Alpher and R. Herman, "Reflections on early work on 'big bang' cosmology" Physics Today Aug 1988 24–34. A review article.
### Religion and philosophy
• Leeming, David Adams, and Margaret Adams Leeming, A Dictionary of Creation Myths. Oxford University Press (1995), ISBN 0195102754.
• Pius XII (1952), "Modern Science and the Existence of God," The Catholic Mind 49:182–192.
### Research articles
Most scientific papers about cosmology are initially released as preprints on arxiv.org. They are generally technical, but sometimes have introductions in plain English. The most relevant archives, which cover experiment and theory, are the astrophysics archive, where papers closely grounded in observations are released, and the general relativity and quantum cosmology archive, which covers more speculative ground. Papers of interest to cosmologists also frequently appear on the high energy phenomenology and high energy theory archives.
# Plain CSS buttons in different sizes
Just a plain button, in different sizes. I wanted to achieve exactly the same styling cross browser for both anchors, inputs and buttons.
I'm just wondering if this could be improved.
Markup
<!DOCTYPE html>
<html>
<head>
<title>CSS button</title>
<link rel="stylesheet" href="css/main.css">
</head>
<body>
<a href="#" class="button button-tiny button-primary">Anchor</a>
<hr>
<button class="button button-medium button-primary">Button</button>
<hr>
<input type="submit" class="button button-large button-primary" value="Submit">
</body>
</html>
CSS
body,
input,
button {
font: 14px "Trebuchet MS", sans-serif;
}
input::-moz-focus-inner,
button::-moz-focus-inner {
border: 0;
}
.button {
display:inline-block;
text-decoration:none;
border:0;
margin:0;
}
input[type="submit"].button,
button.button {
cursor:pointer;
outline:none;
}
.button-tiny {
font-size:12px;
}
.button-medium {
font-size:16px;
}
.button-large {
font-size:18px;
}
.button-primary {
background:slategrey;
color:#fff;
}
.button-primary:hover {
background:lightslategrey;
}
I took the liberty of adding your code to a Fiddle Here so that we could see what is being displayed and test it in different browsers.
I tried to set my browser to IE 8 and below, and apparently it doesn't play nice with JSFiddle, but in IE 9 mode it showed up very nicely, exactly the same as in Chrome.
I like that you created a CSS class, button-primary, so that you didn't repeat the code you were going to use in all the buttons, and so you didn't have to write so many :hover statements.
Overall I would say that this is some pretty clean code.
But, there is one thing that I would suggest that you do.
Terminate every tag
This
<link rel="stylesheet" href="css/main.css">
Becomes
<link rel="stylesheet" href="css/main.css" />
Notice the /> at the end. If an element has no separate closing tag, you should still "close" it.
These are called self-closing tags.
Some tags are self-closing and some are not. HTML5 is a little blurry on this, or I am a little blurry on HTML5.
Either way, you should make sure that you close these tags:
1. <img />
2. <hr />
3. <br />
4. <input />
5. <area />
6. <link />
7. <meta />
and some others. This answer lists some more.
XHTML allows you to self-close any tag, but this isn't really good practice, mostly because most tags are meant to be containers, and leaving them empty would be really silly.
This more recent answer from BoltClock explains closing tags a little better.
I recommend always closing your tags, even if the doctype tells you that you don't have to. It is better practice.
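For example, side by side — the stylesheet line comes from your markup, while the button line is made up for illustration (the classes are the ones from your CSS):

```html
<!-- HTML5: the trailing slash on void elements is optional -->
<link rel="stylesheet" href="css/main.css">
<input type="submit" class="button button-medium button-primary" value="Send">

<!-- XHTML (and XML in general): void elements must be self-closed -->
<link rel="stylesheet" href="css/main.css" />
<input type="submit" class="button button-medium button-primary" value="Send" />
```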
• Thanks for this feedback! Regarding the self closing tags, I prefer to not self close but I keep this consistent. As you mentioned, HTML5 is a little blurry on this. I think it's preference, to be honest. Jan 22 '14 at 16:53
• If you ever code for XML or XHTML and you are used to doing this, you will get errors and not know where they came from. It's not a good idea, and it shouldn't be your preference, in my opinion
– Malachi
Jan 22 '14 at 17:30
• That's true, and a good point. Jan 23 '14 at 16:26
# EQUATIONS WITH FRACTIONS
To solve an equation with fractions means to find the numerical value of the unknown such that, when the result is substituted into the equation, the equation becomes valid or true.
The next 5 examples illustrate this fact.
Example 1
Solve for $x$ in $\frac{4}{x+3}=\frac{3}{x+2}$
Solution:
First, we cross multiply
$4(x+2)=3(x+3)$
$4x+8=3x+9$
Collecting like terms
$4x-3x=9-8$
$x=1$
Now, let's check if the answer is correct
$\frac{4}{1+3}=\frac{3}{1+2}$
$1=1$, which is true.
Example 2
Solve $\frac{1}{\frac{1}{2}-1}=\frac{2}{\frac{1}{2}+x}$
Solution:
First, we take the L.C.D of both sides
$\frac{1}{\frac{1-2}{2}}=\frac{2}{\frac{1+2x}{2}}$
$\frac{1}{\frac{-1}{2}}=\frac{2}{\frac{1+2x}{2}}$
Now, we cross multiply
$1\times\frac{1+2x}{2}=2\times\frac{-1}{2}$
$\frac{1+2x}{2}=-1$
Once again, we cross multiply
$-1\times2=1+2x$
$-2=1+2x$
$-2-1=2x$
$-3=2x$
Divide both sides by 2
$\frac{-3}{2}=\frac{2x}{2}$
$x=\frac{-3}{2}$
Example 3
Determine the value of $x$ in $\frac{2x+1}{6}=\frac{3x+1}{4}$
Solution:
First, we cross multiply
$4(2x+1)=6(3x+1)$
$8x+4=18x+6$
$4-6=18x-8x$
$-2=10x$
$\frac{-2}{10}=\frac{10x}{10}$
$x=\frac{-1}{5}$
Example 4
Determine the value of $y$ in
$\frac{4y-1}{y+4}-2=\frac{2y-1}{y+2}$
Solution:
First, we combine the left-hand side over a common denominator
$\frac{4y-1-2(y+4)}{y+4}=\frac{2y-1}{y+2}$
$\frac{4y-1-2y+8}{y+4}=\frac{2y-1}{y+2}$
$\frac{2y-9}{y+4}=\frac{2y-1}{y+2}$
Now, let's cross multiply
$(2y-9)(y+2)=(2y-1)(y+4)$
Expanding the brackets
$2y(y+2)-9(y+2)=2y(y+4)-1(y+4)$
$2y^2+4y-9y-18=2y^2+8y-y-4$
$2y^2-5y-18=2y^2+7y-4$
Collecting like terms
$2y^2-2y^2-5y-7y=-4+18$
$-12y=14$
$\frac{-12y}{-12}=\frac{14}{-12}$
$y=\frac{-7}{6}$
Example 5
Solve $\frac{20}{5x+15}=\frac{15}{5x+10}$
Solution:
First, we cross multiply
$20(5x+10)=15(5x+15)$
$100x+200=75x+225$
$100x-75x=225-200$
$25x=25$
$\frac{25x}{25}=\frac{25}{25}$
$x=1$
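All four answers above can be verified the same way Example 1's answer was checked, by substituting back into the original equation. A sketch of that check using Python's exact Fraction arithmetic:

```python
from fractions import Fraction as F

# Substitute each solution back into its equation and confirm
# that both sides agree (the "check" step shown in Example 1).

# Example 2: 1/(1/2 - 1) = 2/(1/2 + x) at x = -3/2
x = F(-3, 2)
assert F(1) / (F(1, 2) - 1) == F(2) / (F(1, 2) + x)

# Example 3: (2x+1)/6 = (3x+1)/4 at x = -1/5
x = F(-1, 5)
assert (2*x + 1) / 6 == (3*x + 1) / 4

# Example 4: (4y-1)/(y+4) - 2 = (2y-1)/(y+2) at y = -7/6
y = F(-7, 6)
assert (4*y - 1) / (y + 4) - 2 == (2*y - 1) / (y + 2)

# Example 5: 20/(5x+15) = 15/(5x+10) at x = 1
x = F(1)
assert F(20) / (5*x + 15) == F(15) / (5*x + 10)

print("all solutions check out")
```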
# Normality requirements for GLM and GEE models
From what I have been reading for several hours now about these models, their requirements, and normality, I understand the following, although there are some contradictory statements, so I am confused.
May I ask you to either confirm or correct the following statements, possibly with some citable reference(s)?
(1) For generalized (not general) linear models,
• normality is not required for the "input" variables (independent/predictor variables and dependent/response variable) of the model, but
• normality is required for the residuals. This is independent of the specified distribution, e.g. glm(y ~ x1 + x2 + x3, family = gaussian ...), which refers to the response variable, not the residuals.
(2) For generalized estimating equations (GEE), normality is not required,
• neither for independent/predictor and dependent/response variable,
• nor for residuals of the model.
(I am using R and glm() in {stats}, geeglm() in {geepack})
• Thanks! With residuals I refer to the values I get from stats::glm()$residuals and geepack::geeglm()$residuals. How else would one call them correctly? boot::glm.diag.plots() offers e.g. QQ plots to assess normality of ordered deviance residuals, implying that they should be normal? Mar 9 at 23:05
# Data representation 3: Layout
Last lecture discussed how objects are represented in memory, for a couple important kinds of objects (integers of various sizes, signed and unsigned). This lecture concerns layout: the ways that compilers and operating systems place multiple objects in relationship to one another. The abstract machine defines certain aspects of layout; others are left up to the compiler and runtime and operating system.
## Segments
One aspect of layout is that objects are segregated into different address ranges based on lifetime. These ranges are called segments. The compiler decides on a segment for each object based on its lifetime. The linker then groups all the program’s objects by segment (so, for instance, global variables from different compiler runs are grouped together into a single segment). Finally, when a program runs, the operating system loads the segments into memory. (The stack and heap segments grow on demand.)
| Object declaration (C program text) | Lifetime (abstract machine) | Segment | Example address (runtime location in x86-64 Linux) |
| --- | --- | --- | --- |
| Constant global | Static | Code (aka Text) | 0x400000 (≈1 × 2²²) |
| Global | Static | Data | 0x600000 (≈1.5 × 2²²) |
| Local | Automatic | Stack | 0x7fff448d0000 (≈2²⁵ × 2²²) |
| Anonymous, returned by new | Dynamic | Heap | 0x1a00000 (≈8 × 2²²) |
Constant global data and global data have the same lifetime, but are stored in different segments. The operating system uses different segments so it can prevent the program from modifying constants. It marks the code segment, which contains functions (instructions) and constant global data, as read-only, and any attempt to modify code-segment memory causes a crash (a “Segmentation violation”).
An executable is normally at least as big as the static-lifetime data (the code and data segments together). Since all that data must be in memory for the entire lifetime of the program, it’s written to disk and then loaded by the OS before the program starts running. There is an exception, however: the “bss” segment is used to hold modifiable static-lifetime data with initial value zero. Such data is common, since all static-lifetime data is initialized to zero unless otherwise specified in the program text. Rather than storing a bunch of zeros in the object files and executable, the compiler and linker simply track the location and size of all zero-initialized global data. The operating system sets this memory to zero during the program load process. Clearing memory is faster than loading data from disk, so this optimization saves both time (the program loads faster) and space (the executable is smaller).
## Compiler layout
The compiler has complete freedom to pick locations for objects, subject to the abstract machine’s constraints—most importantly, that each object occupies disjoint memory from any other object that’s active at the same time. For instance, consider this program:
void f() {
int i1 = 0;
int i2 = 1;
int i3 = 2;
char c1 = 3;
char c2 = 4;
char c3 = 5;
...
}
On Linux, GCC will put all these variables into the stack segment, which we can see using hexdump. But it can put them in the stack segment in any order, as we can see by reordering the declarations (try declaration order i1, c1, i2, c2, c3), by changing optimization levels, or by adding different scopes (braces). The abstract machine gives the programmer no guarantees about how object addresses relate. In fact, the compiler may move objects around during execution, as long as it ensures that the program behaves according to the abstract machine. Modern optimizing compilers often do this, particularly for automatic objects.
But what order does the compiler choose? With optimization disabled, the compiler appears to lay out objects in decreasing order by declaration, so the first declared variable in the function has the highest address. With optimization enabled, the compiler follows roughly the same guideline, but it also rearranges objects by type—for instance, it tends to group chars together—and it can reuse space if different variables in the same function have disjoint lifetimes. The optimizing compiler tends to use less space for the same set of variables.
## Collections: Abstract machine layout
The C++ programming language offers several collection mechanisms for grouping subobjects together into new kinds of object. The collections are structs, arrays, and unions. (Classes are a kind of struct.) Although the compiler can lay out different objects however it likes relative to one another, the abstract machine defines how subobjects are laid out inside a collection. This is important, because it lets C/C++ programs exchange messages with hardware and even with programs written in other languages. (Messages can be exchanged only when both parties agree on layout. C/C++’s rules let a C program match any known layout.)
The sizes and alignments for user-defined types—arrays, structs, and unions—are derived from a couple simple rules or principles. Here they are. The first rule applies to all types.
1. First-member rule. The address of the first member of a collection equals the address of the collection.
Thus, the address of an array is the same as the address of its first element. The address of a struct is the same as the address of the first member of the struct.
The next three rules depend on the class of collection. Every C abstract machine enforces these rules.
2. Struct rule. The second and subsequent members of a struct are laid out in order, with no overlap, subject to alignment constraints.
3. Array rule. The address of the ith element of an array of type T is ADDRESSOF(array) + i * sizeof(T).
4. Union rule. All members of a union share the address of the union.
In C, every struct follows the struct rule, but in C++, only simple structs follow the rule. Complicated structs, such as structs with some public and some private members, or structs with virtual functions, can be laid out however the compiler chooses. The typical situation is that C++ compilers for a machine architecture (e.g., “Linux x86-64”) will all agree on a layout procedure for complicated structs. This allows code compiled by different compilers to interoperate.
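These rules can be spot-checked without a C compiler: Python's ctypes module lays out Structure, Union, and array types using the platform C ABI. A sketch — the concrete offsets and sizes assume x86-64 Linux-style alignment:

```python
import ctypes

# Struct rule + first-member rule: a char at offset 0, then 3 bytes
# of padding so the int lands on a 4-byte boundary.
class CharInt(ctypes.Structure):
    _fields_ = [("c", ctypes.c_char), ("i", ctypes.c_int)]

assert CharInt.c.offset == 0          # first-member rule
assert CharInt.i.offset == 4          # padded up to alignof(int)
assert ctypes.sizeof(CharInt) == 8

# Array rule: an array is sizeof(T) * length contiguous bytes, so
# element i sits at ADDRESSOF(array) + i * sizeof(T).
assert ctypes.sizeof(ctypes.c_int * 4) == 4 * ctypes.sizeof(ctypes.c_int)

# Union rule: every member sits at the union's own address.
class CharIntUnion(ctypes.Union):
    _fields_ = [("c", ctypes.c_char), ("i", ctypes.c_int)]

assert CharIntUnion.c.offset == 0
assert CharIntUnion.i.offset == 0
assert ctypes.sizeof(CharIntUnion) == 4   # size of the largest member
```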
## Alignment
Repeated executions of programs like ./mexplore show that the C compiler and library restrict the addresses at which some kinds of data appear. In particular, the address of every int value is always a multiple of 4, whether it's located on the stack (automatic lifetime), the data segment (static lifetime), or the heap (dynamic lifetime).
A bunch of observations will show you these rules:
| Type | Size | Address restriction |
| --- | --- | --- |
| char (signed char, unsigned char) | 1 | No restriction |
| short (unsigned short) | 2 | Multiple of 2 |
| int (unsigned int) | 4 | Multiple of 4 |
| long (unsigned long) | 8 | Multiple of 8 |
| float | 4 | Multiple of 4 |
| double | 8 | Multiple of 8 |
| T* | 8 | Multiple of 8 |
These are the alignment restrictions for an x86-64 Linux machine.
These restrictions hold for most x86-64 operating systems, except that on Windows, the long type has size and alignment 4. (The long long type has size and alignment 8 on all x86-64 operating systems.)
Just like every type has a size, every type has an alignment. The alignment of a type T is a number a≥1 such that the address of every object of type T must be a multiple of a. Every object with type T has size sizeof(T)—it occupies sizeof(T) contiguous bytes of memory; and has alignment alignof(T)—the address of its first byte is a multiple of alignof(T). You can also say sizeof(x) and alignof(x) where x is the name of an object or another expression.
Alignment restrictions can make hardware simpler, and therefore faster. For instance, consider cache blocks. CPUs access memory through a transparent hardware cache. Data moves from primary memory, or RAM (which is large—a couple gigabytes on most laptops—and uses cheaper, slower technology) to the cache in units of 64 or 128 bytes. Those units are always aligned: on a machine with 128-byte cache blocks, the bytes with memory addresses [127, 128, 129, 130] live in two different cache blocks (with addresses [0, 127] and [128, 255]). But the 4 bytes with addresses [4n, 4n+1, 4n+2, 4n+3] always live in the same cache block. (This is true for any small power of two: the 8 bytes with addresses [8n,…,8n+7] always live in the same cache block.) In general, it’s often possible to make a system faster by leveraging restrictions—and here, the CPU hardware can load data faster when it can assume that the data lives in exactly one cache line.
The compiler, library, and operating system all work together to enforce alignment restrictions.
On x86-64 Linux, alignof(T) == sizeof(T) for all fundamental types (the types built in to C: integer types, floating point types, and pointers). But this isn’t always true; on x86-32 Linux, double has size 8 but alignment 4.
It’s possible to construct user-defined types of arbitrary size, but the largest alignment required by a machine is fixed for that machine. C++ lets you find the maximum alignment for a machine with alignof(std::max_align_t); on x86-64, this is 16, the alignment of the type long double (and the alignment of some less-commonly-used SIMD “vector” types).
## Alignment rules
Two more rules and we can reason about how collection sizes and alignments interact.
Every C++ abstract machine enforces
5. Malloc rule. Any non-null pointer returned by malloc has alignment appropriate for any type. In other words, assuming the allocated size is adequate, the pointer returned from malloc can safely be cast to T* for any T.
Oddly, this holds even for small allocations. The C++ standard (the abstract machine) requires that malloc(1) return a pointer whose alignment is appropriate for any type, including types that don’t fit.
The last rule is not required by the abstract machine, but it’s how sizes and alignments on our machines work:
6. Minimum rule. The sizes and alignments of user-defined types, and the offsets of struct members, are minimized within the constraints of the other rules.
The minimum rule, and the sizes and alignments of basic types, are defined by the x86-64 Linux “ABI”—its Application Binary Interface. This specification standardizes how x86-64 Linux C compilers should behave, and lets users mix and match compilers without problems.
## Consequences of the size and alignment rules
From these rules we can derive some interesting consequences.
First, the size of every type is a multiple of its alignment.
To see why, consider an array with two elements. By the array rule, these elements have addresses a and a+sizeof(T), where a is the address of the array. Both of these addresses contain a T, so they are both a multiple of alignof(T). That means sizeof(T) is also a multiple of alignof(T).
We can also characterize the sizes and alignments of different collections.
• The size of an array of N elements of type T is N * sizeof(T): the sum of the sizes of its elements. The alignment of the array is alignof(T).
• The size of a union is the maximum of the sizes of its components (because the union can only hold one component at a time). Its alignment is also the maximum of the alignments of its components.
• The size of a struct is at least as big as the sum of the sizes of its components. Its alignment is the maximum of the alignments of its components.
Thus, the alignment of every collection equals the maximum of the alignments of its components.
It’s also true that the alignment equals the least common multiple of the alignments of its components. You might have thought lcm was a better answer, but the max is the same as the lcm for every architecture that matters, because all fundamental alignments are powers of two.
The size of a struct might be larger than the sum of the sizes of its components, because of alignment constraints. Since the compiler must lay out struct components in order, and it must obey the components’ alignment constraints, and it must ensure different components occupy disjoint addresses, it must sometimes introduce extra space in structs. Here’s an example: the struct will have 3 bytes of padding after char c, to ensure that int i2 has the correct alignment.
struct twelve_bytes {
int i1;
char c;
int i2;
};
Thanks to padding, reordering struct components can sometimes reduce the total size of a struct.
The rules also imply that the offset of any struct member—which is the difference between the address of the member and the address of the containing struct—is a multiple of the member’s alignment.
To see why, consider a struct s with member m at offset o. The malloc rule says that any pointer returned from malloc is correctly aligned for s. Every pointer returned from malloc is maximally aligned, equalling 16*x for some integer x. The struct rule says that the address of m, which is 16*x + o, is correctly aligned. That means that 16*x + o = alignof(m)*y for some integer y. Divide both sides by a = alignof(m) and you see that 16*x/a + o/a = y. But 16/a is an integer—the maximum alignment is a multiple of every alignment—so 16*x/a is an integer. We can conclude that o/a must also be an integer!
Finally, we can also derive the necessity for padding at the end of structs. (How?)
## Pointer representation
We distinguish pointers, which are concepts in the C abstract machine, from addresses, which are hardware concepts. A pointer combines an address and a type.
The memory representation of a pointer is the same as the memory representation of its address, so a pointer with address 0x1347810A is stored the same way as the integer with the same value.
The C abstract machine defines an unsigned integer type uintptr_t that can hold any address. (You have to #include <inttypes.h> or <cinttypes> to get the definition.) On most machines, including x86-64, uintptr_t is the same as unsigned long. Casts between pointer types and uintptr_t are information preserving, so this assertion will never fail:
void* ptr = malloc(...);
uintptr_t addr = (uintptr_t) ptr;
void* ptr2 = (void*) addr;
assert(ptr == ptr2);
Since it is a 64-bit architecture, the size of an x86-64 address is 64 bits (8 bytes). That’s also the size of x86-64 pointers.
## Compiler hijinks
In C++, most dynamic memory allocation uses special language operators, new and delete, rather than library functions.
Though this seems more complex than the library-function style, it has advantages. A C compiler cannot tell what malloc and free do (especially when they are redefined to debugging versions, as in the problem set), so a C compiler cannot necessarily optimize calls to malloc and free away. But the C++ compiler may assume that all uses of new and delete follow the rules laid down by the abstract machine. That means that if the compiler can prove that an allocation is unnecessary or unused, it is free to remove that allocation!
For example, we compiled this program in the problem set environment (based on test003.cc):
int main() {
char* ptrs[10];
for (int i = 0; i < 10; ++i) {
ptrs[i] = new char[i + 1];
}
for (int i = 0; i < 5; ++i) {
delete[] ptrs[i];
}
m61_printstatistics();
}
The optimizing C++ compiler removes all calls to new and delete, leaving only the call to m61_printstatistics()! (For instance, try objdump -d testXXX to look at the compiled x86-64 instructions.) This is valid because the compiler is explicitly allowed to eliminate unused allocations, and here, since the ptrs variable is local and doesn’t escape main, all allocations are unused. The C compiler cannot perform this useful transformation. (But the C compiler can do other cool things, such as unroll the loops.)
## Hexagon Apothem Calculator
A hexagon is a polygon with six sides and six angles; in a regular hexagon, all six sides are equal and each interior angle is 120°. The apothem is the segment drawn from the center of a regular polygon to the midpoint of one of its sides. Irregular polygons have no center, so they have no apothem. You usually need to know the apothem when calculating the area of a regular polygon: the area equals half the apothem times the perimeter. A trick to remember this formula is to understand where it comes from: the polygon splits into n triangles, each with the side as its base and the apothem as its height.

For a regular polygon with n sides of length s, the apothem is a = s / (2 tan(180°/n)). It can also be written in terms of the circumradius R as a = R cos(180°/n). For a regular hexagon, the circumradius equals the side length, because joining the center to the vertices divides the hexagon into six equilateral triangles. Dividing one of those triangles in half creates a right triangle whose legs are s/2 and the apothem, so the Pythagorean theorem gives a = (s/2)√3.

For example, for a hexagon with a side length of 8 cm: the tangent of 30° is about 0.577, so a = 8 / (2 × 0.577) ≈ 6.93 cm. As another example, a 5-sided regular polygon with an apothem of 4.817 units and a perimeter of 35 units has area (1/2) × 4.817 × 35 ≈ 84.3 square units.
# Euler Problem 15: Pathways Through a Lattice
Euler Problem 15 analyses taxicab geometry. This system replaces the usual distance function with the sum of the absolute differences of their Cartesian coordinates. In other words, the distance a taxi would travel in a grid plan. The fifteenth Euler problem asks to determine the number of possible routes a taxi can take in a city of a certain size.
## Euler Problem 15 Definition
Starting in the top left corner of a 2×2 grid, and only being able to move to the right and down, there are exactly 6 routes to the bottom right corner. How many possible routes are there through a 20×20 grid?
## Solution
The defined lattice is one point larger than the number of squares. Along the edges of the matrix, only one pathway is possible: straight to the right or straight down. We can calculate the number of possible pathways for each remaining point by adding the number of pathways to the right of and below that point.
$p_{i,j} = p_{i,j+1} + p_{i+1,j}$
For the two by two lattice the solution space is:
6 3 1
3 2 1
1 1 0
The total number of pathways from the upper left corner to the lower right corner is thus 6. This logic can be applied to a grid of any size using the following code. (For an n×n grid the count also has a closed form, the central binomial coefficient C(2n, n), since every route is an arrangement of n moves right and n moves down.)
The code defines the lattice and initiates the boundary conditions: the bottom row and the right column are filled with 1, as there is only one pathway from each of these points. The code then calculates the remaining pathways by working backwards through the matrix. The final solution is the number in the first cell.
```
# Define lattice (one point larger than the number of squares)
nLattice <- 20
lattice <- matrix(ncol = nLattice + 1, nrow = nLattice + 1)

# Boundary conditions: only one pathway from the bottom row and right column
lattice[nLattice + 1, -(nLattice + 1)] <- 1
lattice[-(nLattice + 1), nLattice + 1] <- 1

# Calculate pathways by working backwards through the matrix
for (i in nLattice:1) {
    for (j in nLattice:1) {
        lattice[i, j] <- lattice[i + 1, j] + lattice[i, j + 1]
    }
}
answer <- lattice[1, 1]
```
# Stochastic Weak Passivity Based Stabilization of Stochastic Systems with Nonvanishing Noise

This work was supported by the National Natural Science Foundation of China under Grant No. 11271326 and 61611130124, and the Research Fund for the Doctoral Program of Higher Education of China under Grant No. 20130101110040.
Zhou Fang and Chuanhou Gao, School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, P. R. China (Zhou_Fang@zju.edu.cn; gaochou@zju.edu.cn, correspondence).
###### Abstract
For stochastic systems with nonvanishing noise, i.e., systems whose noise port does not vanish at the desired state, it is impossible to achieve global stability of the desired state in the sense of probability. This obstruction also leads to the loss of stochastic passivity at the desired state if a radially unbounded Lyapunov function is expected as the storage function. To characterize a certain (globally) stable behavior for such a class of systems, this paper proposes stochastic asymptotic weak stability, which requires the transition measure of the state to be convergent and ergodic. By defining stochastic weak passivity, which admits stochastic passivity only outside a ball centered at the desired state rather than in the whole state space, we develop stochastic weak passivity theorems ensuring that stochastic systems with nonvanishing noise can be globally/locally stabilized in the weak sense through negative feedback laws. Applications are shown for stochastic linear systems and a nonlinear process system, and some simulations are performed on the latter.
Key words. Stochastic differential systems, transition measure, ergodicity, stochastic weak passivity, asymptotic weak stability, stabilization
AMS subject classifications. 60H10, 62E20, 70K20, 93C10, 93D15, 93E15
## 1 Introduction
Stochastic phenomena emerge universally in physical systems due to noise, disturbance and uncertainty. Their unpredictability makes it a great challenge to stabilize a stochastic system. During the past decades, the stabilization of nonlinear stochastic systems has constituted one of the central problems in stochastic process control, both theoretically and practically. A great number of methods have emerged as the times require, among which stochastic passivity based control is a popular one. Rooted in passivity theory [17, 2] and the stochastic version of the Lyapunov theorem [6], stochastic passivity theory [4] was developed for the stabilization and control of nonlinear stochastic systems. By means of state feedback laws, asymptotic stabilization in probability can be achieved for a stochastic affine system provided some rank conditions are fulfilled and the unforced stochastic affine system is Lyapunov stable in probability [4]. Following this study, Lin et al. [8] explored the relationship between a stochastic passive system and the corresponding zero-output system, and further established global stabilization results. Paralleling the theoretical development of stochastic passivity, Satoh et al. [12] applied this methodology to port-Hamiltonian systems, making solutions for the stabilization of a large class of nonlinear stochastic systems available. There are also reports applying stochastic passivity to the filtering problem [19] and to controlling stochastic mechanical systems [10].
Despite this large success, stochastic passivity based control seems to work only when the noise vanishes at the stationary solution (very often the origin), if a radially unbounded Lyapunov function is expected as the storage function. This means that if a stochastic system has a nonzero noise port at the stationary solution, or a persistent noise port, the method may fail. One aim of this paper is to derive necessary conditions for a stochastic system to be stochastically passive, and further to give sufficient conditions under which a stochastic system loses stochastic passivity. Equivalently, we prove that no radially unbounded Lyapunov function renders the stochastic system globally asymptotically stable in probability when the noise does not vanish at the desired state. The ubiquity of such systems in the mechanical [13, 14] and biological [3] fields motivates us to define a novel kind of stability, termed stochastic asymptotic weak stability, to characterize a certain (globally) stable behavior for them. Stochastic asymptotic weak stability requires the system state to be convergent in distribution and ergodic. The former means the state evolves within a small region around the desired state with large probability, while the latter ensures that the state evolution almost always takes place within this region.
On the face of it, stochastic asymptotic weak stability is somewhat similar to the concept of stochastic bounded stability proposed in [13, 14], in that a stochastic system with persistent noise is considered for the same purpose. That concept also means the state will evolve within a bounded region around a desired state with a large probability that depends on the region's radius; in particular, as the radius goes to infinity, the probability tends to one. However, there is an evident difference between these two kinds of stability. Stochastic bounded stability cannot characterize the ergodicity of the state: once the trajectory runs out of the bounded region, which happens with small probability, the subsequent evolution takes place in a larger bounded region, reaching a "new" stochastic bounded stability with a larger probability. In addition, stochastic asymptotic weak stability differs from stochastic noise-to-state [3, 1] and input-to-state stability [9], two further notions that characterize the stable behavior of stochastic systems with nonvanishing noise. Those notions describe the convergence of the expectation of the state, for which the transition measure is controlled by a particular function. Comparatively speaking, they say nothing about the ergodicity of the state, nor do they imply that the state must evolve within a small region around the desired state. Therefore, stochastic asymptotic weak stability provides more detail in characterizing the "stable" evolution of the state.
In the concept of stochastic asymptotic weak stability, convergence in distribution describes the evolution trend of the probability distribution of the stochastic system under consideration. As is well known, the probability density function of a stochastic system satisfies the Fokker-Planck equation [6]. Hence, a usual way to establish convergence in distribution starts from analyzing the properties of the solutions of the Fokker-Planck equation, including existence, uniqueness and convergence. Based on this equation, Zhu et al. [21, 20] studied the exact stationary solution of the distribution density function for stochastic Hamiltonian systems. Liberzon et al. [7] developed a feedback controller to stabilize in distribution a class of nonlinear stochastic systems for which the steady-state distribution density function can be solved from the Fokker-Planck equation. In addition, probabilistic analysis is another route to weak stability. Zakai [18] presented a Lyapunov criterion for the existence of a stationary probability distribution and the convergence of the transition probability measure for stochastic systems with globally Lipschitzian coefficients. Stettner [15] pointed out that strongly Feller and irreducible processes are stable in distribution. Khasminskii [5] constructed a Markov chain to analyze the convergence of the probability distribution, and further showed the Markov process to be convergent in distribution [6] if it “mixes sufficiently well” in an open domain and the recurrence time is finite. The conditions that render the recurrence time finite strongly inspire the stabilizing methods, in the weak sense, that we develop for stochastic systems with nonvanishing noise.
In this paper, we will show that the recurrence property of a stochastic system is highly relevant to its stochastic passivity behavior. Based on this observation, we define stochastic passivity not in the whole state space, but only outside a ball centered at the desired state, which is labeled stochastic weak passivity in what follows. Within the framework of stochastic weak passivity, we do not need to care whether the noise port of a stochastic system vanishes at the desired state or not. Therefore, it is suited to handling the stabilization of stochastic differential systems with nonvanishing noise. Further, we link stochastic weak passivity with stochastic asymptotic weak stability, and develop stabilizing controllers using stochastic weak passivity to achieve asymptotic weak stability of stochastic systems. Sufficient conditions for global and local asymptotic stabilization in the weak sense are provided by means of negative feedback laws.
The rest of the paper is organized as follows. Section 2 presents some preliminaries on stochastic passivity. In Section 3, the loss of stochastic passivity is analyzed and the problem of interest is formulated. In Section 4, we propose the framework of stochastic weak passivity theory and make a link between stochastic weak passivity and asymptotic weak stability. Some basic concepts and the main results (expressed as two stochastic weak passivity theorems and one refined version) for stabilizing stochastic systems in the weak sense are given in this section. Section 5 illustrates the efficiency of the stochastic weak passivity theory through two application examples. Finally, Section 6 concludes the paper and outlines future research.
## 2 Preliminaries of stochastic passivity
In this section, we give a bird's-eye view of the mathematical systems theory related to stochastic differential systems.
We begin with a stochastic differential equation written in the sense of Itô
dx = f(x)dt + h(x)dω    (2.1)
where x ∈ ℝⁿ is the state, f : ℝⁿ → ℝⁿ and h : ℝⁿ → ℝ^{n×r} are locally Lipschitz continuous functions, and ω is an r-dimensional standard Wiener process defined on a complete probability space. Assume x(t) to be the stochastic process solution and x∗ to be the equilibrium solution (if it exists) of Eq. (2.1); then we have
###### Definition 2.1 (Transition Measure [6])
The transition measure of x(t), denoted by P(t, x0, A), is a function from [0, ∞) × ℝⁿ × B to [0, 1] such that
P(t, x0, A) = P( x(t) ∈ A | x(0) = x0 )    (2.2)
where B is the σ-algebra of Borel sets in ℝⁿ, A ∈ B is a Borel subset, and P denotes the probability function.
###### Definition 2.2 (Invariant Measure [6])
Let π be a measure defined on the Borel space (ℝⁿ, B); then π is a probability invariant measure for the stochastic system of Eq. (2.1) if π(ℝⁿ) = 1 and
π(A) = ∫_{ℝⁿ} P(t, x, A) π(dx),  ∀ t > 0 and ∀ A ∈ B    (2.3)
###### Definition 2.3 (Stable in Probability [6])
The equilibrium solution x∗ of Eq. (2.1) is
stable in probability if
lim_{x(0)→x∗} P( sup_{t≥0} ∥x(t) − x∗∥2 < ϵ ) = 1,  ∀ ϵ > 0;
locally asymptotically stable in probability if
lim_{x(0)→x∗} P( lim_{t→∞} ∥x(t) − x∗∥2 = 0 ) = 1;
globally asymptotically stable in probability if
P( lim_{t→∞} ∥x(t) − x∗∥2 = 0 ) = 1,  ∀ x(0) ∈ ℝⁿ.
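As a concrete illustration of the Itô dynamics (2.1) and the stability notions above, the equation can be simulated directly. The following is a minimal Euler–Maruyama sketch in Python; the function names and the example coefficients f(x) = −x, h(x) = 0.5 are my own choices, picked so that the noise does not vanish at the drift equilibrium — exactly the situation this paper targets:

```python
import math
import random

def euler_maruyama(f, h, x0, dt=1e-3, n_steps=10_000, rng=None):
    """Simulate a scalar Ito SDE dx = f(x) dt + h(x) dω by Euler–Maruyama.

    Returns the list of states x(0), x(dt), ..., x(n_steps * dt).
    """
    rng = rng or random.Random(0)
    xs = [x0]
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x = x + f(x) * dt + h(x) * dw
        xs.append(x)
    return xs

# Example: dx = -x dt + 0.5 dω. The noise does NOT vanish at the drift
# equilibrium x* = 0, so the state keeps fluctuating around 0 forever
# instead of converging to it.
path = euler_maruyama(f=lambda x: -x, h=lambda x: 0.5, x0=2.0)
```

Under these assumptions the path drifts toward the origin but never settles there, which is the behavior that motivates the weaker stability notion developed later.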
In order to analyze the stability of stochastic systems, the stochastic versions of the second Lyapunov theorem and of the passivity theorem were proposed in succession.
###### Theorem 1 (Stochastic Lyapunov Theorem [6])
If there exists a positive definite function V(x) with respect to x∗ such that
L[V(x)] ≤ 0,  ∀ x ∈ D    (2.4)
then the equilibrium solution x∗ of Eq. (2.1) is stable in probability, where D ⊂ ℝⁿ is a bounded open neighborhood of x∗ and L is the infinitesimal generator of the solution of Eq. (2.1), calculated through
L[·] = (∂(·)/∂x) f + (1/2) tr{ (∂²(·)/∂x²) h h⊤ }    (2.5)
If the equality in Eq. (2.4) holds if and only if x = x∗, then x∗ is locally asymptotically stable in probability.
Further, if D = ℝⁿ, lim_{∥x∥2→∞} V(x) = ∞ (often said that the Lyapunov function is radially unbounded) and the equality in Eq. (2.4) holds only at x = x∗, then x∗ is globally asymptotically stable in probability.
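The generator formula (2.5) can be sanity-checked numerically. In the scalar case it reduces to L[V](x) = V′(x)f(x) + (1/2)h(x)²V″(x), and this should agree with the short-time slope (E_x[V(x(t))] − V(x))/t estimated by Monte Carlo over one small step. A hedged sketch (all names and the test system are my own):

```python
import math
import random

def generator_1d(dV, d2V, f, h, x):
    """L[V](x) = V'(x) f(x) + 0.5 h(x)^2 V''(x): the scalar case of Eq. (2.5)."""
    return dV(x) * f(x) + 0.5 * h(x) ** 2 * d2V(x)

def mc_generator_1d(V, f, h, x, t=1e-3, n=200_000, seed=1):
    """Monte Carlo estimate of (E_x[V(x(t))] - V(x)) / t over one Euler step."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(t))
        acc += V(x + f(x) * t + h(x) * dw)
    return (acc / n - V(x)) / t

# Test system (my choice): V(x) = x^2, dx = -x dt + 0.5 dω, so
# L[V](x) = -2 x^2 + 0.25 and in particular L[V](1) = -1.75.
f = lambda x: -x
h = lambda x: 0.5
exact = generator_1d(lambda x: 2 * x, lambda x: 2.0, f, h, 1.0)
approx = mc_generator_1d(lambda x: x * x, f, h, 1.0)
```

The two values agree up to the O(t) discretization bias and Monte Carlo noise, which makes the trace term in (2.5) — the contribution of the noise to the drift of V — easy to see experimentally.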
The stochastic passivity theorem is not stated directly in the literature, but it may be obtained immediately from the definition of stochastic passivity.
###### Definition 2.4 (Stochastic Passivity [4])
An input-output stochastic differential system in the sense of Itô
Σ_S :  dx = f(x, u)dt + h(x, u)dω,  y = s(x, u)    (2.6)
is said to be stochastically passive if there exists a positive semi-definite function S(x) such that
L[S(x)] ≤ u⊤y,  ∀ x ∈ ℝⁿ    (2.7)
where x ∈ ℝⁿ is the state, u the input, y the output, and the drift term f, the diffusion term h and the output map s all satisfy the condition of local Lipschitz continuity; ω and L share the same meaning as in Eq. (2.1). The nonnegative real function S(x) is called the storage function, a state at which S(x) = 0 is a stochastic passive state, and the inner product u⊤y is called the supply rate.
###### Result 1 (Stochastic Passivity Theorem)
The negative feedback connection of two stochastic passive systems is stochastically passive.
Proof. Let Σ₁ and Σ₂ (indicated by subscripts) represent the two stochastically passive systems, respectively; then we have
L[S1(x1)] ≤ u1⊤y1 and L[S2(x2)] ≤ u2⊤y2
Define the storage function of their negative feedback connection by
S(x) = S1(x1) + S2(x2)
and note the fact that
x = (x1⊤, x2⊤)⊤,  y = y1 = u2,  u = u1 + y2
then we get
L[S(x)] = L[S1(x1)] + L[S2(x2)] ≤ u1⊤y1 + u2⊤y2 = (u − y2)⊤y + y2⊤y = u⊤y
Therefore, the result is true.
###### Result 2 (Stochastic Passivity and Stability in Probability)
A stochastic passive system with a positive definite storage function is stable in probability if a stochastic passive controller with a positive definite storage function is connected in negative feedback.
Proof. Based on Result 1, the whole negative feedback connection is stochastically passive. As long as the input of the stochastic passive system (labeled by the subscript “p”) is manipulated according to the negative feedback law u_p = −y_c, then with zero external input L[S(x)] ≤ u⊤y = 0, which means L[S(x)] ≤ 0. Here, the stochastic passive controller (labeled by the subscript “c”) is the operator defined by u_c = y_p. The stability in probability of the closed loop is immediate from Theorem 1, and so is that of Σ_p.
###### Remark 1
Deterministic passive systems are a special case of stochastic passive systems. Therefore, the frequently used passive controllers [16], such as the PID controller, model predictive controller, etc., can all serve to stabilize stochastic passive systems in probability.
## 3 Loss of stochastic passivity and Problem setting
This section elaborates on how stochastic passivity vanishes, either in some stochastic systems or when certain control problems are addressed, and then formulates the problem of interest.
### 3.1 Loss of stochastic passivity
As can be seen from Definition 2.4, a key point in capturing stochastic passivity lies in finding a storage function. In the following we derive a necessary condition for stochastic passivity, and then obtain a sufficient condition for the loss of stochastic passivity. For this purpose, we return to the stochastic differential equation (2.1).
###### Theorem 2
If a stochastic differential equation given by Eq. (2.1) has a global solution, then it cannot be stable in probability at any state that results in a nonzero diffusion term.
Proof. Let the set of all states resulting in a nonzero diffusion term be given by
H≠0 := { x‡ ∈ ℝⁿ | h(x‡) ≠ 0_{n×r} }
For any x‡ ∈ H≠0 and γ > 0, it is expected that
lim_{x(0)→x‡} P( sup_{t≥0} ∥x(t) − x‡∥2 < γ ) = 1
must be not true. Towards this purpose, we assume x‡ = 0_n for simplicity but without loss of generality (which means h(0_n) ≠ 0_{n×r}), and further construct a real-valued function in the form of
~U(x) = { x²,  0 ≤ |x| ≤ 1/2;
          −(2/3)x³ + 2x² − (1/2)x + 1/12,  1/2 ≤ |x| ≤ 3/2;
          (1/3)x³ − (5/2)x² + (25/4)x − 79/24,  3/2 ≤ |x| ≤ 5/2;
          23/12,  5/2 ≤ |x|. }
Based on this function, a positive definite, twice continuously differentiable and bounded real function mapping ℝⁿ to ℝ≥0 is defined by
U(x) = (12/23) × ~U(∥x∥2)
Clearly, U(x) ≤ 1 and, moreover, U(x) = (12/23)∥x∥2² in the (1/2)-neighborhood of 0_n.
In order to finish the proof, we apply the infinitesimal generator to U, and are only concerned with the result at 0_n. From Eq. (2.5), we have
L[U](0_n) = (12/23) tr{ h(0_n) h⊤(0_n) } > 0.
On the other hand, from the definition of L [6] we get
L[U](0_n) = lim_{t→0} ( E_{0_n}[U(x(t))] − U(0_n) ) / t
where the subscript 0_n in E_{0_n} indicates the initial condition x(0) = 0_n. Since the stochastic differential equation (2.1) has a global solution, there exist a time τ > 0 and a constant c > 0 so that E_{0_n}[U(x(τ))] ≥ cτ. Also, since
E_{0_n}[U(x(τ))] = E_{0_n}[ U(x(τ)) | U(x(τ)) < ϵ/2 ] P( U(x(τ)) < ϵ/2 | x(0) = 0_n )
                 + E_{0_n}[ U(x(τ)) | U(x(τ)) ≥ ϵ/2 ] P( U(x(τ)) ≥ ϵ/2 | x(0) = 0_n )
and
E_{0_n}[ U(x(τ)) | U(x(τ)) < ϵ/2 ] < ϵ/2,  P( U(x(τ)) < ϵ/2 | x(0) = 0_n ) ≤ 1
together with the fact that U(x) ≤ 1, where ϵ is any positive number, we have
P( U(x(τ)) ≥ ϵ/2 | x(0) = 0_n ) ≥ cτ − ϵ/2
We set ϵ to be sufficiently small so that
P( ∥x(τ)∥2 ≥ γ | x(0) = 0_n ) ≥ cτ − ϵ/2 > 0
where γ > 0 satisfies U(x) < ϵ/2 for all ∥x∥2 < γ.
From the definition, the leftmost term in the above inequality can be calculated by
P( ∥x(τ)∥2 ≥ γ | x(0) = 0_n ) = ∫_{∥y∥2=δ} P( x(τδ) = dy | x(0) = 0_n ) P( ∥x(τ)∥2 ≥ γ | x(τδ) = y )
where
τδ = τ ∧ inf{ t | ∥x(t)∥2 = δ < γ }
is a stopping time. Then there exists at least one point, denoted by yδ, on the surface of the ball ∥y∥2 = δ such that
P( ∥x(τ)∥2 ≥ γ | x(τδ) = yδ ) ≥ cτ − ϵ/2.
Note that Eq. (2.1) is autonomous; therefore
P( sup_{t∈[0,∞)} ∥x(t)∥2 ≥ γ | x(0) = yδ ) ≥ P( ∥x(τ)∥2 ≥ γ | x(τδ) = yδ ) ≥ cτ − ϵ/2
Namely, for any δ ∈ (0, γ) there always exists a yδ making the above inequality true. Clearly, this suggests that Eq. (2.1) must be not stable in probability at 0_n.
It is straightforward to state the contrapositive of Theorem 2 as a corollary.
###### Corollary 1
For a stochastic differential equation of the form (2.1) with a global solution, if it is stable in probability at a desired state x_d (which may not be the equilibrium x∗), then x_d must belong to H=0, which is defined by
H=0 := { x† ∈ ℝⁿ | h(x†) = 0_{n×r} }    (3.1)
Note that the above result depends on the condition that the stochastic differential equation (2.1) has a global solution. However, under the condition of local Lipschitz continuity, Eq. (2.1) has a unique solution only up to the explosion time. Based on this, we will show that there is no explosion for some stochastic passive systems, so they must have global solutions. To this end, attention is turned to the non-explosion condition for a stochastic differential equation proposed by Narita [11].
###### Lemma 1 (Non-explosion Condition [11])
Given a stochastic differential equation represented by Eq. (2.1), if for each T > 0 there exist a positive number c_T and a scalar function U_T(t, x) ≥ 0 such that
L[U_T(t, x)] ≤ c_T    (3.2)
holds for all 0 ≤ t ≤ T and x ∈ ℝⁿ, and moreover,
lim_{∥x∥2→∞} inf_{0≤t≤T} U_T(t, x) = ∞    (3.3)
then the solutions of Eq. (2.1) are of non-explosion, i.e., the explosion time beginning at any t0 and x0, denoted by te(t0, x0), satisfies
P( te(t0, x0) = ∞ ) = 1
In the following, applying this lemma to a stochastic passive system yields
###### Proposition 1
For a stochastic differential system Σ_S governed by Eq. (2.6), if there exists a radially unbounded Lyapunov function so that Σ_S is stochastically passive, then the unforced version of Eq. (2.6) has a global solution.
Proof. Assume V(x) to be the radially unbounded Lyapunov function that renders Σ_S stochastically passive; then by designating the zero controller to Σ_S, i.e., u = 0, we have L[V(x)] ≤ 0. It is natural to observe that V satisfies the conditions (3.2) and (3.3). Note that the state evolution of this unforced version of Σ_S is just the same as Eq. (2.1). Hence, the solutions of Σ_S are of non-explosion based on Lemma 1. Namely, the unforced version of Eq. (2.6) has a global solution.
From Proposition 1, one knows that such stochastic passive systems must have a global solution when unforced. Combining this result with Corollary 1, we get the necessary condition for Σ_S to be stochastically passive, expressed as follows.
###### Theorem 3 (Necessary Condition for Stochastic Passivity)
If there exists a radially unbounded Lyapunov function that renders a stochastic differential system described by Eq. (2.6) stochastically passive, then the unforced diffusion term must vanish at the stochastic passive state.
Proof. From Theorem 1, the stochastic Lyapunov theorem, Σ_S is stable in probability at the stochastic passive state under the zero controller. This, together with Proposition 1 and Corollary 1, yields the result.
We further state the contrapositive of Theorem 3 to get a sufficient condition for the loss of stochastic passivity.
###### Corollary 2 (Sufficient Condition for Loss of Stochastic Passivity)
If the unforced diffusion term in a stochastic differential system of the form (2.6) does not vanish at any state, then there does not exist any radially unbounded Lyapunov function ensuring Σ_S to be stochastically passive.
###### Remark 2
Corollary 2 implies that stochastic passivity is lost when the desired state renders the diffusion term nonzero and the storage function is expected to be a radially unbounded Lyapunov function; it is then impossible to use stochastic passivity theory, and further the stochastic Lyapunov theorem, to analyze the global asymptotic stability of Σ_S at that state in the sense of probability.
### 3.2 Problem setting
The above analysis reveals that when the diffusion term does not vanish at the desired state x_d, stochastic passivity fails to capture the global asymptotic stability (in the probability sense) of a stochastic differential system at x_d, often set as the equilibrium state (if it exists) in many control problems. In fact, a nonzero diffusion term is frequently encountered in real stochastic systems, such as chemical reaction networks, tracking systems, etc. One case is that the noise is persistent, meaning the diffusion term vanishes nowhere, so that the set H=0 is empty; the other case is that some special control purposes are served, such as the desired state not belonging to H=0, even if H=0 is nonempty.
Apparently, a nonzero diffusion term in real stochastic systems greatly restricts the applications of stochastic passivity theory, a powerful tool for stabilization. Even worse, it may render the system under consideration not stable at all in probability at the desired state, as stated in Theorem 2. These two awkward situations motivate us to find a new solution for stabilizing stochastic systems with nonzero diffusion term at the desired state. On the one hand, it is impossible to stabilize some stochastic systems in probability at any state; on the other hand, the excellent performance of stochastic passivity is worth retaining. Thus, we pursue the next best approach to the current control problem: seeking convergence in distribution and ergodicity instead of convergence in probability, and requiring the stochastic passivity behavior only outside a certain neighborhood of the desired state instead of on the whole state domain.
## 4 Stochastic weak passivity theory
The objective in this section is to present the theory of stochastic weak passivity with which some stochastic systems with nonzero diffusion term can be analyzed concerning the convergence of the transition measure and ergodicity. This theoretical framework includes some basic concepts related to stochastic weak passivity, properties of invariant measure, and results for stabilization which are parallel to those appearing in the stochastic passivity theory.
### 4.1 Basic concepts
We first give the definitions of convergence in distribution and of ergodicity.
###### Definition 4.1 (Convergence in Distribution and Ergodicity)
Assume a stochastic differential equation described by Eq. (2.1) to have an invariant measure π. If there exists a subset of ℝⁿ, denoted by ℝⁿ_π, such that for any Borel subset A with zero π-measure boundary the equation
lim_{t→∞} P(t, x(0), A) = π(A),  ∀ x(0) ∈ ℝⁿ_π    (4.1)
is true, then the stochastic process x(t) is said to be locally convergent in distribution. If ℝⁿ_π = ℝⁿ, then the convergence in distribution is global.
If for any Borel subset B the state satisfies
lim_{T→∞} (1/T) ∫₀ᵀ 1{x(t) ∈ B} dt = π(B),  a.s. ∀ x(0) ∈ ℝⁿ_π    (4.2)
where “a.s.” represents “almost surely” and
1{x(t) ∈ B} = { 1 if x(t) ∈ B; 0 if x(t) ∉ B }
then the stochastic process x(t) is said to be locally ergodic. Especially, when ℝⁿ_π = ℝⁿ, the ergodicity is global.
Here, analogous to the definition of stability in probability, we also distinguish the local and global notions to emphasize the importance of the initial condition.
###### Remark 3
From the control viewpoint, the convergence of the transition measure and ergodicity both describe certain senses of stable behavior for stochastic systems. The former means that the distribution of the state converges to an invariant measure as time goes to infinity. Therefore, as long as the invariant measure is shaped to concentrate on a small region around the desired state, the state will evolve within this region with a large probability, i.e., it will not deviate too far from the desired point with a large probability. The latter implies that the state evolution almost always takes place within the mentioned region; even if the trajectory sometimes leaves the region, it will return to it immediately.
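The ergodic time average in (4.2) can also be observed numerically. For the scalar equation dx = −x dt + dω the invariant measure is the Gaussian N(0, 1/2), so the long-run fraction of time the state spends in B = (−1, 1) should approach π(B) = erf(1) ≈ 0.843. A rough sketch (Euler discretization, step sizes and names are my own choices):

```python
import math
import random

def time_average_in_band(drift, sigma, band, x0=3.0, dt=1e-2,
                         n_steps=200_000, seed=2):
    """Fraction of time a scalar Ito diffusion spends in (-band, band),
    approximating the time average on the left-hand side of (4.2)."""
    rng = random.Random(seed)
    x, inside = x0, 0
    for _ in range(n_steps):
        x += drift(x) * dt + sigma * rng.gauss(0.0, math.sqrt(dt))
        inside += abs(x) < band
    return inside / n_steps

# dx = -x dt + dω has invariant measure N(0, 1/2), so π((-1, 1)) = erf(1).
frac = time_average_in_band(lambda x: -x, 1.0, band=1.0)
pi_band = math.erf(1.0)  # ≈ 0.8427
```

Even though the trajectory starts well outside the band (x0 = 3) and keeps leaving it occasionally, the occupied fraction of time settles near π(B), which is precisely the ergodic behavior the definition captures.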
Clearly, the convergence of the transition measure and ergodicity reveal that the state of a stochastic system almost always evolves near the desired state if the invariant measure is assigned properly. We define this behavior as stochastic asymptotic weak stability.
###### Definition 4.2 (Stochastic Asymptotic Weak Stability)
A stochastic differential equation of Eq. (2.1) is of local (global) stochastic asymptotic weak stability if its distribution locally (globally) converges to an invariant measure and its process is of local (global) ergodicity.
Next, we define the stochastic weak passivity that serves to stabilize a stochastic differential system in the weak sense. Note that the loss of stochastic passivity mainly originates from the nonzero diffusion term at the desired state, which further results in some unexpected behaviors appearing around it. Thus, a natural idea is to give up stochastic passivity near the desired state and to require it only outside a neighborhood of the desired state.
###### Definition 4.3 (Stochastic Weak Passivity)
A stochastic differential system Σ_S, as described by Eq. (2.6), is said to be of stochastic weak passivity if there exists a function V(x) ≥ 0, i.e., the storage function, such that for all ∥x − x_d∥2 ≥ r and all inputs u the following inequality holds
L[V(x)] ≤ u⊤y
where the state x_d is the sole minimum point of V and r > 0 is called the stochastic passive radius.
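A concrete instance may help. For the hypothetical scalar system dx = (−x + u)dt + σdω with output y = 2x and storage V(x) = x², one computes L[V] − u⊤y = 2x(−x + u) + σ² − 2xu = −2x² + σ², which is nonpositive exactly when |x| ≥ σ/√2: the system is stochastically weakly passive with passive radius σ/√2 even though the noise never vanishes. A small numeric check of this claim (all names are mine):

```python
import math

def weak_passivity_gap(x, u, sigma):
    """L[V](x) - u*y for V(x) = x^2, dx = (-x + u) dt + sigma dω, y = 2x.

    L[V] = 2x(-x + u) + sigma^2, so the gap reduces to -2 x^2 + sigma^2.
    """
    LV = 2 * x * (-x + u) + sigma ** 2
    return LV - u * (2 * x)

sigma = 1.0
radius = sigma / math.sqrt(2)  # claimed stochastic passive radius

# Outside the ball |x| >= radius the passivity inequality L[V] <= u*y holds
# for every input; at the desired state x = 0 it fails (take u = 0).
outside_ok = all(
    weak_passivity_gap(s * r, u, sigma) <= 1e-12
    for r in [radius, 1.0, 5.0]
    for s in (-1, 1)
    for u in [-2.0, 0.0, 3.0]
)
inside_fails = weak_passivity_gap(0.0, 0.0, sigma) > 0
```

The design choice the definition encodes is visible here: the noise term σ² spoils the inequality only in a ball whose radius shrinks with the noise intensity, and the definition simply excludes that ball.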
Similar to the concept of the strict passivity, we may further define strict stochastic weak passivity.
###### Definition 4.4 (Strict Stochastic Weak Passivity)
Consider a stochastic weak passive system. Suppose that there exists a positive constant δ such that for ∥x − x_d∥2 ≥ r and all inputs u
L[V(x)] ≤ u⊤y − δ∥ξ∥2²
The system is
• strictly state stochastic weak passive if ξ = x;
• strictly input stochastic weak passive if ξ = u;
• strictly output stochastic weak passive if ξ = y.
### 4.2 Properties of invariant measure
Definition 4.2 reveals that stochastic asymptotic weak stability is concerned with the convergence in distribution of the state and its ergodic behavior. However, for a stochastic system, unlike its equilibrium, it is not obvious what can be said about its invariant measure, such as existence, uniqueness, etc. We devote this subsection to analyzing the properties of the invariant measure of the stochastic differential equation under consideration.
In fact, analyzing the properties of the invariant measure of a stochastic system is not a new research issue [5, 15]. A sufficient condition for convergence in distribution was reported as follows.
###### Theorem 4 (cf. [15])
If a right Markov process on ℝⁿ is strongly Feller, i.e., the transition semigroup transforms bounded Borel functions into continuous ones, and moreover is irreducible, i.e., for any x and any open set O there is t > 0 with P(t, x, O) > 0, then any probability measure converges to the invariant measure (if it exists). Moreover, the invariant measure (if it exists) is equivalent to each transition measure P(t, x, ·), t > 0, x ∈ ℝⁿ.
This theorem provides a route to convergence in distribution for a right Markov process. However, it is not easy to verify the conditions of “strongly Feller” and “irreducible” in practical applications. As an alternative, Khasminskii [6] proposed a more practical way to show a stochastic system convergent in distribution, which works if the Markov process “mixes sufficiently well” in an open domain and the recurrence time is finite (cf. [6]). Here, we combine this practical approach with Zakai's work [18], and give a Lyapunov criterion for stochastic asymptotic weak stability. In contrast to [18], however, the drift and diffusion terms of the stochastic system are only assumed to be locally, rather than globally, Lipschitz continuous.
###### Lemma 2 (Finite Mean Recurrent Time [18])
For a stochastic differential equation (2.1) having a global solution x(t), if there exist a function V(x) ≥ 0, a state x̄, and two positive numbers R̃ and k such that
L[V(x)] < −k,  ∀ ∥x(t) − x̄∥2 ≥ R̃    (4.3)
then for all x(0) the first passage time from x(0) to the sphere ∥x − x̄∥2 = R̃, denoted by τ, satisfies
E[τ | x(0)] ≤ V(x(0)) / k    (4.4)
Proof. At the time t ∧ τ, by Dynkin's formula we have
E[ V(x(t∧τ)) | x(0) ] = V(x(0)) + E[ ∫₀^{t∧τ} L[V(x(s))] ds ] ≤ V(x(0)) − E[ ∫₀^{t∧τ} k ds ]
Note that V ≥ 0, so E[t∧τ | x(0)] ≤ V(x(0))/k. The inequality (4.4) then holds by monotone convergence.
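The bound (4.4) can be probed by Monte Carlo. For dx = −x dt + dω and V(x) = x², L[V](x) = −2x² + 1 ≤ −1 whenever |x| ≥ 1, so with k = 1 and R̃ = 1 the lemma predicts E[τ | x(0) = 2] ≤ V(2)/k = 4 for the first passage into the ball |x| ≤ 1. A sketch (discretization, sample sizes and names are my own choices):

```python
import math
import random

def mean_first_passage(x0, radius, n_paths=2000, dt=1e-3, t_max=50.0, seed=3):
    """Monte Carlo mean of the first passage time of dx = -x dt + dω
    from x0 into the ball |x| <= radius (paths capped at t_max)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while abs(x) > radius and t < t_max:
            x += -x * dt + rng.gauss(0.0, math.sqrt(dt))
            t += dt
        total += t
    return total / n_paths

# V(x) = x^2 gives L[V](x) = -2 x^2 + 1 <= -1 =: -k whenever |x| >= 1,
# so Lemma 2 predicts E[tau | x(0) = 2] <= V(2)/k = 4.
mean_tau = mean_first_passage(x0=2.0, radius=1.0)
```

In practice the observed mean passage time is well below the Lyapunov bound, as expected: (4.4) only uses the worst-case drift −k outside the ball.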
###### Theorem 5
For a stochastic equation of the form (2.1), if there exists a nonnegative function V(x) satisfying the following conditions:
• lim_{∥x∥2→∞} V(x) = ∞;
• L[V(x)] < −k for some k > 0 whenever ∥x − x̄∥2 ≥ R̃;
• h(x)h⊤(x) is non-singular for every x with ∥x − x̄∥2 < R̃;
then there is a unique finite invariant measure π such that for any Borel subset A with zero π-measure boundary
lim_{t→∞} P(t, x(0), A) = π(A),  ∀ x(0) ∈ ℝⁿ    (4.5)
and for any Borel subset B
lim_{T→∞} (1/T) ∫₀ᵀ 1{x(t) ∈ B} dt = π(B),  a.s. ∀ x(0) ∈ ℝⁿ    (4.6)
i.e., Eq. (2.1) is globally asymptotically stable in the weak sense.
Proof. According to Lemmas 1 and 2, the first two conditions imply that Eq. (2.1) has a unique global solution and that for any initial state x̃ with ∥x̃ − x̄∥2 ≥ R̃ we have
E[τ | x̃] ≤ V(x̃)/k
Hence, for any compact subset K ⊂ ℝⁿ we get
sup_{x̃∈K} E[τ | x̃] ≤ sup_{x̃∈K} V(x̃)/k < ∞.
Further, based on the strong maximum principle for solutions of elliptic equations, the third condition implies the system (2.1) to be irreducible (cf. Lemma 4.1 in [6]), which, combined with the above inequality, suggests that an ergodic Markov chain can be induced for this stochastic process by constructing a cycle. The ergodic property of the Markov chain ensures that there exists a sole invariant measure to which the transition measure converges, and that the ergodicity of the system under consideration is true (cf. [6]). Namely, Eqs. (4.5) and (4.6) hold.
The theorem provides a Lyapunov function based method to address the issues of the existence and uniqueness of the invariant measure together with the convergence of the transition probability measure and ergodicity for a stochastic differential equation, so it can be associated with the Lyapunov stability theory conveniently.
###### Remark 4
There are two differences between the above theorem and the corresponding result in [18]. One is that the non-singularity of h h⊤ is not required in the whole state space but only in an open ball; the latter condition is easier to achieve in practice. The other is that the storage function must be radially unbounded here. In fact, this is not a necessary condition; it can be removed if the drift and diffusion terms in the stochastic equation are assumed to be globally Lipschitz continuous.
For a stochastically asymptotically weakly stable system, to ensure that the state evolves within a small region around the desired point, the invariant measure needs to be assignable, or at least partially shaped by the control, so as to concentrate on this region. In the sequel, we prove that the invariant measure can be shaped purposefully by controlling the change rate of the nonnegative function V and the radius of the associated ball.
###### Lemma 3
For a stochastic differential equation (2.1) admitting a global solution x(t), if there exist constants 0 < V1 < V2, C > 0 and k > 0 such that
L[V(x)] ≤ C whenever V(x) ≤ V2, and L[V(x)] ≤ −k whenever V(x) ≥ V1,
and
P( τ2i − τ2i−1 = ∞ ) = 0,  ∀ i ∈ Z>0    (4.7)
then for any i we have
(1) E[τ2i − τ2i−1] ≥ (V2 − V1)²/(2CV2);  (2) E[τ2i−1 − τ2i−2] ≤ (V2 − V1)/k;  and
(3) E[ liminf_{T→∞} (1/T) ∫₀ᵀ 1{V(x(t)) ≥ V2} dt ] ≤ 2CV2 / ( 2CV2 + k(V2 − V1) )
where τ2i−1 (i ∈ Z>0) represents the first time at which the state hits the region { x | V(x) ≤ V1 } after τ2i−2, τ2i is the first time at which the trajectory reaches the surface { x | V(x) = V2 } after τ2i−1, and τ0 means the initial time.
Proof. (1) According to Dynkin's formula, for any t > τ2i−1
E[V(x(t∧τ2i))] = V(x(τ2i−1)) + E[ ∫_{τ2i−1}^{t∧τ2i} L[V(x(s))] ds ] ≤ V(x(τ2i−1)) + E[ ∫_{τ2i−1}^{t∧τ2i} C ds ] ≤ V1 + C(t − τ2i−1)
Also, since
E[V(x(t∧τ2i))] = E[V(x(t))] P(τ2i > t) + E[V(x(τ2i))] P(τ2i ≤ t) ≥ V2 P(τ2i ≤ t)
we have
P(τ2i ≤ t) ≤ E[V(x(t∧τ2i))]/V2 ≤ ( V1 + C(t − τ2i−1) )/V2
Therefore, we get
E[τ2i − τ2i−1] = ∫₀^∞ P(τ2i − τ2i−1 > s) ds ≥ ∫₀^{(V2−V1)/C} P(τ2i > s + τ2i−1) ds ≥ ∫₀^{(V2−V1)/C} ( 1 − (V1 + Cs)/V2 ) ds = (V2 − V1)²/(2CV2)
(2) On the other side, for any t > τ2i−2 we have
E[V(x(t∧τ2i−1))] = V(x(τ2i−2)) + E[ ∫_{τ2i−2}^{t∧τ2i−1} L[V(x(s))] ds ] ≤ V(x(τ2i−2)) + E[ ∫_{τ2i−2}^{t∧τ2i−1} (−k) ds ] = V2 − k E[t∧τ2i−1 − τ2i−2]
i.e.,
E[t∧τ2i−1 − τ2i−2] ≤ ( V2 − E[V(x(t∧τ2i−1))] )/k ≤ (V2 − V1)/k
By the monotone convergence theorem, the inequality (2) is true.
(3) Based on the results of (1) and (2), we have that for any j
E[ Σ_{i=1}^j (τ2i − τ2i−1) / Σ_{i=1}^j (τ2i+1 − τ2i) ] ≥ k(V2 − V1)/(2CV2).
Besides, Eq. (4.7) and result (2) imply that there are almost surely infinitely many τi, so the notations “limsup” and “liminf” in the following are well defined. Applying Fatou's lemma yields
E[ limsup_{j→∞} Σ_{i=1}^j (τ2i − τ2i−1) / Σ_{i=1}^j (τ2i+1 − τ2i) ] ≥ k(V2 − V1)/(2CV2)    (4.8)
Let i(T) = max{ i ≥ 0 | τ2i ≤ T }, utilizing which we have
1T∫T0\mathbbold1{V(x(t))≥V2}dt = ∑i(T)−1i=0∫τ2i+2τ2i\mathbbold1{V(x(t))>V2}dt+∫Tτ2i(T)\mathbbold1{V(x(t))>V2}dtT ≤ τ1−τ0T\mathbbold1{i(T)≥1}+∑i(T)−1i=1(τ2i+1−τ2i)τ0+∑i(T)−1i=0(τ2i+2−τ2i)+(T−τ2i(T)) +(τ2i(T)+1−τ2i(T))\mathbbold1{T>τ2i(T)+1}+(T−τ2i(T))\mathbbold1{T<τ2i(T)+1}τ0+∑i(T)−1i=0(τ2i+2−τ2i)+(T−τ2i(T)) ≤ τ1−τ0T\mathbbold1{i(T)≥1}+∑i(T)−1i=1(τ2i+1−τ2i)+(τ2i(T)+1−τ2i(T))∑i(T)−1i=0(τ2i+2−τ2i)+(τ2i(T)+1−τ2i(T)) = τ1−τ0T\mathbbold1{i(T)≥1}+∑i(T)i=1(τ2i+1−τ2i)∑i(T)i=0(τ2i+1−τ2i)+∑i(T)i=1(τ2i−τ2i−1) ≤ τ1−τ0T\mathbbold1{i(T)≥1}+11+∑i(T)i=1(τ2i−τ2i−1)∑i(T)i=1(τ2i+1−τ2i)
Hence,
liminfT→∞1T∫T0\mathbbold1{V(x(t))≥V2}dt ≤ liminfT→∞⎡⎢ ⎢ ⎢ ⎢ ⎢⎣τ1−τ0
# Simulating Frequentist and Bayesian Operating Characteristics of Longitudinal Markov Ordinal Randomized Trials
The document linked below contains detailed descriptions and examples of simulating longitudinal ordinal outcomes for a two-treatment comparison. This is useful for simulating clinical trials such as COVID-19 therapeutic trials, and studying the Bayesian and frequentist operating characteristics of various tests applied to such data. Within-patient correlation is modeled by a first-order Markov process whereby the ordinal outcome in the previous time interval becomes a covariate for the current time interval. The proportional odds model is the basis for analysis, and this model is extended to account for non-proportional odds with respect to time, by use of the Peterson and Harrell (1990) partial proportional odds model.
Because Markov state transition models describe tendencies for being in the various levels of the ordinal outcome conditional on the previous state, the report pays much attention to the “unconditioning” or marginalization of the transition model to provide the more traditional state occupancy probabilities, to compute, e.g., the probability that a patient will be on a ventilator on day 10 as a function of treatment. When one of the outcome levels (events) is an absorbing state such as death, the occupancy probability for that state at time $$t$$ is the cumulative incidence of that event by time $$t$$.
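The marginalization itself is a short computation: propagate the initial state distribution through the product of one-step transition matrices. The report's R functions (e.g. soprobMarkovOrd) do this with covariate-dependent transition probabilities; the following Python sketch, with a made-up three-state example, only illustrates the idea:

```python
def occupancy(p0, transitions):
    """Marginalize a first-order Markov model: propagate the initial state
    distribution p0 through a list of one-step transition matrices and
    return the state occupancy probabilities after each step."""
    out = []
    p = list(p0)
    for P in transitions:
        p = [sum(p[i] * P[i][j] for i in range(len(p)))
             for j in range(len(P[0]))]
        out.append(p)
    return out

# Hypothetical 3-state example: 0 = home, 1 = ventilator, 2 = death.
P = [
    [0.90, 0.08, 0.02],
    [0.30, 0.60, 0.10],
    [0.00, 0.00, 1.00],  # death is an absorbing state
]
occ = occupancy([0.0, 1.0, 0.0], [P] * 10)
day10 = occ[-1]  # e.g. P(on ventilator on day 10), P(death by day 10)
```

Because the death row keeps all mass in place, its occupancy probability is nondecreasing in time — it is exactly the cumulative incidence of death mentioned above.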
The report covers
• how to specify a simulation model and simulate Bayesian and frequentist performance of methods
• how to derive a simulation model from the observed data from a completed study (here the VIOLET 2 study of Vitamin D in ICU patients)
• how to automatically compute the parameters of a simulation model to meet constraints such as the proportion of patients in all the outcome categories on given days of follow-up
• demonstrating that power of longitudinal analysis is significantly higher than from taking an ordinal outcome at a single day
• demonstrating that the longitudinal analysis has significantly higher power than a Cox model comparison of time to recovery, and that it handles death properly
• quantification of the power loss when the frequency of sampling within a patient is reduced after the patient reaches a certain state such as return to home
• showing how to use a single large study to estimate the relative efficiency of the treatment effect estimate as a function of the number of days of data collection
• showing how to quantify the boost in the effective number of patients by having multiple measurements per patient; for example 28 days of data per patient like those from VIOLET 2 effectively boosts the sample size by a factor of 5 compared to measuring an ordinal outcome at a single day
Power gains from using longitudinal data have major ramifications for sample size calculations and earlier decision making.
The document provides extensive examples and output for new R Hmisc package functions:
• simMarkovOrd: simulate ordinal longitudinal data under a first-order Markov process with a proportional odds model
• soprobMarkovOrd: compute state occupancy probabilities from the Markov model for transition probabilities
• estSeqMarkovOrd: run sequential Markov ordinal longitudinal clinical trial simulations
• intMarkovOrd: compute simulation parameters providing the best compromise on achieved state occupancy and/or transition probabilities against user-specified constraints
Until the new version of Hmisc is on CRAN, the latest Hmisc package source, as well as binary versions for Linux and Windows, may be found here.
The report demonstrates many R programming techniques including
• parallel processing to speed up simulations
• using a hash to only run simulations when an input parameter or the source code changes
• using the data.table package for aggregating and reshaping data tables
• auto-sensing when html format is being produced
• automatically switching to interactive plotly graphics when creating the html version of the report
• dynamic creation of a sequence of R markdown knitr code chunks each with its own figure caption, using Hmisc::markupSpecs$html$mdchunk
• use of the beautiful readthedown report template (from the rmdformats package) when producing html
• use of special LaTeX options when producing pdf
|
{}
|
I am surprised that silver sulfate is this poorly soluble compared to the nitrate, and found this worth adding here. (Buttonwood; answered May 26 '17 at 16:29, edited May 26 '17 at 16:45)

Your choices are constrained, as the precipitation of $$\ce{SO4^{2-}}$$ as $$\ce{BaSO4}$$ is the classical way to quantify the former, and an electrochemical determination (in aqueous solution) is not practical. Choosing $$\ce{Ba(OH)2}$$ may lead to the formation of silver hydroxide, which is equally poorly soluble in water. $$\ce{Ba(PO3)2}$$ itself is very poorly soluble, as is $$\ce{Ag3PO4}$$. As you correctly stated, an aqueous solution of $$\ce{BaCl2}$$ is not suitable, as $$\ce{Ag+}$$ would form a precipitate of $$\ce{AgCl}$$. Hence I suggest giving $$\ce{Ba(NO3)2}$$ a try; the anion is the same as in $$\ce{AgNO3}$$. The solubility of this salt in water is reported as $$\pu{4.95 g / 100 mL}$$ ($$\pu{0 ^\circ C}$$) or $$\pu{10.5 g / 100 mL}$$ ($$\pu{25 ^\circ C}$$), respectively, according to the English Wikipedia entry.

Addition: Please note that the solubility of $$\ce{Ag2SO4}$$ ($$\pu{0.83 g / 100 mL}$$ water at $$\pu{25 ^\circ C}$$) (ref.) is substantially lower than that of $$\ce{AgNO3}$$ ($$\pu{256 g / 100 mL}$$ water at $$\pu{25 ^\circ C}$$) (ref).
|
{}
|
# SENSITIVITY ANALYSIS FOR A SYSTEM OF GENERALIZED NONLINEAR MIXED QUASI-VARIATIONAL INCLUSIONS WITH (A, η)-ACCRETIVE MAPPINGS IN BANACH SPACES
Jeong, Jae-Ug;Kim, Soo-Hwan
• Published : 2009.11.30
#### Abstract
In this paper, we study the behavior and sensitivity analysis of the solution set for a new system of parametric generalized nonlinear mixed quasi-variational inclusions with (A, ${\eta}$)-accretive mappings in q-uniformly smooth Banach spaces. The present results improve and extend many known results in the literature.
#### Keywords
quasi-variational inclusion;sensitivity analysis;resolvent operator;(A,${\eta}$)-accretive mapping
#### References
1. R. P. Agarwal, N. J. Huang, and M. Y. Tan, Sensitivity analysis for a new system of generalized nonlinear mixed quasi-variational inclusions, Appl. Math. Lett. 17 (2004), 345-352 https://doi.org/10.1016/S0893-9659(04)90073-0
2. S. Dafermos, Sensitivity analysis in variational inequalities, Math. Operat. Res. 13 (1988), 421-434 https://doi.org/10.1287/moor.13.3.421
3. Y. P. Fang and N. J. Huang, H-accretive operators and resolvent operator technique for solving variational inclusions in Banach spaces, Appl. Math. Lett. 17 (2004), 647-653 https://doi.org/10.1016/S0893-9659(04)90099-7
4. H. Y. Lan, Y. J. Cho, and R. U. Verma, On nonlinear relaxed cocoercive variational inclusions involving (A, ${\eta}$)-accretive mappings in Banach spaces, Comput. Math. Appl. 51 (2006), 1529-1538 https://doi.org/10.1016/j.camwa.2005.11.036
5. R. N. Mukherjee and H. L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992), 299-304 https://doi.org/10.1016/0022-247X(92)90207-T
6. M. A. Noor, Generalized algorithm and sensitivity analysis for variational inequalities, J. Appl. Math. Stoch. Anal. 5 (1992), 29-42 https://doi.org/10.1155/S1048953392000030
7. Y. H. Pan, Sensitivity analysis for general quasi-variational inequalities, J. Sichuan Normal Univ. 19 (1996), 56-59
8. J. W. Peng, On a new system of generalized mixed quasi-variational-like inclusions with (H, ${\eta}$)-accretive operators in real q-uniformly smooth Banach spaces, Nonlinear Anal. 68 (2008), 981-993 https://doi.org/10.1016/j.na.2006.11.054
9. J. W. Peng and D. L. Zhu, Three-step iterative algorithm for a system of set-valued variational inclusions with (H, ${\eta}$)-monotone operators, Nonlinear Anal. 68 (2008), 139-153 https://doi.org/10.1016/j.na.2006.10.037
10. R. U. Verma, A-monotonicity and applications to nonlinear variational inclusions, J. Appl. Math. Stoch. Anal. 17 (2004), no. 2, 193-195
11. E. Zeidler, Nonlinear Functional Analysis and its Applications II: Monotone Operators, Springer-Verlag, Berlin, 1985
12. H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal. 16 (1991), no. 12, 1127-1138 https://doi.org/10.1016/0362-546X(91)90200-K
13. Y. P. Fang and N. J. Huang, H-monotone operator and resolvent operator technique for variational inclusions, Appl. Math. Comput. 145 (2003), 795-803 https://doi.org/10.1016/S0096-3003(03)00275-3
14. N. D. Yen, Lipschitz continuity of solution of variational inequalities with a parametric polyhedral constraint, Math. Operat. Res. 20 (1995), 695-708 https://doi.org/10.1287/moor.20.3.695
|
{}
|
Question: DESeq2 testing ratio of ratios in a ribosomal profiling with batch
0
7 weeks ago by
frene0
frene0 wrote:
Hello everybody,
I am having trouble with the analysis of a ribosome profiling experiment. I want to compute the ratio of ratios of RFP and total RNA between two genotypes. I am doing more or less the same as in this post: https://support.bioconductor.org/p/61509/ In my case, I want to take the batch into account to reduce the differences due to it.
I don't know whether I should run the likelihood ratio test with both the batch and the interaction term removed from the reduced model:
> dds1 <- DESeqDataSetFromMatrix(countData = countdata,colData = colData, design = ~batch+sampleType+condition+sampleType:condition)
> dds1 <- DESeq(dds1, reduced = ~ sampleType+condition, test="LRT")
or with only the interaction term removed from the reduced model:
> dds1 <- DESeqDataSetFromMatrix(countData = countdata,colData = colData, design = ~batch+sampleType+condition+sampleType:condition)
> dds1 <- DESeq(dds1, reduced = ~ batch+sampleType+condition, test="LRT")
When I run the first option (with both the batch and the interaction term removed from the reduced model), the result is a lot of significant genes but with no fold-change differences: a volcano plot with no volcano shape.
I know that in LRT, the p-values are determined solely by the difference in deviance between the ‘full’ and ‘reduced’ model formula (not log2 fold changes).
I would appreciate any advice about how to introduce the batch into the design formula.
deseq2 • 71 views
modified 7 weeks ago by Michael Love23k • written 7 weeks ago by frene0
Answer: DESeq2 testing ratio of ratios in a ribosomal profiling with batch
0
7 weeks ago by
Michael Love23k
United States
Michael Love23k wrote:
The first model is not correct: it is testing for any changes in gene expression due to the interaction term, plus any genes affected by batch. Because you removed batch from the reduced design, you are asking for all the batch-affected genes to be found significant as well, which is probably not what you want. The second design is correct.
|
{}
|
# Is it possible to speed up the moon's orbit with today's, or near-future technology?
And I mean only making the moon orbit the Earth faster: no change in distance from Earth required. Is such a thing possible?
• Actually, changing the orbital speed does require changing the orbital distance, unless you're physically tethering the moon to the earth somehow. – Nuclear Wang Mar 27 '18 at 17:52
• There is obviously no (logistics) technology to get a significant amount of stuff to the moon within reasonable economic limitations (which is kind of already included in "technology") nor is anyone really working on creating that possibility (why would you?), so no. – Raditz_35 Mar 27 '18 at 17:53
• Read up on orbital dynamics. If the moon speeds up, it will get farther. – Aify Mar 27 '18 at 18:09
• @Nuclear Wang: There actually is a way you could speed up the moon's orbit without changing the distance: increase the mass of the Earth. There are obvious practical difficulties there, though :-) – jamesqf Mar 27 '18 at 18:34
• @jamesqf You've got me there! I suppose we could also decrease the mass of the moon, which could be done concurrently with your earth embiggening. – Nuclear Wang Mar 27 '18 at 18:53
# Not with anything like modern or near future technology
And DEFINITELY not while maintaining the same orbital distance
The problem here is that orbital speed (considering a much larger primary mass) is a factor ONLY of the mass of the primary body and the distance they are apart as determined by the following equation.
$$v=\sqrt{\frac{GM}{r}}$$
Where v is orbital speed, G is the gravitational constant, M is the mass of the primary body (in this case, Earth), and r is the distance from the center of the primary mass to the center of the orbiting mass.
In order to increase the velocity of the moon, you're dealing with this equation.
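As a quick sanity check of the formula, plugging in standard values for Earth's gravitational parameter and the mean Earth-Moon distance (values assumed here, not given in the post) recovers the Moon's actual orbital speed:

```python
import math

GM_EARTH = 3.986004418e14   # m^3/s^2, standard gravitational parameter of Earth
R_MOON   = 3.844e8          # m, mean Earth-Moon distance

v = math.sqrt(GM_EARTH / R_MOON)   # circular-orbit speed at the Moon's distance
print(f"{v:.0f} m/s")              # about 1.02 km/s
```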
This gives us the following options to accelerate the moon.
### Move the moon closer to Earth.
This will naturally cause it to speed up. This may have Consequences™ on Earth, particularly those related to tidal forces. This would require us to apply sufficient force to relocate a 73.4-yottagram (7.34×10^22 kg) object.
We can't do this. It's too big. Even for asteroids twelve or more orders of magnitude less massive, we are still struggling to figure out how we could gradually nudge one aside if we thought it were going to hit Earth.
### Make Earth Bigger
The other component of that equation that we can mess with is the mass of the primary body: Earth. If you increase the mass of Earth, the moon will speed up. The problem is, Earth is huge. And the Law of Conservation of Mass is a pesky bugger that says we can't just get mass from nowhere. So we're basically going to have to go cannibalize another planet for parts and then drop them (carefully, I hope) down to Earth's surface.
Again, we can't do this. The scale is far too massive.
### Brute Force
Here we get to the pinnacle of impossibilities.
Take the moon, strap gigantic rockets to it pointing in several different directions, and use them to accelerate the moon, but also force it to maintain its present orbital distance. Do this forever. This process is not used in real life even on something as tiny as a satellite, because it takes vastly too much energy. If you are going to burn energy, either use it to scoot yourself somewhere else in orbital distance...or go all out and try to hit escape velocity.
Naturally, you're dealing with the aforementioned 73.4-yottagram object. Only now you have to apply absurd forces to it from multiple directions at once!
Again, not possible.
First, you can't keep the same orbit and change the speed. Try playing Kerbal Space Program if you want to learn about orbital mechanics (or do the math, but KSP is more fun).
As for changing the orbit of the moon...
With more or less current technology, you could set up self-sustaining factories on the Moon. These would mine moon rocks, refine some iron from them, and launch it into space using magnetic launchers. You'd need colonies producing their own food and other materials at both lunar poles. Again, not impossible with current technology; there is ice at the lunar poles to work with.
So..
The moon orbits at 1 km/sec. If we mine and fire 1% of the mass of the moon at 100 km/sec (in the opposite direction to travel), that should double the orbital velocity. That means about 7x10^20 kg of material, so assuming you have 1000 launchers, each launching 100 kg every 10 seconds (10,000 kg/sec in total), that would take 7x10^16 seconds, or about 2x10^9 years. So, just leave your operation running for two billion years, and you are there.
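Re-running the arithmetic above as a quick sanity check (a standard year length is assumed):

```python
m_ejected = 0.01 * 7.34e22      # kg: 1% of the Moon's mass
rate = 1000 * 100 / 10          # kg/s: 1000 launchers, 100 kg every 10 s each
seconds = m_ejected / rate
years = seconds / 3.15e7        # ~3.15e7 seconds per year
print(f"{seconds:.1e} s  ~  {years:.1e} years")
```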
|
{}
|
Moderate
# System Failure
APSTAT-K4CLJY
A toolmaker fabricates an item that has a system of $4$ O-rings.
Each O-ring has a probability of $0.95$ of properly functioning. Assume that each O-ring operates independently of the others, and the entire system fails if one or more of the O-rings fail.
What is the probability that the system functions properly?
A
$0.95$
B
$0.815$
C
$0.75$
D
$0.05$
E
$0.185$
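For checking one's work, the computation is a one-liner (not part of the original item): the system works only if all four independent O-rings function.

```python
p_ring = 0.95
p_system = p_ring ** 4      # all four independent O-rings must function
print(round(p_system, 3))   # 0.815 -> choice B
```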
|
{}
|
# Dog & Gopher
Tags: easy problem
Time Limit: 1 s Memory Limit: 128 MB
Submission:22 AC:10 Score:99.52
## Problem D: Dog & Gopher
A large field has a dog and a gopher. The dog wants to eat the gopher, while the gopher wants to run to safety through one of several gopher holes dug in the surface of the field.
Neither the dog nor the gopher is a math major; however, neither is entirely stupid. The gopher decides on a particular gopher hole and heads for that hole in a straight line at a fixed speed. The dog, which is very good at reading body language, anticipates which hole the gopher has chosen, and heads at double the speed of the gopher to the hole, where it intends to gobble up the gopher. If the dog reaches the hole first, the gopher gets gobbled; otherwise, the gopher escapes.
You have been retained by the gopher to select a hole through which it can escape, if such a hole exists.
The first line of input contains four floating point numbers: the (x,y) coordinates of the gopher followed by the (x,y) coordinates of the dog. Subsequent lines of input each contain two floating point numbers: the (x,y) coordinates of a gopher hole. All distances are in metres, to the nearest mm.
Your output should consist of a single line. If the gopher can escape the line should read "The gopher can escape through the hole at (x,y)." identifying the appropriate hole to the nearest mm. Otherwise the output line should read "The gopher cannot escape." If the gopher may escape through more than one hole, any one will do. If the gopher and dog reach the hole at the same time either answer may be given. There are not more than 1000 gopher holes and all coordinates are between -10000 and +10000.
## Samples
input
1.000 1.000 2.000 2.000 1.500 1.500
output
The gopher cannot escape.
## Hint
### Sample Input 2
2.000 2.000 1.000 1.000
1.500 1.500
2.500 2.500
### Output for Sample Input 2
The gopher can escape through the hole at (2.500,2.500).
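A minimal solution sketch (assuming the usual reading that the gopher escapes when it reaches the hole no later than the dog, which moves at double speed):

```python
import math

def escape_hole(gopher, dog, holes):
    """Return the first hole the gopher can reach no later than the dog,
    or None. The dog moves at twice the gopher's speed, so the gopher
    wins at hole h iff 2 * dist(gopher, h) <= dist(dog, h)."""
    for h in holes:
        if 2 * math.dist(gopher, h) <= math.dist(dog, h):
            return h
    return None

# Second sample from the problem statement
print(escape_hole((2.0, 2.0), (1.0, 1.0), [(1.5, 1.5), (2.5, 2.5)]))
# (2.5, 2.5)
```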
|
{}
|
My Math Forum how many mL of 75% alcohol is needed?
Chemistry Chemistry Forum
June 11th, 2018, 07:28 PM #1 Member Joined: May 2015 From: Australia Posts: 77 Thanks: 7
How many mL of 75% alcohol should be mixed with 1000 cc of 10% alcohol to prepare 500 mL of a 30% alcohol solution?
a) 346.16 mL b) 234.43 mL c) 153.84 mL d) 121.12 mL
Answer: C (153.84 mL)
I got an answer of 66.67 mL for some reason. This is how I thought the solution should go:
Need: how many mL of 75% alcohol.
Have: 1000 mL of 10% alcohol = 100 mL alcohol in 1000 mL; 500 mL of 30% alcohol = 150 mL alcohol in 500 mL.
So we need 150 mL - 100 mL = 50 mL alcohol.
Volume * 75% = 50 mL, so Volume = 50 mL / 0.75 = 66.67 mL.
I'm not sure where I went wrong in my solution. Last edited by skipjack; June 12th, 2018 at 01:15 AM.
June 11th, 2018, 09:07 PM #2 Senior Member Joined: Sep 2015 From: USA Posts: 2,039 Thanks: 1063 How are you going to add 75% alcohol to 1000mL and end up with 500 mL?
June 11th, 2018, 09:23 PM #3 Member Joined: May 2015 From: Australia Posts: 77 Thanks: 7 My logic for solving this question comes from a similar question I found from last year's studies, shown below (maybe it can help with solving the first question): What volume of ethanol (96% v/v) should be added to 500 mL of 70% (v/v) ethanol such that when the mixture is made up to 2.5 L with water, the final concentration of ethanol is 55% (v/v)? Answer = 1067.6 mL. The solution: 500 mL of 70% v/v = 350 mL in 500 mL; 2.5 L of 55% v/v = 1375 mL in 2500 mL; we need 1375 - 350 = 1025 mL; Volume x 96% = 1025, so Volume = 1025/0.96 = 1067.7 mL. I'm pretty sure both types of questions are very similar to each other. So does this mean the first question has an error?
June 12th, 2018, 01:18 AM #4 Member Joined: May 2015 From: Australia Posts: 77 Thanks: 7 I just asked my friend how she worked it out, and she sent me this: 75-10 = 65 30-10 = 20 500 x (20/65) = 153.84mL I'm not sure if the logic is right though.
June 12th, 2018, 01:41 AM #5 Global Moderator Joined: Dec 2006 Posts: 19,291 Thanks: 1683 Your wording is a bit odd, so I'll just show you a way to get answer (C). Let's assume that you mix x ml of a liquid that is 75% (i.e., 3/4) alcohol with (500 - x) ml of a liquid that is 10% (i.e., 1/10) alcohol, so that the mixture's volume is 500ml. The volume of alcohol in the mixture is ((3/4)x + (1/10)(500 - x)) ml, and you want this volume to be 30% of 500 ml, which is 150 ml. You therefore need to solve (3/4)x + (1/10)(500 - x) = 150. Isolating x gives the equation (3/4 - 1/10)x = 150 - 500/10 = 100. Multiplying that equation by 20 gives 13x = 2000, so x = 2000/13, which is in agreement with choice (C). Note that if we start by multiplying by 100, we are solving 75x + 10(500 - x) = 30*500, which is equivalent to (75 - 10)x = (30 - 10)500. Hence your friend's working was correct. When you've done this type of calculation a few times, you know exactly what to do to get the answer quickly.
June 12th, 2018, 02:57 AM #6 Math Team Joined: Oct 2011 From: Ottawa Ontario, Canada Posts: 12,904 Thanks: 883 Code: QUANTITY PERCENTAGE a u b v === === a+b w? w = (au + bv) / (a + b)
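The same setup in code, as a quick check of the equation solved above (note the listed choice C truncates rather than rounds the last digit):

```python
# x mL of 75% mixed with (500 - x) mL of 10% must give 500 mL of 30%:
#   0.75*x + 0.10*(500 - x) = 0.30*500
x = (0.30 * 500 - 0.10 * 500) / (0.75 - 0.10)
print(round(x, 2))  # 153.85, i.e. 2000/13 mL, matching choice C (153.84...)
```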
|
{}
|
# Directivity
In electromagnetics, directivity is a figure of merit for an antenna. It measures the power density an actual antenna radiates in the direction of its strongest emission, relative to the power density radiated by an ideal isotropic radiator antenna radiating the same amount of total power. Mathematically, the directivity is defined as the maximum of the directive gain:
$D = \max\left(\frac{\text{Radiated power density}\,(\theta,\phi)}{\text{Total radiated power}/(4\pi)}\right)$
where
• $\theta$ and $\phi$ are the standard spherical coordinate angles
• Radiated power density is the power per unit solid angle, such that $\text{Total radiated power} = \int_{\phi=0}^{2\pi}\left(\int_{\theta=0}^{\pi}\text{Radiated power density}\,(\theta,\phi)\,\sin\theta\,d\theta\right)d\phi$
• $4\pi$ is the total solid angle of a sphere (also the surface area of a unit sphere, just as $2\pi$ is the total angle of a circle and the perimeter of a unit circle)
• The denominator, $\text{Total radiated power}/(4\pi)$, represents the average radiated power density
The directivity is rarely expressed as a unitless number. Usually, the directivity is expressed in dBi, so that
$\left.D\right|_{\text{dBi}} = 10\log_{10}\left[\max\left(\frac{\text{Radiated power density}\,(\theta,\phi)}{\text{Total radiated power}/(4\pi)}\right)\right]$
The reason the units are dBi (decibels relative to an isotropic radiator) is that for an isotropic radiator the radiated power density is a constant, and therefore equals the average radiated power density (the denominator). The isotropic radiator is not directive at all, yet nevertheless has a directivity, stricto sensu, equal to 1; this can be misleading, and the value is much better described in dBi.
$\displaystyle D_{\text{isotropic radiator}} = 1 \text{ (unitless)} = 0\ \text{dB}$
The word directivity is also sometimes used as a synonym for directive gain. This usage is readily understood, as the direction will be specified, or directional dependence implied. Later editions of the IEEE Dictionary specifically endorse this usage; nevertheless it has yet to be universally adopted.
The peak directivity of an actual antenna can vary from 1.76 dB for a short dipole, to as much as 50 dB for a large dish antenna.
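The 1.76 dB figure for a short dipole can be verified numerically from the definition above (a quick sketch; the short dipole's radiated power density varies as $\sin^2\theta$ and is azimuthally symmetric):

```python
import math

# Numerically integrate U(theta) = sin^2(theta) over the sphere
# (midpoint rule in theta; the phi integral contributes a factor of 2*pi).
N = 20000
dtheta = math.pi / N
total = 0.0
for i in range(N):
    th = (i + 0.5) * dtheta
    total += math.sin(th) ** 2 * math.sin(th) * dtheta  # U(th) * solid-angle weight
total *= 2 * math.pi                # azimuthal integral

D = 1.0 / (total / (4 * math.pi))   # peak U = 1, divided by average power density
print(D, 10 * math.log10(D))        # ~1.5, i.e. ~1.76 dBi
```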
## Directivity and gain
An antenna's directivity is closely related to its gain. The difference between the two quantities is that, for gain, the denominator equals $\text{Total power delivered to antenna}/(4\pi)$ rather than $\text{Total radiated power}/(4\pi)$.
If an antenna is 100% efficient, the two quantities are the same, as all the power delivered to the antenna would get radiated. Therefore, the ratio (difference in dB) between the gain and the directivity represents the antenna's efficiency.
## Partial directivity and gain
The partial directive gain is the power density in a particular direction and for a particular component of the polarization, divided by the average power density for all directions and all polarizations. For any pair of orthogonal polarizations (such as left-hand-circular and right-hand-circular), the individual power densities simply add to give the total power density. Thus, if expressed as dimensionless ratios rather than in dB, the total directive gain is equal to the sum of the two partial directive gains.
The partial directivity and partial gain are similarly defined, and are similarly additive for orthogonal polarizations.
## In other fields
The term directivity is also used in acoustics, as a measure of the radiation pattern from a source indicating how much of the total energy from the source is radiated in a particular direction. In electro-acoustics, these patterns commonly include omnidirectional, cardioid, and hyper-cardioid microphone polar patterns. A loudspeaker with a high degree of directivity (narrow dispersion pattern) can be said to have a high Q.
## References
• Coleman, Christopher (2004). An Introduction to Radio Frequency Engineering. Cambridge University Press. ISBN 0-521-83481-3.
|
{}
|
# Math Help - Change of Variables
1. ## Change of Variables
Use a change of variables to evaluate the triple integral of:
$\iint_R \sqrt{\dfrac{x+y}{x-y}}\,dA$ where R is the region enclosed by the triangle with vertices (1,0), (4,0), and (4,3).
2. Start by putting $u=x+y,$ $v=x-y.$
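A sketch of how the substitution plays out (standard Jacobian bookkeeping; worth verifying against your own region sketch): solving $u = x + y$, $v = x - y$ gives $x = \tfrac{u+v}{2}$, $y = \tfrac{u-v}{2}$, so $\left|\frac{\partial(x,y)}{\partial(u,v)}\right| = \tfrac{1}{2}$. The vertices map as $(1,0)\mapsto(1,1)$, $(4,0)\mapsto(4,4)$, $(4,3)\mapsto(7,1)$, and the edges $y = 0$, $x = 4$, $y = x - 1$ become $u = v$, $u + v = 8$, $v = 1$. Hence

$\displaystyle \iint_R \sqrt{\frac{x+y}{x-y}}\,dA = \frac{1}{2}\int_{v=1}^{4}\int_{u=v}^{8-v}\sqrt{\frac{u}{v}}\,du\,dv.$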
|
{}
|
# Mechanical properties of solids
Young's modulus of bone is $16 \times 10^9\ \mathrm{N/m^2}$ in tension and $9 \times 10^9\ \mathrm{N/m^2}$ in compression.
Why is the tensile Young's modulus greater than the compressive one?
• Do you have a reference for those numbers? They seem to be wrong by several orders of magnitude. You may be confusing tensile and compressive strength with Young's modulus. – alephzero Jan 21 '17 at 13:54
• @alephzero it is given in NCERT textbook of physics, I added screenshot – Fawad Jan 21 '17 at 14:10
• Your post says the Young's modulus is $16 \times 10^7$ but the table says $16 \times 10^9$. That's what was confusing me. But the NCERT tensile strength numbers are 10 times smaller than springer.com/cda/content/document/cda_downloaddocument/… (page 8 of the PDF). – alephzero Jan 21 '17 at 16:11
• @alephzero that was a typo , I just want to know why there is difference for two types – Fawad Jan 21 '17 at 17:31
|
{}
|
# Intersection of Family is Subset of Intersection of Subset of Family
## Theorem
Let $I$ be an indexing set.
Let $\family {A_\alpha}_{\alpha \mathop \in I}$ be an indexed family of subsets of a set $S$.
Let $J \subseteq I$.
Then:
$\ds \bigcap_{\alpha \mathop \in I} A_\alpha \subseteq \bigcap_{\alpha \mathop \in J} A_\alpha$
where $\ds \bigcap_{\alpha \mathop \in I} A_\alpha$ denotes the intersection of $\family {A_\alpha}_{\alpha \mathop \in I}$.
## Proof
$\ds x \in \bigcap_{\alpha \mathop \in I} A_\alpha$
$\ds \leadsto \ \forall \alpha \in I: x \in A_\alpha$ (Intersection is Subset)
$\ds \leadsto \ \forall \alpha \in J: x \in A_\alpha$ (Definition of Subset: $J \subseteq I$)
$\ds \leadsto \ x \in \bigcap_{\alpha \mathop \in J} A_\alpha$ (Definition of Intersection of Family)
$\blacksquare$
|
{}
|
# Differential Equations: Bernoulli's Equation $1 - 3rss' + r^2 s^2 s' = 0$
2 posts / 0 new
agentcollins
Differential Equations: Bernoulli's Equation $1 - 3rss' + r^2 s^2 s' = 0$
Bernoulli?
1- 3rss' + r^2 s^2 s' =0
Jhun Vert
Yes, the given is a Bernoulli's equation
$1 - 3rs\,s' + r^2 s^2\,s' =0$
$1 - 3rs \, \dfrac{ds}{dr} + r^2 s^2 \, \dfrac{ds}{dr} = 0$
$dr - 3rs \, ds + r^2 s^2 \, ds = 0$
$dr - 3sr \, ds = -s^2 r^2 \, ds$
The equation is in the form
$dr + P(s) \, r \, ds = Q(s) \, r^n \, ds$
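Matching terms (a sketch of the remaining steps): here $P(s) = -3s$, $Q(s) = -s^2$, and $n = 2$. The standard substitution $w = r^{1-n} = r^{-1}$, $dw = -r^{-2}\,dr$, after dividing the equation by $r^2$, turns it into the linear equation

$\dfrac{dw}{ds} + 3s\,w = s^2$

which can then be attacked with the integrating factor $e^{3s^2/2}$ (note that the resulting integral $\int s^2 e^{3s^2/2}\,ds$ is not elementary).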
|
{}
|
Conference paper Open Access
# Generative adversarial training of product of policies for robust and adaptive movement primitives
Pignat, Emmanuel; Girgin, Hakan; Calinon, Sylvain
### DataCite XML Export
<?xml version='1.0' encoding='utf-8'?>
<resource xmlns="http://datacite.org/schema/kernel-4">
<identifier identifierType="DOI">10.5281/zenodo.5596543</identifier>
<creators>
<creator>
<creatorName>Pignat, Emmanuel</creatorName>
<givenName>Emmanuel</givenName>
<familyName>Pignat</familyName>
<affiliation>Idiap Research Institute, Martigny, Switzerland</affiliation>
</creator>
<creator>
<creatorName>Girgin, Hakan</creatorName>
<givenName>Hakan</givenName>
<familyName>Girgin</familyName>
<affiliation>Idiap Research Institute, Martigny, Switzerland</affiliation>
</creator>
<creator>
<creatorName>Calinon, Sylvain</creatorName>
<givenName>Sylvain</givenName>
<familyName>Calinon</familyName>
<affiliation>Idiap Research Institute, Martigny, Switzerland</affiliation>
</creator>
</creators>
<titles>
<title>Generative adversarial training of product of policies for robust and adaptive movement primitives</title>
</titles>
<publisher>Zenodo</publisher>
<publicationYear>2020</publicationYear>
<dates>
<date dateType="Issued">2020-10-10</date>
</dates>
<language>en</language>
<resourceType resourceTypeGeneral="ConferencePaper"/>
<alternateIdentifiers>
<alternateIdentifier alternateIdentifierType="url">https://zenodo.org/record/5596543</alternateIdentifier>
</alternateIdentifiers>
<relatedIdentifiers>
<relatedIdentifier relatedIdentifierType="DOI" relationType="IsVersionOf">10.5281/zenodo.5596542</relatedIdentifier>
<relatedIdentifier relatedIdentifierType="URL" relationType="IsPartOf">https://zenodo.org/communities/collaborate_project</relatedIdentifier>
</relatedIdentifiers>
<rightsList>
<rights rightsURI="info:eu-repo/semantics/openAccess">Open Access</rights>
</rightsList>
<descriptions>
<description descriptionType="Abstract"><p>In learning from demonstrations, many generative models of trajectories make simplifying assumptions of independence. Correctness is sacrificed in the name of tractability and speed of the learning phase. The ignored dependencies, which often are the kinematic and dynamic constraints of the system, are then only restored when synthesizing the motion, which introduces possibly heavy distortions. In this work, we propose to use those approximate trajectory distributions as close-to-optimal discriminators in the popular generative adversarial framework to stabilize and accelerate the learning procedure. The two problems of adaptability and robustness are addressed with our method. In order to adapt the motions to varying contexts, we propose to use a product of Gaussian policies defined in several parametrized task spaces. Robustness to perturbations and varying dynamics is ensured with the use of stochastic gradient descent and ensemble methods to learn the stochastic dynamics. Two experiments are performed on a 7-DoF manipulator to validate the approach.</p></description>
</descriptions>
<fundingReferences>
<fundingReference>
<funderName>European Commission</funderName>
<funderIdentifier funderIdentifierType="Crossref Funder ID">10.13039/501100000780</funderIdentifier>
<awardNumber awardURI="info:eu-repo/grantAgreement/EC/H2020/820767/">820767</awardNumber>
<awardTitle>Co-production CeLL performing Human-Robot Collaborative AssEmbly</awardTitle>
</fundingReference>
</fundingReferences>
</resource>
|
{}
|
# rpca
R package rpca: RobustPCA: Decompose a Matrix into Low-Rank and Sparse Components. Suppose we have a data matrix which is the superposition of a low-rank component and a sparse component. Candès, E. J., Li, X., Ma, Y., & Wright, J. (2011), "Robust principal component analysis?", Journal of the ACM (JACM), 58(3), 11, prove that each component can be recovered individually under suitable assumptions: it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit, which, among all feasible decompositions, simply minimizes a weighted combination of the nuclear norm and of the L1 norm. This package implements that decomposition algorithm, resulting in a Robust PCA approach.
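The Principal Component Pursuit program can be illustrated with a short NumPy sketch of the augmented-Lagrangian iteration from Candès et al. (an independent toy illustration, not the rpca package's implementation; parameter choices follow the paper's suggested defaults):

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def pcp(M, max_iter=500, tol=1e-7):
    """Principal Component Pursuit:
    min ||L||_* + lam * ||S||_1  subject to  L + S = M."""
    n1, n2 = M.shape
    lam = 1.0 / np.sqrt(max(n1, n2))          # weight from Candes et al.
    mu = n1 * n2 / (4.0 * np.abs(M).sum())    # penalty parameter
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    normM = np.linalg.norm(M)
    for _ in range(max_iter):
        # singular value thresholding step for the low-rank component
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(sig, 1.0 / mu)) @ Vt
        # elementwise shrinkage step for the sparse component
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
        if np.linalg.norm(M - L - S) <= tol * normM:
            break
    return L, S

# Toy example: a rank-2 matrix plus 5% sparse corruption
rng = np.random.default_rng(0)
L0 = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 40))
S0 = np.zeros((40, 40))
idx = rng.random((40, 40)) < 0.05
S0[idx] = 5 * rng.standard_normal(idx.sum())
L_hat, S_hat = pcp(L0 + S0)
```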
## Introduction
Coherence is defined as an in-phase evolution of specific degrees of freedom. In electronic dynamics of materials controlled by quantum mechanical laws, coherence frequently appears as amplitude correlations in delocalized wavefunctions and manifests itself in interference patterns persisting over long time-scales1. Formally, quantum-mechanical coherences are defined as off-diagonal elements in the density matrix and, as such, they are not directly observable but can be derived from the presence of measurable spectroscopic signals. About a decade ago, persistent quantum coherence was discovered in the initial stage of photosynthesis across several highly structured biological light-harvesting complexes1,2,3,4,5,6,7. Later, similar phenomena were observed across many other molecular and nanostructured materials8,9,10,11,12. While the initial reports had attributed the observed dynamics to unexpectedly long-lasting electronic coherences, later investigations linked it to the interplay between both electronic and vibrational degrees of freedom13,14 and it was broadly hypothesized that oscillatory evolution of delocalized electronic wavefunctions can improve transport of energy and charge carriers for light-harvesting, lighting, and other optoelectronic applications1,3,8,15,16. The change in thinking towards more complex interaction between vibrations and electronic coherences was particularly prevalent in the realm of photobiology17, where commonly employed models treat vibrations as quantum degrees of freedom18,19,20,21,22,23.
Most of the systems studied above belong to the “intermediate coupling regime”, when the electronic and vibrational couplings are comparable9. The transport processes following photoexcitation are concomitant to non-radiative relaxation, when the system dissipates the excess of electronic energy into heat. During this internal conversion, energy typically flows from the electronic to vibrational degrees of freedom via two distinct mechanisms. When electronic states are well separated, the system can relax adiabatically downhill on a single potential energy surface within the Born–Oppenheimer framework. Alternatively, when electronic states are close in energy, the Born–Oppenheimer approximation breaks down and non-adiabatic evolution takes place when the electronic state (and the respective potential energy surface) changes during the dynamics8,14. This is a common scenario for energy transfer. Here, one extreme includes strong electronic couplings leading to fully delocalized states and an efficient band-like transport (such as the case of classic semiconductors)24. Another extreme includes highly disordered materials with large vibrational coupling that limits transport to the incoherent hopping-like random walk regime24. Interestingly, for materials in the “intermediate coupling regime”, there exists an ample amount of spectroscopic evidence for robust coherent electron-vibrational dynamics, which persists over long (frequently picosecond) timescales at ambient conditions, in spite of structural disorder, noise, and environmental fluctuations that may be present. Subsequently, several recent reviews1,2,3,7 suggested that coherence is a highly non-trivial and very important factor, which can be used to achieve specific functionalities in chemical and biological systems provided that underpinning design principles25,26 are well understood.
Here, we show how coherent exciton-vibrational dynamics emerges in photoactive molecular systems due to non-adiabatic (non-Born–Oppenheimer) transitions between excited states. Previous studies recognized the importance of symmetry of vibronic coupling between different electronic states in resonant transitions27,28, and electron29,30 and energy31,32 transfer rates. Here, we are exploring its effect on coherent electron-vibrational dynamics. This phenomenon is ubiquitous as it follows from simple interplays between localizations and symmetries of the wavefunctions. Namely, non-adiabatic transitions between excited states induce the spatial coherence between the eigenstates of the electronic molecular Hamiltonian, which are dynamically modulated by classical vibrational motions. Since such transitions are often not a singular event and can persist for some time, observed dynamics is strongly dependent on the system in question. We first present a simple conceptual model rationalizing the asymmetric form of the derivative non-adiabatic coupling (NAC) vector responsible for driving transitions between excited states. This, in turn, initiates a specific vibrational excitation modulating the wave-like localized–delocalized motion of the electronic wavefunction. We further demonstrate universality of these phenomena by inspecting photo-induced dynamics in several common cases for organic conjugated materials. These include a linear oligomer, nano-hoop, tree-like dendrimer, and molecular dimer. In all these molecules, ultrafast dynamics and exciton transport is directly simulated using our atomistic non-adiabatic excited-state molecular dynamics (NEXMD) package33. Coherent dynamics observed in these systems persists on the timescale of hundreds of femtoseconds at room temperature and in the presence of a bath, which agrees with experimental spectroscopic reports on various materials. 
Here, coherences are controlled by electronic and vibrational coupling unique to the chemical composition and structural conformation. Such general behavior suggests common strategies for manipulating electronic functionalities, such as charge and energy transport, in both natural and synthetic systems.
## Results
### Alternating wavefunction symmetry
To establish a conceptual framework, we recall that photo-induced electronic processes in realistic molecular systems predominantly involve a broad manifold of excited states. Subsequently, avoided and unavoided (e.g., conical intersections) crossings between potential energy surfaces (PESs) define the dynamics, where non-adiabatic transitions between states (internal conversion) are commonly occurring due to a breakdown of the Born–Oppenheimer approximation. Fig. 1a schematically shows two PESs with electronic wavefunctions labeled as Ψ1 and Ψ2 parametrically depending on multidimensional vibrational degrees of freedom R, where the colored box denotes the non-adiabatic coupling region. Excited state wavefunctions in low-dimensional organic materials such as conjugated polymers, branching structures, and molecular aggregates are excitons (electron-hole pairs interacting via Coulombic potential) with a large binding energy16. Importantly, the envelopes of these wavefunctions always adopt a standing wave pattern on the finite structures34 following the particle (exciton) in a box model as shown in Fig. 1b. Here, the respective PES of the state defines the multi-dimensional potential landscape for a bound excitonic state. We further notice the presence of an alternating symmetry between wavefunction phases for sequential states in the band. While specific symmetry labels depend on the molecular geometry, here we will loosely use “symmetric” and “antisymmetric” labels as depicted in Fig. 1b. For example, the excited states in a prototype conjugated polymer polyacetylene have Ag and Bu symmetries (Fig. 1c), whereas Ψ1 and Ψ2 states in the molecular homo-dimer are symmetric and antisymmetric combinations of the parent ϕL and ϕR (left and right) monomer states (Ψ1,2 = ϕL ± ϕR) within the Frenkel exciton model35, thus illustrating the basis for our notations.
### Vibrational excitation initiated by internal conversion
In a typical scenario for internal conversion (Fig. 1a), a photoexcited wavepacket goes through the crossing region to transition from the upper to the lower PES. Such processes are usually described via semiclassical models establishing consistent propagation of quantum (electrons) and classical (nuclei) degrees of freedom in the non-adiabatic regime33. Ehrenfest dynamics and surface hopping36 are examples of such methods allowing explicit treatment of large molecular systems for which fully quantum dynamics is prohibitively expensive8,37,38,39. Alternative perturbative approaches40,41,42 usually treat nuclei as an effective bath; the self-energy due to coupling of the nuclei and electrons is then defined in frequency space and estimated by averaging over the nuclear motion, thus losing the explicit correlation. Such approaches have been extensively applied, for example, to biological light-harvesting systems43,44. In this study, instead, the correlation between the electronic and nuclear dynamics is explicitly included in real time through the non-adiabatic coupling. Notably, across all methodologies, the derivative coupling NAC d12 (Fig. 1a) drives the efficiency of the transition. First, the wavepacket on the upper surface in the non-adiabatic region experiences the so-called Pechukas force (P, Fig. 1a) in the direction of the NAC vector, pushing the system towards the crossing45. Furthermore, upon non-adiabatic transition, the excess electronic energy is dispersed into the nuclear velocities in the direction of the NAC vector to enforce energy conservation. The direction of the NAC vector is highly significant: it represents the direction of the driving force acting along a unique normal mode direction throughout regions of strong coupling46,47. The fact that the direction of the NAC vector defines the flux of energy toward specific vibrations has been emphasized by Bittner et al.48.
This provides a simple physical rationale for adjusting nuclear velocities along the direction of the non-adiabatic coupling vector. These electronic-to-vibrational energy conversion principles were proven at various levels of theory45,49. Subsequently, the NAC vector defines a displacement for a specific vibrational state within a lower PES absorbing the excess electronic energy from transitions between excited states. A rigorous iterative search of this vibrational coordinate was recently reported for the state-to-state transitions in the case of electronic transfer48. In our conceptual example of an asymmetric-to-symmetric transition between neighboring wavefunctions (Fig. 1b), the NAC vector defined in Fig. 1a (and the resulting vibrational excitation) has a strictly asymmetric form. Namely, the left (right) part of the system undergoes expanding (contracting) structural deformation with opposite displacement (or phase) as shown in Fig. 1d. We expect that such vibrational excitation is related to the structural motions usually considered to be coupled to the electronic degrees of freedom such as C–C stretches and torsional librations. However, it is not directly associated with any of the vibrational normal modes of either the ground or any excited state, rather being a complex superposition of several normal modes, as was demonstrated in the case of charge transfer48. In the present examples, the non-adiabatic coupling vector is commonly spread among a small subset of normal modes (~2–5) such as C–C stretches and torsions. A typical spectral width within each subclass of modes is less than 0.05 eV. These modes become active experiencing a substantial increase in their vibrational energy during the process50.
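The velocity adjustment along the NAC direction described above can be sketched as follows (a generic surface-hopping-style rescaling, not the NEXMD code itself; the function name and array layout are our choices):

```python
import numpy as np

def rescale_along_nac(v, d, mass, delta_e):
    """Adjust nuclear velocities along the NAC vector d after a surface hop so
    that total energy is conserved. delta_e = E_new - E_old is the potential
    energy change (negative when hopping downhill). Returns the new velocities
    and whether the hop is energetically allowed.

    v, d : (n_atoms, 3) arrays; mass : (n_atoms,) array.
    """
    m = mass[:, None]
    # Solve 0.5*sum(m*(v + g*d/m)^2) = 0.5*sum(m*v^2) - delta_e for scalar g:
    # a*g^2 + b*g + delta_e = 0
    a = 0.5 * np.sum(d * d / m)
    b = np.sum(v * d)
    disc = b * b - 4.0 * a * delta_e
    if disc < 0.0:
        return v, False          # frustrated hop: not enough kinetic energy
    g1 = (-b + np.sqrt(disc)) / (2.0 * a)
    g2 = (-b - np.sqrt(disc)) / (2.0 * a)
    g = g1 if abs(g1) < abs(g2) else g2   # smaller velocity adjustment
    return v + g * d / m, True
```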
Finally, initiated by electronic relaxation, asymmetric vibrational excitation periodically modulates the electronic wavefunction motions on the lower PES. This leads to the “sloshing” of the localized wavefunction between “left” and “right” sides (see Fig. 1e) with possible intermittent spatial delocalizations across the double well potential. Thus, symmetries of the initial wavefunctions define the form of vibrational excitation emerging after electronic relaxation, which, in turn, controls wave-like localization–delocalization motion of the final wavefunction underpinning synchronous vibronic dynamics in the excited state. The dynamics of long-lived ground state wavepackets in photosynthetic light-harvesting antennas has already been reported in experiment19.
### Applications to molecular systems
To validate this scenario in realistic materials, we further study four systems: a linear oligomer (Fig. 2a) representing the conjugated polymer family39, a nanohoop (Fig. 2b) prototyping the circular geometry of ubiquitous photosynthetic complexes38, a dendrimer (Fig. 2c) exemplifying branched artificial light-harvesting systems37, and a dimer (Fig. 2d) signifying molecular crystals and aggregates51. We use our NEXMD package to simulate internal conversion following photoexcitation in all the systems at ambient conditions in the presence of a bath, as outlined in Methods.
While our calculations may involve higher lying excited states to mimic time-resolved spectroscopic probes, here we focus our analysis on the transition between the two lowest excited electronic states S2 and S1 (S3 and S2 states in the dendrimer). Fig. 2 displays the orbital plots of the transition densities (see Methods) taken at the ground state equilibrium geometry, which reflect spatial distributions of the excited state wavefunctions. We immediately recognize the “asymmetric–symmetric” motif (Fig. 1b) for Ψ1 and Ψ2 in all systems. In the dimer example, orbitals for one monomer are in-phase, whereas they are out-of-phase for the other, reflecting “+” and “−” wavefunction combinations as discussed above. As expected, NAC d12 vectors have the corresponding spatially asymmetric forms (Fig. 2a–d, bottom plots), conveying the vibrational excitation dynamically emerging due to electronic transition, in line with the schematic in Fig. 1d. Interestingly, the asymmetric form of NAC persists across all dynamical simulations as illustrated for the case of a dimer in Supplementary Fig. 1. It is clear that even for complex systems, the behavior described using our simple symmetry arguments holds true, as long as the system is composed of similar elementary building blocks (e.g., monomeric units in the case of a molecular aggregate or a crystal).
### Capturing periodic dynamical signatures
The signatures of such concerted vibronic dynamics can be followed by analyzing common descriptors for both vibrational and electronic degrees of freedom. Bond-length alternation, BLA (see Methods) is a typical parameter for monitoring C–C stretches52. Fig. 3a displays periodic out-of-phase (with respect to left and right molecular halves) BLA variations in the linear oligomer. Alternatively, we can monitor displacements of the torsion angle on the top and bottom sides of the hoop, which also conveys out-of-phase vibration, as illustrated in Fig. 3b. Identical periodic dynamical signatures can be observed by following electronic degrees of freedom where spatial distribution of the state transition density is a good descriptor53. This is illustrated for the dendrimer (Fig. 3c) and dimer (Fig. 3d) in the evolution of the fraction of transition density contained in each branch or monomer, revealing oscillations associated with the changes in wavefunction localization. Other calculated variations of BLA, torsions, and transition densities are shown in Supplementary Figs. 4–7. Altogether, there is a consistent picture of coupled electron-nuclei dynamics modulated by specific vibrational excitations initiated by non-adiabatic transitions.
## Discussion
It is interesting to note that such concerted in-phase coherent vibronic dynamics is observed across the entire ensemble of trajectories with slow decay for well over 100 fs at room temperature for all considered systems and others52,54, overcoming effects of thermal fluctuations, solvent viscosity, and disorder. We mention spectroscopic observations of “coherent phonons” persisting up to picoseconds (e.g., in the case of carbon nanotubes11), when the entire ensemble of molecules undergoes in-phase vibrational motion. While we discuss here only fast C–C stretching, slow torsions along the chain represent another structural motion coupled to the electronic system. By averaging over the C–C vibrations, one can inspect these slow recurring motions on the timescale of several picoseconds as illustrated in the case of the nanohoop (see Supplementary Fig. 7). An important spectroscopic observation is that the broad pulse may create coherences between electronic states in the initial condition1,4,5,6,7,9,10. These aspects invite further investigation by direct electronic dynamics modeling using advanced methodologies capable of describing interacting trajectories such as coherent Gaussian wavepacket approaches or multi-configurational methods55,56.
In summary, we show the appearance of coherent electron-vibrational dynamics initiated by non-adiabatic transitions between excited states. Our concept is verified by direct atomistic NEXMD simulations of internal conversion in typical organic conjugated systems such as an oligomer, a hoop, a dendrimer, and a molecular dimer. In all cases, we observe remarkably similar excited state dynamics initiated by non-adiabatic transitions between states leading to a specific asymmetric vibrational excitation, which modulates subsequent spatial evolution of the electronic wavefunction described as wave-like motion. Consequently, we conclude that these phenomena are omnipresent across a very broad range of molecular materials and may potentially provide an alternative interpretation of existing and future spectroscopic experiments. Namely, an inevitable energy flow from electronic degrees of freedom to vibrations in the process of non-radiative relaxation and in the presence of strong electron–phonon coupling creates specific vibrational excitations that spatially modulate the excited electronic state before localizing it into a “self-trapped” excitation. Thus, there exists a dynamical regime in which vibrations may efficiently transfer the electronic excitation across molecular constituents. Across all examples studied, such dynamics are vastly different from system to system in terms of persistence and timescales, including cases of coupled multi-chromophore systems. Consequently, it may be possible to achieve the desired function (such as specific directed funneling of excitons) by relying on observed ultrafast dynamics of exciton-vibrations (e.g., by seeking a dynamical regime underpinning an efficient transport in multi-chromophore systems with large disorder and strong electron–phonon coupling). Thus, these observed underlying physical principles can be further exploited for the design of functional organic materials for various optoelectronic applications.
## Methods
### Non-adiabatic excited state molecular dynamics
The non-adiabatic excited-state molecular dynamics (NEXMD) software package33 has been used to simulate the photoexcitation and subsequent electronic and vibrational energy relaxation and redistribution of each system: an anthracene dimer dithia-anthracenophane (DTA), a cycloparaphenylene with 16 phenyl units ([16]CPP), an unsymmetrical phenylene–ethynylene dendrimer with an ethynylene–perylene sink, and a linear paraphenylene with 7 phenyl units. The NEXMD combines the fewest switches surface hopping (FSSH) algorithm57 with “on the fly” analytical calculations of excited-state energies53,58,59, gradients60,61, and non-adiabatic coupling terms62,63,64. The collective electronic oscillator (CEO) approach65,66,67 is used to compute excited states at the configuration interaction singles (CIS) level of theory68. The semiempirical AM1 Hamiltonian69 has been used for all systems except for DTA, where the PM3 Hamiltonian70 is used.
### NEXMD simulation details and parameters
One-nanosecond ground state molecular dynamics simulations were performed for initial equilibration of all molecular structures studied. The Langevin thermostat71 is used with temperature T = 300 K, a friction coefficient γ = 20.0 ps−1, and time step Δt = 0.5 fs. The ground state trajectory was used to collect sets of initial configurations for the subsequent NEXMD simulations. The NEXMD simulations were started from these initial configurations by instantaneously promoting the system to an initial excited state α with energy Ωα, selected according to a Franck-Condon window defined as $$g_{\mathrm{\alpha}} = f_{\mathrm{\alpha}} {\mathrm{exp}}\left[ { - T^2\left( {E_{{\mathrm{laser}}} - \Omega _{\mathrm{\alpha }}} \right)^2} \right]$$. Here fα represents the normalized oscillator strength for state α, and Elaser represents the energy of a laser pulse centered at the maximum of the absorption spectrum of a given molecule. The excitation energy width is given by the transform-limited relation for a Gaussian pulse with a full width at half maximum (FWHM) of 100 fs, giving a value of T = 42.5 fs. Using gα, the initial excited state for each equilibrated structure was determined.
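A minimal sketch of this state-selection step (the oscillator strengths and state energies in the test are invented, and the conversion of eV energies to frequencies via ħ is our unit-handling assumption, not necessarily the NEXMD convention):

```python
import numpy as np

HBAR_EV_FS = 0.6582  # hbar in eV*fs, used to make the exponent dimensionless

def fc_window(f, omega, e_laser, T=42.5):
    """Franck-Condon weights g_a = f_a * exp[-T^2 (E_laser - Omega_a)^2].

    f      : normalized oscillator strengths per excited state
    omega  : excited-state energies (eV)
    e_laser: laser pulse energy (eV); T in fs (from the 100 fs FWHM pulse)
    """
    d = (e_laser - np.asarray(omega)) / HBAR_EV_FS  # eV -> angular frequency
    return np.asarray(f) * np.exp(-(T ** 2) * d ** 2)

def pick_initial_state(f, omega, e_laser, rng):
    """Sample the initial excited state with probability proportional to g_a."""
    g = fc_window(f, omega, e_laser)
    return int(rng.choice(len(g), p=g / g.sum()))
```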
Ten electronic excited states and their corresponding non-adiabatic couplings have been considered during NEXMD simulations for all systems. In agreement with previous numerical tests, 400 trajectories are found to be sufficient to achieve statistical convergence. A classical time step of 0.1 fs has been used for nuclear propagation and a quantum time step of 0.025 fs has been used to propagate the electronic degrees of freedom. Empirical corrections were introduced to account for electronic decoherence72 and trivial unavoided crossings were diagnosed by tracking the identities of states73. The coherent vibronic dynamics observed in the present systems occur after the final effective hop to the lowest energy state and are therefore not an artifact of the decoherence model employed here72. Upon transition, the system decoheres instantaneously and moves independently on the lower surface with electron-vibrational coherent dynamics. In fact, the observed dynamics remains roughly the same whether or not decoherence corrections are employed on top of the original FSSH method. These corrections primarily affect the relaxation timescales and eliminate numerical inconsistencies from the original FSSH74. More details concerning the NEXMD implementation and parameters can be found elsewhere33,72,73,75.
### Analysis of electronic transition density
During the NEXMD simulations, the electronic energy redistribution is monitored by computing the time-dependent localization of the electronic transition density, whose diagonal elements $$(\rho^{\mathrm{g\alpha}})_{nn}$$ (the index n refers to atomic orbital (AO) basis functions) represent the changes in the distribution of the electronic density induced by photoexcitation from the ground state g to an excited electronic state α76. The orbital representation of the transition density is convenient for the analysis of excited state properties. For example, natural transition orbitals (NTOs)77 enable the analysis of electron-hole separation in excitonic wavefunctions and charge transfer states by representing the electronic transition density matrix as essential pairs of particle and hole orbitals. Similarly, the orbital representation of the diagonal elements of the transition density is beneficial for the analysis of the total spatial extent of the excited state wavefunction. By partitioning the molecular system into moieties and/or chromophore units, the fraction of transition density, $$\left( {\rho ^{\mathrm{g\alpha }}(t)} \right)_X^2$$, localized on each unit X at a given time can be obtained by summing the contributions of the AOs from each atom (index A) in X and, occasionally, contributions of the AOs from atoms localized on the boundary with another unit (index B)
$$(\rho ^{{\mathrm{g\alpha }}}({\mathit{t}}))_X^2 = \mathop {\sum}\limits_{n_Am_A} {(\rho _{n_Am_A}^{{\mathrm{g\alpha }}}({\mathit{t}}))^2} + \frac{1}{2}\mathop {\sum}\limits_{n_Bm_B} {(\rho _{n_Bm_B}^{{\mathrm{g\alpha }}}({\mathit{t}}))^2}$$
(1)
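Assuming the transition density matrix is available in the AO basis and every AO has been assigned to a fragment, Eq. (1) can be sketched as follows (the half-weighting implements the boundary term):

```python
import numpy as np

def fragment_fractions(rho, frag_of_ao):
    """Fraction of the squared transition density on each fragment, Eq. (1).

    rho        : transition density matrix in the AO basis
    frag_of_ao : fragment index assigned to each AO
    Elements with both AOs on fragment X count fully toward X; cross-fragment
    ("boundary") elements are shared half-and-half between the two fragments.
    """
    frag_of_ao = np.asarray(frag_of_ao)
    sq = np.asarray(rho, dtype=float) ** 2
    w = np.zeros(frag_of_ao.max() + 1)
    for X in range(len(w)):
        on_X = frag_of_ao == X
        w[X] = sq[np.ix_(on_X, on_X)].sum()              # both AOs on X
        w[X] += 0.5 * sq[np.ix_(on_X, ~on_X)].sum()      # boundary rows
        w[X] += 0.5 * sq[np.ix_(~on_X, on_X)].sum()      # boundary columns
    return w / w.sum()
```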
### Analysis of bond length alternation
Molecular conformations during NEXMD simulations are analyzed by following the bond-length alternation (BLA). BLA and torsions (dihedral angles) represent the nuclear motions that are strongly coupled to the electronic degrees of freedom. BLA provides a convenient vibrational descriptor that reflects the inhomogeneity in the distribution of electrons along the π-conjugated molecule and it is generally defined as a difference between single and double bond lengths along the conjugated chain
$${\mathrm{BLA}} = d_1 - d_2 \cdot \frac{2}{3} - d_3 \cdot \frac{1}{3},$$
(2)
where d1, d2, and d3 are consecutive bond lengths in the conjugated system. Smaller values of BLA are associated with better π-conjugation and, therefore, an enhancement of the electronic delocalization78,79. Torsions are typically slower motions than C–C stretches. Here, the torsional motion of interest refers to the inter-ring dihedral angle, which indicates how rotated a phenyl ring is with respect to a neighboring ring. The inter-ring dihedral angle modulates π-electron delocalization (large inter-ring dihedral angles can create conjugation breaks) and affects the molecular relaxation pathways.
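A sketch of Eq. (2) evaluated along a chain of consecutive bond lengths (the bond-length values in the test are just typical single/double C–C distances, not from the simulations):

```python
import numpy as np

def bla_along_chain(bonds):
    """Eq. (2) evaluated for every consecutive bond triple (d1, d2, d3)
    along a conjugated chain: BLA = d1 - (2/3)*d2 - (1/3)*d3.
    `bonds` is a 1D sequence of consecutive C-C bond lengths (Angstrom)."""
    b = np.asarray(bonds, dtype=float)
    return b[:-2] - (2.0 / 3.0) * b[1:-1] - (1.0 / 3.0) * b[2:]
```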
### Data availability
All relevant data are available from the authors upon request.
Here is a list of codes related to the Kitaev surface code.
Code Description
Clifford-deformed surface code (CDSC) A generally non-CSS derivative of the surface code defined by applying a constant-depth Clifford circuit to the original (CSS) surface code. Unlike the surface code, CDSCs include codes whose thresholds and subthreshold performance are enhanced under noise biased towards dephasing. Examples of CDSCs include the XY code, XZZX code, and random CDSCs.
Color code A family of abelian topological CSS stabilizer codes defined on a $$D$$-dimensional lattice which satisfies two properties: The lattice is (1) a homogeneous simplicial $$D$$-complex obtained as a triangulation of the interior of a $$D$$-simplex and (2) is $$D+1$$-colorable. Qubits are placed on the $$D$$-simplices and generators are supported on suitable simplices [1]. For 2-dimensional color code, the lattice must be such that it is 3-valent and has 3-colorable faces, such as a honeycomb lattice. The qubits are placed on the vertices and two stabilizer generators are placed on each face [2].
Double-semion code Stub.
Fractal surface code Kitaev surface code on a fractal geometry, which is obtained by removing qubits from the surface code on a cubic lattice. Stub.
Freedman-Meyer-Luo code Hyperbolic surface code constructed using cellulation of a Riemannian Manifold $$M$$ exhibiting systolic freedom [3]. Codes derived from such manifolds can achieve distances scaling better than $$\sqrt{n}$$, something that is impossible using closed 2D surfaces or 2D surfaces with boundaries [4]. Improved codes are obtained by studying a weak family of Riemann metrics on closed 4-dimensional manifolds $$S^2\otimes S^2$$ with the $$Z_2$$-homology.
Galois-qudit topological code Abelian topological code, such as a surface [5] or color [6] code, constructed on lattices of Galois qudits.
Golden code Variant of the Guth-Lubotzky hyperbolic surface code that uses regular tessellations for 4-dimensional hyperbolic space.
Guth-Lubotzky code Hyperbolic surface code based on cellulations of certain four-dimensional manifolds. The manifolds are shown to have good homology and systolic properties for the purposes of code construction, with corresponding codes exhibiting linear rate.
Haah cubic code Class of stabilizer codes on a length-$$L$$ cubic lattice with one or two qubits per site. We also require that the stabilizer group $$\mathsf{S}$$ is translation invariant and generated by two types of operators with support on a cube. In the non-CSS case, these two are related by spatial inversion. For CSS codes, we require that the product of all corner operators is the identity. We lastly require that there are no non-trivial ''string operators'', meaning that single-site operators are a phase, and any period-one logical operator $$l \in \mathsf{S}^{\perp}$$ is just a phase. Haah showed in his original construction that there is exactly one non-CSS code of this form, and 17 CSS codes [7]. The non-CSS code is labeled code 0, and the rest are numbered from 1 to 17. Codes 1-4, 7, 8, and 10 do not have string logical operators [7][8].
Heavy-hexagon code Subsystem stabilizer code on the heavy-hexagonal lattice that combines Bacon-Shor and surface-code stabilizers. Encodes one logical qubit into $$n=(5d^2-2d-1)/2$$ physical qubits with distance $$d$$. The heavy-hexagonal lattice allows for low degree (at most 3) connectivity between all the data and ancilla qubits, which is suitable for fixed-frequency transmon qubits subject to frequency collision errors.
Hemicubic code Stub.
Higher-dimensional surface code A family of Kitaev surface codes on planar or toric surfaces of dimension greater than two. Stub.
Honeycomb code Floquet code inspired by the Kitaev honeycomb model [9] whose logical qubits are generated through a particular sequence of measurements.
Hyperbolic surface code An extension of the Kitaev surface code construction to hyperbolic manifolds in dimension two or greater. Given a cellulation of a manifold, qubits are put on $$i$$-dimensional faces, $$X$$-type stabilizers are associated with $$(i-1)$$-faces, while $$Z$$-type stabilizers are associated with $$(i+1)$$-faces.
Hypergraph product code A family of $$[[n,k,d]]$$ CSS codes whose construction is based on two binary linear seed codes $$C_1$$ and $$C_2$$.
Hypersphere product code Stub.
Kitaev surface code A family of abelian topological CSS stabilizer codes whose generators are few-body $$X$$-type and $$Z$$-type Pauli strings associated to the stars and plaquettes, respectively, of a cellulation of a two-dimensional surface (with a qubit located at each edge of the cellulation). Toric code often either refers to the construction on the two-dimensional torus or is an alternative name for the general construction. The construction on surfaces with boundaries is often called the planar code [10].
Lifted-product (LP) code Also called a Panteleev-Kalachev (PK) code. Code that utilizes the notion of a lifted product in its construction. Lifted products of certain classical Tanner codes are the first (asymptotically) good QLDPC codes.
Majorana stabilizer code Majorana fermion stabilizer codes are stabilizer codes whose stabilizers are products of an even number of Majorana fermion operators, analogous to Pauli strings for a traditional stabilizer code and referred to as Majorana stabilizers. The codespace is the mutual $$+1$$ eigenspace of all Majorana stabilizers. In such systems, Majorana fermions may either be considered individually or paired into creation and annihilation operators for fermionic modes. Codes can be denoted as $$[[n,k,d]]_{f}$$ [11], where $$n$$ is the number of fermionic modes.
Modular-qudit surface code A family of stabilizer codes whose generators are few-body $$X$$-type and $$Z$$-type Pauli strings associated to the stars and plaquettes, respectively, of a tessellation of a two-dimensional surface (with a qudit located at each edge of the tesselation). The code has $$n=E$$ many physical qudits, where $$E$$ is the number of edges of the tesselation, and $$k=2g$$ many logical qudits, where $$g$$ is the genus of the surface.
Projective-plane surface code A family of Kitaev surface codes on the non-orientable 2-dimensional compact manifold $$\mathbb{R}P^2$$ (in contrast to a genus-$$g$$ surface). Whereas genus-$$g$$ surface codes require $$2g$$ logical qubits, qubit codes on $$\mathbb{R}P^2$$ are made from a single logical qubit.
Quantum-double code A family of topological codes, defined by a finite group $$G$$, whose generators are few-body operators associated to the stars and plaquettes, respectively, of a tessellation of a two-dimensional surface (with a qudit of dimension $$|G|$$ located at each edge of the tesselation).
Raussendorf-Bravyi-Harrington (RBH) code Stub. (see Sec. III E of [12])
Rotated surface code Also called a checkerboard code. CSS variant of the surface code defined on a square lattice that has been rotated 45 degrees such that qubits are on vertices, and both $$X$$- and $$Z$$-type check operators occupy plaquettes in an alternating checkerboard pattern.
Solid code A variant of Kitaev's surface code on a 3D lattice.
String-net code Also called a Turaev-Viro or Levin-Wen model code. A family of topological codes, defined by a finite unitary spherical category $$\mathcal{C}$$, whose generators are few-body operators acting on a cell decomposition dual to a triangulation of a two-dimensional surface (with a qudit of dimension $$|\mathcal{C}|$$ located at each edge of the decomposition).
Surface-17 code A $$[[9,1,3]]$$ rotated surface code named for the sum of its 9 data qubits and 8 syndrome qubits. It uses the smallest number of qubits to perform error correction on a surface code with parallel syndrome extraction.
Translationally-invariant stabilizer code A geometrically local qubit or qudit stabilizer code with qudits organized on a lattice modeled by the additive group $$\mathbb{Z}^D$$ for spatial dimension $$D$$ such that each lattice point, referred to as a site, contains $$m$$ qudits of dimension $$q$$. The stabilizer group of the translationally invariant code is generated by site-local Pauli operators and their translations.
Two-dimensional hyperbolic surface code Hyperbolic surface codes based on a tessellation of a closed 2D manifold with a hyperbolic geometry (i.e., non-Euclidean geometry, e.g., saddle surfaces when defined on a 2D plane).
XY surface code Non-CSS derivative of the surface code whose generators are $$XXXX$$ and $$YYYY$$, obtained by mapping $$Z \to Y$$ in the surface code.
XZZX surface code Non-CSS variant of the rotated surface code whose generators are $$XZXZ$$ Pauli strings associated, clockwise, to the vertices of each face of a two-dimensional lattice (with a qubit located at each vertex of the lattice).
$$[[4,2,2]]$$ CSS code Also known as the $$C_4$$ code. Four-qubit CSS stabilizer code with generators $$\{XXXX, ZZZZ\}$$ and codewords \begin{align} \begin{split} |\overline{00}\rangle = (|0000\rangle + |1111\rangle)/\sqrt{2}~{\phantom{.}}\\ |\overline{01}\rangle = (|0011\rangle + |1100\rangle)/\sqrt{2}~{\phantom{.}}\\ |\overline{10}\rangle = (|0101\rangle + |1010\rangle)/\sqrt{2}~{\phantom{.}}\\ |\overline{11}\rangle = (|0110\rangle + |1001\rangle)/\sqrt{2}~. \end{split} \end{align} This code is the smallest single-qubit error-detecting code. It is also the smallest instance of the toric code, and its various single-qubit subcodes are small planar surface codes.
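At four qubits the stabilizer conditions above are easy to verify numerically with dense matrices. The NumPy sketch below (names and the dense-matrix approach are illustrative, not part of any standard API) checks that each codeword is a $$+1$$ eigenstate of both generators and that a sample single-qubit error is detected:

```python
import numpy as np
from functools import reduce

# Pauli matrices and a helper for 4-qubit tensor products.
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def kron(ops):
    return reduce(np.kron, ops)

XXXX = kron([X, X, X, X])
ZZZZ = kron([Z, Z, Z, Z])

def ket(bits):
    """Computational-basis state |bits> on 4 qubits."""
    v = np.zeros(16)
    v[int(bits, 2)] = 1.0
    return v

codewords = {
    "00": (ket("0000") + ket("1111")) / np.sqrt(2),
    "01": (ket("0011") + ket("1100")) / np.sqrt(2),
    "10": (ket("0101") + ket("1010")) / np.sqrt(2),
    "11": (ket("0110") + ket("1001")) / np.sqrt(2),
}

# Every codeword is a +1 eigenstate of both stabilizer generators.
for psi in codewords.values():
    assert np.allclose(XXXX @ psi, psi)
    assert np.allclose(ZZZZ @ psi, psi)

# A single-qubit error anticommutes with at least one generator and is
# therefore detected: here a Z error on qubit 0 flips the XXXX syndrome.
Z0 = kron([Z, I2, I2, I2])
err = Z0 @ codewords["00"]
assert np.allclose(XXXX @ err, -err)
```

Distance 2 shows up in exactly this way: every weight-1 Pauli error anticommutes with a generator (so it is detected), while some weight-2 errors commute with both and act within the codespace.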
## References
[1]
A. M. Kubica, The ABCs of the Color Code: A Study of Topological Quantum Codes as Toy Models for Fault-Tolerant Quantum Computation and Quantum Phases of Matter, California Institute of Technology, 2018. DOI
[2]
H. Bombin, “An Introduction to Topological Quantum Codes”. 1311.0277
[3]
M. H. Freedman, “$$\mathbb{Z}_2$$–Systolic-Freedom”, Proceedings of the Kirbyfest (1999). DOI
[4]
E. Fetaya, “Bounding the distance of quantum surface codes”, Journal of Mathematical Physics 53, 062202 (2012). DOI
[5]
I. Andriyanova, D. Maurice, and J.-P. Tillich, “New constructions of CSS codes obtained by moving to higher alphabets”. 1202.3338
[6]
P. Sarvepalli, “Topological color codes over higher alphabet”, 2010 IEEE Information Theory Workshop (2010). DOI
[7]
J. Haah, “Local stabilizer codes in three dimensions without string logical operators”, Physical Review A 83, (2011). DOI; 1101.1962
[8]
A. Dua et al., “Sorting topological stabilizer models in three dimensions”, Physical Review B 100, (2019). DOI; 1908.08049
[9]
A. Kitaev, “Anyons in an exactly solved model and beyond”, Annals of Physics 321, 2 (2006). DOI; cond-mat/0506438
[10]
S. B. Bravyi and A. Yu. Kitaev, “Quantum codes on a lattice with boundary”. quant-ph/9811052
[11]
S. Vijay and L. Fu, “Quantum Error Correction for Complex and Majorana Fermion Qubits”. 1703.00459
[12]
S. Roberts and S. D. Bartlett, “Symmetry-Protected Self-Correcting Quantum Memories”, Physical Review X 10, (2020). DOI; 1805.01474
Kaldor and Kalecki on the Theory of Distribution
Kaldor presents his analysis of distribution as a Keynesian theory. His work is inspired by Keynes's contributions in A Treatise on Money and by Kalecki, and it focuses on the relation between distribution and macroeconomic performance, building on (and debating with) Michal Kalecki's pricing and distribution theory. For context: the most celebrated microeconomic theory of distribution is the marginal productivity theory, developed by J.B. Clark in 1899 and then modified by Philip Wicksteed, while the two main macroeconomic theories are the classical (Ricardian) theory and the Cambridge (Kaldor) theory.

The heart of Kaldor's theory lies in his demonstration "that shift in the distribution of income is essential to bring about the higher-saving income ratio, which is the necessary condition for a continued full employment equilibrium with a higher absolute level of investment in real terms." Unlike Kalecki, Kaldor argues that it is not reasonable to neglect the constraint of labour shortage, and he analyses a situation of full employment: his "Keynesian" theory of distribution is restricted to full-employment situations, while Kalecki's is not. However, while Keynes and Kalecki develop analyses of the short period, Kaldor studies a long-period equilibrium, so that the mechanism on which the adjustment is based, the flexibility of profit margins, is arguably inappropriate. Several economists, such as Meade (1961) and, later, Nell (1982), have accordingly argued that, at least for a long-run model, Kaldor's theory has a rather poor price-adjustment mechanism, or even that "Mr. Kaldor's theory of distribution is more appropriate for the explanation of short-run inflation than of long-run growth."

Pasinetti boosted the debate by suggesting that Kaldor's article rests on a logical slip, and that the correction of this error shows that the rate of profit depends only on the natural growth rate of the economy and on the capitalists' propensity to save. His thesis seems debatable, however: the idea that the saving function proposed by Kaldor is logically inconsistent is unfounded. Moreover, the crucial hypothesis on which Pasinetti's reasoning rests, the existence of a class of individuals who earn only profits, hardly characterizes in a relevant way the economic systems which prevail in advanced economies.

Kalecki's own macroeconomics is notable for having been the first to be built, unlike Keynes's but like contemporary New Keynesian models, in an imperfectly competitive framework, and at the same time for linking the theory of distribution, on the one side, and the theory of income determination, on the other. His interest in economics was more practical than academic, resulting from his work in engineering, journalism, credit investigation, the use of statistics, and the observation of business operations.
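A compact statement of Kaldor's mechanism may help here. Under the standard textbook assumptions (not all spelled out in the fragments above): income splits into wages and profits, $$Y = W + P$$, the savings propensities satisfy $$s_p > s_w$$, and investment equals saving at full employment. Then

$$I = S = s_w W + s_p P = s_w Y + (s_p - s_w)P \quad\Longrightarrow\quad \frac{P}{Y} = \frac{1}{s_p - s_w}\,\frac{I}{Y} - \frac{s_w}{s_p - s_w}.$$

The profit share, and with it the distribution of income, adjusts to whatever the investment share $$I/Y$$ happens to be: this is exactly the "shift in the distribution of income" that delivers the higher-saving income ratio referred to above.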
Income distribution also plays a central role in trade-cycle theory. Among the cycle theories developed during the thirties, Kalecki's 1939 theory is based on a linear saving function, while Kaldor's 1940 model rests on a non-linear one: Kaldor suggests that the treatment of savings and investment as linear curves simply does not correspond to empirical reality. Combining ideas proposed by Kaldor and Kalecki leads to a non-linear, time-delayed model for business cycle dynamics. While Kalecki's model reduces to one differential equation with delay describing capital formation, Kaldor's original idea is to study the joint evolution of production and capital formation. Based on Kaldor's idea of introducing non-linear functional forms and Kalecki's idea of introducing time lags, a Kaldor–Kalecki type model was proposed:

$$\begin{cases} \frac{dY}{dt} = \alpha \left[I(Y,K)-S(Y,K)\right], \\ \frac{dK}{dt} = I(Y(t-T),K)-\delta K. \end{cases}$$

In the analysis of such models it is common to assume that the time delay varies continuously, and hence it is treated as a bifurcation parameter. The existence and stability of periodic solutions in the Kaldor–Kalecki model with investment delay have been established [8,9]: the time delay can give rise to a Hopf bifurcation when it passes a critical value. The dynamics of the Kaldor–Kalecki business cycle model with diffusion effect and time delay under Neumann boundary conditions have also been investigated.
This document follows the basic outline of the Java Programming Conventions Guide, a copy of which may be found at http://geosoft.no/javastyle.html.
Widget authors are expected to adhere to this style guide and also to the Dojo Accessibility Design Requirements guidelines.
## General
Any violation of this guide is allowed if it enhances readability.
Guidelines in this document are informed by discussions carried out among the Dojo core developers. The most weight has been given to considerations that impact external developer interaction with Dojo code and APIs. Rules such as whitespace placement are of a much lower order importance for Dojo developers, but should be followed in the main in order to improve developer coordination.
## Quick Reference
Table of core API naming constructs:
| Construct | Convention | Comment |
| --- | --- | --- |
| module | lowercase | never multiple words |
| class | CamelCase | |
| public method | mixedCase | whether class or instance method; `lower_case()` is acceptable only if the particular function is mimicking another API |
| public var | mixedCase | |
| constant | CamelCase or UPPER_CASE | |
Table of constructs that are not visible in the API, and therefore carry less weight of enforcement.
| Construct | Convention |
| --- | --- |
| private method | _mixedCase |
| private var | _mixedCase |
| method args | _mixedCase, mixedCase |
| local vars | _mixedCase, mixedCase |
## Naming Conventions
1. When constructing string IDs or ID prefixes in the code, do not use "dojo", "dijit" or "dojox" in the names. Because we now allow multiple versions of dojo in a page, it is important you use _scopeName instead (dojo._scopeName, dijit._scopeName, dojox._scopeName).
2. Names representing modules SHOULD be in all lower case.
3. Names representing types (classes) MUST be nouns and written using CamelCase capitalization:
```javascript
Account, EventHandler
```
4. Constants SHOULD be placed within a single object created as a holder for constants, emulating an Enum; the enum SHOULD be named appropriately, and members SHOULD be named using either CamelCase or UPPER_CASE capitalization:
```javascript
var NodeTypes = {
    Element: 1,
    DOCUMENT: 2
};
```
5. Abbreviations and acronyms SHOULD NOT be UPPERCASE when used as a name:
```javascript
getInnerHtml(), getXml(), XmlDocument
```
6. Names representing methods SHOULD be verbs or verb phrases:
```javascript
obj.getSomeValue()
```
7. Public class variables MUST be written using mixedCase capitalization.
8. CSS variable names SHOULD follow the same conventions as public class variables.
9. Private class variables MAY be written using _mixedCase (with preceding underscore):
```javascript
var MyClass = function(){
    var _buffer;
    this.doSomething = function(){
    };
}
```
10. Variables that are intended to be private, but are not closure bound, SHOULD be prepended with a "_" (underscore) char:
```javascript
this._somePrivateVariable = statement;
```
Note: the above variable also follows the convention for a private variable.
11. Generic variables SHOULD have the same name as their type:
```javascript
setTopic(topic) // where topic is of type Topic
```
12. All names SHOULD be written in English.
13. Variables with a large scope SHOULD have globally unambiguous names; ambiguity MAY be distinguished by module membership. Variables with small or private scope MAY have terse names.
14. The name of the return object is implicit, and SHOULD be avoided in a method name:
```javascript
getHandler(); // NOT getEventHandler()
```
15. Public names SHOULD be as clear as necessary and SHOULD avoid unclear shortenings and contractions:
```javascript
MouseEventHandler // NOT MseEvtHdlr
```
Note that, again, any context that can be determined by module membership SHOULD be used when determining if a variable name is clear. For example, a class that represents a mouse event handler:
```javascript
dojo.events.mouse.Handler // NOT dojo.events.mouse.MouseEventHandler
```
16. Classes/constructors MAY be named based on their inheritance pattern, with the base class to the right of the name:
```javascript
EventHandler
UIEventHandler
MouseEventHandler
```
The base class CAN be dropped from a name if it is obviously implicit in the name:
```javascript
MouseEventHandler // as opposed to MouseUIEventHandler
```
## Specific Naming Conventions
1. The terms get/set SHOULD NOT be used where a field is accessed, unless the variable being accessed is lexically private.
2. The "is" prefix SHOULD be used for boolean variables and methods. Alternatives include "has", "can" and "should".
3. The term "compute" CAN be used in methods where something is computed.
4. The term "find" CAN be used in methods where something is looked up.
5. The terms "initialize" or "init" CAN be used where an object or a concept is established.
6. UI Control variables SHOULD be suffixed by the control type. Examples: leftComboBox, topScrollPane
7. Plural form MUST be used to name collections.
8. A "num" prefix or "count" postfix SHOULD be used for variables representing a number of objects.
9. Iterator variables SHOULD be called "i", "j", "k", etc.
10. Complement names MUST be used for complement entities. Examples: get/set, add/remove, create/destroy, start/stop, insert/delete, begin/end, etc.
11. Abbreviations in names SHOULD be avoided.
12. Negated boolean variable names MUST be avoided:
```javascript
isNotError, isNotFound // unacceptable
```
13. Exception classes SHOULD be suffixed with "Exception" or "Error" .. FIXME (trt) not sure about this?
14. Methods returning an object MAY be named after what they return, and methods returning void after what they do.
## Files
1. Class or object-per-file guidelines are not yet determined.
2. Tabs (set to 4 spaces) SHOULD be used for indentation.
3. If your editor supports "file tags", please append the appropriate tag at the end of the file to enable others to effortlessly obey the correct indentation guidelines for that file:
```javascript
// vim:ts=4:noet:tw=0:
```
4. The incompleteness of a split line MUST be made obvious:

```javascript
var someExpression = Expression1
    + Expression2
    + Expression3;

var o = someObject.get(
    Expression1,
    Expression2,
    Expression3
);
```
Note the indentation for expression continuation is indented relative to the variable name, while indentation for parameters is relative to the method being called.
Note also the position of the parenthesis in the method call; positioning SHOULD be similar to the use of block notation.
## Variables
1. Variables SHOULD be initialized where they are declared and they SHOULD be declared in the smallest scope possible. A null initialization is acceptable.
2. Variables MUST never have a dual meaning.
3. Related variables of the same type CAN be declared in a common statement; unrelated variables SHOULD NOT be declared in the same statement.
4. Variables SHOULD be kept alive for as short a time as possible.
5. Loops / iterative declarations
1. Only loop control statements MUST be included in the "for" loop construction.
2. Loop variables SHOULD be initialized immediately before the loop; loop variables in a "for" statement MAY be initialized in the "for" loop construction.
3. The use of "do...while" loops is acceptable (unlike in Java).
4. The use of "break" and "continue" is not discouraged (unlike in Java).
6. Conditionals
1. Complex conditional expressions SHOULD be avoided; use temporary boolean variables instead.
2. The nominal case SHOULD be put in the "if" part and the exception in the "else" part of an "if" statement.
3. Executable statements in conditionals MUST be avoided.
7. Miscellaneous
1. The use of magic numbers in the code SHOULD be avoided; they SHOULD be declared using named "constants" instead.
2. Floating point constants SHOULD ALWAYS be written with decimal point and at least one decimal.
3. Floating point constants SHOULD ALWAYS be written with a digit before the decimal point.
## Layout
1. Block statements.
1. Block layout SHOULD be as illustrated below:

```javascript
while(!isDone){
    doSomething();
    isDone = moreToDo();
}
```
2. if statements SHOULD have the following form:
```javascript
if(someCondition){
    statements;
}else if(someOtherCondition){
    statements;
}else{
    statements;
}
```
3. for statements SHOULD have the following form:
```javascript
for(initialization; condition; update){
    statements;
}
```
4. while statements SHOULD have the following form:

```javascript
while(!isDone){
    doSomething();
    isDone = moreToDo();
}
```
5. do...while statements SHOULD have the following form:

```javascript
do{
    statements;
}while(condition);
```
6. switch statements SHOULD have the following form:

```javascript
switch(condition){
case ABC:
    statements;
    // fallthrough
case DEF:
    statements;
    break;
default:
    statements;
    // no break keyword on the last case -- it's redundant
}
```
7. try...catch...finally statements SHOULD have the following form:

```javascript
try{
    statements;
}catch(ex){
    statements;
}finally{
    statements;
}
```
8. A single statement if-else, while or for MUST NOT be written without brackets, but CAN be written on the same line:

```javascript
if(condition){ statement; }
while(condition){ statement; }
for(initialization; condition; update){ statement; }
```
2. Whitespace
1. Conventional operators MAY be surrounded by a space (including ternary operators).
2. The following reserved words SHOULD NOT be followed by a space:
• break
• catch
• continue
• do
• else
• finally
• for
• function if anonymous, ex. var foo = function(){};
• if
• return
• switch
• this
• try
• void
• while
• with
3. The following reserved words SHOULD be followed by a space:
• case
• default
• delete
• function if named, ex. function foo(){};
• in
• instanceof
• new
• throw
• typeof
• var
4. Commas SHOULD be followed by a space.
5. Colons MAY be surrounded by a space.
6. Semi-colons in for statements SHOULD be followed by a space.
7. Semi-colons SHOULD NOT be preceded by a space.
8. Function calls and method calls SHOULD NOT be followed by a space. Example: doSomething(someParameter); // NOT doSomething (someParameter)
9. Logical units within a block SHOULD be separated by one blank line.
10. Statements MAY be aligned wherever this enhances readability.
## Comments

1. Tricky code SHOULD NOT be commented, but rewritten.
2. All comments SHOULD be written in English.
3. Comments SHOULD be indented relative to their position in the code, preceding or to the right of the code in question.
4. The declaration of collection variables SHOULD be followed by a comment stating the common type of the elements in the collection.
5. Comments SHOULD be included to explain BLOCKS of code, to explain the point of the following block.
6. Comments SHOULD NOT be included for every single line of code.
## Markup Guidelines
#### Using a Key
When parsing a comment block, we give the parser a list of "keys" to look for. These include summary, description, and returns, but many comment blocks will also have all of the variables and parameters in the object or function added to this list of keys as well.
If any of these keys occur at the beginning of a line, the parser will start reading the text following it and save it as part of that key until we find a completely blank line, or another key. This means that you should be careful about what word you use to start a line. For example, "summary" shouldn't start a line unless the content that follows is the summary.
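The key-scanning behavior described above can be sketched as a small routine. This is a Python sketch for illustration only (the real doc parser is not shown here, and the function name `collect_keys` is hypothetical): a known key at the beginning of a line opens that key, and following text accumulates under it until a completely blank line or another key appears.

```python
def collect_keys(lines, keys):
    """Accumulate text under known keys, as described above:
    a key at the start of a line opens that key; text is saved under it
    until a completely blank line or another key is found."""
    result = {}
    current = None
    for line in lines:
        stripped = line.strip()
        # a key "occurs at the beginning of a line", e.g. "summary: ..."
        head = stripped.split(":", 1)[0].strip()
        if head in keys:
            current = head
            rest = stripped.split(":", 1)[1].strip() if ":" in stripped else ""
            result[current] = [rest] if rest else []
        elif not stripped:
            current = None  # a completely blank line ends the current key
        elif current is not None:
            result[current].append(stripped)
    return {k: " ".join(v) for k, v in result.items()}
```

This also shows why you should be careful about starting a line with a word like "summary" unless the content that follows really is the summary.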
#### Using Markdown
The Markdown syntax is used in descriptions and code examples.
In Markdown, to indicate a code block, indent it using either four spaces or a single tab. The parser treats the | (pipe) character as the start of a line, so you must use | followed by a tab or four spaces to indicate a code block.
In Markdown, to indicate an inline piece of code, surround the code with backticks, e.g. `<div>`.
## General Information
These keys provide descriptions for the function or object:
• summary: A short statement of the purpose of the function or object. Will be read in plain text (html entity escaped, Markdown only for code escaping)
• description: A complete description of the function or object. Will appear in place of summary. (uses Markdown)
• tags: A list of whitespace-separated tags used to indicate how the methods are to be used (see explanations below)
• returns: A description of what the function returns (does not include a type, which should appear within the function)
• example: A writeup of an example. Uses Markdown syntax, so use Markdown syntax to indicate code blocks from any normal text. This key can occur multiple times.
## Tags
Methods are assumed to be public, but are considered protected by default if they start with a _prefix. This means that the only time you'd use protected is if you don't want someone to use a function without a _prefix, and the only time you'd use private is if you don't want someone to touch your method at all.
• protected: The method can be called or overridden by subclasses but should not be accessed (directly) by a user. For example:
```javascript
postCreate: function(){
    // summary:
    //      Called after a widget's dom has been setup
    // tags:
    //      protected
},
```
• private: The method or property is not intended for use by anything other than the class itself. For example:
```javascript
_attrToDom: function(/*String*/ attr, /*String*/ value){
    // summary:
    //      Reflect a widget attribute (title, tabIndex, duration etc.) to
    //      the widget DOM, as specified in attributeMap.
    // tags:
    //      private
    ...
}
```
## Method-Specific Tags
• callback: This method represents a location that a user can connect to (i.e. using dojo.connect) to receive notification that some event happened, such as a user clicking a button or an animation completing. For example:
```javascript
onClick: function(){
    // summary:
    //      Called when the user clicks the widget
    // tags:
    //      callback
    ...
}
```
• extension: Unlike a normal protected method, we mark a function as an extension if the default functionality isn't how we want the method to ultimately behave. This is for things like lifecycle methods (e.g. postCreate) or methods where a subclass is expected to change some basic default functionality (e.g. buildRendering). A callback is just a notification that some event happened, an extension is where the widget code is expecting a method to return a value or perform some action. For example, on a calendar:
```javascript
isDisabledDate: function(date){
    // summary:
    //      Return true if the specified date should be disabled (i.e. grayed out and unclickable)
    // description:
    //      Override this method to define special days to gray out, such as weekends
    //      or (for an airline) black-out days when discount fares aren't available.
    // tags:
    //      extension
    ...
}
```
#### General Function Information
```javascript
Foo = function(){
    // summary: Soon we will have enough treasure to rule all of New Jersey.
    // description: Or we could just get a new roommate.
    //      Look, you go find him. He don't yell at you.
    //      All I ever try to do is make him smile and sing around
    //      him and dance around him and he just lays into me.
    //      He told me to get in the freezer 'cause there was a carnival in there.
    // returns: Look, a Bananarama tape!
}
Foo.prototype.onSomethingNoLongerApplicable = function(){
    // tags: callback deprecated
}
```
#### Object Information
Has no description of what it returns
```javascript
var mcChris = {
    // summary: Dingle, engage the rainbow machine!
    // description:
    //      Tell you what, I wish I was--oh my g--that beam,
    //      coming up like that, the speed, you might wanna adjust that.
    //      It really did a number on my back, there. I mean, and I don't
    //      wanna say whiplash, just yet, cause that's a little too far,
    //      but, you're insured, right?
}
```
#### Function Assembler Information (defineWidget/declare)
If the declaration passes a constructor, the summary and description must be filled in there. If you do not pass a constructor, the comment block can be created in the passed mixins object.
For example:
```javascript
dojo.declare(
    "steve",
    null,
    {
        // summary:
        //      Phew, this sure is relaxing, Frylock.
        // description:
        //      Thousands of years ago, before the dawn of
        //      man as we knew him, there was Sir Santa of Claus: an
        //      ape-like creature making crude and pointless toys out
        //      of dino-bones, hurling them at chimp-like creatures with
        //      crinkled hands regardless of how they behaved the
        //      previous year.
        // returns:
        //      Unless Carl pays tribute to the Elfin Elders in space.
    }
);
```
## Parameters
#### Simple Types
Types should (but don't have to) appear in the main parameter definition block.
For example:
```javascript
function(/*String*/ foo, /*int*/ bar)...
```
#### Type Modifiers
There are some modifiers you can add after the type:
• ? means optional
• ... means the last parameter repeats indefinitely
• [] means an array
```javascript
function(/*String?*/ foo, /*int...*/ bar, /*String[]?*/ baz)...
```
#### Full Parameter Summaries
If you want to also add a summary, you can do so in the initial comment block. If you've declared a type in the parameter definition, you do not need to redeclare it here.
The format for the general information is: `key` Descriptive sentence
The format for parameters and variables is: `key: type` Descriptive sentence
Where `key` and `type` can be surrounded by any non-alphanumeric characters.
```javascript
function(foo, bar){
    // foo: String
    //      used for being the first parameter
    // bar: int
    //      used for being the second parameter
}
```
## Variables
Instance variables, prototype variables and external variables can all be defined in the same way. There are many ways that a variable might get assigned to this function, and locating them all inside of the actual function they reference is the best way to not lose track of them, or accidentally comment them multiple times.
```javascript
function Foo(){
    // myString: String
    // times: int
    //      How many times to print myString
    // separator: String
    //      What to print out in between myStrings
    this.myString = "placeholder text";
    this.times = 5;
}
Foo.prototype.setString = function(myString){
    this.myString = myString;
}
Foo.prototype.toString = function(){
    for(var i = 0; i < this.times; i++){
        dojo.debug(this.myString);
        dojo.debug(Foo.separator);
    }
}
Foo.separator = "=====";
```
#### Tagging Variables
Variables can be tagged by placing them in a whitespace-separated format before the type value between [ and ] characters. The tags available for variables are the same as outlined in the main tags, plus a few variable-specific additions:
• const: A widget attribute that can be used for configuration, but can only have its value assigned during initialization. This means that changing this value on a widget instance (even with the attr method) will be a no-op.
```javascript
// id: [const] String
//      A unique, opaque ID string that can be assigned by users...
id: ""
```
• readonly: This property is intended to be read and cannot be specified during initialization, or changed after initialization.
```javascript
// domNode: [readonly] DomNode
//      This is our visible representation of the widget...
domNode: null
```
#### Variable Comments in an Object
The parser takes the comments in between object values and applies the same rules as if they were in the initial comment block:
```javascript
{
    // key: String
    //      A simple value
    key: "value",
    // key2: String
    //      Another simple value
}
```
## Return Value
Because a function can return multiple types, the types should be declared on the same line as the return statement, and the comment must be the last thing on the line. If all the return types are the same, the parser uses that return type. If they're different, the function is considered to return "mixed".
```javascript
function(){
    if(arguments.length){
        return "You passed argument(s)"; // String
    }else{
        return false; // Boolean
    }
}
```
Note: The return type should be on the same line as the return statement. The first example is invalid, the second is valid:
```javascript
// Invalid: the type comment is not on the same line as the return statement
function(){
    return {
        foo: "bar" // return Object
    }
}

// Valid
function(){
    return { // return Object
        foo: "bar"
    }
}
```
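The type-combination rule described above (one common type when every annotated return agrees, otherwise "mixed") can be sketched as follows. This Python fragment is an illustration only; the function name `infer_return_type` is hypothetical and the real parser is not shown:

```python
def infer_return_type(annotated_types):
    """Combine per-return-statement type comments into one return type:
    all the same -> that type; different -> "mixed"; none -> unknown."""
    distinct = set(annotated_types)
    if not distinct:
        return None  # no annotated return statements found
    if len(distinct) == 1:
        return distinct.pop()
    return "mixed"
```

For the example above, the two annotated returns are String and Boolean, so the function would be considered to return "mixed".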
## Documentation-Specific Code
Sometimes objects are constructed in a way that is hard to see from just looking through source. Or we might pass a generic object and want to let the user know what fields they can put in this object. In order to do this, there are two solutions:
#### Inline Commented-Out Code
There are some instances where you might want an object or function to appear in documentation, but not in Dojo, nor in your build. To do this, start a comment block with /*=====. The number of = can be 5 or more.
The parser simply replaces the /*===== and =====*/ with whitespace at the very start, so you must be very careful about your syntax.
```javascript
dojo.mixin(wwwizard, {
    /*=====
    // url: String
    //      The location of the file
    url: "",
    // mimeType: String
    //      text/html, text/xml, etc
    mimeType: "",
    =====*/
    // somethingElse: Boolean
    //      Put something else here
    somethingElse: "eskimo"
});
```
#### Code in a Separate File
Doing this allows us to see syntax highlighting in our text editor, and we can worry less about breaking the syntax of the file that's actually in the code-base during parsing. It's nothing more complicated than writing a normal JS file with a dojo.provide call.
The trade-off is that it's harder to maintain documentation-only files. It's a good idea to have only one of these per namespace depth (e.g., in the same directory as the file you're documenting). We'll see an example of its use in the next section.
## Documenting a kwArg
A lot of Dojo uses keyword-style arguments (kwArg). It's difficult to describe how to use them sometimes. One option is to provide a pseudo-object describing its behavior. So we'll create module/_arg.js and do the following:
```javascript
dojo.provide("module._arg");
module._arg.myFuncArgs = function(/*Object*/ kwArgs){
    // url: String
    //      Location of the thing to use
    // mimeType: String
    //      Mimetype to return data as
    this.url = kwArgs.url;
    this.mimeType = kwArgs.mimeType;
}
```
This describes a real object that mimics the functionality of the generic object you would normally pass, but also provides documentation of what fields it has and what they do.
To associate this object with the originating function, do this:
```javascript
var myFunc = function(/*module._arg.myFuncArgs*/ kwArgs){
    dojo.debug(kwArgs.url);
    dojo.debug(kwArgs.mimeType);
}
```
Since we didn't do a dojo.require on module._arg, it won't get included, but the documentation parser will still provide a link to it, allowing the user to see its functionality. This pseudo object may also be included inline using the /*===== =====*/ syntax. For an example of how to do this inline, see the "dojo.__FadeArgs" pseudo code in dojo/_base/fx.js, used to document dojo.fadeIn() and dojo.fadeOut().
## Which Documentation-Specific Syntax To Use
Documenting in another file reduces the chance that your code will break code parsing. It's a good idea from this perspective to use the separate file style as much as possible.
There are many situations where you can't do this, in which case you should use the inline-comment syntax. There is also a fear that people will forget to keep documentation in sync as they add new invisible mixed in fields. If this is a serious concern, you can also use the inline comment syntax.
## Using the Doctool locally
If you are a developer who has marked up code using this syntax and want to test that it is correct, you can run the doctool yourself locally. See INSTALL in util/jsdoc. There is also a tool to quickly view simple parsing results in util/docscripts/_browse.php.
posted on 2014-03-14 17:23 Adair
# Find equations of the osculating circles of the ellipse $9x^{2}+4y^{2}=36$ at the points $(2,0)$ and $(0,3)$. Use a graphing calculator or computer to graph the ellipse and both osculating circles on the same screen.

## Answer

The curvature at $(2,0)$ is $$\kappa=\frac{2}{9}$$
Vectors
Vector Functions
### Video Transcript
The problem is to find the equations of the osculating circles of the ellipse $9x^{2}+4y^{2}=36$ at the points $(2,0)$ and $(0,3)$, and to use a graphing calculator or computer to graph the ellipse and both osculating circles on the same screen.

First, compute the curvature. For a parametric curve, $\kappa(t)=\dfrac{|x'y''-y'x''|}{(x'^{2}+y'^{2})^{3/2}}$. Parametrize the ellipse as $x=2\cos t$, $y=3\sin t$, so $x'=-2\sin t$, $y'=3\cos t$, $x''=-2\cos t$, $y''=-3\sin t$. Then $\kappa(t)=\dfrac{6\sin^{2}t+6\cos^{2}t}{(4\sin^{2}t+9\cos^{2}t)^{3/2}}=\dfrac{6}{(4\sin^{2}t+9\cos^{2}t)^{3/2}}$.

At the point $(2,0)$ we have $t=0$, so $\kappa(0)=6/9^{3/2}=6/27=2/9$. The radius of the osculating circle at $(2,0)$ is therefore $1/\kappa=9/2$, and its center is $(2-9/2,\,0)=(-5/2,\,0)$. The equation of the osculating circle at $(2,0)$ is $\left(x+\frac{5}{2}\right)^{2}+y^{2}=\frac{81}{4}$.

At the point $(0,3)$ we have $t=\pi/2$, so $\kappa(\pi/2)=6/4^{3/2}=6/8=3/4$. The radius of the osculating circle at $(0,3)$ is $4/3$, its center is $(0,\,3-4/3)=(0,\,5/3)$, and its equation is $x^{2}+\left(y-\frac{5}{3}\right)^{2}=\frac{16}{9}$.

Now look at the graphs: the blue curve is the ellipse, and the other two curves are the osculating circles of this curve at the points $(2,0)$ and $(0,3)$.
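The numbers in the transcript (curvature $2/9$ at $(2,0)$ and $3/4$ at $(0,3)$) can be checked numerically. This short Python sketch is not part of the original solution; it simply evaluates the parametric curvature formula for $x=2\cos t$, $y=3\sin t$:

```python
import math

def curvature(t):
    # Parametrization of 9x^2 + 4y^2 = 36: x = 2 cos t, y = 3 sin t
    xp, yp = -2 * math.sin(t), 3 * math.cos(t)      # first derivatives
    xpp, ypp = -2 * math.cos(t), -3 * math.sin(t)   # second derivatives
    # kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    return abs(xp * ypp - yp * xpp) / (xp ** 2 + yp ** 2) ** 1.5

print(curvature(0))            # 2/9 at (2, 0), so the osculating radius is 9/2
print(curvature(math.pi / 2))  # 3/4 at (0, 3), so the osculating radius is 4/3
```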
There are thousands of publicly available R extensions, AKA “packages”. They are not all installed on your system, and only core R packages are loaded when you start a new R session. The library() function is used to load non-core packages so that subsequent commands are searched against the functions and data in these packages as well as the core.
I like to also show the package version number in my Rmd workflows (using packageVersion()) so that I know what was used at the time the code was executed.
I also like to include a date stamp. There’s one above, but here’s how you can include it in your code:
gsub("[[:punct:][:space:]]", "-", Sys.time())
## Load phyloseq and other packages
1. Load the phyloseq, data.table, and ggplot2 packages using the library() command.
2. Check the version number of each package using the packageVersion() command.
## Load Pre-Organized Data from Previous Section
In the previous section you organized our Moving Pictures example data using phyloseq tools, and then saved this data from your R session (system memory, RAM) to the hard disk as a compressed, serialized file (extension “.RDS”). This is a handy way to store complicated single data objects for later use, without needing to define and document one or more file formats. This data was imported from 3 or 4 QIIME output files in various locations, but now this one self-describing file is all you need.
1. Load your Moving Pictures RDS data file into your R session using the readRDS() function, and store the result as an object named mp.
Reminder: For storing many objects from an R session at once, try save() or save.image(), and load().
# Initial exploration of data
Explore the data. Executing a command in the terminal that is just a data object invokes the print() or summary() functions, which usually give summary details about the data if it is a complex object (like ours), or show the data directly if it is simple. If it is simple but large, you might hit the streaming limits in your console, so try and check sizes first. The “Environment” box in your RStudio console usually tells you these details as well.
1. “print” to screen the mp object
2. This will give you other details about the object, as well as a few key functions that can be used to access parts of it. This data is too large to show all in the console, with the exception of phy_tree() and refseq(), which have their own special print summary functions.
3. Print mp to the screen.
4. Get the number of samples and number of taxa (OTUs) directly, using the nsamples() and ntaxa() functions.
5. The sample_data and tax_table components of this phyloseq object have their own special variables, namely sample_variables() and rank_names(). Use these functions on mp. What do you get? What does it mean?
6. Find details about the phylogenetic tree in this data using the phy_tree() function. What is it telling you? How many leaves are in the tree? How many internal nodes?
7. You could attempt to plot this tree with the default plot() function, but it has way too many leaves to be legible. Also, the default plot function is limited, so we’ll come back to this and how to use the phyloseq function plot_tree() later on.
## Summarize sequencing depths, in general and by category
Create histograms that summarize sequencing depth in this data. Start with overall, but then also create category-specific histograms to help evaluate the balance in this experiment.
For this we are going to use the ggplot2 package.
1. Define a data.frame or data.table that contains the total number of reads per sample using the sample_sums() function, and the geom_histogram() geometric object from ggplot2.
2. Separate these histograms by SampleType in separate plot panels using the facet_wrap() or facet_grid() functions.
3. Add an informative title using the ggtitle() function.
4. Modify the axis labels using the xlab() and ylab() functions.
5. Make a grid of panels by SampleType and Subject using the facet_grid() function. Does it look like there are any imbalances in total counts or number of samples among these critical groups?
The lowest number of reads per sample was 1114. That is low for typical new projects, but not that unusual compared to the rest of the samples in this tutorial dataset. From these plots, I don’t recommend filtering any samples based on depth alone, but keep in mind that in some cases you might, mainly when the number of read counts in a sample is so low that it is apparent that something went wrong with that sample. We will keep an eye out for artifacts and odd behavior (outliers) as we go along.
The plots above also do not indicate strong biases in sequencing depth by group or subject. Do you agree?
## Sequencing depth across time
Here we want to check for any associations with sequencing depth and time. This is worthwhile to check early on in an experiment, as metadata variables like sequencing depth should be uncorrelated with key experimental variables. Why is that? What would it mean if there was a strong correlation between depth and variable?
1. Create a ggplot graphic in which the horizontal axis is mapped to DaysSinceExperimentStart, the vertical axis is mapped to TotalReads, and the color variable is mapped to Subject.
2. Make this a scatter plot using the geom_point() function.
3. Add a title the same way you did in the previous graphic. Label the graphic Sequencing Depth vs. Time.
4. Scale the vertical axis to base-10 logarithm using the scale_y_log10() layer function. Does this improve the plot?
5. What is the minimum number of reads for any sample? (That is, what is the smallest library size?). Does this seem like a problem? Would you remove any samples?
Note that you should enter Subject as as.factor(Subject) to let ggplot2 know that this is really a discrete variable, even though it is encoded as a numeric vector of 1 and 2. Alternatively you could modify the Subject variable to be a character or factor, or you could add a new variable that holds Subject with a discrete data type; however, I prefer to not modify the original data unless necessary or useful, and the as.factor() expression is an elegant option in this case. Why might it be a good idea to avoid modifying key variables in the data?
# Filter Taxa
Filtering rare taxa is usually a necessary step in this type of data. We will actually perform the filtering elsewhere. For now, just explore the distribution of taxa across the dataset. There are two obvious measures to consider right away: (1) prevalence - the number of samples in which a taxa appears, and (2) total counts - the total number (or proportion) of observations of a taxa across all samples.
I like to plot them together on the same plot, as well as histograms of each. This will help determine what appears to be reasonable thresholds that define “unreasonably high” or “unhelpfully low” presence of a taxa in the data. Note the subjectivity in this last statement. Filtering is often subjective. We can be honest about that. Our goal is to both justify and document our decision for filtering. The goal of filtering is to remove noise from the data, in this case noisy variables that are unlikely to provide any useful information. Our criteria for this filtering should not include variables from which we want to infer biological insights later on. Why is that?
## Taxa total counts histogram
1. How many singleton taxa are there (OTUs that occur in just one sample, one time)?
2. How many doubletons are there (OTU that occurs just twice)?
3. Create a histogram of the total counts of each OTU.
4. Calculate the cumulative sum of OTUs that would be filtered at every possible value of such a threshold, from zero to the most-observed OTU. This one is tricky. Feel free to glance at the answers; I used some data.table magic to make this easier.
5. Plot the cumulative sum of OTUs against the total counts using ggplot2 commands to make a scatter plot, and save this object as pCumSum, then “print” it to the terminal to render a graphic. What behavior do you see in the data? Are most of the OTUs in the table rare? Where would you set the threshold?
6. To help clarify, zoom-in on the region between zero and 100 total counts, by “adding” the following layer to pCumSum: pCumSum + xlim(0, 100) Now where would you set the threshold?
## Taxa prevalence histogram, and fast_melt()
I’ve included a special function with the course materials, called fast_melt(), that I like. It is not yet incorporated into the phyloseq package. You can make it available to your session by “sourcing” the R code file that defines it. Use:
source("taxa_summary.R", local = TRUE)
In fact, now that you’ve done that, run the following code:
mdt = fast_melt(mp)
prevdt = mdt[, list(Prevalence = sum(count > 0),
TotalCounts = sum(count)),
by = TaxaID]
If you’ve used the data.table package before you might guess what this does. Briefly, the fast_melt() function “melts” the OTU table into a long format with three main columns: SampleID, TaxaID, and count. This allows us to do some additional data.table magic. For the rest of this section, go ahead and peek at the answers and come back (try not to peek ahead!).
I define Prevalence here as the number of samples in which an OTU is observed at least once, that is, the number of samples in which each OTU was non-zero. I find this to be a more useful filtering and diagnostic criterion.
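The grouping in the data.table code computes two per-taxon statistics from the melted table: in how many samples the count is non-zero (prevalence), and the total count. Here is a plain-Python illustration of that same aggregation (the toy rows are hypothetical, not the course data):

```python
from collections import defaultdict

# Long-format rows like those produced by melting an OTU table:
# (SampleID, TaxaID, count)
melted = [
    ("S1", "OTU1", 5), ("S2", "OTU1", 0), ("S3", "OTU1", 2),
    ("S1", "OTU2", 0), ("S2", "OTU2", 0), ("S3", "OTU2", 7),
]

prevalence = defaultdict(int)    # number of samples where the OTU is non-zero
total_counts = defaultdict(int)  # total observations of the OTU
for sample_id, taxa_id, count in melted:
    prevalence[taxa_id] += count > 0
    total_counts[taxa_id] += count

print(dict(prevalence))    # {'OTU1': 2, 'OTU2': 1}
print(dict(total_counts))  # {'OTU1': 7, 'OTU2': 7}
```

Note how the two OTUs have the same total count but different prevalence, which is exactly why prevalence can be the more informative filtering criterion.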
1. What is the plot showing?
2. Why might prevalence be a useful filtering criteria?
3. Where would you set a prevalence threshold?
## Prevalence vs. Total Count Scatter plot
1. Make sure you executed the code (you peeked!) from the answers that defines prevdt.
2. Now use ggplot2 and prevdt to plot TotalCounts versus Prevalence as a scatter plot.
### Extra Credit
1. Subset to the most abundant 9 phyla, and map these to color in a ggplot2 scatter plot.
## Select, Document Filtering Criteria
After exploring these plots, select your own filtering critera. Document it in your Rmd by storing it as an informatively-titled variable. We will use it later on.
# Tree Plot
There are ntaxa(mp) OTUs in the unmodified data. This is too many to reasonably attempt to plot on a tree, but we can agglomerate nearby positions on the tree as a means to simplify without losing too much information.
Use the tip_glom function to accomplish this, then use plot_tree to explore the dataset from the point of view of an evolutionary tree. Depending on the speed of the instance you’re currently using, you may want to first apply one of the filters you defined in the previous section.
### Extra Credit - Add useful taxonomic annotations
Add useful taxonomic annotations to this plot to help clarify which regions appear to be over- or under-represented in particular sample groups.
# Alpha Diversity
Before filtering, let’s explore alpha diversity, another aspect of our data that is often mentioned in the literature. I find the best way to explore this is by plotting, and phyloseq provides a convenient function for plotting alpha diversity measures and organizing these plots according to sample variables.
1. Use the plot_richness() function to create an alpha diversity graphic. The output is a ggplot2 object. Include Observed, Chao-1, Shannon, and Inverse Simpson as the measures to include. You’ll need to read the documentation on the measures argument in the function to decide how to encode it. Also include "ReportedAntibioticUsage" as the shape argument, and "SampleType" as the color argument. Note that with phyloseq plot_ commands, the variable names must be provided with quotations. Save this object as pAlpha, but also print it to screen.
2. Now print it to screen again after increasing the point sizes from the default value with: pAlpha + geom_point(size = 5)
The other reason we saved the ggplot2 object in this case is because it also includes the data for the plot in an R “data.frame” within the object. This is true of most ggplot2 objects, and so it is a useful tool for building custom graphics. We will use this in the next section.
## Custom Alpha Diversity Graphic
1. Store this matrix, ordBC$vectors, as a new object, while also converting it to a data.table or data.frame.
2. Make sure the sample identifiers are included.
3. Join this ordination position data with your sample_data(), and save this object as ordBCdt.
4. Order ordBCdt by DaysSinceExperimentStart so that we can smoothly connect time-adjacent points in a plot.
5. Create a ggplot2 scatterplot graphic that maps DaysSinceExperimentStart to the horizontal axis, PCoA Axis.1 to the vertical axis, color to factor(Subject), and facet by SampleType (the body site).
6. Add a geom_path() layer to this. If you ordered the data table properly, the lines will connect time-adjacent points, and hopefully look nice.
7. Add an informative title.
8. Repeat this for each distance type. Do you see any interesting patterns? Is one body site more time-variant than others? Does it appear that there is temporal autocorrelation? That is, do previous time points predict the axis position of the next time point? How might you evaluate this question statistically?

### For Bray-Curtis

### For Weighted Unifrac

## Multiple PC plots in one graphic

Multiple ggplot2 objects (or more generally, any graphics objects based on the grid package) can be combined into one plot using the grid.arrange() function in the gridExtra package.

1. Use the gridExtra::grid.arrange() function to combine the plots from the previous section.

## Procrustes and ggplot2

### Procrustes rotation between samples from each choice of distance

Rather than attempt a complicated series of prompting statements, I’ve just included an example for how to do this. Feel free to explore the documentation for ade4::procuste, especially the plotting demos included in the examples section. The ade4 package is an extremely powerful collection of ordination methods for various complex experimental designs well beyond our current example with a single OTU table and a few sample covariates of interest.
# Order both tables by sample ID
setkey(ordBCdt, SampleID)
setkey(ordUFdt, SampleID)
pro1 <- ade4::procuste(dfX = ordBCdt[, list(Axis.1, Axis.2)],
                       dfY = ordUFdt[, list(Axis.1, Axis.2)])
# Add the procruste rotated and scaled coordinates to each table
ordBCdt[, c("New1", "New2") := pro1$tabX]
ordUFdt[, c("New1", "New2") := pro1\$tabY]
# Combine the tables for plotting
ordBCdt[, D := "Bray-Curtis"]
ordUFdt[, D := "w-UniFrac"]
keepCols = c("New1", "New2", "SampleID", "D", "SampleType",
"DaysSinceExperimentStart", "Subject", "ReportedAntibioticUsage")
procrustdt = rbindlist(list(ordBCdt[, keepCols, with = FALSE],
ordUFdt[, keepCols, with = FALSE]))
# Now define the ggplot object,
# connecting points with the same sample ID, but different distances
ggplot(procrustdt, aes(New1, New2,
color = D, shape = SampleType)) +
geom_point(size = 5) +
geom_line(aes(group = SampleID), color = "black") +
ggtitle("Procruste rotation comparing MDS
from two different distance measures")
|
{}
|
# Module DA.List.Total¶
## Functions¶
head
: [a] -> Optional a
Return the first element of a list. Return None if list is empty.
tail
: [a] -> Optional [a]
Return all but the first element of a list. Return None if list is empty.
last
: [a] -> Optional a
Extract the last element of a list. Returns None if list is empty.
init
: [a] -> Optional [a]
Return all the elements of a list except the last one. Returns None if list is empty.
(!!)
: [a] -> Int -> Optional a
Return the nth element of a list. Return None if index is out of bounds.
foldl1
: (a -> a -> a) -> [a] -> Optional a
Fold left starting with the head of the list. For example, foldl1 f [a,b,c] = f (f a b) c. Return None if list is empty.
foldr1
: (a -> a -> a) -> [a] -> Optional a
Fold right starting with the last element of the list. For example, foldr1 f [a,b,c] = f a (f b c). Return None if list is empty.
foldBalanced1
: (a -> a -> a) -> [a] -> Optional a
Fold a non-empty list in a balanced way. Balanced means that each element has approximately the same depth in the operator tree. Approximately the same depth means that the difference between maximum and minimum depth is at most 1. The accumulation operation must be associative and commutative in order to get the same result as foldl1 or foldr1.
Return None if list is empty.
minimumBy
: (a -> a -> Ordering) -> [a] -> Optional a
Return the least element of a list according to the given comparison function. Return None if list is empty.
maximumBy
: (a -> a -> Ordering) -> [a] -> Optional a
Return the greatest element of a list according to the given comparison function. Return None if list is empty.
minimumOn
: Ord k => (a -> k) -> [a] -> Optional a
Return the least element of a list when comparing by a key function. For example minimumOn (\(x,y) -> x + y) [(1,2), (2,0)] == Some (2,0). Return None if list is empty.
maximumOn
: Ord k => (a -> k) -> [a] -> Optional a
Return the greatest element of a list when comparing by a key function. For example maximumOn (\(x,y) -> x + y) [(1,2), (2,0)] == Some (1,2). Return None if list is empty.
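For intuition about foldBalanced1, here is a small Python sketch of the same idea (an illustration only, not the Daml implementation): split the list in half, fold each half, then combine, so every element ends up at roughly the same depth in the operator tree, and an empty list yields None.

```python
from typing import Callable, List, Optional, TypeVar

A = TypeVar("A")

def fold_balanced1(f: Callable[[A, A], A], xs: List[A]) -> Optional[A]:
    """Fold xs so that every element sits at roughly the same depth
    in the operator tree; return None for an empty list."""
    if not xs:
        return None
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return f(fold_balanced1(f, xs[:mid]), fold_balanced1(f, xs[mid:]))
```

For example, fold_balanced1(lambda a, b: a + b, [1, 2, 3, 4]) evaluates as (1 + 2) + (3 + 4) and returns 10; as the note above says, the operation must be associative (and commutative, in general) for the result to agree with foldl1 or foldr1.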
Letter
# Xenon isotopic constraints on the history of volatile recycling into the mantle
## Abstract
The long-term exchange of volatile species (such as water, carbon, nitrogen and the noble gases) between deep Earth and surface reservoirs controls the habitability of the Earth’s surface. The present-day volatile budget of the mantle reflects the integrated history of outgassing and retention of primordial volatiles delivered to the planet during accretion, volatile species generated by radiogenic ingrowth and volatiles transported into the mantle from surface reservoirs over time. Variations in the distribution of volatiles between deep Earth and surface reservoirs affect the viscosity, cooling rate and convective stress state of the solid Earth. Accordingly, constraints on the flux of surface volatiles transported into the deep Earth improve our understanding of mantle convection and plate tectonics. However, the history of surface volatile regassing into the mantle is not known. Here we use mantle xenon isotope systematics to constrain the age of initiation of volatile regassing into the deep Earth. Given evidence of prolonged evolution of the xenon isotopic composition of the atmosphere1,2, we find that substantial recycling of atmospheric xenon into the deep Earth could not have occurred before 2.5 billion years ago. Xenon concentrations in downwellings remained low relative to ambient convecting mantle concentrations throughout the Archaean era, and the mantle shifted from a net degassing to a net regassing regime after 2.5 billion years ago. Because xenon is carried into the Earth’s interior in hydrous mineral phases3,4,5, our results indicate that downwellings were drier in the Archaean era relative to the present. Progressive drying of the Archean mantle would allow slower convection and decreased heat transport out of the mantle, suggesting non-monotonic thermal evolution of the Earth’s interior.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Change history
• ### 24 September 2018
In this Letter, owing to a production error, the arrows in the middle panel of Fig. 1 were wrongly coloured and there were some typos elsewhere. These errors have been corrected online.
## References
1. Pujol, M., Marty, B. & Burgess, R. Chondritic-like xenon trapped in Archean rocks: a possible signature of the ancient atmosphere. Earth Planet. Sci. Lett. 308, 298–306 (2011).
2. Avice, G., Marty, B. & Burgess, R. The origin and degassing history of the Earth’s atmosphere revealed by Archean xenon. Nat. Commun. 8, 15455 (2017).
3. Ozima, M. & Podosek, F. A. Noble Gas Geochemistry (Cambridge University Press, Cambridge, 2002).
4. Jackson, C. R., Parman, S. W., Kelley, S. P. & Cooper, R. F. Noble gas transport into the mantle facilitated by high solubility in amphibole. Nat. Geosci. 6, 562–565 (2013).
5. Kendrick, M. A. et al. Subduction zone fluxes of halogens and noble gases in seafloor and forearc serpentinites. Earth Planet. Sci. Lett. 365, 86–96 (2013).
6. Brown, M. Duality of thermal regimes is the distinctive characteristic of plate tectonics since the Neoarchean. Geology 34, 961–964 (2006).
7. Schmidt, M. W. & Poli, S. Experimentally based water budgets for dehydrating slabs and consequences for arc magma generation. Earth Planet. Sci. Lett. 163, 361–379 (1998).
8. Hacker, B. R. H2O subduction beyond arcs. Geochem. Geophys. Geosyst. 9, Q03001 (2008).
9. van Keken, P. E., Hacker, B. R., Syracuse, E. M. & Abers, G. A. Subduction factory: 4. Depth-dependent flux of H2O from subducting slabs worldwide. J. Geophys. Res. 116, B01401 (2011).
10. Parai, R. & Mukhopadhyay, S. How large is the subducted water flux? New constraints on mantle regassing rates. Earth Planet. Sci. Lett. 317–318, 396–406 (2012).
11. Holland, G. & Ballentine, C. J. Seawater subduction controls the heavy noble gas composition of the mantle. Nature 441, 186–191 (2006).
12. Mukhopadhyay, S. Early differentiation and volatile accretion recorded in deep-mantle neon and xenon. Nature 486, 101–104 (2012).
13. Parai, R., Mukhopadhyay, S. & Standish, J. J. Heterogeneous upper mantle Ne, Ar and Xe isotopic compositions and a possible Dupal noble gas signature recorded in basalts from the Southwest Indian Ridge. Earth Planet. Sci. Lett. 359–360, 227–239 (2012).
14. Tucker, J. M., Mukhopadhyay, S. & Schilling, J. G. The heavy noble gas composition of the depleted MORB mantle (DMM) and its implications for the preservation of heterogeneities in the mantle. Earth Planet. Sci. Lett. 355–356, 244–254 (2012).
15. Pető, M. K., Mukhopadhyay, S. & Kelley, K. A. Heterogeneities from the first 100 million years recorded in deep mantle noble gases from the Northern Lau Back-arc Basin. Earth Planet. Sci. Lett. 369–370, 13–23 (2013).
16. Parai, R. & Mukhopadhyay, S. The evolution of MORB and plume mantle volatile budgets: constraints from fission Xe isotopes in Southwest Indian Ridge basalts. Geochem. Geophys. Geosyst. 16, 719–735 (2015).
17. Pepin, R. O. On the origin and early evolution of terrestrial planet atmospheres and meteoritic volatiles. Icarus 92, 2–79 (1991).
18. Pepin, R. O. On the isotopic composition of primordial xenon in terrestrial planet atmospheres. Space Sci. Rev. 92, 371–395 (2000).
19. Pepin, R. O. & Porcelli, D. Origin of noble gases in the terrestrial planets. Rev. Mineral. Geochem. 47, 191–246 (2002).
20. Pujol, M., Marty, B., Burnard, P. & Philippot, P. Xenon in Archean barite: weak decay of 130Ba, mass-dependent isotopic fractionation and implication for barite formation. Geochim. Cosmochim. Acta 73, 6834–6846 (2009).
21. Marty, B. The origins and concentrations of water, carbon, nitrogen and noble gases on Earth. Earth Planet. Sci. Lett. 313–314, 56–66 (2012).
22. Caracausi, A., Avice, G., Burnard, P. G., Füri, E. & Marty, B. Chondritic xenon in the Earth’s mantle. Nature 533, 82–85 (2016).
23. Caffee, M. et al. Primordial noble gases from Earth’s mantle: identification of a primitive volatile component. Science 285, 2115–2118 (1999).
24. Chavrit, D. et al. The contribution of hydrothermally altered ocean crust to the mantle halogen and noble gas cycles. Geochim. Cosmochim. Acta 183, 106–124 (2016).
25. Matsuda, J. I. & Nagao, K. Noble gas abundances in a deep-sea sediment core from eastern equatorial Pacific. Geochem. J. 20, 71–80 (1986).
26. Sumino, H. et al. Seawater-derived noble gases and halogens preserved in exhumed mantle wedge peridotite. Earth Planet. Sci. Lett. 294, 163–172 (2010).
27. Kendrick, M. A., Scambelluri, M., Honda, M. & Phillips, D. High abundances of noble gas and chlorine delivered to the mantle by serpentinite subduction. Nat. Geosci. 4, 807–812 (2011).
28. Korenaga, J., Planavsky, N. J. & Evans, D. A. Global water cycle and the coevolution of the Earth’s interior and surface environment. Phil. Trans. R. Soc. A 375, 20150393 (2017).
29. Shirey, S. B. & Richardson, S. H. Start of the Wilson cycle at 3 Ga shown by diamonds from subcontinental mantle. Science 333, 434–436 (2011).
30. Hopkins, M., Harrison, T. M. & Manning, C. E. Low heat flow inferred from >4 Gyr zircons suggests Hadean plate boundary interactions. Nature 456, 493–496 (2008).
31. Kumagai, H., Dick, H. J. & Kaneoka, I. Noble gas signatures of abyssal gabbros and peridotites at an Indian Ocean core complex. Geochem. Geophys. Geosyst. 4, 9017 (2003).
32. Matsuda, J. I. & Matsubara, K. Noble gases in silica and their implication for the terrestrial “missing” Xe. Geophys. Res. Lett. 16, 81–84 (1989).
33. Matsuda, J.-I. & von Herzen, R. Thermal conductivity variation in a deep-sea sediment core and its relation to H2O, Ca and Si content. Deep-Sea Res. A 33, 165–175 (1986).
34. Podosek, F., Honda, M. & Ozima, M. Sedimentary noble gases. Geochim. Cosmochim. Acta 44, 1875–1884 (1980).
35. Staudacher, T. & Allègre, C. J. Recycling of oceanic crust and sediments: the noble gas subduction barrier. Earth Planet. Sci. Lett. 89, 173–183 (1988).
36. Jarrard, R. D. Subduction fluxes of water, carbon dioxide, chlorine, and potassium. Geochem. Geophys. Geosyst. 4, 8905 (2003).
37. Plank, T. & Langmuir, C. H. The chemical composition of subducting sediment and its consequences for the crust and mantle. Chem. Geol. 145, 325–394 (1998).
38. Crisp, J. A. Rates of magma emplacement and volcanic output. J. Volcanol. Geotherm. Res. 20, 177–211 (1984).
39. Dhuime, B., Hawkesworth, C. J., Cawood, P. A. & Storey, C. D. A change in the geodynamics of continental growth 3 billion years ago. Science 335, 1334–1336 (2012).
40. McLennan, S. M. & Taylor, S. R. Geochemical constraints on the growth of the continental crust. J. Geol. 90, 347–361 (1982).
41. Pujol, M., Marty, B., Burgess, R., Turner, G. & Philippot, P. Argon isotopic composition of Archaean atmosphere probes early Earth geodynamics. Nature 498, 87–90 (2013).
42. Rudnick, R. & Gao, S. in Treatise on Geochemistry Vol. 3 (eds Holland, H. D. & Turekian, K. K.) 1–64 (Elsevier, 2003).
43. McDonough, W. F. & Sun, S. S. The composition of the Earth. Chem. Geol. 120, 223–253 (1995).
44. Palme, H. & O’Neill, H. S. C. in Treatise on Geochemistry Vol. 2 (eds Holland, H. D. & Turekian, K. K.) 1–38 (Elsevier, Amsterdam, 2004).
45. Tolstikhin, I., Marty, B., Porcelli, D. & Hofmann, A. Evolution of volatile species in the Earth’s mantle: a view from xenology. Geochim. Cosmochim. Acta 136, 229–246 (2014).
46. Hudson, G. B., Kennedy, B. M., Podosek, F. A. & Hohenberg, C. M. in Proc. 19th Lunar and Planetary Science Conference 547–557 (Lunar and Planetary Institute, 1989).
47. Bianchi, D. et al. Low helium flux from the mantle inferred from simulations of oceanic helium isotope data. Earth Planet. Sci. Lett. 297, 379–386 (2010).
48. Holzer, M. et al. Objective estimates of mantle 3He in the ocean and implications for constraining the deep ocean circulation. Earth Planet. Sci. Lett. 458, 305–314 (2017).
49. Schlitzer, R. Quantifying He fluxes from the mantle using multi-tracer data assimilation. Phil. Trans. R. Soc. A 374, 20150288 (2016).
50. Klein, E. M. & Langmuir, C. H. Global correlations of ocean ridge basalt chemistry with axial depth and crustal thickness. J. Geophys. Res. 92, 8089–8115 (1987).
51. Moreira, M., Kunz, J. & Allegre, C. Rare gas systematics in Popping Rock: isotopic and elemental compositions in the upper mantle. Science 279, 1178–1181 (1998).
52. Craig, H., Clarke, W. & Beg, M. Excess 3He in deep water on the East Pacific Rise. Earth Planet. Sci. Lett. 26, 125–132 (1975).
53. Halliday, A. N. The origins of volatiles in the terrestrial planets. Geochim. Cosmochim. Acta 105, 146–171 (2013).
54. Holland, G. et al. Deep fracture fluids isolated in the crust since the Precambrian era. Nature 497, 357–360 (2013).
55. Srinivasan, B. Barites: anomalous xenon from spallation and neutron-induced reactions. Earth Planet. Sci. Lett. 31, 129–141 (1976).
56. Alexander, E. C. Jr, Lewis, R. S., Reynolds, J. H. & Michel, M. C. Plutonium-244: confirmation as an extinct radioactivity. Science 172, 837–840 (1971).
57. Lewis, R. S. Rare-gases in separated whitlockite from St. Severin chondrites: xenon and krypton from fission of extinct Pu-244. Geochim. Cosmochim. Acta 39, 417–432 (1975).
58. Wetherill, G. W. Spontaneous fission yields from uranium and thorium. Phys. Rev. 92, 907–912 (1953).
59. Hebeda, E. H., Schultz, L. & Freundel, M. Radiogenic, fissiogenic and nucleogenic noble gases in zircons. Earth Planet. Sci. Lett. 85, 79–90 (1987).
60. Eikenberg, J., Signer, P. & Wieler, R. U-Xe, U-Kr, and U-Pb systematics for dating uranium minerals and investigations of the production of nucleogenic neon and argon. Geochim. Cosmochim. Acta 57, 1053–1069 (1993).
61. Ragettli, R. A., Hebeda, E. H., Signer, P. & Wieler, R. Uranium xenon chronology: precise determination of λsf*136Ysf for spontaneous fission of 238U. Earth Planet. Sci. Lett. 128, 653–670 (1994).
## Acknowledgements
The project was funded by NSF grant EAR-1250419 to S.M. We thank D. Fike, C. Jackson, M. Krawczynski and S. Turner for discussions that improved the manuscript.
### Reviewer information
Nature thanks M. Kendrick and D. Porcelli for their contribution to the peer review of this work.
### Extended data
Extended data is available for this paper at https://doi.org/10.1038/s41586-018-0388-4.
## Author information
• Rita Parai
### Contributions
S.M. and R.P. developed the conceptual model and the ideas presented in the manuscript. R.P. wrote the Matlab scripts for the numerical modelling. R.P. and S.M. analysed the results and R.P. wrote the manuscript with input from S.M.
### Competing interests
The authors declare no competing interests.
### Corresponding author
Correspondence to Rita Parai.
## Extended data figures and tables
1. ### Extended Data Fig. 1 Atmospheric Xe mass fractionation relative to the modern composition over time.
Figure adapted from ref. 2. Xe measured in Archaean barites, fluid inclusions in quartz from Archaean cherts and deep crustal fluids of various ages are shown with associated 2σ uncertainties1,2,20,41,54,55. The blue line shows the model atmospheric Xe mass fractionation over time. We assume that the initial Xe isotopic composition of the atmosphere is Rayleigh-mass-fractionated by ~39‰ amu−1 relative to the modern atmosphere and that the degree of mass fractionation decreases linearly until 2 Gyr ago (Ga), when the atmosphere reaches its present composition.
2. ### Extended Data Fig. 2 Sum of squared residuals from least-squares fitting of mantle source compositions using either modern or ancient atmospheric Xe.
Mantle source 130,131,132,134,136Xe compositions are modelled as four-component mixtures of initial mantle Xe, recycled atmospheric Xe, and Xe from the fission of Pu and U. A sum of squared residuals of zero indicates that the mantle source composition can be fitted perfectly by mixing the four end-member components. Sums of squared residuals greater than zero indicate the sigma-normalized error in the best fit compared to the mantle source composition. Using modern atmospheric Xe as the regassed atmospheric Xe component, sums of squared residuals are zero or near-zero. Using ancient atmosphere, sums of squared residuals are much higher than zero, indicating that mantle source compositions cannot be explained by regassing of only ancient atmospheric Xe. The ancient atmospheric Xe composition used here corresponds to 20‰ amu−1 Rayleigh fractionation applied to the modern atmospheric composition and agrees with fission-corrected ancient atmosphere derived from fluid inclusions in Archaean rocks2. a–e, Mantle source compositions for Equatorial Atlantic depleted mid-ocean ridge basalt (MORB)14 (a), Southwest Indian Ridge Eastern Orthogonal Supersegment MORB16 (b), Harding County well gas23 (c) and Bravo Dome well gas11 (d) are fitted using the Monte Carlo method (n = 10,000) described in ref. 16, with average carbonaceous chondrite as the initial mantle composition22, and either modern or 20‰ amu−1 fractionated atmosphere (e) as the recycled component.
3. ### Extended Data Fig. 3 Continental crust growth models and exponential 130Xed time series examples.
a, 238U (and a small amount of 244Pu) extraction from the mantle by partial melting is offset by recycling of sediments at subduction zones at each time step. We model net extraction of U and any extant Pu from the mantle as directly tracking continental crust growth over time (Methods). Three CCs are adopted: two sigmoidal curves that approximate literature continental crust growth curves (‘CC = 1’ and ‘CC = 2’) and one linear growth curve (‘CC = 3’)39,40,41. b, Example of exponential 130Xed time series tested with our forward model of mantle Xe evolution. Two parameters describe the exponential function (Methods): $^{130}\mathrm{Xe}_{\mathrm{d}}^{\mathrm{final}}$, the final 130Xe concentration in downwellings, and τ, the exponential time constant. Grey lines represent a collection of exponentials with discrete variation in $^{130}\mathrm{Xe}_{\mathrm{d}}^{\mathrm{final}}$ and τ. A subset with a constant $^{130}\mathrm{Xe}_{\mathrm{d}}^{\mathrm{final}}$ and varying τ is highlighted in red. The time constant τ is varied from 10−11 yr−1 to 5 × 10−8 yr−1, with small τ corresponding to slow growth. Examples for nine different $^{130}\mathrm{Xe}_{\mathrm{d}}^{\mathrm{final}}$ values are shown, with an upper bound of 5 × 108 atoms 130Xe per gram (Methods).
4. ### Extended Data Fig. 4 Characterization of successful regassing histories.
Diverse regassing history shapes are generated by sampling a limited time interval for a variety of growth rates and inflection times (Fig. 2). To provide a common point of comparison for the evolving conditions within downwellings, we sort results by the time when 130Xed has increased by 10% between its initial and final values (time of 10% rise). a, Times of 10% rise for successful regassing histories. Most successful model realizations have a time of 10% rise later than 2.5 Gyr ago. b, Model realizations with high present-day 130Xed values are uniformly characterized by late 10% rise times, indicating that in these model realizations downwelling Xe concentrations remain very low throughout most of Earth history. c, Variation in sigmoidal growth rates (parameter α) allows testing of near-linear (low α, sampling for a limited time interval) or near-step (high α) functions. Near-linear model realizations have a time of 50% rise that is about five times the time of 10% rise (dashed light-grey line), whereas step functions approach a 1:1 line (solid dark-grey line). Successful regassing histories with late times of 10% rise are characterized by rapid growth, approaching a step function, to a relatively high present-day 130Xed.
5. ### Extended Data Fig. 5 Successful regassing histories for varying model parameters.
ad, To test model sensitivity to the input parameters, we vary the number of mantle reservoir masses processed (Nres), convecting mantle reservoir mass (Mres), initial 130Xe concentration (LV) and continental crust model (CC), and collect all successful 130Xed (a, c) and mantle 130Xe concentrations (b, d) over time. For sigmoidal 130Xed, solutions are found for Nres = {5, 6, 7, 8, 9}, Mres of 50%, 75% and 90% of the whole mantle mass, LV of 0.1%, 0.5% and 1% chondritic late veneers, and all three CCs. Extended Data Figs. 6, 7 illustrate trade-offs between individual parameters; for instance, high Nres values generate solutions only with high LV. For all sigmoidal solutions, regassing is limited early in Earth history, and the mantle shifts from net degassing to net regassing after ~2.5 Gyr ago. For exponential 130Xed, solutions are found for Nres = {5, 6, 7, 8, 9}, Mres of 50%, 75% and 90% of the whole mantle mass, LV of 0.1%, 0.5% and 1% chondritic late veneers, and all three CCs. For all solutions, regassing is limited early in Earth history, and the mantle shifts from net degassing to net regassing after ~2.5 Gyr ago.
6. ### Extended Data Fig. 6 Sensitivity of 130Xe and 128Xe/130Xe to model parameters.
Present-day mantle 130Xe concentration and the ratio of two primordial stable isotopes, 128Xe and 130Xe are shown for different model parameter combinations. Four parameters are explored: those affecting the mantle processing-rate history (Mres and Nres), LV (initial 130Xe concentrations corresponding to a late veneer fraction between 0.1% and 1%) and CC (Extended Data Fig. 3). In each panel, three of these parameters are held constant and the other is varied to illustrate model sensitivity to the varied parameter. Each cloud of points represents the range of present-day 130Xe and 128Xe/130Xe generated by different regassing histories for the specified Nres, Mres, LV and CC. The red rectangle indicates the estimated present-day mantle 130Xe concentration and 128Xe/130Xe range. Dots that fall within the red rectangle represent the family of regassing histories that successfully reproduce the present-day mantle composition for each parameter combination. The reference case shown in Figs. 3, 4 (Mres = 90%, Nres = 8, LV = 1%, CC = 1) is shown as a cloud of black points in all panels. a, A higher mantle processing rate (Nres = 10) results in low 130Xe concentrations for successful 128Xe/130Xe ratios, and 128Xe/130Xe ratios that are too low for successful 130Xe concentrations. b, Higher late-veneer fractions correspond to higher initial 130Xe concentrations in the mantle. For the same mantle processing-rate history, LV = 0.1% yields present-day mantle 130Xe concentrations that are too low given successful 128Xe/130Xe ratios. The effect of low LV can be offset by lowering Nres and thus decreasing the total amount of degassing over Earth history; thus, Nres and LV can be co-varied to find solutions. c, The effect of Mres is minimal because degassing is parameterized through the number of reservoir masses processed over Earth history. 
Some difference is evident at high present-day mantle 130Xe abundances because the same 130Xed regassing rate parameter space is explored against different absolute degassing rates. d, The continental crust model has no effect on budgets of primordial Xe isotopes.
7. ### Extended Data Fig. 7 Sensitivity of fissiogenic Xe to model parameters.
Present-day outcomes are shown in 128Xe–132Xe–136Xe isotopic space for different model parameter combinations. Four parameters are explored: parameters affecting the mantle processing-rate history (Mres and Nres), the initial mantle 130Xe concentration (LV = 0.1%–1%), and CC (Extended Data Fig. 3). In each panel, three of these parameters are held constant and the other is varied to illustrate model sensitivity to the varied parameter. Each cloud of points represents the range of present-day 128Xe/132Xe and 136Xe/132Xe generated by different regassing histories given the specified Nres, Mres, LV and CC. The red rectangle indicates the estimated present-day mantle 128Xe/132Xe and 136Xe/132Xe range. Dots that fall within the red rectangle represent the family of regassing histories that successfully reproduce present-day mantle composition for each parameter combination. The reference case shown in the main-text figures (Mres = 90%, Nres = 8, LV = 1%, CC = 1) is shown as a cloud of black points in all panels. The orange square is U-Xe, the brown diamond is average carbonaceous chondrites (AVCC) and the blue circle is the modern atmosphere. a, Higher mantle processing rates push present-day compositions towards fissiogenic Xe components. b, Lower late-veneer fractions correspond to present-day compositions closer to fissiogenic Xe components. c, A relatively low mass of the convecting mantle means that the mantle must be more depleted in U to satisfy mass balance with the continental crust (Methods). Thus, for low Mres, the impact of fission is muted compared to high Mres. d, The continental crust model has a limited effect on present-day Xe isotopic compositions.
8. ### Extended Data Fig. 8 130Xe fluxes over time in successful model realizations.
a–l, Regassing fluxes (a–c), degassing fluxes (d–f), net fluxes (g), 130Xed concentrations (h, i), mass flux (j) and mantle 130Xe concentrations (k, l) are illustrated for an initial mantle 130Xe concentration of 3.2 × 108 atoms per gram (LV = 1%), a convecting mantle reservoir that is 90% of the mass of the whole mantle, and 8 mantle reservoir masses processed over Earth history. Fluxes are reported in moles per year and concentrations are reported in moles per gram. Panels in the left column show results from all successful model realizations (same results as those shown in Figs. 3, 4) and illustrate the 130Xe regassing flux (a), 130Xe degassing flux (d), 130Xe net flux (g) and mass flux over time (j). Panels in the central column show zoomed-in windows with only low-130Xed successful model realizations (light-blue lines), as these largely overlap with each other and are difficult to resolve in the full-scale panels. The right column replicates the central column with semi-logarithmic axes. The regassing 130Xe flux time series (a–c) is the product of the downwelling mass flux time series (j; exponentially decreasing with time) and the 130Xed concentration over time (sigmoidally increasing; h, i). Time series for 130Xe regassing fluxes with high present-day 130Xed (darkest-blue lines in a) start near zero owing to near-zero 130Xe concentrations and then rapidly rise as the 130Xed concentration increases faster than the modest decline in mass flux later in Earth history. 130Xe flux time series with low present-day 130Xed (lightest-blue lines in a–c) start a protracted, low-magnitude rise relatively early in Earth history. These translate to regassing flux time series that start near zero, rise and then decline with the exponentially decreasing mass flux (b, c). Time series for 130Xe degassing fluxes (d–f) are the product of the downwelling mass flux time series (j) and the mantle 130Xe concentration over time (k, l), which responds to both degassing and regassing.
The net flux over time (g) is the difference between the regassing flux and degassing flux at any given time. The mantle shifts from net degassing to net regassing at some time after 2.5 Gyr ago.
A Spectral Analysis of Moore Graphs
For fixed integers $r > 0$, and odd $g$, a Moore graph is an $r$-regular graph of girth $g$ which has the minimum number of vertices $n$ among all such graphs with the same regularity and girth.
(Recall that the girth of a graph is the length of its shortest cycle, and that a graph is regular if all its vertices have the same degree.)
Problem (Hoffman-Singleton): Find a useful constraint on the relationship between $n$ and $r$ for Moore graphs of girth $5$ and degree $r$.
Note: Excluding trivial Moore graphs with girth $g=3$ and degree $r=2$, there are only two known Moore graphs: (a) the Petersen graph and (b) this crazy graph:
The solution to the problem shows that there are only a few cases left to check.
Solution: It is easy to show that the minimum number of vertices of a Moore graph of girth $5$ and degree $r$ is $1 + r + r(r-1) = r^2 + 1$. Just consider the tree:
This is the tree example for $r = 3$, but the argument should be clear for any $r$ from the branching pattern of the tree: $1 + r + r(r-1)$
Provided $n = r^2 + 1$, we will prove that $r$ must be either $3, 7,$ or $57$. The technique will be to analyze the eigenvalues of a special matrix derived from the Moore graph.
Let $A$ be the adjacency matrix of the supposed Moore graph with these properties. Let $B = A^2 = (b_{i,j})$. Using the girth and regularity we know:
• $b_{i,i} = r$ since each vertex has degree $r$.
• $b_{i,j} = 0$ if $(i,j)$ is an edge of $G$, since a walk of length 2 from $i$ to $j$, together with the edge $(i,j)$ itself, would form a cycle of length 3, which is less than the girth.
• $b_{i,j} = 1$ if $(i,j)$ is not an edge, because (using the tree idea above) every two non-adjacent vertices have a unique neighbor in common.
Let $J_n$ be the $n \times n$ matrix of all 1’s and $I_n$ the identity matrix. Then
$\displaystyle B = rI_n + J_n - I_n - A.$
We use this matrix equation to generate two equations whose solutions will restrict $r$. Since $A$ is a real symmetric matrix, it has an orthonormal basis of eigenvectors $v_1, \dots, v_n$ with eigenvalues $\lambda_1 , \dots, \lambda_n$. Moreover, by regularity we know one of these vectors is the all 1’s vector, with eigenvalue $r$. Call this $v_1 = (1, \dots, 1), \lambda_1 = r$. By orthogonality of $v_1$ with the other $v_i$, we know that $J_nv_i = 0$. We also know that, since $A$ is an adjacency matrix with zeros on the diagonal, the trace of $A$ is $\sum_i \lambda_i = 0$.
Multiply the matrices in the equation above by any $v_i$, $i > 1$ to get
\displaystyle \begin{aligned}A^2v_i &= rv_i - v_i - Av_i \\ \lambda_i^2v_i &= rv_i - v_i - \lambda_i v_i \end{aligned}
Rearranging and factoring out $v_i$ gives $\lambda_i^2 + \lambda_i - (r-1) = 0$. Let $z = 4r - 3$; then the non-$r$ eigenvalues must be one of the two roots: $\mu_1 = (-1 + \sqrt{z}) / 2$ or $\mu_2 = (-1 - \sqrt{z})/2$.
Say that $\mu_1$ occurs $a$ times and $\mu_2$ occurs $b$ times, then $n = a + b + 1$. So we have the following equations.
\displaystyle \begin{aligned} a + b + 1 &= n \\ r + a \mu_1 + b\mu_2 &= 0 \end{aligned}
From these equations you can derive that $\sqrt{z}$ is an integer (if $\sqrt{z}$ were irrational, the second equation would force $a = b$, which only happens for $r = 2$), and as a consequence $r = (m^2 + 3) / 4$ for some integer $m = \sqrt{z}$. With a tiny bit of extra algebra, this gives
$\displaystyle m(m^3 - 2m - 16(a-b)) = 15$
Implying that $m$ divides $15$, meaning $m \in \{ 1, 3, 5, 15\}$, and as a consequence $r \in \{ 1, 3, 7, 57\}$.
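As a sanity check on the argument (not part of the original proof), we can solve the two equations for the multiplicities $a$ and $b$ numerically. A short Python sketch, with the helper name moore_spectrum chosen here for illustration:

```python
import math

def moore_spectrum(r):
    """For a girth-5 Moore graph of degree r (so n = r^2 + 1), solve
    a + b + 1 = n and r + a*mu1 + b*mu2 = 0 for the multiplicities
    a, b of the two non-r eigenvalues mu1, mu2."""
    n = r * r + 1
    z = 4 * r - 3
    m = math.isqrt(z)
    assert m * m == z, "sqrt(z) must be an integer for this to work"
    mu1 = (-1 + m) / 2
    mu2 = (-1 - m) / 2
    # From a + b = n - 1 and r + a*mu1 + b*mu2 = 0:
    a = (-r - (n - 1) * mu2) / (mu1 - mu2)
    b = (n - 1) - a
    return mu1, mu2, a, b
```

For $r = 3$ this returns eigenvalues 1 and −2 with multiplicities 5 and 4, which is exactly the non-3 spectrum of the Petersen graph; for $r = 7$ it gives 2 and −3 with multiplicities 28 and 21, matching the Hoffman–Singleton graph.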
$\square$
Discussion: This is a strikingly clever use of spectral graph theory to answer a question about combinatorics. Spectral graph theory is precisely that: the study of what linear algebra can tell us about graphs. For a deeper dive into spectral graph theory, see the guest post I wrote on With High Probability.
If you allow for even girth, there are a few extra (infinite families of) Moore graphs, see Wikipedia for a list.
With additional techniques, one can also disprove the existence of any Moore graphs that are not among the known ones, with the exception of a possible Moore graph of girth $5$ and degree $57$ on $n = 3250$ vertices. It is unknown whether such a graph exists, but if it does, it is known that
You should go out and find it or prove it doesn’t exist.
Hungry for more applications of linear algebra to combinatorics and computer science? The book Thirty-Three Miniatures is a fantastically entertaining book of linear algebra gems (it’s where I found the proof in this post). The exposition is lucid, and the chapters are short enough to read on my daily train commute.
Zero Knowledge Proofs — A Primer
In this post we’ll get a strong taste for zero knowledge proofs by exploring the graph isomorphism problem in detail. In the next post, we’ll see how this relates to cryptography and the bigger picture. The goal of this post is to get a strong understanding of the terms “prover,” “verifier,” “simulator,” and “zero knowledge” in the context of a specific zero-knowledge proof. Then next time we’ll see how the same concepts (though not the same proof) generalize to a cryptographically interesting setting.
Graph isomorphism
Let’s start with an extended example. We are given two graphs $G_1, G_2$, and we’d like to know whether they’re isomorphic, meaning they’re the same graph, but “drawn” different ways.
The problem of telling if two graphs are isomorphic seems hard. The pictures above, which are all different drawings of the same graph (or are they?), should give you pause if you thought it was easy.
To add a tiny bit of formalism, a graph $G$ is a list of edges, and each edge $(u,v)$ is a pair of integers between 1 and the total number of vertices of the graph, say $n$. Using this representation, an isomorphism between $G_1$ and $G_2$ is a permutation $\pi$ of the numbers $\{1, 2, \dots, n \}$ with the property that $(i,j)$ is an edge in $G_1$ if and only if $(\pi(i), \pi(j))$ is an edge of $G_2$. You swap around the labels on the vertices, and that’s how you get from one graph to another isomorphic one.
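To make the definition concrete, here is a tiny hypothetical example (my own, not from the post): two labelings of the path on three vertices, related by the permutation that swaps labels 1 and 2.

```python
# Two edge lists for the path on 3 vertices, and a permutation pi
# (as a dict) witnessing that they are isomorphic: (i, j) is an edge
# of G1 exactly when (pi[i], pi[j]) is an edge of G2.
G1 = [(1, 2), (2, 3)]
G2 = [(2, 1), (1, 3)]
pi = {1: 2, 2: 1, 3: 3}

mapped = {frozenset((pi[i], pi[j])) for (i, j) in G1}
print(mapped == {frozenset(e) for e in G2})  # True
```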
Given two arbitrary graphs as input on a large number of vertices $n$, nobody knows of an efficient—i.e., polynomial time in $n$—algorithm that can always decide whether the input graphs are isomorphic. Even if you promise me that the inputs are isomorphic, nobody knows of an algorithm that could construct an isomorphism. (If you think about it, such an algorithm could be used to solve the decision problem!)
A game
Now let’s play a game. In this game, we’re given two enormous graphs on a billion nodes. I claim they’re isomorphic, and I want to prove it to you. However, my life’s fortune is locked behind these particular graphs (somehow), and if you actually had an isomorphism between these two graphs you could use it to steal all my money. But I still want to convince you that I do, in fact, own all of this money, because we’re about to start a business and you need to know I’m not broke.
Is there a way for me to convince you beyond a reasonable doubt that these two graphs are indeed isomorphic? And moreover, could I do so without you gaining access to my secret isomorphism? It would be even better if I could guarantee you learn nothing about my isomorphism or any isomorphism, because even the slightest chance that you can steal my money is out of the question.
Zero knowledge proofs have exactly those properties, and here’s a zero knowledge proof for graph isomorphism. For the record, $G_1$ and $G_2$ are public knowledge (common inputs to our protocol, for the sake of tracking runtime), and the protocol itself is common knowledge. However, I have an isomorphism $f: G_1 \to G_2$ that you don’t know.
Step 1: I will start by picking one of my two graphs, say $G_1$, mixing up the vertices, and sending you the resulting graph. In other words, I send you a graph $H$ which is chosen uniformly at random from all isomorphic copies of $G_1$. I will save the permutation $\pi$ that I used to generate $H$ for later use.
Step 2: You receive a graph $H$ which you save for later, and then you randomly pick an integer $t$ which is either 1 or 2, with equal probability on each. The number $t$ corresponds to your challenge for me to prove $H$ is isomorphic to $G_1$ or $G_2$. You send me back $t$, with the expectation that I will provide you with an isomorphism between $H$ and $G_t$.
Step 3: Indeed, I faithfully provide you such an isomorphism. If you send me $t=1$, I’ll give you back $\pi^{-1} : H \to G_1$, and otherwise I’ll give you back $f \circ \pi^{-1}: H \to G_2$. Because composing a fixed permutation with a uniformly random permutation is again a uniformly random permutation, in either case I’m sending you a uniformly random permutation.
Step 4: You receive a permutation $g$, and you can use it to verify that $H$ is isomorphic to $G_t$. If the permutation I sent you doesn’t work, you’ll reject my claim, and if it does, you’ll accept my claim.
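The claim in Step 3, that composing a fixed permutation with a uniformly random one yields a uniformly random permutation, can be checked exhaustively for small $n$ (a quick illustration of my own, not part of the protocol):

```python
from itertools import permutations

# Composing a fixed permutation f with each of the n! permutations pi
# hits every permutation exactly once, so if pi is uniform then the
# composition f . pi is uniform too.
n = 3
perms = list(permutations(range(n)))
f = (2, 0, 1)  # an arbitrary fixed permutation
composed = {tuple(f[pi[i]] for i in range(n)) for pi in perms}
print(composed == set(perms))  # True
```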
Before we analyze, here’s some Python code that implements the above scheme. You can find the full, working example in a repository on this blog’s Github page.
First, a few helper functions for generating random permutations (and turning their list-of-zero-based-indices form into a function-of-positive-integers form):
import random

def randomPermutation(n):
    L = list(range(n))
    random.shuffle(L)
    return L

def makePermutationFunction(L):
    return lambda i: L[i - 1] + 1

def makeInversePermutationFunction(L):
    return lambda i: 1 + L.index(i - 1)

def numVertices(G):
    # Vertices are 1-indexed integers, so the largest label that
    # appears in any edge is the number of vertices.
    return max(max(u, v) for (u, v) in G)

def applyIsomorphism(G, f):
    return [(f(i), f(j)) for (i, j) in G]
Here’s a class for the Prover, the one who knows the isomorphism and wants to prove it while keeping the isomorphism secret:
class Prover(object):
    def __init__(self, G1, G2, isomorphism):
        '''
        isomorphism is a list of integers representing
        an isomorphism from G1 to G2.
        '''
        self.G1 = G1
        self.G2 = G2
        self.n = numVertices(G1)
        assert self.n == numVertices(G2)

        self.isomorphism = isomorphism
        self.state = None

    def sendIsomorphicCopy(self):
        isomorphism = randomPermutation(self.n)
        pi = makePermutationFunction(isomorphism)
        H = applyIsomorphism(self.G1, pi)
        self.state = isomorphism
        return H

    def proveIsomorphicTo(self, graphChoice):
        randomIsomorphism = self.state
        piInverse = makeInversePermutationFunction(randomIsomorphism)

        if graphChoice == 1:
            return piInverse
        else:
            f = makePermutationFunction(self.isomorphism)
            return lambda i: f(piInverse(i))
The prover has two methods, one for each round of the protocol. The first creates an isomorphic copy of $G_1$, and the second receives the challenge and produces the requested isomorphism.
And here’s the corresponding class for the verifier:
class Verifier(object):
    def __init__(self, G1, G2):
        self.G1 = G1
        self.G2 = G2
        self.n = numVertices(G1)
        assert self.n == numVertices(G2)

    def chooseGraph(self, H):
        choice = random.choice([1, 2])
        self.state = H, choice
        return choice

    def accepts(self, isomorphism):
        '''
        Return True if and only if the given isomorphism
        is a valid isomorphism between the randomly
        chosen graph in the first step, and the H presented
        by the Prover.
        '''
        H, choice = self.state
        graphToCheck = [self.G1, self.G2][choice - 1]
        f = isomorphism

        isValidIsomorphism = (graphToCheck == applyIsomorphism(H, f))
        return isValidIsomorphism
Then the protocol is as follows:
def runProtocol(G1, G2, isomorphism):
    p = Prover(G1, G2, isomorphism)
    v = Verifier(G1, G2)

    H = p.sendIsomorphicCopy()
    choice = v.chooseGraph(H)
    witnessIsomorphism = p.proveIsomorphicTo(choice)

    return v.accepts(witnessIsomorphism)
Analysis: Let’s suppose for a moment that everyone is honestly following the rules, and that $G_1, G_2$ are truly isomorphic. Then you’ll always accept my claim, because I can always provide you with an isomorphism. Now let’s suppose that I’m actually lying: the two graphs aren’t isomorphic, and I’m trying to fool you into thinking they are. What’s the probability that you’ll rightfully reject my claim?
Well, regardless of what I do, I’m sending you a graph $H$ and you get to make a random choice of $t = 1, 2$ that I can’t control. If $H$ is only actually isomorphic to either $G_1$ or $G_2$ but not both, then so long as you make your choice uniformly at random, half of the time I won’t be able to produce a valid isomorphism and you’ll reject. And unless you can actually tell which graph $H$ is isomorphic to—an open problem, but let’s say you can’t—then probability 1/2 is the best you can do.
Maybe the probability 1/2 is a bit unsatisfying, but remember that we can amplify this probability by repeating the protocol over and over again. So if you want to be sure I didn’t cheat and get lucky to within a probability of one-in-a-billion, you only need to repeat the protocol 30 times. To be surer than the chance of picking a specific atom at random from all atoms in the universe, only about 400 times.
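Each repetition halves a cheating prover's survival probability, so the number of rounds needed for a given error tolerance is just a logarithm. Here's a small helper of my own (not from the original protocol) that makes the arithmetic explicit:

```python
import math

def roundsNeeded(target):
    # A cheating prover survives each round with probability 1/2,
    # so we need ceil(log2(1/target)) independent rounds to push the
    # cheating probability below the target.
    return math.ceil(math.log2(1 / target))

print(roundsNeeded(1e-9))    # 30
print(roundsNeeded(1e-120))  # 399
```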
If you want to feel small, think of the number of atoms in the universe. If you want to feel big, think of its logarithm.
Here’s the code that repeats the protocol for assurance.
def convinceBeyondDoubt(G1, G2, isomorphism, errorTolerance=1e-20):
    probabilityFooled = 1

    while probabilityFooled > errorTolerance:
        result = runProtocol(G1, G2, isomorphism)
        assert result
        probabilityFooled *= 0.5
        print(probabilityFooled)
Running it, we see it succeeds
$ python graph-isomorphism.py
0.5
0.25
0.125
0.0625
0.03125
...
<SNIP>
...
1.3552527156068805e-20
6.776263578034403e-21
So it’s clear that this protocol is convincing.
But how can we be sure that there’s no leakage of knowledge in the protocol? What does “leakage” even mean? That’s where this topic is the most difficult to nail down rigorously, in part because there are at least three a priori different definitions! The idea we want to capture is that anything that you can efficiently compute after the protocol finishes (i.e., you have the content of the messages sent to you by the prover) you could have computed efficiently given only the two graphs $G_1, G_2$, and the claim that they are isomorphic.
Another way to say it is that you may go through the verification process and feel happy and confident that the two graphs are isomorphic. But because it’s a zero-knowledge proof, you can’t do anything with that information more than you could have done if you just took the assertion on blind faith. I’m confident there’s a joke about religion lurking here somewhere, but I’ll just trust it’s funny and move on.
In the next post we’ll expand on this “leakage” notion, but before we get there it should be clear that the graph isomorphism protocol will have the strongest possible “no-leakage” property we can come up with. Indeed, in the first round the prover sends a uniform random isomorphic copy of $G_1$ to the verifier, but the verifier can compute such an isomorphism already without the help of the prover. The verifier can’t necessarily find the isomorphism that the prover used in retrospect, because the verifier can’t solve graph isomorphism. Instead, the point is that the probability space of “$G_1$ paired with an $H$ made by the prover” and the probability space of “$G_1$ paired with $H$ as made by the verifier” are equal. No information was leaked by the prover.
For the second round, again the permutation $\pi$ used by the prover to generate $H$ is uniformly random. Since composing a fixed permutation with a uniform random permutation also results in a uniform random permutation, the second message sent by the prover is uniformly random, and so again the verifier could have constructed a similarly random permutation alone.
Let’s make this explicit with a small program. We have the honest protocol from before, but now I’m returning the set of messages sent by the prover, which the verifier can use for additional computation.
def messagesFromProtocol(G1, G2, isomorphism):
    p = Prover(G1, G2, isomorphism)
    v = Verifier(G1, G2)

    H = p.sendIsomorphicCopy()
    choice = v.chooseGraph(H)
    witnessIsomorphism = p.proveIsomorphicTo(choice)

    return [H, choice, witnessIsomorphism]
To say that the protocol is zero-knowledge (again, this is still colloquial) is to say that anything that the verifier could compute, given as input the return value of this function along with $G_1, G_2$ and the claim that they’re isomorphic, the verifier could also compute given only $G_1, G_2$ and the claim that $G_1, G_2$ are isomorphic.
It’s easy to prove this, and we’ll do so with a python function called simulateProtocol.
def simulateProtocol(G1, G2):
    # Construct data drawn from the same distribution as what is
    # returned by messagesFromProtocol
    choice = random.choice([1, 2])
    G = [G1, G2][choice - 1]
    n = numVertices(G)

    isomorphism = randomPermutation(n)
    pi = makePermutationFunction(isomorphism)
    H = applyIsomorphism(G, pi)

    # The witness maps H back to the chosen graph, matching the
    # direction of the isomorphism sent in the real protocol.
    return H, choice, makeInversePermutationFunction(isomorphism)
The claim is that the distribution of outputs to messagesFromProtocol and simulateProtocol are equal. But simulateProtocol will work regardless of whether $G_1, G_2$ are isomorphic. Of course, it’s not convincing to the verifier because the simulating function made the choices in the wrong order, choosing the graph index before making $H$. But the distribution that results is the same either way.
So if you were to use the actual Prover/Verifier protocol outputs as input to another algorithm (say, one which tries to compute an isomorphism of $G_1 \to G_2$), you might as well use the output of your simulator instead. You’d have no information beyond hard-coding the assumption that $G_1, G_2$ are isomorphic into your program. Which, as I mentioned earlier, is no help at all.
In this post we covered one detailed example of a zero-knowledge proof. Next time we’ll broaden our view and see the more general power of zero-knowledge (that it captures all of NP), and see some specific cryptographic applications. Keep in mind the preceding discussion, because we’re going to re-use the terms “prover,” “verifier,” and “simulator” to mean roughly the same things as the classes Prover, Verifier and the function simulateProtocol.
Until then!
Hashing to Estimate the Size of a Stream
Problem: Estimate the number of distinct items in a data stream that is too large to fit in memory.
Solution: (in python)
import random

def randomHash(modulus):
    a, b = random.randint(0, modulus - 1), random.randint(0, modulus - 1)
    def f(x):
        return (a * x + b) % modulus
    return f

def average(L):
    return sum(L) / len(L)

def numDistinctElements(stream, numParallelHashes=10):
    modulus = 2**20
    hashes = [randomHash(modulus) for _ in range(numParallelHashes)]
    minima = [modulus] * numParallelHashes
    currentEstimate = 0

    for i in stream:
        hashValues = [h(i) for h in hashes]

        for j, newValue in enumerate(hashValues):
            if newValue < minima[j]:
                minima[j] = newValue

        currentEstimate = modulus / average(minima)
        yield currentEstimate
Discussion: The technique used here is random hashing. The central idea is the same as the general principle presented in our recent post on hashing for load balancing. In particular, if you have an algorithm that works under the assumption that the data is uniformly random, then the same algorithm will work (up to a good approximation) if you process the data through a randomly chosen hash function.
So if we assume the data in the stream consists of $N$ uniformly random real numbers between zero and one, what we would do is the following. Maintain a single number $x_{\textup{min}}$ representing the minimum element in the list, and update it every time we encounter a smaller number in the stream. A simple probability calculation or an argument by symmetry shows that the expected value of the minimum is $1/(N+1)$. So your estimate of $N$ would be $1/x_{\textup{min}} - 1$. (The extra $-1$ does not change much, as we’ll see.) One can spend some time thinking about the variance of this estimate (indeed, our earlier post is great guidance for how such a calculation would work), but since real data is not random we need to do more work. If the elements are actually integers between zero and $k$, then this estimate can be scaled by $k$ and everything basically works out the same.
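Here is a quick simulation of that idealized setting, a sketch of my own under the assumption of truly uniform data: draw $N$ uniform samples, average several independent minima to tame the variance (mimicking the many-parallel-hashes trick below), and invert.

```python
import random

random.seed(0)  # deterministic demo
N = 10000

# E[min of N uniforms] = 1/(N+1), so 1/avg - 1 should be near N.
# Averaging 50 independent minima reduces the variance enough for
# the estimate to land within a small factor of N.
minima = [min(random.random() for _ in range(N)) for _ in range(50)]
estimate = 1 / (sum(minima) / len(minima)) - 1
print(0.2 * N < estimate < 5 * N)  # True
```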
Processing the data through a hash function $h$ chosen randomly from a 2-universal family (and we proved in the aforementioned post that this modulus thing is 2-universal) makes the outputs “essentially random” enough to have the above technique work with some small loss in accuracy. And to reduce variance, you can process the stream in parallel with many random hash functions. This rough sketch results in the code above. Indeed, before I state a formal theorem, let’s see the above code in action. First on truly random data:
S = [random.randint(1, 2**20) for _ in range(10000)]

for k in range(10, 301, 10):
    for est in numDistinctElements(S, k):
        pass
    print(abs(est))
# output
18299.75567190227
7940.7497160166595
12034.154552410098
12387.19432959244
15205.56844547564
8409.913113220158
8057.99978043693
9987.627098464103
10313.862295081966
9084.872639057356
10952.745228373375
10360.569781803211
11022.469475216301
9741.250165892501
11474.896038520465
10538.452261306533
10068.793492995934
10100.266495424627
9780.532155130093
8806.382800033594
10354.11482578643
10001.59202254498
10623.87031408308
9400.404915767062
10710.246772348424
10210.087633885101
9943.64709187974
10459.610972568578
10159.60175069326
9213.120899718839
As you can see the output is never off by more than a factor of 2. Now with “adversarial data.”
S = range(10000)  # instead of [random.randint(1,2**20) for _ in range(10000)]

for k in range(10, 301, 10):
    for est in numDistinctElements(S, k):
        pass
    print(abs(est))
# output
12192.744186046511
15935.80547112462
10167.188106011634
12977.425742574258
6454.364151175674
7405.197740112994
11247.367453263867
4261.854392115023
8453.228233608026
7706.717624577393
7582.891328643745
5152.918628936483
1996.9365093316926
8319.20208545846
3259.0787592465967
6812.252720480753
4975.796789951151
8456.258064516129
8851.10133724288
7317.348220516398
10527.871485943775
3999.76974425661
3696.2999065091117
8308.843106180666
6740.999794281012
8468.603733730935
5728.532232608959
5822.072220349402
6382.349459544548
8734.008940222673
The estimates here are off by a factor of up to 5, and this estimate seems to get better as the number of hash functions used increases. The formal theorem is this:
Theorem: If $S$ is the set of distinct items in the stream, $n = |S|$, and the modulus satisfies $m > 100 n$, then with probability at least 2/3 the estimate $m / x_{\textup{min}}$ is between $n/6$ and $6n$.
We omit the proof (see below for references and better methods). As a quick analysis: since we’re only storing a constant number of integers at any given step, the algorithm has space requirement $O(\log m) = O(\log n)$, and each step takes time polynomial in $\log(m)$ (since we have to compute multiplication and modulus with respect to $m$).
This method is just the first ripple in a lake of research on this topic. The general area is called “streaming algorithms,” or “sublinear algorithms.” This particular problem, called cardinality estimation, is related to a family of problems called estimating frequency moments. The literature gets pretty involved in the various tradeoffs between space requirements and processing time per stream element.
As far as estimating cardinality goes, the first major results were due to Flajolet and Martin in 1983, where they provided a slightly more involved version of the above algorithm, which uses logarithmic space.
Later revisions to the algorithm (2003) got the space requirement down to $O(\log \log n)$, which is exponentially better than our solution. And further tweaks and analysis improved the variance bounds to something like a multiplicative factor of $\sqrt{m}$. This is called the HyperLogLog algorithm, and it has been tested in practice at Google.
Finally, a theoretically optimal algorithm (achieving an arbitrarily good estimate with logarithmic space) was presented and analyzed by Kane et al in 2010.
The Čech Complex and the Vietoris-Rips Complex
It’s about time we got back to computational topology. Previously in this series we endured a lightning tour of the fundamental group and homology, then we saw how to compute the homology of a simplicial complex using linear algebra.
What we really want to do is talk about the inherent shape of data. Homology allows us to compute some qualitative features of a given shape, i.e., find and count the number of connected components of a given shape, or the number of “2-dimensional holes” it has. This is great, but data doesn’t come in a form suitable for computing homology. Though they may have originated from some underlying process that follows nice rules, data points are just floating around in space with no obvious connection between them.
Here is a cool example of Thom Yorke, the lead singer of the band Radiohead, whose face was scanned with a laser scanner for their music video “House of Cards.”
Radiohead’s Thom Yorke in the music video for House of Cards (click the image to watch the video).
Given a point cloud such as the one above, our long term goal (we’re just getting started in this post) is to algorithmically discover what the characteristic topological features are in the data. Since homology is pretty coarse, we might detect the fact that the point cloud above looks like a hollow sphere with some holes in it corresponding to nostrils, ears, and the like. The hope is that if the data set isn’t too corrupted by noise, then it’s a good approximation to the underlying space it is sampled from. By computing the topological features of a point cloud we can understand the process that generated it, and Science can proceed.
But it’s not always as simple as Thom Yorke’s face. It turns out the producers of the music video had to actually degrade the data to get what you see above, because their lasers were too precise and didn’t look artistic enough! But you can imagine that if your laser is mounted on a car on a bumpy road, or tracking some object in the sky, or your data comes from acoustic waves traveling through earth, you’re bound to get noise. Or more realistically, if your data comes from thousands of stock market prices then the process generating the data is super mysterious. It changes over time, it may not follow any discernible pattern (though speculators may hope it does), and you can’t hope to visualize the entire dataset in any useful way.
But with persistent homology, so the claim goes, you’d get a good qualitative understanding of the dataset. Your results would be resistant to noise inherent in the data. It also wouldn’t be sensitive to the details of your data cleaning process. And with a dash of ingenuity, you can come up with a reasonable mathematical model of the underlying generative process. You could use that model to design algorithms, make big bucks, discover new drugs, recognize pictures of cats, or whatever tickles your fancy.
But our first problem is to resolve the input data type error. We want to use homology to describe data, but our data is a point cloud and homology operates on simplicial complexes. In this post we’ll see two ways one can do this, and see how they’re related.
The Čech complex
Let’s start with the Čech complex. Given a point set $X$ in some metric space and a number $\varepsilon > 0$, the Čech complex $C_\varepsilon$ is the simplicial complex whose simplices are formed as follows. For each subset $S \subset X$ of points, form a $(\varepsilon/2)$-ball around each point in $S$, and include $S$ as a simplex (of dimension $|S| - 1$) if there is a common point contained in all of the balls in $S$. This structure obviously satisfies the definition of a simplicial complex: any sub-subset $S' \subset S$ of a simplex $S$ will also be a simplex. Here is an example of the epsilon balls.
An example of a point cloud (left) and a corresponding choice of (epsilon/2)-balls. To get the Cech complex, we add a k-simplex any time we see a subset of k points with common intersection. [Image credit: Robert Ghrist]
Let me superscript the Čech complex to illustrate the pieces. Specifically, we’ll let $C_\varepsilon^{j}$ denote all the simplices of dimension up to $j$. In particular, $C_\varepsilon^1$ is a graph where an edge is placed between $x,y$ if $d(x,y) < \varepsilon$, and $C_{\varepsilon}^2$ places triangles (2-simplices) on triples of points whose balls have a three-way intersection.
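The 1-skeleton is easy to compute directly. Here is a minimal sketch of my own, using 2D points and the $d(x,y) < \varepsilon$ rule above:

```python
import math

def cechOneSkeleton(points, epsilon):
    # Connect two points whenever their (epsilon/2)-balls overlap,
    # i.e. whenever the distance between them is less than epsilon.
    # (math.dist requires Python 3.8+.)
    n = len(points)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) < epsilon]

print(cechOneSkeleton([(0, 0), (1, 0), (5, 0)], 1.5))  # [(0, 1)]
```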
A topologist will have a minor protest here: the simplicial complex is supposed to resemble the structure inherent in the underlying points, but how do we know that this abstract simplicial complex (which is really hard to visualize!) resembles the topological space we used to make it? That is, $X$ was sitting in some metric space, and the union of these epsilon-balls forms some topological space $X(\varepsilon)$ that is close in structure to $X$. But is the Čech complex $C_\varepsilon$ close to $X(\varepsilon)$? Do they have the same topological structure? It’s not a trivial theorem to prove, but it turns out to be true.
The Nerve Theorem: The homotopy types of $X(\varepsilon)$ and $C_\varepsilon$ are the same.
We won’t remind the readers about homotopy theory, but suffice it to say that when two topological spaces have the same homotopy type, then homology can’t distinguish them. In other words, if homotopy type is too coarse a discriminator for our dataset, then persistent homology will fail us for sure.
So this theorem is a good sanity check. If we want to learn about our point cloud, we can pick an $\varepsilon$ and study the topology of the corresponding Čech complex $C_\varepsilon$. The reason this is called the “Nerve Theorem” is because one can generalize it to an arbitrary family of convex sets. Given some family $F$ of convex sets, the nerve is the complex obtained by adding simplices for mutually overlapping subfamilies in the same way. The nerve theorem is actually more general: it says that, under suitable conditions on the family $F$ being “nice,” the resulting nerve has the same homotopy type as the union of the sets in $F$.
The problem is that Čech complexes are tough to compute. To tell whether there are any 10-simplices (without additional knowledge) you have to inspect all subsets of size 10. In general computing the entire complex requires exponential time in the size of $X$, which is extremely inefficient. So we need a different kind of complex, or at least a different representation to compensate.
The Vietoris-Rips complex
The Vietoris-Rips complex is essentially the same as the Čech complex, except instead of adding a $d$-simplex when there is a common point of intersection of all the $(\varepsilon/2)$-balls, we just do so when all the balls have pairwise intersections. We’ll denote the Vietoris-Rips complex with parameter $\varepsilon$ as $VR_{\varepsilon}$.
Here is an example to illustrate: if you give me three points that are the vertices of an equilateral triangle of side length 1, and I draw $(1/2)$-balls around each point, then they will have all three pairwise intersections but no common point of intersection.
Three balls which intersect pairwise, but have no point of triple intersection. With appropriate parameters, the Cech and V-R complexes are different.
So in this example the Vietoris-Rips complex contains a 2-simplex (the filled-in triangle), while the Čech complex is just a graph.
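We can verify the geometry numerically: the circumcenter of the equilateral triangle is the point closest to all three vertices, and its distance to each vertex exceeds the ball radius $1/2$ (a quick check of my own, not from the original post):

```python
import math

side = 1.0
radius = side / 2                    # the (epsilon/2)-balls in the example
circumradius = side / math.sqrt(3)   # circumcenter-to-vertex distance

# Pairwise: centers are at distance 1 = radius + radius, so each pair
# of closed balls meets. Triple: no point is within 1/2 of all three,
# since even the best candidate (the circumcenter) is too far away.
print(circumradius > radius)  # True
```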
One obvious question is: do we still get the benefits of the nerve theorem with Vietoris-Rips complexes? The answer is no, obviously, because the Vietoris-Rips complex and Čech complex in this triangle example have totally different topology! But everything’s not lost. What we can do instead is compare Vietoris-Rips and Čech complexes with related parameters.
Theorem: For all $\varepsilon > 0$, the following inclusions hold
$\displaystyle C_{\varepsilon} \subset VR_{\varepsilon} \subset C_{2\varepsilon}$
So if the Čech complexes for both $\varepsilon$ and $2\varepsilon$ are good approximations of the underlying data, then so is the Vietoris-Rips complex. In fact, you can make this chain of inclusions slightly tighter, and if you’re interested you can see Theorem 2.5 in this recent paper of de Silva and Ghrist.
Now your first objection should be that computing a Vietoris-Rips complex still requires exponential time, because you have to scan all subsets for the possibility that they form a simplex. It’s true, but one nice thing about the Vietoris-Rips complex is that it can be represented implicitly as a graph. You just include an edge between two points if their corresponding balls overlap. Once we want to compute the actual simplices in the complex we have to scan for cliques in the graph, so that sucks. But it turns out that computing the graph is the first step in other more efficient methods for computing (or approximating) the VR complex.
Let’s go ahead and write a (trivial) program that computes the graph representation of the Vietoris-Rips complex of a given data set.
import numpy
from itertools import combinations
from numpy.linalg import norm

def naiveVR(points, epsilon):
    points = [numpy.array(x) for x in points]
    vrComplex = [(x, y) for (x, y) in combinations(points, 2) if norm(x - y) < 2 * epsilon]
    return numpy.array(vrComplex)
Let’s try running it on a modestly large example: the first frame of the Radiohead music video. It’s got about 12,000 points in $\mathbb{R}^4$ (x,y,z,intensity), and sadly it takes about twenty minutes. There are a couple of ways to make it more efficient. One is to use specially-crafted data structures for computing threshold queries (i.e., find all points within $\varepsilon$ of this point). But those are only useful for small thresholds, and we’re interested in sweeping over a range of thresholds. Another is to invoke approximations of the data structure which give rise to “approximate” Vietoris-Rips complexes.
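Once the graph is computed, the higher simplices are the cliques. To make the "scan for cliques" step concrete, here is a naive sketch of my own that recovers the 2-simplices from an edge list:

```python
from itertools import combinations

def triangles(edges):
    # Three vertices span a 2-simplex of the VR complex exactly when
    # all three pairwise edges are present in the graph.
    edgeSet = {frozenset(e) for e in edges}
    vertices = sorted({v for e in edges for v in e})
    return [t for t in combinations(vertices, 3)
            if all(frozenset(p) in edgeSet for p in combinations(t, 2))]

# A 4-cycle with one chord contains exactly two triangles.
print(triangles([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]))
# [(1, 2, 3), (1, 3, 4)]
```

Scanning all triples this way is cubic in the number of vertices, which is exactly why the post defers to the implicit graph representation for anything large.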
Other stuff
In a future post we’ll implement a method for speeding up the computation of the Vietoris-Rips complex, since this is the primary bottleneck for topological data analysis. But for now we have the conceptual idea of how Čech complexes and Vietoris-Rips complexes can be used to turn point clouds into simplicial complexes in reasonable ways.
Before we close we should mention that there are other ways to do this. I’ve chosen the algebraic flavor of topological data analysis due to my familiarity with algebra and the work based on this approach. The other approaches have a more geometric flavor, and are based on the Delaunay triangulation, a hallmark of computational geometry algorithms. The two approaches I’ve heard of are called the alpha complex and the flow complex. The downside of these approaches is that, because they are based on the Delaunay triangulation, they have poor scaling in the dimension of the data. Because high dimensional data is crucial, many researchers have been spending their time figuring out how to speed up approximations of the V-R complex. See these slides of Afra Zomorodian for an example.
Until next time!
# Proof by induction with two variables
Giving a proof by induction is normally very straightforward: $n+1$ and such. But how do you deal with two variables $m$ and $n$? Given this problem, how do I ensure that I'm proving for $n+1$ and $m+1$? (If that's needed)
Give a direct proof that if $n$ and $m$ are even integers, then $n + m$ is an even integer.
• An induction proof would be extreme overkill. Try a direct proof using the definition of an even integer. – Adriano Oct 20 '14 at 19:25
## Easy Proof
Let $n=2j$ and $m=2k$ where $k, j \in \mathbb{Z}$. Then $n+m=2j+2k=2(j+k)$ which is even because $j+k$ is an integer.
## Inductive proof
Regular induction requires a base case and an inductive step. When we increase to two variables, we still require a base case but now need two inductive steps. We'll prove the statement for positive integers $\mathbb{N}$. Extending it to negative integers can be done directly.
### Base case
Let the base case be the case where $n=2$ and $m=2$. Clearly, $n+m=4$ is even.
### Inductive step for $n$
Suppose that $n+m$ is even for some $n, m$. We will show that $(n+2) + m$ is even. Since $n+m$ is even it can be expressed as $2k$, so we rewrite $(n+2)+m$ to $2k+2=2(k+1)$ which is even.
### Inductive step for $m$
Suppose that $n+m$ is even for some $n, m$. We will show that $n + (m+2)$ is even. Since $n+m$ is even it can be expressed as $2k$, so we rewrite $n+(m+2)$ to $2k+2=2(k+1)$ which is even.
This completes the proof. To intuitively understand why the induction is complete, consider a concrete example. We will show that $8+6$ is even using a finite inductive argument.
First note that the base case shows $2+2$ is even. Then by the inductive step for $n$, $4+2$ is even. Then by the inductive step for $n$, $6+2$ is even. Then by the inductive step for $n$, $8+2$ is even.
Then by the inductive step for $m$, $8+4$ is even. Then by the inductive step for $m$, $8+6$ is even, which completes the proof.
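The finite chain of steps in this example can be carried out mechanically. Here's a small script of my own that mirrors the induction, walking from the base case $(2, 2)$ to $(n, m)$ while maintaining the witness $a + b = 2k$:

```python
def even_sum_by_induction(n, m):
    # Base case: 2 + 2 = 2 * 2.
    a, b, k = 2, 2, 2
    while a < n:
        a, k = a + 2, k + 1  # inductive step for n
    while b < m:
        b, k = b + 2, k + 1  # inductive step for m
    # The invariant a + b == 2*k held at every step.
    return a + b == 2 * k

print(even_sum_by_induction(8, 6))  # True
```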
|
{}
|
The first ratio is in the first sentence, and the second ratio is in the question. The formula to calculate this problem: Proportion = (N1 * 100)/N2 Where N1 is Number 1 and N2 is Number 2. We have created the Golden Ratio Calculator to enable you to swiftly and effortlessly apply the Golden Ratio to find a missing value. Therefore, in the part-to-part ratio 1 : 2, 1 is 1/3 of the whole and 2 is 2/3 of the whole. Please enter 3 numbers and leave one field blank, then click Calculate button. Apart from the word problems given above, if you need more word problems on ratio and proportion, please click the following links. Otherwise the calculator finds an equivalent ratio by multiplying each of A and B by 2 to create values for C and D. Enter A, B and C to find D. FAQ. Let’s say you know your friend Alicia has 7 pairs of jeans and you’re wondering how many shirts she has, based on the ratio or rate of 5 shirts to one pair of jeans. Each ratio term becomes a numerator in a fraction. Equivalent ratios: recipe. A ratio of 1/2 can be entered into the ratio calculator as 1:2, 2/10 would be 2:10 Simplify ratios or create an equivalent ratio when one side of the ratio is empty. Therefore, each table represents a ratio. Proportion calculator that shows work to check if the given ratios are in proportion. The calculator shows the steps and solves for D = C * (B/A), Enter A, B and D to find C. What is this weight in ounces ?Use a proportion … How to Calculate a Ratio of 3 Numbers There are three numbers in the ratio: ‘2’, ‘5’ and ‘3’. The calculator finds the values of A/B and C/D and compares the results to evaluate whether the statement is true or false. All rights reserved. Consider two ratios to be a: b and c: d. The concept occurs in many places in mathematics. Be sure to keep the order the same: The first number goes on top of the fraction, and the second number goes on the bottom. 
Our online calculators, converters, randomizers, and content are provided "as is", free of charge, and without any warranty or guarantee. As an expanded version of the Odds Ratio tab, the Proportions tab calculates p 1, 2, the difference, ratio, odds ratio, or Ln(OR) from various combinations of these parameters. Solve the proportion. When a fraction is represented in the form of a:b, then it is a ratio whereas a proportion states that two ratios … 1. Ratio and Proportion Word Problems - 2. A proportion is two ratios that have been set equal to each other, for example, 1/4 equals 2/8. A ratio is a mathematical comparison of two numbers, based on division. Say a recipe to make brownie requires 4 cups of flour for 6 people. Enter the following : 1 inch / 200 feet = 11.75 inches / ? Proportion Calculator. Our ratio calculator shows the missing value in decimal. So in ratio and proportion there is no unit of measurement. An online on site concrete calculator to calculate the concrete mix ratio. The Percent Proportion Calculator calculates what percent proportion one number is compared to another number. Simply enter the total length value of A and B, or alternately, enter the lengths of either A or B and click "Calculate" to find the remaining two values. This online calculator recalculates quantities of ingredients for given proportions if quantity of one ingredient has changed person_outline Timur schedule 2013 … These ratios are made up of a numerator (top number) and denominator (bottom number). This free ratio calculator solves ratios, scales ratios, or finds the missing value in a set of … (EMGV) A ratio is a comparison of two or more numbers that are usually of the same type or measurement. Ratio and Proportion Real life applications of ratio and proportion are numerous! Step 3. How to Solve Proportion Problems with This Calculator? If you're seeing this message, it means we're having trouble loading external resources on our website. 
For example, if there are 11 boys and 13 girls in a room, the ratio of boys to girls is 11 to 13, which may be written 11/13 or 11:13. Continued Proportion. It can be used to calculate all types of medication problems.… In this article, we will share the difference between ratio and proportion while giving basic examples. Odds Ratio Calculator. Intro to ratios. Type 3: Ratio and Proportions Tips and Tricks- Coins Based Ratio Problems Question3. The equivalent ratio calculator will then produce a table of equivalent ratios which you can print or email to yourself for future reference. Cross multiplication method. Free Ratios & Proportions calculator - compare ratios, convert ratios to fractions and find unknowns step-by-step This website uses cookies to ensure you get the best experience. This calculator simplifies ratios by converting all values to whole numbers then reducing the whole numbers to lowest terms using the greatest common factor (GCF). Each relationship in the table can be described by two ratios. Add the ratio terms to get the whole. One common type of problem that employs ratios may involve using ratios to scale up or down the two numbers in proportion to each other. To calculate a ratio, start by determining which 2 quantities are being compared to each other. Ratio and proportion worksheets help kids in understanding the real life applications of ratios and proportions. BYJU’S online proportion calculator tool makes the calculation faster, and it displays the true or false in a fraction of seconds. Resort to the help of this amazing ratio calculator when you have you settle ratio/proportion problems and check equivalent fractions. $$c:d = e:f$$ As compared to the fraction form, a colon sign “:” appears between every pair … Proportion calculator that shows work to check if the given ratios are in proportion. When planning studies involving two proportions, two parameters, p 1 pand 2, need to be specified. 
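The simplification step mentioned above (reduce both terms by their greatest common factor) can be sketched as follows; the function name is illustrative, and the sugar-to-butter figures come from the text itself.

```python
from math import gcd

def simplify_ratio(a, b):
    """Reduce a ratio a:b to lowest terms using the greatest common factor (GCF)."""
    g = gcd(a, b)
    return a // g, b // g

print(simplify_ratio(200, 50))  # sugar : butter, 200:50 -> (4, 1)
```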
Calculate problems for a missing term (x) using ratio and proportion Ratio and proportion is one logical method for calculating medications. What Is A Proportion? Using the Ratio Calculator. This means of the whole of 3, there is a part worth 1 and another part worth 2. Multiplying or dividing all terms in a ratio by the same number creates a ratio with the same proportions as the original, so, to scale your ratio, multiply or divide through the ratio by the scaling factor. The proportion $$c,d$$ and $$e,f$$ would appear in the following way if the ratio layout is used. In practice, a ratio is most useful when used to set up a proportion — that is, an equation involving two ratios. The sum of the parts makes up the whole. The concept occurs in many places in mathematics. Apart from the stuff given above, if you need any other stuff in math, please use our google custom search here. We write the numbers in a ratio with a colon (:) between them. The equivalent ratio is a free online tool that displays whether the two given ratios are equal or not. They can be different types, for example, one fraction and one decimal. Define ratio and proportion 2. For example, if you wanted to know the ratio of girls to boys in a class where there are 5 girls and 10 boys, 5 and 10 would be the quantities you're comparing. Use this odds ratio calculator to easily calculate the ratio of odds, confidence intervals and p-values for the odds ratio (OR) between an exposed and control group. The calculator shows the steps and solves for C = D * (A/B). Check. Ratio and Proportions Calculator from the Calculators sub-menu of the Tools menu. Again, take the example of a city’s population where proportions will be used to count only men out of … Let R be the unknown propeller rate and compare the ratios. First, let’s see how we can use this cross multiplication calculator to find a proportion. How proportion calculator works? The calculator will simplify the ratio A : B if possible. 
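Solving for the missing term by cross multiplication, as described above, can be sketched in Python. The helper name is made up for this example; the numbers come from the map-scale problem in the text (1 inch / 200 feet = 11.75 inches / x).

```python
def cross_multiply(a, b, c):
    """Given a/b = c/x, solve for x: cross-multiplying gives a*x = b*c."""
    return b * c / a

# Map-scale example from the text: 1 inch / 200 feet = 11.75 inches / x
print(cross_multiply(1, 200, 11.75))  # 2350.0 feet
```

The same one-liner covers the recipe-scaling case: 4 cups of flour for 6 people gives `cross_multiply(4, 6, 2)` cups-to-people scaling, i.e. 2 cups serve 3 people.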
The examples so far have been "part-to-part" (comparing one part to another part). Why do you need to acquire this knowledge? When setting up the ratio and proportion using the fraction format to calculate dosages, the known ratio is what you have available, or the information on the medication label, and is stated first (placed on the left side of the proportion). The ratio values can be positive or negative. Despite the fact that you cannot enter a ratio of 4/5 into this calculator, it accepts values such as 4:5, for example, 4/3 should be written as 4:3. Moreover, it can be used to learn the calculations of proportion. To convert a part-to-part ratio to fractions: To reduce a ratio to lowest terms in whole numbers see our Simplifying Fractions Calculator. Proportion: While the ratio is an expression, a proportion is an equation which is also used to compare a quantity but unlike ratios, it compares a single quantity to a whole. This window lets you calculate p 1, p 2, the difference, ratio, odds ratio, ln(OR), odds 1, and odds 2 from p 1 or p 2 and one of the other two parameters. In simple words, it compares two ratios. In this section, we will explain how to find proportions using a calculator? Using Proportions to Solve Percent Problems https://www.gigacalculator.com/calculators/proportion-calculator.php, the distance travelled by a moving object under constant speed is proportional to the time (the constant is the speed, you can explore this topic using our, the relationship between the net force acting on an object and its, the number of people working on a given set task, if each has the same, the number of identical pipes you need to fill the. And the answer would be something like "Number 1 is x% of number 2" where x is the proportion. Quick-Start Guide. Proportions are denoted by the symbol ‘::’ or ‘=’. About Proportion Calculator . Ratio and Proportion Word Problems - 1. 
The pixel aspect calculator makes it extremely easy to change any "W:H" format with custom a width or height. A proportion is two ratios that have been set equal to each other, for example, 1/4 equals 2/8. Ratio and Proportion PowerPoint; Simplifying Ratios - Colour by Numbers Worksheet; An example of a ratio. A lot of students are seen confused between the two terms because of how these work. It can also give out ratio visual representation samples. Learn all about proportional relationships. Then, put a colon or the word "to" between the numbers to express them as a ratio. For example, if a recipe calls for 200g sugar and 50g butter, the ratio of sugar to butter is 200:50, which can then be simplified to 4:1. Ratio and Proportion are two expressions which we learn in basic mathematics classes. How to Use the Proportion Calculator? One and two-sided confidence intervals are reported, as well as Z-scores. But a ratio can also show a part compared to the whole lot. Ratios and Proportions A ratio is fundamentally a fraction, or two numbers expressed as a quotient, such as 3/4 or 179/2,385. This is a simple calculator to help you work out the aspect ratio of an image, and the size of that image when it's resized, keeping the same proportions. What types of word problems can we solve with proportions? $$\frac{20}{1}=\frac{40}{2}$$ A proportion is read as "x is to y as z is to w" Proportion Calculator. For example, when a pair of numbers increase or decrease in the same ratio… Apart from the word problems given above, if you need more word problems on ratio and proportion, please click the following links. By using this website, you agree to our Cookie Policy. The Proportion Calculator is used to solve proportion problems and find the missing value in a proportion. Is the ratio A : B equivalent to the ratio C : D? Number 1: Number 2. Compare ratios and evaluate as true or false to answer whether ratios or fractions are equivalent. 
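Rescaling a "W:H" aspect ratio to a custom width, as the paragraph above describes, is one division and one multiplication. A minimal sketch, with an assumed 16:9 ratio and 1280-pixel target width purely for illustration:

```python
def scale_to_width(w, h, new_w):
    """Scale a W:H aspect ratio to a new width, keeping the same proportions."""
    return new_w, new_w * h / w  # new height preserves h/w

print(scale_to_width(16, 9, 1280))  # (1280, 720.0)
```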
A ratio of 1/2 can be entered into the equivalent ratio calculator as 1:2, 2/10 would be 2:10. The ratio calculator performs three types of operations and shows the steps to solve: Simplify ratios or create an equivalent ratio when one side of the ratio is empty. The formulas used by this proportion calculator are: if you enter only A and B in order to determine the C and D figures, it multiplies both A and B by 2 in order to return true ratio values for C and D. if you complete the A, B and C to find the D value, it solves the expression in which D = C * (B / A). Write a proportion. Example: 1/2 = x/x will cause the calculator to report 0 as a solution, even though there is no solution. This means that the amount is shared between three people. If you'd like to cite this online calculator resource and information as provided on the page, you can use the following citation: Georgiev G.Z., "Proportion Calculator", [online] Available at: https://www.gigacalculator.com/calculators/proportion-calculator.php URL [Accessed Date: 01 Dec, 2020]. "Part-to-Part" and "Part-to-Whole" Ratios. More Math Calculators Concrete Mix Ratio Calculator. BYJU’S online equivalent ratio calculator tool makes the calculations faster and easier where it displays the value in a fraction of seconds. Compare ratios and evaluate as true or false to answer whether ratios or fractions are equivalent. Basic ratios. Here are a few ways to express the ratio of scarves to caps: The simplest way to work with a ratio is to turn it into a fraction. Use this calculator to simplify ratios of the form A : B. Here, we will illustrate how to calculate proportion by yourself. Proportion calculation Given two equivalent ratios, you can solve for the missing value using cross multiplication to convert the proportion into a equation and then solve for the variable. Ratio review. https://www.calculatorsoup.com - Online Calculators. The Golden Ratio. What types of word problems can we solve with proportions? 
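The true/false check on a proportion like 20/1 = 40/2 reduces to cross multiplication, which avoids floating-point division entirely. A minimal sketch (the function name is illustrative):

```python
def is_proportion(a, b, c, d):
    """Check whether a/b = c/d, i.e. whether the two ratios are in proportion.
    Cross multiplication avoids floating-point comparison: a*d == b*c."""
    return a * d == b * c

print(is_proportion(20, 1, 40, 2))  # True: 20/1 = 40/2
print(is_proportion(1, 2, 2, 3))   # False
```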
Enter A and B to find C and D. (or enter C and D to find A and B) It will come up in the next section, though. In calculation of proportions, it is presumed that the method of classification has been such that categories are mutually exclusive and the category-set exhaustive. Part-to-Part: The ratio of boys to girls is 2:3 or 2 / 3. Manually, we calculate the unknown variable in a proportion by using cross multiplication method. The calculator uses cross multiplication to convert proportions into equations which are then solved using ordinary equation solving methods. If either side of the proportion has a numerator and denominator that share a common factor with a variable, the calculator will report an erroneous solution. Convert the ratio into fractions. A or B can be whole numbers, integers, decimal numbers, fractions or mixed numbers. Define means and extremes 3. Proportions: The proportion of cases in any given category is defined as the number in the category divided by the total number of cases. Ratio Simplifier. Cite this content, page or calculator as: Furey, Edward "Ratio Calculator"; CalculatorSoup, Ratio and Proportion Word Problems - 1. When you prepare recipes, paint your house, or repair gears in a large machine or in a car transmission, you use ratios and proportions. How are they connected to ratios and rates? For example, suppose you bring 2 scarves and 3 caps with you on a ski vacation. You will learn the definitions, examples, and applications of these two terms in practical aspects of life as well. The simplest way to work with a ratio is to turn it into a fraction. The step-by-step calculation help parents to assist their kids studying 4th, 5th or 6th grade to verify the work and answers of checking ratios with two or more numbers in proportion homework and assignment problems in pre-algebra or in ratios and proportional relationships (RP) of common core state standards (CCSS) for … Find a cross product you can calculate. 
This is a simple calculator to help you work out the aspect ratio of an image, and the size of that image when it's resized, keeping the same proportions. Missing number in the proportion calculator that shows work to find the unknown value of number in the given proportion. Few apps to check are Math Fitter Fitness Calculator, Farmatic Collage, Ratio, etc. We write the numbers in a ratio with a colon (:) between them. Geeta has 1800 rupees in the denomination of 5 paisa, 25 paisa, and 75 paisa in ratio 6 : 3 : 1. From mathematics, a proportion is simply two ratios in an equation, for example 1/2 = 50/100, 75/100 = 3/4, 9/10 = 90/100. Use this handy calculator! Guidelines to follow when using the proportion calculator Each table has two boxes. [4] Example 1 – Solving for P1 Suppose you know that 2= 0.2 and that OR = 4 and you want to find the corresponding value of p 1. See our full terms of service. Each tool is carefully developed and rigorously tested, and our content is well-sourced, but despite our best effort it is possible they contain errors. The ratio 1 : 2 is read as "1 to 2." Enter A, B, C and D. Proportion: The question we answer above is: Number 1 is what proportion of Number 2? A part-to-part ratio states the proportion of the parts in relation to each other. We are not to be held responsible for any resulting damages from proper or improper use of the service. Proportion: Variable: = Example Proportion. Original size: width: height: Reproduction size: width: height: Percentage: % How it works: Fill in the original dimensions (width and height) and either the reproduction width, reproduction height, or desired percentage. This calculator solves for X of equal proportions. Using the Ratio Calculator Resort to the help of this amazing ratio calculator when you have you settle ratio/proportion problems and check equivalent fractions. But it is a special kind of fraction, one that is used to compare related quantities. Practice Problems. 
1.6 Ratio, rate and proportion (EMGT) What is a ratio? The major materials needed in the preparation of concrete blocks are portland cement, sand, aggregate (stone), and water. Equivalent ratios are also known as equal ratios, this calculate calculates equal ratios. After reading this article you will learn about Proportions, Percentages and Ratios. Use this as the denominator. Ratio and Proportion are explained majorly based on fractions. Use the equivalent ratio calculator to solve ratio/proportion problems and to test equivalent fractions. Common examples of direct proportionality include: Examples of proportionality varying inversely include: Our proportions calculator can be used to construct and check many more examples. Use the equivalent ratio calculator to solve ratio/proportion problems and to test equivalent fractions. y which we can also express that as c / x = y / 1 and again solve for c. If y = 2 for x = 10, then we have c / 10 = 2 / 1 hence c = 20. By Mark Zegarelli . Apart from the stuff given above, if you need any other stuff in math, please use our google custom search here. The full soluti… Our ratio calculator is developed to compute this contrast and find out the relationship between these numbers. You can use a ratio to solve problems by setting up a proportion equation — that is, an equation involving two ratios… A proportion on the other hand is an equation that says that two ratios are equivalent. Need help figuring out human body proportions? Ratios are sometimes represented as percentages, which you can calculate here. A relation between fractions and ratios, equivalent ratios, allotting or distributing or dividing quantities using ratio, comparing ratios, usage of proportions and more. Learn more about the everyday use of ratios, or explore hundreds of other calculators addressing the topics of math, fitness, health, and finance, among others. Ratios are used to show how much of one element there is in relation to another. 
The ratio calculator performs three types of operations and shows the steps to solve: This ratio calculator will accept integers, decimals and scientific e notation with a limit of 15 characters. (EMGV) A ratio is a comparison of two or more numbers that are usually of the same type or measurement. Equivalent ratios can be divided and/or multiplied by the same number on both sides, so as above, 12:4 is an equivalent ratio to 3:1. A ratio and proportion may be used to determine how many milliliters to administer. Solve ratios for the one missing value when comparing ratios or proportions. To find the value of each part, we divide the amount of … Continue reading "How to Calculate a Ratio … Be sure to keep the order the same: The first number goes on top of the fraction, and the second number goes on the bottom. This term is … Ratios can inform you of the direct proportion of each number in comparison to the other. Step 1. For instance if one package of cookie mix results in 20 cookies than that would be the same as to say that two packages will result in 40 cookies. © 2006 -2020CalculatorSoup® For example, because 1 quart = 2 pints, we can write two ratios 1 quart / 2 pints and 2 pints / 1 quart. When setting up the ratio and proportion using the fraction format to calculate dosages, the known ratio is what you have available, or the information on the medication label, and is stated first (placed on the left side of the proportion). Ratio and Proportion Real life applications of ratio and proportion are numerous! Step 2: Now click the button “Solve” to get the result Load the Odds Ratio and Proportions Conversion Tool by selecting it from the PASS-Tools menu. To simplify a fraction into a reduced fraction or mixed number use our This free ratio calculator solves ratios, scales ratios, or finds the missing value in a set of ratios. Ratio and proportions are said to be faces of the same coin. 
If one variable is a product of the other variable and a constant, the two variables are called directly proportional - in this case x/y is a constant ratio. The box on top is the numerator and the box at the bottom is the denominator. Hint: Selecting "AUTO" in the variable box will make the calculator automatically solve for the first variable it sees. Change the image aspect ratio via this Ratio Calculator . A good way to work with a ratio is to turn it into a fraction. If the numbers have different units, it is important to convert the units to be the same before doing any calculations. When two ratios are equal in value, then they are said to be in proportion. The ratio is still the same, so the pancakes should be just as yummy. Enter two different ratios to calculate the proportion of one ratio to another. It plays a major role in creating a strong, durable concrete block. Solving a proportion means that you are missing one part of one of the fractions, and you need to find that missing value. Solve ratios for the one missing value when comparing ratios or proportions. (Opens a modal) Equivalent ratios. You use ratio and proportion to scale up or scale down anything that you can measure. Problem 1 : An average human brain weighs 3 pounds. Calculate the Aspect Ratio (ARC) here by entering your in pixel or ratio . The procedure to use the proportion calculator is as follows: Step 1: Enter the ratios in the respective input field. Missing number in the proportion calculator that shows work to find the unknown value of number in the given proportion. 2 + 5 + 3 equals a total of 10 parts in the ratio. CHAPTER 4 Ratio and Proportion Objectives After reviewing this chapter, you should be able to: 1. Divide by the numerical term you didn’t use. Type 3: Ratio and Proportions Tips and Tricks- Coins Based Ratio Problems Question3. Example: There are 5 pups, 2 are boys, and 3 are girls. 
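Sharing an amount in a ratio, as in the 2 : 5 : 3 example above (10 parts in total), can be sketched as follows. The amount of 100 is a hypothetical value chosen only to make the split easy to read.

```python
def share_in_ratio(amount, terms):
    """Split an amount in a given ratio: each share is amount * term / total_parts."""
    total = sum(terms)  # e.g. 2 + 5 + 3 = 10 parts
    return [amount * t / total for t in terms]

print(share_in_ratio(100, [2, 5, 3]))  # [20.0, 50.0, 30.0]
```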
|
{}
|
Transpose of a matrix. We will multiply the elements in the diagonal to get the determinant. Create customized worksheets for students to match their abilities, and watch their confidence soar through excellent practice! And let's see if we can figure out its determinant, the determinant of A. Find the determinant of the following 4x4 matrix. For example, consider the following matrix, which is in lower triangular form: all non-zero elements are present on or below the main diagonal. Enter the coefficients. Determinants Worksheet, Exercise 1: Prove, without expanding, that the following determinants are zero: $A = \begin{vmatrix} 1 & a & b+c \\ 1 & b & a+c \\ 1 & c & a+b \end{vmatrix}$, $B = \begin…$ Definition 1.1. Let $A = [a_{jk}]$ be an $n \times n$ matrix, and let $M_{jk}$ be the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting its $j$th row and $k$th column. This video shows how to calculate determinants of order higher than 3. The determinant of a square matrix $\mathbf{A}$ is denoted $\det \mathbf{A}$ or $|\mathbf{A}|$. 2×2 inverses: suppose that the determinant of the 2×2 matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ does not equal 0. In this determinant worksheet, students find the determinants of each matrix. For this matrix, you need to break the larger matrix down into smaller 2x2 matrices. Cramer's rule uses determinants to identify the solutions of systems of equations in two and three variables. For row reduction, we apply a series of arithmetic operations on the matrix so that each element below the main diagonal becomes zero. Gaussian elimination is also called row reduction. Multiplying a row by a non-zero constant.
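The expansion by minors described above (delete row $j$ and column $k$ to form the minor $M_{jk}$, and recurse until only small blocks remain) can be sketched in Python. This is a minimal illustration, not an efficient implementation; the 3×3 test matrix is just an example.

```python
def det(m):
    """Determinant by Laplace expansion along the first row.

    The minor for column k is obtained by deleting row 0 and column k,
    matching the definition of M_jk above.
    """
    n = len(m)
    if n == 1:
        return m[0][0]  # the determinant of a 1x1 matrix is the number itself
    total = 0
    for k in range(n):
        minor = [row[:k] + row[k + 1:] for row in m[1:]]
        total += (-1) ** k * m[0][k] * det(minor)  # alternating cofactor signs
    return total

print(det([[4, 2, 6], [1, -4, 5], [3, 7, 2]]))  # -32
```

For an n×n matrix this does n! term evaluations, which is why row reduction is preferred for anything large.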
Let us apply these operations on the above matrix to convert it into a triangular form: The resultant determinant will look like this: You can see that all elements below the main diagonal are zeroes, therefore this matrix is in the upper triangular form. endstream endobj startxref So the Determinant of Minor 2 is (0+0+0)(-1)= 0 Now on to Minor number 3. A series of linear algebra lectures given in videos: 4x4 determinant, Determinant and area of a parallelogram, Determinant as Scaling Factor and Transpose of a Matrix. |�� %%EOF After we have converted a matrix into a triangular form, we can simply multiply the elements in the diagonal to get the determinant of a matrix. R w mAtl tl t zrVi1gzhdt Csv jr1e DsHear 0v7eWdd.h T WMlaEdaeB Iw jiRtChm FIzn If1isn WiEt Eey UAClAgle db1r oa4 l2 x.R Worksheet by Kuta Software LLC Kuta Software - Infinite Algebra 2 Name_____ Determinants of 3×3 Matrices Date_____ Period____ Evaluate the determinant … Math Worksheets; A series of linear algebra lectures given in videos. determinants of 2x2 matrices worksheet answers with work, Each of these free pdf determinant worksheet involving simple integers consists of basic 2x2 matrices having 2 rows and 2 columns each. det A = a 1 1 a 1 2 a 1 3 a 1 4 a 2 1 a 2 2 a 2 3 a 2 4 a 3 1 a 3 2 a 3 3 a 3 4 a 4 1 a 4 2 a 4 3 a 4 4. You can see below that we have multiplied all the elements in the main diagonal with each other to get the determinant. −3 4 −2. h�bbdb"���)��"���E.��sA��)df��H� ��i0� Checking again with the matrix calculator, the correct answer is +5. �N˂��� I�P ;LDr��H��r:�d6�l.����Vv�C �_������uH�Qr��&�8w4F��t5J���Qr��FX����S�?ө? −72 140 −4 −| 4 2 6 1 −4 5 3 7 2 | 4 2 −1 −4 3 7 −32 30 −42. Everything above or below the main diagonal is zero. I like to spend my time reading, gardening, running, learning languages and exploring new places. Step 1: Rewrite the first two columns of the matrix. III j 6= k Rj+ Rk ! Finding the determinant of a 4x4 matrix can be difficult. 
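The cofactor expansion just described can be cross-checked in a few lines. This is an illustrative Python/NumPy sketch (the worksheets themselves are language-agnostic, and the function name `det_cofactor` is ours):

```python
import numpy as np

def det_cofactor(a):
    """Determinant by Laplace (cofactor) expansion along the first row.

    A 4x4 input recurses through 3x3 and 2x2 minors, exactly as in the
    hand method described above.
    """
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    if n == 1:
        return a[0, 0]  # the determinant of a 1x1 matrix is the number itself
    total = 0.0
    for k in range(n):
        # M_{1k}: delete the first row and column k to form the minor
        minor = np.delete(a[1:, :], k, axis=1)
        total += (-1) ** k * a[0, k] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2], [3, 4]]))  # ad - bc = -2
```

Because the recursion visits every minor, the cost grows factorially with the matrix size, which is exactly why row reduction is preferred for anything larger than a worksheet example.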
How do the row operations change the determinant? Adding or subtracting a multiple of one row from another leaves the determinant unchanged; exchanging two rows reverses its sign (if it was negative, it becomes positive and vice versa); and multiplying a row by a non-zero constant multiplies the determinant by that constant. The same row reduction is also helpful in finding ranks and computing inverses of matrices. The determinant of a $1\times 1$ matrix is the number itself, and the determinant of a square matrix of order $n$ can be defined inductively through determinants of order $n-1$, so the expansion can also proceed along other rows or columns.

Suppose that the determinant of the $2\times 2$ matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ does not equal 0. Then the matrix has an inverse, and it can be found using the formula

$\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad-bc}\begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$

For $3\times 3$ determinants, the rule of Sarrus works as follows. Step 1: rewrite the first two columns of the matrix to its right. Then multiply along the diagonals, diagonally downward and diagonally upward; the determinant is the sum of the downward products minus the sum of the upward products. For example, for

$\begin{vmatrix} 4 & 2 & 6 \\ -1 & -4 & 5 \\ 3 & 7 & 2 \end{vmatrix}$

the downward products are $-32$, $30$, and $-42$, giving $-32 + 30 + (-42) = -44$; the upward products are $-72$, $140$, and $-4$, giving $64$; so the determinant is $-44 - 64 = -108$.
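The row-reduction route makes the speed advantage concrete. Here is a Python sketch that reduces to upper triangular form with partial pivoting, tracking the sign flips from row exchanges (the function name is ours):

```python
import numpy as np

def det_by_elimination(a):
    """Determinant via Gaussian elimination to upper triangular form.

    Row swaps flip the sign; adding a multiple of one row to another
    leaves the determinant unchanged.  Once triangular, the determinant
    is the product of the main-diagonal entries.
    """
    a = np.array(a, dtype=float)
    n = a.shape[0]
    sign = 1.0
    for col in range(n):
        # partial pivoting: bring the largest available pivot into place
        p = col + np.argmax(np.abs(a[col:, col]))
        if a[p, col] == 0.0:
            return 0.0                      # no pivot => singular matrix
        if p != col:
            a[[col, p]] = a[[p, col]]       # row exchange ...
            sign = -sign                    # ... reverses the sign
        for row in range(col + 1, n):
            a[row, col:] -= (a[row, col] / a[col, col]) * a[col, col:]
    return sign * float(np.prod(np.diag(a)))
```

This costs $O(n^3)$ operations instead of the factorial cost of full cofactor expansion.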
To summarize: a square matrix has equal numbers of rows and columns (a $4\times 4$ matrix has 4 rows and 4 columns), and its determinant is a scalar number derived from the values in the array. The determinant has the following three defining properties: (1) $\det I = 1$; (2) exchanging two rows reverses the sign of the determinant; (3) the determinant is linear in each row separately. Determinants are also useful in computing the matrix inverse and have some applications in calculus, and a matrix has an inverse exactly when its determinant is not equal to 0. If you need a refresher before tackling $4\times 4$ determinants and Cramer's rule, start with a lesson on finding the inverse of a $2\times 2$ matrix; online determinant calculators and worksheet collections for orders $2\times 2$ through $4\times 4$ are available for checking your work.
The MATLAB tool distmesh can be used for generating a mesh of arbitrary shape that in turn can be used as input into the Finite Element Method. The physical derivation of the diffusion equation is given in Guenther & Lee, §1. For two-point boundary value problems in ordinary differential equations, MATLAB provides bvp4c (L. F. Shampine et al., "Solving Boundary Value Problems for Ordinary Differential Equations in MATLAB with bvp4c"); note that numerical solvers return floating-point values such as 0.8660 instead of exactly √3/2. For the challenge, you will select one of the following three projects, each of which combines spatial diffusion with a system that can produce oscillations.

An iterative solver for the time-dependent diffusion equation in one, two, or three dimensions can be constructed using a backwards Euler finite difference approximation and either the Jacobi or the Symmetric Successive Over-Relaxation (SSOR) iterative solving technique (A. Davidson, 2006). The 2D heat equation, ∂U/∂t − ∇²U = 0, can be modeled with the Crank-Nicolson method (P. Summers, 2012); a natural test system is a hot object in a cold medium. There is also MATLAB code that solves the neutron diffusion equation in 2-D x-y geometry, and the PDE Toolbox can be used to solve the heat diffusion equation in a plate of phase change material. Throughout, the basic building block is the discretization of the first derivative with central differences and backward differences.
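The difference between those two stencils is easy to see numerically. A minimal Python/NumPy sketch (the document's codes are MATLAB, but the stencils are identical) comparing a backward and a central difference for the derivative of sin at x = 1:

```python
import numpy as np

# Approximate f'(x) for f = sin (exact derivative: cos) with two stencils.
f, x, h = np.sin, 1.0, 1e-4

backward = (f(x) - f(x - h)) / h            # first-order accurate, O(h)
central  = (f(x + h) - f(x - h)) / (2 * h)  # second-order accurate, O(h^2)

exact = np.cos(x)
print(abs(backward - exact))  # error ~ (h/2)|f''(x)|, roughly 4e-5 here
print(abs(central - exact))   # several orders of magnitude smaller
```

The central difference buys an extra order of accuracy for the same number of function evaluations, which is why it is the default choice for the spatial derivatives in the schemes below.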
Analytical results covered here include: the analytical solution of the diffusion equation; its solution for 2D and 3D systems; the solution for distributed and continuous sources; the analytical solution of the one-dimensional advection-diffusion equation; the solution of the advection-diffusion equation using MATLAB; and the retardation of solutes. A diffusion term added to an ordinary differential equation can also be viewed as an attempt to incorporate the mechanism of spatial spreading into a population model, yielding a reaction-diffusion equation.

For stability analysis, let us suppose that the solution to the difference equations is of the form u_{j,n} = e^{ikjΔx} e^{αnΔt}, where i = √−1 (von Neumann's ansatz); substituting it into the scheme shows which Fourier modes grow or decay. The Finite Element Method is a popular technique for computing an approximate solution to a partial differential equation, and solver performance can be compared using benchmark graphs such as the one produced with the MATLAB script plot_benchmark_heat2d.

A different kind of diffusion is the Bass diffusion model, developed by Frank Bass. It consists of a simple differential equation that describes the process of how new products get adopted in a population: the model presents a rationale of how current adopters and potential adopters of a new product interact. As a special case it reduces to the logistic distribution when the innovation coefficient p = 0.
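The Bass model ODE is simple enough to integrate directly. A hedged Python sketch using forward Euler (the parameter values p = 0.03 and q = 0.38 are illustrative, not taken from the text):

```python
import numpy as np

# Forward-Euler integration of the Bass model ODE
#   dF/dt = (p + q*F) * (1 - F),  F(0) = 0,
# where F(t) is the fraction of the population that has adopted,
# p the innovation coefficient and q the imitation coefficient.
p, q = 0.03, 0.38          # illustrative values, not from the text
dt, T = 0.01, 30.0
t = np.arange(0.0, T, dt)
F = np.zeros_like(t)
for n in range(len(t) - 1):
    F[n + 1] = F[n] + dt * (p + q * F[n]) * (1.0 - F[n])

print(F[-1])  # adoption saturates toward 1 as t grows
```

With p = 0 the right-hand side is q·F·(1 − F), i.e. the logistic equation, matching the special case noted above.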
With only a first-order derivative in time, only one initial condition is needed, while the second-order derivative in space leads to a demand for two boundary conditions. When a pulse of mass M is injected at x = 0, the analytical solution of the 1D diffusion equation gives the concentration distribution over a cross section of area A as

C(x,t) = M / (A √(4πDt)) · exp(−x² / (4Dt)),

where C is the concentration (kg/m³) and D is the diffusion coefficient. Combining the flux law with conservation of mass yields the diffusion equation dh/dt = k·d²h/dx², and with the help of such a program the state at any point in the specimen at a certain time can be calculated.

For advection, the First-Order Upwind (FOU) scheme solves the advection equation U_t + vU_x = 0, for example for an initial profile given by a Gaussian curve; the accompanying codes solve the advection equation using explicit upwinding. The modified equation shows that such a scheme is more accurately solving an advection/diffusion equation; if that effective diffusion is negative, it acts to take smooth features and make them strongly peaked, which is unphysical. The presence of a numerical diffusion (or numerical viscosity) is quite common in difference schemes, but it should behave physically. Reaction-diffusion equations also appear in design problems: there is MATLAB code for a level set-based topology optimization method in which the level set function is updated using a reaction diffusion equation.
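Returning to the FOU scheme: a Python sketch of first-order upwinding with a Gaussian initial profile (illustrative grid and periodic boundaries; the text's FOU code is MATLAB):

```python
import numpy as np

# First-Order Upwind (FOU) scheme for the advection equation u_t + v u_x = 0
# with a Gaussian initial profile and periodic boundaries.
N, v = 200, 1.0
x = np.linspace(0.0, 1.0, N, endpoint=False)
dx = x[1] - x[0]
dt = 0.5 * dx / v                      # Courant number c = v*dt/dx = 0.5 <= 1
u = np.exp(-200.0 * (x - 0.25) ** 2)   # initial Gaussian bump, peak value 1

c = v * dt / dx
for _ in range(int(round(0.25 / dt))): # advect for t = 0.25
    u = u - c * (u - np.roll(u, 1))    # upwind (backward) difference for v > 0

# The bump has moved from x = 0.25 toward x = 0.5, but first-order
# upwinding adds numerical diffusion, so the peak is lower and wider.
print(x[np.argmax(u)], u.max())
```

Here the effective numerical diffusion coefficient is positive, v·dx·(1 − c)/2, so the scheme is stable but smears the profile, which is exactly the trade-off the text describes.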
That level set approach, in which the level set function is updated using a reaction diffusion equation, differs from conventional level set-based approaches (Allaire et al.), and its example problems contain features found in more complicated engineering applications. Related applications include the diffusion of dopants in silicon, finite volume discretizations of diffusion on 2D grids, a MATLAB finite-element code for the time dependent Euler-Bernoulli equation of a 2D frame, and a finite volume method for coupled heat diffusion and a phase field function.

For implicit time stepping, the matrix equation at each time step can be solved using the tridiagonal solver code for MATLAB provided in the tridiagonal matrix algorithm Wikipedia article; the boundary nodes still need their own equations written out. Physically, diffusion is the "smoothing out" that occurs in any situation where a high concentration of particles exists in one place and the particles can undergo random motion. (In compressible flow the energy equation is closed by a thermodynamic relation such as the ideal gas law; in incompressible flows, density variations are not linked to the pressure.)
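The tridiagonal (Thomas) algorithm mentioned above is short enough to state in full. A Python version (the Wikipedia article gives the MATLAB equivalent):

```python
import numpy as np

def thomas(a, b, c, d):
    """Thomas algorithm for a tridiagonal system, O(n) instead of O(n^3).

    a: sub-diagonal (length n, a[0] unused), b: main diagonal (length n),
    c: super-diagonal (length n, c[-1] unused), d: right-hand side.
    """
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because an implicit (e.g. Crank-Nicolson) step for the 1D diffusion equation produces exactly this matrix structure, the whole time step costs only O(n) work per step.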
Two-dimensional heat conduction, both steady-state and transient, can be handled by finite differences (for example, the fd2d_heat_steady codes for a rectangle, and the diffusion-in-1D-and-2D codes on the MATLAB File Exchange), and the finite-difference result can be checked against the same problem solved with separation of variables. Many environmental problems involve diffusion and convection processes, which can be described by partial differential equations. For example, in Yang's book, at the end of Part II, Yang presents a piece of concise MATLAB code for efficiently simulating simple reaction-diffusion systems. A steady 2-D advection-diffusion code can also be used to run a sequence of models illustrating false diffusion, the spurious spreading that appears when a strong flow is not aligned with the coordinate axes. A compact and fast MATLAB code solving the incompressible Navier-Stokes equations on rectangular domains, mit18086_navierstokes, is available from the MIT 18.086 course materials, where the syllabus, lecture materials, and problem sets are posted.
An FTCS heat equation demo is likewise available on the File Exchange. The heat equation can be solved on a rectangular system that is infinite in the z-direction using the modified Euler method in time and the central difference method in space; there are two worked examples of solving the diffusion equation in MATLAB this way. Discretizing in space only leads to a set of coupled ordinary differential equations that is easy to solve, and the coefficients may be phi-dependent. In cylindrical coordinates the starting point is a diffusion equation of this kind:

∂f/∂t = ∂/∂x_i (D_ij ∂f/∂x_j) = ∂J_i/∂x_i,   (A.1)

where D_ij is the diffusion tensor and J_i = D_ij ∂f/∂x_j the corresponding flux component. Finally, there is a community repository for code developed and produced by users of MATLAB for the purpose of particle locating and analysis.
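Returning to the FTCS scheme itself: a Python sketch with illustrative parameters (stability requires D·dt/dx² ≤ 1/2), checked against the exact decaying sine mode:

```python
import numpy as np

# FTCS (Forward-Time, Centred-Space) scheme for the 1D diffusion equation
# u_t = D u_xx with u = 0 held at both ends.
N, D = 51, 1.0
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D                  # stability: r = D*dt/dx^2 = 0.4 <= 1/2
u = np.sin(np.pi * x)                 # initial condition

r = D * dt / dx**2
for _ in range(200):
    u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# The exact solution for this initial condition decays as exp(-pi^2*D*t);
# the numerical profile should track it closely at t = 200*dt.
t = 200 * dt
exact = np.exp(-np.pi**2 * D * t) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))
```

The same loop body transliterates directly into the MATLAB File Exchange demos; only the vectorized indexing syntax differs.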
Equation (1) can be written as two first order equations rather than as a single second order differential equation. The heat equation is a partial differential equation describing the distribution of heat (or variation in temperature) in a particular body over time. A typical MATLAB script begins with

clear; close all; clc

and then sets the initial condition, for instance a heated patch at the center of the computational domain of arbitrary value 1000, with boundary conditions such as insulated top and bottom sides. For transient heat transfer, the partial differential equation is converted to a set of ordinary differential equations, which are then solved in MATLAB.

In reactor physics, the diffusion equation is discretized in the spatial variable at specified mesh points, and computers solve the ensuing linear system of equations for the magnitude of the flux at each mesh point, as well as the corresponding eigenvalue, which is the effective multiplication factor. Inserting the diffusion approximation (23) into the neutron balance equation (4) yields equation (25), where I is the number of types of delayed-neutron precursors.
Orthogonal collocation on finite elements can be applied to reaction-diffusion problems (see the Partial Differential Equation Toolbox). We now revisit the transient heat equation, this time with sources and sinks, as an example of a two-dimensional finite-difference problem. A 1D advection-diffusion MATLAB code (based on Tryggvason's 2013 Lecture 2) begins:

    % 1D advection-diffusion solution
    clc        % Clear the command window
    close all  % Close all previously opened figure windows
    clear all  % Clear all previously generated variables
    N = 41;    % Number of nodes

You should check that your order of accuracy is 2: evaluate the error while halving or doubling dx a few times and graph it. The dependent variable is stored in a matrix suitable for use with MATLAB's contour and surface plotting routines. (A complete list of the elementary functions can be obtained by entering help elfun.)
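That order-of-accuracy check can be automated. A small Python sketch that halves dx once and estimates the observed order for the centred second-difference stencil:

```python
import numpy as np

# Empirical order-of-accuracy check: halving dx should cut the error of a
# second-order scheme by a factor of ~4 (here, the centred second
# difference applied to sin, whose second derivative is -sin).
def error(h):
    x = 1.0
    approx = (np.sin(x + h) - 2.0 * np.sin(x) + np.sin(x - h)) / h**2
    return abs(approx - (-np.sin(x)))

e1, e2 = error(1e-2), error(5e-3)
order = np.log2(e1 / e2)
print(order)   # close to 2 for a second-order accurate stencil
```

On a log-log plot of error versus dx, the same data appear as a straight line of slope 2, which is the graph the text asks you to draw.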
In the heat-conduction equation, the flux term represents the heat flow through a defined cross-sectional area A, measured in watts. In this section we focus primarily on the heat equation with periodic boundary conditions; the resulting system of equations is solved by a direct method. (The second type of second order linear partial differential equation in two independent variables is the one-dimensional wave equation.) The same finite-difference machinery may even be used to price vanilla European put or call options.

Further resources: a scarp diffusion exercise from the International Quality Network Workshop ScarpLab2003; MATLAB's built-in PDE solver pdepe (given the ubiquity of partial differential equations, it is not surprising that MATLAB has one); a MATLAB GUI that illustrates the use of Fourier series to simulate the diffusion of heat in a domain of finite size; and, in diffuse optical imaging, a forward solution at various detector positions compared to the analytical solution of the diffusion equation. The computer codes developed for solving the diffusion equation are then applied to a series of model problems.
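The Fourier-series idea behind that GUI can be sketched with a discrete transform. In Python, with illustrative parameters, each mode k of the periodic heat equation decays as exp(−D·k²·t):

```python
import numpy as np

# Periodic heat equation u_t = D u_xx solved exactly in Fourier space:
# transform, multiply each mode by its decay factor, transform back.
N, D, t = 128, 0.01, 2.0
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u0 = np.sin(x) + 0.5 * np.sin(3.0 * x)

k = np.fft.fftfreq(N, d=x[1] - x[0]) * 2.0 * np.pi   # integer wavenumbers
u_hat = np.fft.fft(u0) * np.exp(-D * k**2 * t)
u = np.real(np.fft.ifft(u_hat))

# Mode 1 decays by exp(-0.02), mode 3 by exp(-0.18): higher modes die faster.
exact = np.exp(-D * t) * np.sin(x) + 0.5 * np.exp(-9.0 * D * t) * np.sin(3.0 * x)
print(np.max(np.abs(u - exact)))
```

Because the initial condition contains only two modes, the spectral result matches the analytical solution to machine precision, making this a good reference against which to test finite-difference codes.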
SUBJECT TERMS reaction-diffusion equations, morphogenesis, Gray-Scott model, Galerkin Spectral method, Allen-Cahn equation, the Burgers equation, partial differential equations, numerical simulations, MATLAB 16. (II) Reaction-diffusion with chemotaxis. Shampine also had a few other papers at this time developing the idea of a "methods for a problem solving environment" or a PSE. Bottom wall is initialized at 100 arbitrary units and is the boundary condition. The Bass Model The Origin of the Bass Model. m Jacobian of G. can anybody tell me how can I solve it for large length?. Modelling and simulation of convection and diffusion for a 3D cylindrical (and other) domains is possible with the Matlab Finite Element FEM Toolbox, either by using the built-in GUI or as a m-script file as shown below. 4384-4393 2005 21 Bioinformatics 24 http://dx. orthogonal collocation on finite elements: Learn more about orthogonal collocation on finite elements, pde, reaction-diffusion problem Partial Differential Equation Toolbox. about; contact; cookie; copyright; privacy; Sitemap Gallery. Problem Solving in Chemical and Biochemical Engineering with POLYMATH™, Excel, and MATLAB®, Second Edition, is a valuable resource and companion that integrates the use of numerical problem solving in the three most widely used software packages: POLYMATH, Microsoft Excel, and MATLAB. Choose a web site to get translated content where available and see local events and offers. • An ODE is an equation that contains one independent variable (e. This set of MATLAB codes solves the one-dimensional heat Equation. Division radical expression, solving for x worksheets, nonlinear differential equation matlab code. of Computing, The Hong Kong Polytechnic University, Hong Kong, China. 
Next, read through the general theory of modeling diffusion: Modeling Diffusion Explains how we model diffusion and its connection to diffusion equations; and run + alter the relevant Matlab code below, again making sure you understand how it works. The pseudocode for the Forward Euler solution to the Heat Equation is shown in Figure 1. top and bottom side have isolated. Now the partial differential equation tu x,t 2u x,t F u x,t x Rn, t 0, 3. SS 2-D Adv-Diff code above is used to run a sequence of models illustrating false diffusion when strong flow is not aligned with coordinate axes. MATLAB news, code tips and tricks, questions, and discussion! We are here to help, but won't do your homework or help you pirate software. Solutions for the MATLAB exercises are available for instructors upon request, and a brief introduction to MATLAB exercise is provided in sec. An equation containing physical quantities with dimensional formula is known as dimensional equation. Burgers Equation. The first place to look for basic code to implement basic computer vision algorithms is the OpenCV Library from Intel. Option 2 – Reuse old code with Octave oct2py , source code. I have functioning MATLAB code for my solution of the 3D Diffusion equation (using a 3D Fourier transform and Crank-Nicolsen) that runs just from the command window and automatically plots the results. However, many partial di erential equations cannot be solved exactly and one needs to turn to numerical solutions. In this video, we solve the heat diffusion (or heat conduction) equation in one dimension in Matlab using the forward Euler method. The code is written in MATLAB, and the steps are split into. Program is written in Matlab environment and uses a userfriendly interface to show the solution process versus time. 
zip Simple Instructions Simple Matlab diffusion modeling code and examples by Ramon Arrowsmith This is a simple matlab function that does diffusion modeling of profile development under transport limited and no tectonic displacement conditions. m to treat the different boundaries. Chapter 7 The Diffusion Equation The diffusionequation is a partial differentialequationwhich describes density fluc-tuations in a material undergoing diffusion. In order to calculate the self-diffusion coefficient, this code requires. The natural tendency is for particles to move towards regions of lower concentration. Solving the equation numerically in this way works perfectly except when my time step and position steps are less than 1. Modelling and simulation of convection and diffusion for a 3D cylindrical (and other) domains is possible with the Matlab Finite Element FEM Toolbox, either by using the built-in GUI or as a m-script file as shown below. Attach the plot of the so- lution at t = 0. about; contact; cookie; copyright; privacy; Sitemap Gallery. 8, 2006] In a metal rod with non-uniform temperature, heat (thermal energy) is transferred. fd1d_advection_diffusion_steady_test. When I con-verted the code to Matlab it took 15 seconds. This section considers transient heat transfer and converts the partial differential equation to a set of ordinary differential equations, which are solved in MATLAB. These will be exemplified with examples within stationary heat conduction. The mfiles are grouped according to the chapter in which they are used. 1 and v = 1. There is a known solution via Fourier transforms that you can test against. This leads to a set of coupled ordinary differential equations that is easy to solve. You may get the ENTIRE set of files by clicking here. Below are additional notes and Matlab scripts of codes used in class Solve 2D heat equation using Crank-Nicholson with splitting > Notes and Codes;. but the code works only when length of medium is so small(<1). 
Dabrowski et al. top and bottom side have isolated. NUMBER OF PAGES 91 14. Select a Web Site. This equation is a special case of the more general autonomous equation, u t F u t. Matlab code ode23 second order, Algebrator for integral, linear equations ppt, How do you balance chemical equations with decimals, 5th Grade printables AND multi step word problems, math practice for 9th graders algebra 1. about; contact; cookie; copyright; privacy; Sitemap Gallery. Abstract: We present a collection of MATLAB routines using discontinuous Galerkin finite elements method (DGFEM) for solving steady-state diffusion-convection-reaction equations. The animations of the diffusion processes in one dimensional and two dimensional cases are plotted and displayed during calc. Choose a web site to get translated content where available and see local events and offers. subplots_adjust. Hi, I have a pressure diffusion equation on a quadratic boundary. *Description of the class (Format of class, 55 min lecture/ 55 min exercise) * Login for computers * Check matlab *Questionnaires. The three terms , , and are called the advective or convective terms and the terms , , and are called the diffusive or viscous terms. There is a known solution via Fourier transforms that you can test against. We are going to study equations of this form in the case n 1 where the equation. Matlab code to solve 1D diffusional equation. Solve second order differential equation using the Euler and the Runge-Kutta methods - second_order_ode. Analytical solution of diffusion equation ; Analytical solution of diffusion equation for 2D and 3D system ; Solution of diffusion equation for distributed and continuous source ; Analytical solution of one dimensional advection diffusion equation ; Solution of Advection-Diffusion equation using Matlab ; Retardation of solutes 1. Finite Element Method in Matlab. Bottom wall is initialized at 100 arbitrary units and is the boundary condition. 
partial differential equations, finite difference approximations, accuracy. m — numerical solution of 1D wave equation (finite difference method) go2. The FEATool GUI also makes it easy to add and couple multiphysics equations and complex expressions to your models. We hope the programs will be of use for you and your group. The code consists of steady-state thermal diffusion and incompressible Stokes flow solvers implemented in approximately 200 lines of native MATLAB code. I have recently handled several help requests for solving differential equations in MATLAB. Solves nonlinear diffusion equation which can be linearised as shown for the general nonlinear diffusion equation in Richtmyer & Morton [1]. This is a neat module that is based on octave, which is an open-source matlab clone. Equation (5) says, quite reasonably, that if I = 0 at time 0 (or any time), then dI/dt = 0 as well, and there can never be any increase from the 0 level of infection. numerical solution of swing equation pdf, pdf simulate swing equation in simulink matlab, scherrer equation xrd, project report example quadratic equation vs quadratic function, navier stokes equation filetype, matlab code for solving swing equation, algebraic method of solving a pair of linear equation,. The analytical solution was calculated using different boundary conditions than those used by TOAST++, so the solutions are similar but not exactly the same. The samples of code included numerically solve the diffusion equation as it arises in medical imaging. partial differential equations, finite difference approximations, accuracy. Many of the techniques used here will also work for more complicated partial differential equations for which separation of. Fd1d Advection Ftcs Finite Difference Method 1d. Consider the unsteady-state convection-diffusion problem described by the equation: [more] where and are the diffusion coefficient and the velocity, respectively. html#LiJ05 Jose-Roman Bilbao-Castro. 
Option 2 – Reuse old code with Octave oct2py , source code. 3 MATLAB for Partial Differential Equations Given the ubiquity of partial differential equations, it is not surprisingthat MATLAB has a built in PDE solver: pdepe. MODELING ORDINARY DIFFERENTIAL EQUATIONS IN MATLAB SIMULINK ® Ravi Kiran Maddali Department of Mathematics, University of Petroleum and Energy Studies, Bidholi, Dehradun, Uttarakhand, India [email protected] Matlab code to solve 1D diffusional equation. I have recently handled several help requests for solving differential equations in MATLAB. It also calculates the flux at the boundaries, and verifies that is conserved. These programs are for the equation u_t + a u_x = 0 where a is a constant. The key is the matrix indexing instead of the traditional linear indexing. For more information, see equations you can solve with the toolbox. The code employs the sparse matrix facilities of MATLAB with "vectorization" and uses multiple matrix multiplications {\it "MULTIPROD"} to increase the efficiency of. I am making use of the central difference in equaton (59). The goals of this exercise are to 1) model the spatial and temporal profile of moisture content in a soil column using a simplified version of Richard’s equation, and 2) introduce students to MATLAB by using MATLAB to solve Richard’s equation and graph the results. energy equation p can be specified from a thermodynamic relation (ideal gas law) Incompressible flows: Density variation are not linked to the pressure. I am trying to convert the diffusion equation to ODEs so that it can be programmed using Matlab's ODE solvers. For the derivation of equations used, watch this video (https. • An ODE is an equation that contains one independent variable (e. ML-2 MATLAB Problem 1 Solution A function of volume, f(V), is defined by rearranging the equation and setting it to zero. Heat Conduction in Multidomain Geometry with Nonuniform Heat Flux. 
about; contact; cookie; copyright; privacy; Sitemap Gallery. I wonder it is due to the change of the definition of boundary conditions or the scheme itself. Heat Distribution in Circular Cylindrical Rod. m, used to generate Fig. The code may be used to price vanilla European Put or Call options. · Poisson (Elliptical) Equation · Laplace Equation · Diffusion (Parabolic) Equation · Wave (Hyperbolic) Equation · Boundary-Value Problem · Crank-Nicolson Scheme · Average Value Theorem · ADI Method · Simple iteration. CFD code might be unaware of the numerous subtleties, trade-offs, compromises, and ad hoc tricks involved in the computation of beautiful colorful pictures. We are going to study equations of this form in the case n 1 where the equation. time) and one or more derivatives with respect to that independent variable. Heat diffusion on a Plate (2D finite difference) Heat transfer, heat flux, diffusion this phyical phenomenas occurs with magma rising to surface or in geothermal areas. SOLVING THE TRANSIENT 2-DIMENSIONAL HEAT DIFFUSION EQUATION USING THE MATLAB PROGRAMM RAŢIU Sorin, KISS Imre, ALEXA Vasile UNIVERSITY POLITEHNICA TIMISOARA FACULTY OF ENGINEERING HUNEDOARA ABSTRACT In this study we are introducing one approach for solving the partial differential equation, which describes transient 2-dimensional heat conduction. How can i solve the following equation, i am stuck with the above problem The following is my matlab code, can you please suggest me where i am going wrong. Finite Element Method Introduction, 1D heat conduction 4 Form and expectations To give the participants an understanding of the basic elements of the finite element method as a tool for finding approximate solutions of linear boundary value problems. Welcome! This is one of over 2,200 courses on OCW. Abstract: We present a collection of MATLAB routines using discontinuous Galerkin finite elements method (DGFEM) for solving steady-state diffusion-convection-reaction equations. 
first I solved the advection-diffusion equation without including the source term (reaction) and it works fine. 2017 Numerous signaling models in economics assume image concerns. With only a first-order derivative in time, only one initial condition is needed, while the second-order derivative in space leads to a demand for two boundary conditions. Diffusion of dopants in silicon. Fd2d heat steady 2d state equation in a rectangle diffusion in 1d and 2d file exchange matlab central 2d heat equation using finite difference method with steady state finite difference method to solve heat diffusion equation in two Fd2d Heat Steady 2d State Equation In A Rectangle Diffusion In 1d And 2d File Exchange Matlab Central 2d Heat…. Monte-Carlo Simulation of Particles in a Box - Diffusion using Matlab C code to solve Laplace's Equation by finite difference method;. need to write equations for those nodes. Bonjour, Dans le cadre d'un projet je dois résoudre analytiquement et numériquement l'EDP suivante. To download a m-file, it is best to right-click on the link and select "Save As". Apparent diffusion coefficient (ADC) is a measure of the magnitude of diffusion (of water molecules) within tissue, and is commonly clinically calculated using MRI with diffusion weighted imaging (DWI) 1. We present a collection of MATLAB routines using discontinuous Galerkin finite elements method (DGFEM) for solving steady-state diffusion-convection-reaction equations. The origin of the genetic code can certainly be regarded as one of the most challenging problems in the theory of molecular evolution. Chapter 8 The Reaction-Diffusion Equations Reaction-diffusion (RD) equations arise naturally in systems consisting of many interacting components, (e. To set a variable to a single number, simply type something like z =1. for reference) after having listed a number of user inputs to satisfy the values of the other parameters. 
We propose to model the spark spread, that is, the price difference of electricity and gas, directly using a mean-reverting model with diffusion and jumps. Large negative curvatures result in rapid erosion; places with large positive curvature have high rates of deposition. 0 and mass starts moving out of the domain only by diffusion mechanism. E-mail: [email protected] pdf) or read online for free. They include EULER. The code for this entire model will be developed in Matlab syntax, however, the math is all just math, and the code could easily be translated to another programming language. We present a collection of MATLAB routines using discontinuous Galerkin finite elements method (DGFEM) for solving steady-state diffusion-convection-reaction equations. The forward solution at various detector positions is compared to the analytical solution to the diffusion equation. The name of the zip file is "codes. Reaction diffusion equations arise as the models for the densities of substances or organisms which disperse through space by Brownian motion, random walks, hydrodynamic turbulence, or similar mechanisms, and that react with each other and their surroundings in ways that affect their local densities. matlab, 2d heat equation code report finite difference, optimizing c code for explicit finite difference schemes, finite differences tutorial aquarien com, the 1d diffusion equation github pages, cranknicolson method wikipedia, zdr hasan gunes zguneshasa itu edu tr zhttp atlas cc, numerical simulation by finite difference method of 2d, a. For upwinding, no oscillations appear. SteadyConvection-Diff-1d. In this chapter we will use some of them. When centered differencing is used for the advection/diffusion equation, oscillations may appear when the Cell Reynolds number is higher than 2. and Ortega, J. How can i solve the following equation, i am stuck with the above problem The following is my matlab code, can you please suggest me where i am going wrong. 
52: 123-138, 2010. THE ONE GROUP DIFFUSION EQUATION Multi-group diffusion theory problems involve a calculation in the spatial variable for each group of neutrons. Solves nonlinear diffusion equation which can be linearised as shown for the general nonlinear diffusion equation in Richtmyer & Morton [1]. SOLVING THE TRANSIENT 2-DIMENSIONAL HEAT DIFFUSION EQUATION USING THE MATLAB PROGRAMM RAŢIU Sorin, KISS Imre, ALEXA Vasile UNIVERSITY POLITEHNICA TIMISOARA FACULTY OF ENGINEERING HUNEDOARA ABSTRACT In this study we are introducing one approach for solving the partial differential equation, which describes transient 2-dimensional heat conduction. Thanks for any help. SPECTRAL METHODS IN MATLAB. These codes solve the advection equation using explicit upwinding. membrane and the drug molecules, and in this problem solving, a simplified model of diffusion of drug molecules across skin is solved analytically and numerically. For example, MATLAB computes the sine of /3 to be (approximately) 0. October 9: Lecture 5 [Matlab code] Introduction to PDEs. Matlab code to solve 1D diffusional equation. The analytical solution for equation 5 when a pulse of mass 'M' is injected at x=0, the concentration distribution over a cross section of area '' is given by Where C=Concentration (kg/), […]. Pdf Modelling The One Dimensional Advection Diffusion Equation In. Finite Element Method Introduction, 1D heat conduction 4 Form and expectations To give the participants an understanding of the basic elements of the finite element method as a tool for finding approximate solutions of linear boundary value problems. I am trying to solve a 1D advection equation in Matlab as described in this paper, equations (55)-(57). We propose a novel and efficient approach, named domain adaptive semantic diffusion (DASD), to exploit semantic context while considering the domain-shift-of-context for large scale video concept annotation. 
The C program for solution of heat equation is a programming approach to calculate head transferred through a plate in which heat at boundaries are know at a certain time. Below, we present the script which solves a microfluidic fluid mechanics problem in 3D by means of incompressible Navier-Stokes equations in MATLAB. In the following pages, the user will find parallel sections to those in the text titled. How I can solve this problem with Matlab? Thank you. Example The Simulation of a 2D diffusion case using the Crank Nicolson Method for time stepping and TDMA Solver. CIG Global Flow Code Benchmark Group, the 2006. Converter stations were introduced at the sending and receiving ends of the lines in the hybrid model. m files to solve the advection equation. Solutions for the MATLAB exercises are available for instructors upon request, and a brief introduction to MATLAB exercise is provided in sec. The goals of this exercise are to 1) model the spatial and temporal profile of moisture content in a soil column using a simplified version of Richard’s equation, and 2) introduce students to MATLAB by using MATLAB to solve Richard’s equation and graph the results. Scarp diffusion exercise from the International Quality Network Workshop ScarpLab2003. Parabolic PDE’s in Matlab Matlab’s pdepe command can solve these. edu/~seibold [email protected] 1093/bioinformatics/bti732 db/journals/bioinformatics/bioinformatics21. View Lab Report - Diffuson MatLab from BIO 201 at Drexel University. - 1D diffusion equation. 1 % Matlab script: dif1d_main. Based on your location, we recommend that you select:. Many of the techniques used here will also work for more complicated partial differential equations for which separation of. Solving Boundary Value Problems for Ordinary Di erential Equations in Matlab with bvp4c Lawrence F. I implemented the same code in MATLAB and execution time there is much faster. 
I have ficks diffusion equation need to solved in pde toolbox and the result of which used in another differential equation to find the resultant parameter can any help on this! Thanks for the attention. Solve second order differential equation using the Euler and the Runge-Kutta methods - second_order_ode. Burgers Equation. We'll start off with a 1-dimensional diffusion equation and look to solve for the temperature distribution in a rod whose end points are clamped at different fixed temperatures. Can someone help me code for the following? diffusion equation D∂^2/ ∂x=∂c/∂t D=diffusion coefficient =2*10^-4 m^2/hour C=concentraion=20kg/m^3 X=distance(m) t=time in hours thinkness of medium = 200mm time = 25 days step size = 0. MATLAB Codes Bank Many topics of this blog have a complementary Matlab code which helps the reader to understand the concepts better. Pozrikidis, A Practical Guide to Boundary Element Methods with the software library BEMLIB,'' Champan & Hall/CRC, (2002). REACTION-DIFFUSION ANALYSIS MATH 350 - RENATO FERES CUPPLES I - ROOM 17 [email protected] Modelling and simulation of convection and diffusion for a 3D cylindrical (and other) domains is possible with the Matlab Finite Element FEM Toolbox, either by using the built-in GUI or as a m-script file as shown below. Matlab Examples · Various finite difference approximations (Section 1) · Newton Raphson code (Section 2) · Definition of function for Newton Raphson-( Section 2) Valentin Muresan, Dublin City University, [email protected] The model will be set up for the conditions of the Mississippi River, to what I feel are the best constraints published in the literature. 
To find a numerical solution to equation (1) with finite difference methods, we first need to define a set of grid points in the domainDas follows: Choose a state step size Δx= b−a N (Nis an integer) and a time step size Δt, draw a set of horizontal and vertical lines across D, and get all intersection points (x j,t n), or simply (j,n), where x. Bass as a section of another paper. Dabrowski et al. Problem Solving in Chemical and Biochemical Engineering with POLYMATH™, Excel, and MATLAB®, Second Edition, is a valuable resource and companion that integrates the use of numerical problem solving in the three most widely used software packages: POLYMATH, Microsoft Excel, and MATLAB. The mass conservation is a constraint on the velocity field; this equation (combined with the momentum) can be used to derive an equation for the pressure NS equations.
|
{}
|
### Archive
Posts Tagged ‘simulation’
## regress, probit, or logit?
In a previous post I illustrated that the probit model and the logit model produce statistically equivalent estimates of marginal effects. In this post, I compare the marginal effect estimates from a linear probability model (linear regression) with marginal effect estimates from probit and logit models.
My simulations show that when the true model is a probit or a logit, using a linear probability model can produce inconsistent estimates of the marginal effects of interest to researchers. The conclusions hinge on the probit or logit model being the true model.
Simulation results
For all simulations below, I use a sample size of 10,000 and 5,000 replications. The true data-generating processes (DGPs) are constructed using one discrete covariate and one continuous covariate. I study the average effect of a change in the continuous variable on the conditional probability (AME) and the average effect of a change in the discrete covariate on the conditional probability (ATE). I also look at the effect of a change in the continuous variable on the conditional probability, evaluated at the mean value of the covariates (MEM), and the effect of a change in the discrete covariate on the conditional probability, evaluated at the mean value of the covariates (TEM).
In Table 1, I present the results of a simulation when the true DGP satisfies the assumptions of a logit model. I show the average of the AME and the ATE estimates and the 5% rejection rate of the true null hypotheses. I also provide an approximate true value of the AME and ATE. I obtain the approximate true values by computing the ATE and AME, at the true values of the coefficients, using a sample of 20 million observations. I will provide more details on the simulation in a later section.
Table 1: Average Marginal and Treatment Effects: True DGP Logit
Simulation results for N = 10,000 and 5,000 replications

| Statistic         | Approximate True Value | Logit | Regress (LPM) |
|-------------------|------------------------|-------|---------------|
| AME of x1         | -.084                  | -.084 | -.094         |
| 5% Rejection Rate |                        | .050  | .99           |
| ATE of x2         | .092                   | .091  | .091          |
| 5% Rejection Rate |                        | .058  | .058          |
From Table 1, we see that the logit model estimates are close to the true value and that the rejection rate of the true null hypothesis is close to 5%. For the linear probability model, the rejection rate is 99% for the AME. For the ATE, the rejection rate and point estimates are close to what is estimated using a logit.
For the MEM and TEM, we have the following:
Table 2: Marginal and Treatment Effects at Mean Values: True DGP Logit
Simulation results for N = 10,000 and 5,000 replications

| Statistic         | Approximate True Value | Logit | Regress (LPM) |
|-------------------|------------------------|-------|---------------|
| MEM of x1         | -.099                  | -.099 | -.094         |
| 5% Rejection Rate |                        | .054  | .618          |
| TEM of x2         | .109                   | .109  | .092          |
| 5% Rejection Rate |                        | .062  | .073          |
Again, logit estimates behave as expected. For the linear probability model, the rejection rate of the true null hypothesis is 62% for the MEM. For the TEM the rejection rate is 7.3%, and the estimated effect is smaller than the true effect.
For the AME and ATE, when the true DGP is a probit, we have the following:
Table 3: Average Marginal and Treatment Effects: True DGP Probit
Simulation results for N = 10,000 and 5,000 replications

| Statistic         | Approximate True Value | Probit | Regress (LPM) |
|-------------------|------------------------|--------|---------------|
| AME of x1         | -.094                  | -.094  | -.121         |
| 5% Rejection Rate |                        | .047   | 1             |
| ATE of x2         | .111                   | .111   | .111          |
| 5% Rejection Rate |                        | .065   | .061          |
The probit model estimates are close to the true value, and the rejection rate of the true null hypothesis is close to 5%. For the linear probability model, the rejection rate is 100% for the AME. For the ATE, the rejection rate and point estimates are close to what is estimated using a probit.
For the MEM and TEM, we have the following:
Table 4: Marginal and Treatment Effects at Mean Values: True DGP Probit
Simulation results for N = 10,000 and 5,000 replications

| Statistic         | Approximate True Value | Probit | Regress (LPM) |
|-------------------|------------------------|--------|---------------|
| MEM of x1         | -.121                  | -.122  | -.121         |
| 5% Rejection Rate |                        | .063   | .054          |
| TEM of x2         | .150                   | .150   | .110          |
| 5% Rejection Rate |                        | .059   | .158          |
For the MEM, the probit and linear probability model produce reliable inference. For the TEM, the probit marginal effects behave as expected, but the linear probability model has a rejection rate of 16%, and the point estimates are not close to the true value.
Simulation design
Below is the code I used to generate the data for my simulations. In the first part, lines 6 to 13, I generate outcome variables that satisfy the assumptions of the logit model, y, and the probit model, yp. In the second part, lines 15 to 19, I compute the marginal effects for the logit and probit models. I have a continuous and a discrete covariate. For the discrete covariate, the marginal effect is a treatment effect. In the third part, lines 21 to 29, I compute the marginal effects evaluated at the means. I will use these estimates later to compute approximations to the true values of the effects.
program define mkdata
syntax, [n(integer 1000)]
clear
quietly set obs `n'
// 1. Generating data from probit, logit, and misspecified
generate x1 = rchi2(2)-2
generate x2 = rbeta(4,2)>.2
generate u = runiform()
generate e = ln(u) -ln(1-u)
generate ep = rnormal()
generate xb = .5*(1 - x1 + x2)
generate y = xb + e > 0
generate yp = xb + ep > 0
// 2. Computing probit & logit marginal and treatment effects
generate m1 = exp(xb)*(-.5)/(1+exp(xb))^2
generate m2 = exp(1 -.5*x1)/(1+ exp(1 -.5*x1 )) - ///
exp(.5 -.5*x1)/(1+ exp(.5 -.5*x1 ))
generate m1p = normalden(xb)*(-.5)
generate m2p = normal(1 -.5*x1 ) - normal(.5 -.5*x1)
// 3. Computing marginal and treatment effects at means
quietly mean x1 x2
matrix A = r(table)
scalar a = .5 -.5*A[1,1] + .5*A[1,2]
scalar b1 = 1 -.5*A[1,1]
scalar b0 = .5 -.5*A[1,1]
generate mean1 = exp(a)*(-.5)/(1+exp(a))^2
generate mean2 = exp(b1)/(1+ exp(b1)) - exp(b0)/(1+ exp(b0))
generate mean1p = normalden(a)*(-.5)
generate mean2p = normal(b1) - normal(b0)
end
I approximate the true marginal effects using a sample of 20 million observations. This is a reasonable strategy in this case. For example, take the average marginal effect for a continuous covariate, $$x_{k}$$, in the case of the probit model:
$\begin{equation*} \frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k} \end{equation*}$
The expression above is an approximation of $$E\left(\phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k}\right)$$. To obtain this expected value, we would need to integrate over the distribution of all the covariates. This is not practical and would limit my choice of covariates. Instead, I draw a sample of 20 million observations, compute $$\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\boldsymbol{\beta}\right)\beta_{k}$$, and take it to be the true value. I follow the same logic for the other marginal effects.
Below is the code I use to compute the approximate true marginal effects. I draw the 20 million observations, compute the averages that I will use in my simulation, and create locals for each approximate true value.
. mkdata, n(`L')
(2 missing values generated)
. local values "m1 m2 mean1 mean2 m1p m2p mean1p mean2p"
. local means "mx1 mx2 meanx1 meanx2 mx1p mx2p meanx1p meanx2p"
. local n : word count `values'
.
. forvalues i = 1/`n' {
  2. local a : word `i' of `values'
  3. local b : word `i' of `means'
  4. sum `a', meanonly
  5. local `b' = r(mean)
  6. }
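The same large-sample approximation can be checked outside Stata. Here is a minimal numpy sketch (mine, not the post's code) for the probit AME of x1, using the same DGP as mkdata; the -.094 reference value is Table 3's approximate truth:

```python
import numpy as np

rng = np.random.default_rng(1)

def probit_ame(n):
    """Sample average of normalden(xb)*(-.5): the probit AME of x1."""
    x1 = rng.chisquare(2, n) - 2
    x2 = (rng.beta(4, 2, n) > 0.2).astype(float)
    xb = 0.5 * (1 - x1 + x2)
    phi = np.exp(-0.5 * xb**2) / np.sqrt(2.0 * np.pi)  # standard normal pdf
    return float(np.mean(phi * (-0.5)))

# The average stabilizes as n grows; the post uses n = 20 million in Stata.
rough, fine = probit_ame(10_000), probit_ame(2_000_000)
```

Even at two million observations the average is already within Monte Carlo noise of the 20-million-observation value, which is why treating the large-sample average as the "truth" is reasonable here.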
Now, I am ready to run all the simulations that I used to produce the results in the previous section. The code that I used for the simulations for the TEM and the MEM when the true DGP is a logit is given by:
. postfile lpm y1l y1l_r y1lp y1lp_r y2l y2l_r y2lp y2lp_r ///
> using simslpm, replace
. forvalues i=1/`R' {
2. quietly {
3. mkdata, n(`N')
4. logit y x1 i.x2, vce(robust)
5. margins, dydx(*) atmeans post vce(unconditional)
6. local y1l = _b[x1]
7. test _b[x1] = `meanx1'
8. local y1l_r = (r(p)<.05)
9. local y2l = _b[1.x2]
10. test _b[1.x2] = `meanx2'
11. local y2l_r = (r(p)<.05)
12. regress y x1 i.x2, vce(robust)
13. margins, dydx(*) atmeans post vce(unconditional)
14. local y1lp = _b[x1]
15. test _b[x1] = `meanx1'
16. local y1lp_r = (r(p)<.05)
17. local y2lp = _b[1.x2]
18. test _b[1.x2] = `meanx2'
19. local y2lp_r = (r(p)<.05)
20. post lpm (`y1l') (`y1l_r') (`y1lp') (`y1lp_r') ///
> (`y2l') (`y2l_r') (`y2lp') (`y2lp_r')
21. }
22. }
. postclose lpm
. use simslpm, clear
. sum
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
y1l | 5,000 -.0985646 .00288 -.1083639 -.0889075
y1l_r | 5,000 .0544 .226828 0 1
y1lp | 5,000 -.0939211 .0020038 -.1008612 -.0868043
y1lp_r | 5,000 .6182 .4858765 0 1
y2l | 5,000 .1084959 .065586 -.1065291 .3743112
-------------+---------------------------------------------------------
y2l_r | 5,000 .0618 .240816 0 1
y2lp | 5,000 .0915894 .055462 -.0975456 .3184061
y2lp_r | 5,000 .0732 .2604906 0 1
For the results for the AME and the ATE when the true DGP is a logit, I use margins without the atmeans option. The other cases are similar. I use robust standard errors for all computations because my likelihood model is an approximation to the true likelihood, and I use the option vce(unconditional) to account for the fact that I am using two-step M-estimation. See Wooldridge (2010) for more details on two-step M-estimation.
You can obtain the code used to produce these results here.
Conclusion
Using a probit or a logit model yields equivalent marginal effects. I provide evidence that the same cannot be said of the marginal effect estimates of the linear probability model when compared with those of the logit and probit models.
Acknowledgment
This post was inspired by a question posed by Stephen Jenkins after my previous post.
Reference
Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.
Categories: Statistics Tags:
We often use probit and logit models to analyze binary outcomes. A case can be made that the logit model is easier to interpret than the probit model, but Stata’s margins command makes any estimator easy to interpret. Ultimately, estimates from both models produce similar results, and using one or the other is a matter of habit or preference.
I show that the estimates from a probit and logit model are similar for the computation of a set of effects that are of interest to researchers. I focus on the effects of changes in the covariates on the probability of a positive outcome for continuous and discrete covariates. I evaluate these effects on average and at the mean value of the covariates. In other words, I study the average marginal effects (AME), the average treatment effects (ATE), the marginal effects at the mean values of the covariates (MEM), and the treatment effects at the mean values of the covariates (TEM).
First, I present the results. Second, I discuss the code used for the simulations.
Results
In Table 1, I present the results of a simulation with 4,000 replications when the true data generating process (DGP) satisfies the assumptions of a probit model. I show the average of the AME and the ATE estimates and the 5% rejection rate of the true null hypothesis that arise after probit and logit estimation. I also provide an approximate true value of the AME and ATE. I obtain the approximate true values by computing the ATE and AME, at the true values of the coefficients, using a sample of 20 million observations. I will provide more details on the simulation in a later section.
Table 1: Average Marginal and Treatment Effects: True DGP Probit
Simulation Results for N=10,000 and 4,000 Replications
Statistic Approximate True Value Probit Logit
AME of x1 -.1536 -.1537 -.1537
5% Rejection Rate .050 .052
ATE of x2 .1418 .1417 .1417
5% Rejection Rate .050 .049
For the MEM and TEM, we have the following:
Table 2: Marginal and Treatment Effects at Mean Values: True DGP Probit
Simulation Results for N=10,000 and 4,000 Replications
Statistic Approximate True Value Probit Logit
MEM of x1 -.1672 -.1673 -.1665
5% Rejection Rate .056 .06
TEM of x2 .1499 .1498 .1471
5% Rejection Rate .053 .058
The logit estimates are close to the true value and have a rejection rate that is close to 5%. Fitting the parameters of our model using logit when the true DGP satisfies the assumptions of a probit model does not lead us astray.
If the true DGP satisfies the assumptions of the logit model, the conclusions are the same. I present the results in the next two tables.
Table 3: Average Marginal and Treatment Effects: True DGP Logit
Simulation Results for N=10,000 and 4,000 Replications
Statistic Approximate True Value Probit Logit
AME of x1 -.1090 -.1088 -.1089
5% Rejection Rate .052 .052
ATE of x2 .1046 .1044 .1045
5% Rejection Rate .053 .051
Table 4: Marginal and Treatment Effects at Mean Values: True DGP Logit
Simulation Results for N=10,000 and 4,000 Replications
Statistic Approximate True Value Probit Logit
MEM of x1 -.1146 -.1138 -.1146
5% Rejection Rate .050 .051
TEM of x2 .1086 .1081 .1085
5% Rejection Rate .058 .058
Why?
Maximum likelihood estimators find the parameters that maximize the likelihood that our data fit the distributional assumptions that we make. The likelihood chosen is an approximation to the true likelihood, and it is a helpful approximation if the true likelihood and our approximating likelihood are close to each other. Viewing likelihood-based models as useful approximations, instead of as models of a true likelihood, is the basis of quasilikelihood theory. For more details, see White (1996) and Wooldridge (2010).
It is assumed that the unobservable random variable in the probit model and logit model comes from a standard normal and logistic distribution, respectively. The cumulative distribution functions (CDFs) in these two cases are close to each other, especially around the mean. Therefore, estimators under these two sets of assumptions produce similar results. To illustrate these arguments, we can plot the two CDFs and their differences as follows:
Graph 1: Normal and Logistic CDFs and their Difference
The difference between the CDFs approaches zero as you get closer to the mean, from the right or from the left, and it is always smaller than .15.
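The claim is easy to verify numerically. Below is a quick check, written in Python rather than Stata, using the identity $$\Phi(x) = (1+\mathrm{erf}(x/\sqrt{2}))/2$$; the function names are my own, not Stata's:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def logistic_cdf(x):
    """Standard logistic CDF."""
    return 1.0 / (1.0 + math.exp(-x))

# Largest absolute gap between the two CDFs over a fine grid
grid = [i / 100.0 for i in range(-800, 801)]
max_diff = max(abs(norm_cdf(x) - logistic_cdf(x)) for x in grid)
print(round(max_diff, 3))  # about .117, comfortably below .15
```

The gap is exactly zero at the mean and peaks at roughly .117 near $$x \approx \pm 1.35$$.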
Simulation design
Below is the code I used to generate the data for my simulations. In the first part, lines 4 to 12, I generate outcome variables that satisfy the assumptions of the probit model, y1, and the logit model, y2. In the second part, lines 13 to 16, I compute the marginal effects for the logit and probit models. I have a continuous and a discrete covariate. For the discrete covariate, the marginal effect is a treatment effect. In the third part, lines 17 to 25, I compute the marginal effects evaluated at the means. I will use these estimates later to compute approximations to the true values of the effects.
program define mkdata
syntax, [n(integer 1000)]
clear
quietly set obs `n'
// 1. Generating data from probit, logit, and misspecified
generate x1 = rnormal()
generate x2 = rbeta(2,4)>.5
generate e1 = rnormal()
generate u = runiform()
generate e2 = ln(u) -ln(1-u)
generate xb = .5*(1 -x1 + x2)
generate y1 = xb + e1 > 0
generate y2 = xb + e2 > 0
// 2. Computing probit & logit marginal and treatment effects
generate m1 = normalden(xb)*(-.5)
generate m2 = normal(1 -.5*x1 ) - normal(.5 -.5*x1)
generate m1l = exp(xb)*(-.5)/(1+exp(xb))^2
generate m2l = exp(1 -.5*x1)/(1+ exp(1 -.5*x1 )) - ///
exp(.5 -.5*x1)/(1+ exp(.5 -.5*x1 ))
// 3. Computing probit & logit marginal and treatment effects at means
quietly mean x1 x2
matrix A = r(table)
scalar a = .5 -.5*A[1,1] + .5*A[1,2]
scalar b1 = 1 -.5*A[1,1]
scalar b0 = .5 -.5*A[1,1]
generate mean1 = normalden(a)*(-.5)
generate mean2 = normal(b1) - normal(b0)
generate mean1l = exp(a)*(-.5)/(1+exp(a))^2
generate mean2l = exp(b1)/(1+ exp(b1)) - exp(b0)/(1+ exp(b0))
end
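As a cross-check of the DGP, the logit error in mkdata is generated by inverse-CDF sampling: if u is uniform on (0,1), then e2 = ln(u) - ln(1-u) is standard logistic. The Python sketch below (the function name and sample size are my choices, not part of the original code) verifies that the resulting share of positive outcomes matches the logistic CDF at xb:

```python
import math
import random

random.seed(12345)

def share_positive(xb, n):
    """Simulate y = 1(xb + e > 0) with e standard logistic, generated by
    inverse-CDF sampling as in mkdata: e = ln(u) - ln(1 - u)."""
    ones = 0
    for _ in range(n):
        u = random.random()
        e = math.log(u) - math.log(1.0 - u)
        ones += xb + e > 0.0
    return ones / n

xb = 0.5
share = share_positive(xb, 200_000)
implied = 1.0 / (1.0 + math.exp(-xb))  # logistic CDF at .5, about .6225
print(round(share, 3), round(implied, 3))
```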
I approximate the true marginal effects using a sample of 20 million observations. This is a reasonable strategy in this case. For example, take the average marginal effect for a continuous covariate, $$x_{k}$$, in the case of the probit model:
$\begin{equation*} \frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\mathbb{\beta}\right)\beta_{k} \end{equation*}$
The expression above is an approximation to $$E\left(\phi\left(x_{i}\mathbb{\beta}\right)\beta_{k}\right)$$. To obtain this expected value, we would need to integrate over the distribution of all the covariates. This is not practical and would limit my choice of covariates. Instead, I draw a sample of 20 million observations, compute $$\frac{1}{N}\sum_{i=1}^N \phi\left(x_{i}\mathbb{\beta}\right)\beta_{k}$$, and take it to be the true value. I follow the same logic for the other marginal effects.
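To see that this large-sample approximation works, here is a smaller-scale check in Python (500,000 draws instead of 20 million) that reproduces the approximate true AME and MEM of x1 from Tables 1 and 2, using the DGP from mkdata, where $$\beta_{x1} = -.5$$. The Python random module stands in for Stata's generators here:

```python
import math
import random

random.seed(12345)

def normalden(z):
    """Standard normal density, like Stata's normalden()."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

N = 500_000
x1 = [random.gauss(0.0, 1.0) for _ in range(N)]
x2 = [1.0 if random.betavariate(2, 4) > 0.5 else 0.0 for _ in range(N)]
xb = [0.5 - 0.5 * a + 0.5 * b for a, b in zip(x1, x2)]

# AME of x1: sample average of phi(x*beta) * beta_x1, with beta_x1 = -.5
ame = sum(normalden(z) for z in xb) / N * (-0.5)

# MEM of x1: phi evaluated at the mean covariate values, times beta_x1
m1 = sum(x1) / N
m2 = sum(x2) / N
mem = normalden(0.5 - 0.5 * m1 + 0.5 * m2) * (-0.5)

print(round(ame, 4), round(mem, 4))  # close to -.1536 and -.1672
```

The two numbers agree with the approximate true values reported in Tables 1 and 2.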
Below is the code I use to compute the approximate true marginal effects. I draw the 20 million observations, compute the averages that I will use in my simulation, and create locals for each approximate true value.
. mkdata, n(20000000)
. local values "m1 m2 m1l m2l mean1 mean2 mean1l mean2l"
. local means "mx1 mx2 mx1l mx2l meanx1 meanx2 meanx1l meanx2l"
. local n : word count `values'
. forvalues i= 1/`n' {
2. local a: word `i' of `values'
3. local b: word `i' of `means'
4. sum `a', meanonly
5. local `b' = r(mean)
6. }
Now I am ready to run all the simulations that I used to produce the results in the previous section. The code that I used for the simulations for the ATE and the AME when the true DGP is a probit is given by
. postfile mprobit y1p y1p_r y1l y1l_r y2p y2p_r y2l y2l_r ///
> using simsmprobit, replace
. forvalues i=1/4000 {
2. quietly {
3. mkdata, n(10000)
4. probit y1 x1 i.x2, vce(robust)
5. margins, dydx(*) atmeans post
6. local y1p = _b[x1]
7. test _b[x1] = `meanx1'
8. local y1p_r = (r(p)<.05)
9. local y2p = _b[1.x2]
10. test _b[1.x2] = `meanx2'
11. local y2p_r = (r(p)<.05)
12. logit y1 x1 i.x2, vce(robust)
13. margins, dydx(*) atmeans post
14. local y1l = _b[x1]
15. test _b[x1] = `meanx1'
16. local y1l_r = (r(p)<.05)
17. local y2l = _b[1.x2]
18. test _b[1.x2] = `meanx2'
19. local y2l_r = (r(p)<.05)
20. post mprobit (`y1p') (`y1p_r') (`y1l') (`y1l_r') ///
> (`y2p') (`y2p_r') (`y2l') (`y2l_r')
21. }
22. }
. postclose mprobit
. use simsmprobit, clear
. summarize
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
y1p | 4,000 -.1536812 .0038952 -.1697037 -.1396532
y1p_r | 4,000 .05 .2179722 0 1
y1l | 4,000 -.1536778 .0039179 -.1692524 -.1396366
y1l_r | 4,000 .05175 .2215496 0 1
y2p | 4,000 .141708 .0097155 .1111133 .1800973
-------------+---------------------------------------------------------
y2p_r | 4,000 .0495 .2169367 0 1
y2l | 4,000 .1416983 .0097459 .1102069 .1789895
y2l_r | 4,000 .049 .215895 0 1
For the results in the case of the MEM and the TEM when the true DGP is a probit, I use margins with the option atmeans. The other cases are similar. I use robust standard errors for all computations to account for the fact that my likelihood model is an approximation to the true likelihood, and I use the option vce(unconditional) to account for the fact that I am using two-step M-estimation. See Wooldridge (2010) for more details on two-step M-estimation.
Concluding Remarks
I provided simulation evidence that illustrates that the differences between using estimates of effects after probit or logit is negligible. The reason lies in the theory of quasilikelihood and, specifically, in that the cumulative distribution functions of the probit and logit models are similar, especially around the mean.
References
White, H. 1996. Estimation, Inference, and Specification Analysis. Cambridge: Cambridge University Press.
Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.
Categories: Statistics Tags:
## Understanding the generalized method of moments (GMM): A simple example
$$\newcommand{\Eb}{{\bf E}}$$This post was written jointly with Enrique Pinzon, Senior Econometrician, StataCorp.
The generalized method of moments (GMM) is a method for constructing estimators, analogous to maximum likelihood (ML). GMM uses assumptions about specific moments of the random variables instead of assumptions about the entire distribution, which makes GMM more robust than ML, at the cost of some efficiency. The assumptions are called moment conditions.
GMM generalizes the method of moments (MM) by allowing the number of moment conditions to be greater than the number of parameters. Using these extra moment conditions makes GMM more efficient than MM. When there are more moment conditions than parameters, the estimator is said to be overidentified. GMM can efficiently combine the moment conditions when the estimator is overidentified.
We illustrate these points by estimating the mean of a $$\chi^2(1)$$ by MM, ML, a simple GMM estimator, and an efficient GMM estimator. This example builds on Efficiency comparisons by Monte Carlo simulation and is similar in spirit to the example in Wooldridge (2001).
GMM weights and efficiency
GMM builds on the ideas of expected values and sample averages. Moment conditions are expected values that specify the model parameters in terms of the true moments. The sample moment conditions are the sample equivalents to the moment conditions. GMM finds the parameter values that are closest to satisfying the sample moment conditions.
The mean of a $$\chi^2$$ random variable with $$d$$ degrees of freedom is $$d$$, and its variance is $$2d$$. Two moment conditions for the mean are thus
$\begin{eqnarray*} \Eb\left[Y - d \right]&=& 0 \\ \Eb\left[(Y - d )^2 - 2d \right]&=& 0 \end{eqnarray*}$
The sample moment equivalents are
$\begin{eqnarray} 1/N\sum_{i=1}^N (y_i - \widehat{d} )&=& 0 \tag{1} \\ 1/N\sum_{i=1}^N\left[(y_i - \widehat{d} )^2 - 2\widehat{d}\right] &=& 0 \tag{2} \end{eqnarray}$
We could use either sample moment condition (1) or sample moment condition (2) to estimate $$d$$. In fact, below we use each one and show that (1) provides a much more efficient estimator.
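Each of these just-identified MM estimators can in fact be solved in closed form: condition (1) gives $$\widehat{d}$$ equal to the sample average, and condition (2), after expanding the square, is a quadratic in $$\widehat{d}$$. The following Python sketch (my own illustration, not Stata's solver) makes this concrete:

```python
import math
import random

random.seed(12345)

# Draw a chi-squared(1) sample as squared standard normals
y = [random.gauss(0.0, 1.0) ** 2 for _ in range(100_000)]
n = len(y)

# Condition (1): (1/N) sum (y_i - d) = 0  =>  d is the sample average
d_mean = sum(y) / n

# Condition (2): (1/N) sum [(y_i - d)^2 - 2d] = 0.  Expanding gives
# d^2 - 2*(ybar + 1)*d + (1/N) sum y_i^2 = 0; the quadratic has two
# roots, and the one near the sample average is the relevant estimate.
m2 = sum(yi * yi for yi in y) / n
b = d_mean + 1.0
d_var = b - math.sqrt(b * b - m2)

print(round(d_mean, 2), round(d_var, 2))  # both near the true mean of 1
```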
When we use both (1) and (2), there are two sample moment conditions and only one parameter, so we cannot solve this system of equations. GMM finds the parameters that get as close as possible to solving weighted sample moment conditions.
Uniform weights and optimal weights are two ways of weighting the sample moment conditions. The uniform weights use an identity matrix to weight the moment conditions. The optimal weights use the inverse of the covariance matrix of the moment conditions.
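To make the weighting explicit, here is a minimal Python sketch of the overidentified criterion $$Q(d) = \bar{g}(d)' W \bar{g}(d)$$, minimized by a crude grid search under uniform (identity) weights. This illustrates the objective function only; it is not how Stata's gmm command computes the estimate:

```python
import random

random.seed(12345)

# Draw a chi-squared(1) sample as squared standard normals
y = [random.gauss(0.0, 1.0) ** 2 for _ in range(5_000)]
n = len(y)
ybar = sum(y) / n
m2 = sum(yi * yi for yi in y) / n

def gbar(d):
    """Stacked sample moment conditions (1) and (2) at trial value d."""
    g1 = ybar - d
    g2 = m2 - 2.0 * d * ybar + d * d - 2.0 * d  # mean of (y-d)^2 - 2d
    return g1, g2

def Q(d, W):
    """GMM criterion gbar' W gbar for a 2x2 weight matrix W."""
    g1, g2 = gbar(d)
    return (g1 * (W[0][0] * g1 + W[0][1] * g2)
            + g2 * (W[1][0] * g1 + W[1][1] * g2))

I2 = [[1.0, 0.0], [0.0, 1.0]]                  # uniform (identity) weights
grid = [k / 1000.0 for k in range(200, 2001)]  # trial values in [.2, 2]
d_uniform = min(grid, key=lambda d: Q(d, I2))
print(round(d_uniform, 2))                     # near the true mean of 1
```

Replacing I2 with the inverse of the estimated covariance matrix of the moments gives the optimally weighted estimator.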
We begin by drawing a sample of size 500 and using gmm to estimate the parameter using sample moment condition (1), which we illustrate is the same as the sample average.
. drop _all
. set obs 500
number of observations (_N) was 0, now 500
. set seed 12345
. generate double y = rchi2(1)
. gmm (y - {d}) , instruments( ) onestep
Step 1
Iteration 0: GMM criterion Q(b) = .82949186
Iteration 1: GMM criterion Q(b) = 1.262e-32
Iteration 2: GMM criterion Q(b) = 9.545e-35
note: model is exactly identified
GMM estimation
Number of parameters = 1
Number of moments = 1
Initial weight matrix: Unadjusted Number of obs = 500
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d | .9107644 .0548098 16.62 0.000 .8033392 1.01819
------------------------------------------------------------------------------
Instruments for equation 1: _cons
. mean y
Mean estimation Number of obs = 500
--------------------------------------------------------------
| Mean Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
y | .9107644 .0548647 .8029702 1.018559
--------------------------------------------------------------
The sample moment condition is the product of an observation-level error function that is specified inside the parentheses and an instrument, which is a vector of ones in this case. The parameter $$d$$ is enclosed in curly braces {}. We specify the onestep option because the number of parameters is the same as the number of moment conditions, which is to say that the estimator is exactly identified. When it is, each sample moment condition can be solved exactly, and there are no efficiency gains in optimally weighting the moment conditions.
We now illustrate that we could use the sample moment condition obtained from the variance to estimate $$d$$.
. gmm ((y-{d})^2 - 2*{d}) , instruments( ) onestep
Step 1
Iteration 0: GMM criterion Q(b) = 5.4361161
Iteration 1: GMM criterion Q(b) = .02909692
Iteration 2: GMM criterion Q(b) = .00004009
Iteration 3: GMM criterion Q(b) = 5.714e-11
Iteration 4: GMM criterion Q(b) = 1.172e-22
note: model is exactly identified
GMM estimation
Number of parameters = 1
Number of moments = 1
Initial weight matrix: Unadjusted Number of obs = 500
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d | .7620814 .1156756 6.59 0.000 .5353613 .9888015
------------------------------------------------------------------------------
Instruments for equation 1: _cons
While we cannot say anything definitive from only one draw, we note that this estimate is further from the truth and that the standard error is much larger than those based on the sample average.
Now, we use gmm to estimate the parameters using uniform weights.
. matrix I = I(2)
. gmm ( y - {d}) ( (y-{d})^2 - 2*{d}) , instruments( ) winitial(I) onestep
Step 1
Iteration 0: GMM criterion Q(b) = 6.265608
Iteration 1: GMM criterion Q(b) = .05343812
Iteration 2: GMM criterion Q(b) = .01852592
Iteration 3: GMM criterion Q(b) = .0185221
Iteration 4: GMM criterion Q(b) = .0185221
GMM estimation
Number of parameters = 1
Number of moments = 2
Initial weight matrix: user Number of obs = 500
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d | .7864099 .1050692 7.48 0.000 .5804781 .9923418
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons
The first set of parentheses specifies the first sample moment condition, and the second set of parentheses specifies the second sample moment condition. The options winitial(I) and onestep specify uniform weights.
Finally, we use gmm to estimate the parameters using two-step optimal weights. The weights are calculated using first-step consistent estimates.
. gmm ( y - {d}) ( (y-{d})^2 - 2*{d}) , instruments( ) winitial(I)
Step 1
Iteration 0: GMM criterion Q(b) = 6.265608
Iteration 1: GMM criterion Q(b) = .05343812
Iteration 2: GMM criterion Q(b) = .01852592
Iteration 3: GMM criterion Q(b) = .0185221
Iteration 4: GMM criterion Q(b) = .0185221
Step 2
Iteration 0: GMM criterion Q(b) = .02888076
Iteration 1: GMM criterion Q(b) = .00547223
Iteration 2: GMM criterion Q(b) = .00546176
Iteration 3: GMM criterion Q(b) = .00546175
GMM estimation
Number of parameters = 1
Number of moments = 2
Initial weight matrix: user Number of obs = 500
GMM weight matrix: Robust
------------------------------------------------------------------------------
| Robust
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
/d | .9566219 .0493218 19.40 0.000 .8599529 1.053291
------------------------------------------------------------------------------
Instruments for equation 1: _cons
Instruments for equation 2: _cons
All four estimators are consistent. Below we run a Monte Carlo simulation to see their relative efficiencies. We are most interested in the efficiency gains afforded by optimal GMM. We include the sample average, the sample variance, and the ML estimator discussed in Efficiency comparisons by Monte Carlo simulation. Theory tells us that the optimally weighted GMM estimator should be more efficient than the sample average but less efficient than the ML estimator.
The code below for the Monte Carlo builds on Efficiency comparisons by Monte Carlo simulation, Maximum likelihood estimation by mlexp: A chi-squared example, and Monte Carlo simulations using Stata. Click gmmchi2sim.do to download this code.
. clear all
. set seed 12345
. matrix I = I(2)
. postfile sim d_a d_v d_ml d_gmm d_gmme using efcomp, replace
. forvalues i = 1/2000 {
2. quietly drop _all
3. quietly set obs 500
4. quietly generate double y = rchi2(1)
5.
. quietly mean y
6. local d_a = _b[y]
7.
. quietly gmm ( (y-{d=`d_a'})^2 - 2*{d}) , instruments( ) ///
> onestep conv_maxiter(200)
8. if e(converged)==1 {
9. local d_v = _b[d:_cons]
10. }
11. else {
12. local d_v = .
13. }
14.
. quietly mlexp (ln(chi2den({d=`d_a'},y)))
15. if e(converged)==1 {
16. local d_ml = _b[d:_cons]
17. }
18. else {
19. local d_ml = .
20. }
21.
. quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( ) ///
> winitial(I) onestep conv_maxiter(200)
22. if e(converged)==1 {
23. local d_gmm = _b[d:_cons]
24. }
25. else {
26. local d_gmm = .
27. }
28.
. quietly gmm ( y - {d=`d_a'}) ( (y-{d})^2 - 2*{d}) , instruments( ) ///
> winitial(I) conv_maxiter(200)
29. if e(converged)==1 {
30. local d_gmme = _b[d:_cons]
31. }
32. else {
33. local d_gmme = .
34. }
35.
. post sim (`d_a') (`d_v') (`d_ml') (`d_gmm') (`d_gmme')
36.
. }
. postclose sim
. use efcomp, clear
. summarize
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
d_a | 2,000 1.00017 .0625367 .7792076 1.22256
d_v | 1,996 1.003621 .1732559 .5623049 2.281469
d_ml | 2,000 1.002876 .0395273 .8701175 1.120148
d_gmm | 2,000 .9984172 .1415176 .5947328 1.589704
d_gmme | 2,000 1.006765 .0540633 .8224731 1.188156
The simulation results indicate that the ML estimator is the most efficient (d_ml, std. dev. 0.0395), followed by the efficient GMM estimator (d_gmme, std. dev. 0.0541), followed by the sample average (d_a, std. dev. 0.0625), followed by the uniformly weighted GMM estimator (d_gmm, std. dev. 0.1415), and finally followed by the sample-variance moment condition (d_v, std. dev. 0.1732).
The estimator based on the sample-variance moment condition does not converge for 4 of 2,000 draws; this is why there are only 1,996 observations on d_v when there are 2,000 observations for the other estimators. These convergence failures occurred even though we used the sample average as the starting value of the nonlinear solver.
For a better idea about the distributions of these estimators, we graph the densities of their estimates.
Figure 1: Densities of the estimators
The density plots illustrate the efficiency ranking that we found from the standard deviations of the estimates.
The uniformly weighted GMM estimator is less efficient than the sample average because it places the same weight on the sample average as on the much less efficient estimator based on the sample variance.
In each of the overidentified cases, the GMM estimator uses a weighted average of two sample moment conditions to estimate the mean. The first sample moment condition is the sample average. The second moment condition is the sample variance. As the Monte Carlo results showed, the sample variance provides a much less efficient estimator for the mean than the sample average.
The GMM estimator that places equal weights on the efficient and the inefficient estimator is much less efficient than a GMM estimator that places much less weight on the less efficient estimator.
We display the weight matrix from our optimal GMM estimator to see how the sample moments were weighted.
. quietly gmm ( y - {d}) ( (y-{d})^2 - 2*{d}) , instruments( ) winitial(I)
. matlist e(W), border(rows)
-------------------------------------
| 1 | 2
| _cons | _cons
-------------+-----------+-----------
1 | |
_cons | 1.621476 |
-------------+-----------+-----------
2 | |
_cons | -.2610053 | .0707775
-------------------------------------
The diagonal elements show that the sample-mean moment condition receives more weight than the less efficient sample-variance moment condition.
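We can compare this with the population analogue. At the true parameter, the optimal weight matrix is the inverse of the covariance matrix of the two moments, whose entries follow from the central moments of a $$\chi^2(1)$$ variable: $$Var(Y)=2$$, $$\Eb[(Y-1)^3]=8$$, and $$\Eb[(Y-1)^4]=60$$. The Stata matrix above is a sample estimate, so the numbers differ, but the pattern (a heavier diagonal weight on the mean condition and negative off-diagonal elements) is the same. A quick Python computation:

```python
# Population covariance matrix S of the two moment conditions at d = 1,
# using central moments of a chi-squared(1) variable:
#   Var(Y) = 2,  E[(Y-1)^3] = 8,  E[(Y-1)^4] = 60
S = [[2.0, 8.0],
     [8.0, 56.0]]  # S[1][1] = mu4 - Var(Y)^2 = 60 - 4

# Optimal weight matrix W = S^(-1), inverting the 2x2 matrix by hand
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
W = [[ S[1][1] / det, -S[0][1] / det],
     [-S[1][0] / det,  S[0][0] / det]]

for row in W:
    print([round(v, 4) for v in row])
# [1.1667, -0.1667]
# [-0.1667, 0.0417]
```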
Done and undone
We used a simple example to illustrate how GMM exploits having more equations than parameters to obtain a more efficient estimator. We also illustrated that optimally weighting the different moments provides important efficiency gains over an estimator that uniformly weights the moment conditions.
Our cursory introduction to GMM is best supplemented with a more formal treatment like the one in Cameron and Trivedi (2005) or Wooldridge (2010).
Graph code appendix
use efcomp
local N = _N
kdensity d_a, n(`N') generate(x_a den_a) nograph
kdensity d_v, n(`N') generate(x_v den_v) nograph
kdensity d_ml, n(`N') generate(x_ml den_ml) nograph
kdensity d_gmm, n(`N') generate(x_gmm den_gmm) nograph
kdensity d_gmme, n(`N') generate(x_gmme den_gmme) nograph
twoway (line den_a x_a, lpattern(solid)) ///
(line den_v x_v, lpattern(dash)) ///
(line den_ml x_ml, lpattern(dot)) ///
(line den_gmm x_gmm, lpattern(dash_dot)) ///
(line den_gmme x_gmme, lpattern(shortdash))
References
Cameron, A. C., and P. K. Trivedi. 2005. Microeconometrics: Methods and applications. Cambridge: Cambridge University Press.
Wooldridge, J. M. 2001. Applications of generalized method of moments estimation. Journal of Economic Perspectives 15(4): 87-100.
Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.
Categories: Statistics Tags:
## Efficiency comparisons by Monte Carlo simulation
Overview
In this post, I show how to use Monte Carlo simulations to compare the efficiency of different estimators. I also illustrate what we mean by efficiency when discussing statistical estimators.
I wrote this post to continue a dialog with my friend who doubted the usefulness of the sample average as an estimator for the mean when the data-generating process (DGP) is a $$\chi^2$$ distribution with $$1$$ degree of freedom, denoted by a $$\chi^2(1)$$ distribution. The sample average is a fine estimator, even though it is not the most efficient estimator for the mean. (Some researchers prefer to estimate the median instead of the mean for DGPs that generate outliers. I will address the trade-offs between these parameters in a future post. For now, I want to stick to estimating the mean.)
In this post, I also want to illustrate that Monte Carlo simulations can help explain abstract statistical concepts, by using one to show what it means for one estimator to be more efficient than another. (If you are new to Monte Carlo simulations in Stata, you might want to see Monte Carlo simulations using Stata.)
Consistent estimator A is said to be more asymptotically efficient than consistent estimator B if A has a smaller asymptotic variance than B; see Wooldridge (2010, sec. 14.4.2) for an especially useful discussion. Theoretical comparisons can sometimes ascertain that A is more efficient than B, but the magnitude of the difference is rarely identified. Comparisons of Monte Carlo simulation estimates of the variances of estimators A and B give both sign and magnitude for specific DGPs and sample sizes.
The sample average versus maximum likelihood
Many books discuss the conditions under which the maximum likelihood (ML) estimator is the efficient estimator relative to other estimators; see Wooldridge (2010, sec. 14.4.2) for an accessible introduction to the modern approach. Here I compare the ML estimator with the sample average for the mean when the DGP is a $$\chi^2(1)$$ distribution.
Example 1 below contains the commands I used. For an introduction to Monte Carlo simulations see Monte Carlo simulations using Stata, and for an introduction to using mlexp to estimate the parameter of a $$\chi^2$$ distribution see Maximum likelihood estimation by mlexp: A chi-squared example. In short, the commands do the following $$5,000$$ times:
1. Draw a sample of 500 observations from a $$\chi^2(1)$$ distribution.
2. Estimate the mean of each sample by the sample average, and store this estimate in m_a in the dataset efcomp.dta.
3. Estimate the mean of each sample by ML, and store this estimate in m_ml in the dataset efcomp.dta.
Example 1: The distributions of the sample average and the ML estimators
. clear all
. set seed 12345
. postfile sim mu_a mu_ml using efcomp, replace
. forvalues i = 1/5000 {
2. quietly drop _all
3. quietly set obs 500
4. quietly generate double y = rchi2(1)
5. quietly mean y
6. local mu_a = _b[y]
7. quietly mlexp (ln(chi2den({d=1},y)))
8. local mu_ml = _b[d:_cons]
9. post sim (`mu_a') (`mu_ml')
10. }
. postclose sim
. use efcomp, clear
. summarize
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
mu_a | 5,000 .9989277 .0620524 .7792076 1.232033
mu_ml | 5,000 1.000988 .0401992 .8660786 1.161492
The mean of the $$5,000$$ sample average estimates and the mean of the $$5,000$$ ML estimates are each close to the true value of $$1.0$$. The standard deviation of the $$5,000$$ sample average estimates is $$0.062$$, and it approximates the standard deviation of the sampling distribution of the sample average for this DGP and sample size. Similarly, the standard deviation of the $$5,000$$ ML estimates is $$0.040$$, and it approximates the standard deviation of the sampling distribution of the ML estimator for this DGP and sample size.
We conclude that the ML estimator has a lower variance than the sample average for this DGP and this sample size, because $$0.040$$ is smaller than $$0.062$$.
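These simulated standard deviations line up with asymptotic theory. The asymptotic standard deviation of the sample average is $$\sqrt{2d/N}$$, and that of the ML estimator is $$1/\sqrt{N\,I(d)}$$, where for the $$\chi^2(d)$$ family the Fisher information is $$I(d)=\psi'(d/2)/4$$ and $$\psi'(1/2)=\pi^2/2$$. (The information formula is a standard result that I am adding here; it is not derived in the post.) A quick check in Python:

```python
import math

N = 500
d = 1.0

# Asymptotic sd of the sample average: sqrt(Var(Y)/N), with Var(Y) = 2d
sd_avg = math.sqrt(2.0 * d / N)

# Asymptotic sd of the ML estimator: 1/sqrt(N * I(d)), where
# I(d) = trigamma(d/2)/4 and trigamma(1/2) = pi^2/2
info = (math.pi ** 2 / 2.0) / 4.0
sd_ml = 1.0 / math.sqrt(N * info)

print(round(sd_avg, 4), round(sd_ml, 4))  # 0.0632 and 0.0403
```

Both values match the simulated standard deviations of $$0.062$$ and $$0.040$$ closely.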
To get a picture of this difference, we plot the density of the sample average and the density of the ML estimator. (Each of these densities is estimated from $$5,000$$ observations, but estimation error can be ignored because more data would not change the key results.)
Example 2: Plotting the densities of the estimators
. kdensity mu_a, n(5000) generate(x_a den_a) nograph
. kdensity mu_ml, n(5000) generate(x_ml den_ml) nograph
. twoway (line den_a x_a) (line den_ml x_ml)
Densities of the sample average and ML estimators
The plots show that the ML estimator is more tightly distributed around the true value than the sample average.
That the ML estimator is more tightly distributed around the true value than the sample average is what it means for one consistent estimator to be more efficient than another.
Done and undone
I used Monte Carlo simulation to illustrate what it means for one estimator to be more efficient than another. In particular, we saw that the ML estimator is more efficient than the sample average for the mean of a $$\chi^2(1)$$ distribution.
Many other estimators fall between these two estimators in an efficiency ranking. Generalized method of moments estimators and some quasi-maximum likelihood estimators come to mind and might be worth adding to these simulations.
Reference
Wooldridge, J. M. 2010. Econometric Analysis of Cross Section and Panel Data. 2nd ed. Cambridge, Massachusetts: MIT Press.
Categories: Statistics Tags:
## Monte Carlo simulations using Stata
Overview
A Monte Carlo simulation (MCS) of an estimator approximates the sampling distribution of an estimator by simulation methods for a particular data-generating process (DGP) and sample size. I use an MCS to learn how well estimation techniques perform for specific DGPs. In this post, I show how to perform an MCS study of an estimator in Stata and how to interpret the results.
Large-sample theory tells us that the sample average is a good estimator for the mean when the true DGP is a random sample from a $$\chi^2$$ distribution with 1 degree of freedom, denoted by $$\chi^2(1)$$. But a friend of mine claims this estimator will not work well for this DGP because the $$\chi^2(1)$$ distribution will produce outliers. In this post, I use an MCS to see if the large-sample theory works well for this DGP in a sample of 500 observations.
A first pass at an MCS
I begin by showing how to draw a random sample of size 500 from a $$\chi^2(1)$$ distribution and how to estimate the mean and a standard error for the mean.
Example 1: The mean of simulated data
. drop _all
. set obs 500
number of observations (_N) was 0, now 500
. set seed 12345
. generate y = rchi2(1)
. mean y
Mean estimation Number of obs = 500
--------------------------------------------------------------
| Mean Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
y | .9107644 .0548647 .8029702 1.018559
--------------------------------------------------------------
I specified set seed 12345 to set the seed of the random-number generator so that the results will be reproducible. The sample average estimate of the mean from this random sample is $$0.91$$, and the estimated standard error is $$0.055$$.
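For readers who want to replay this outside Stata, here is a rough Python analogue of Example 1 (assuming NumPy; the seed and random-number stream differ from Stata's, so the digits will not match exactly):

```python
# Hypothetical Python analogue of Example 1: draw 500 observations from a
# chi-squared distribution with 1 degree of freedom and estimate the mean.
import numpy as np

rng = np.random.default_rng(12345)       # fixes the seed, like `set seed`
y = rng.chisquare(df=1, size=500)

mhat = y.mean()                          # sample-average estimate of the mean
sehat = y.std(ddof=1) / np.sqrt(len(y))  # estimated standard error of the mean

print(mhat, sehat)  # mhat near 1, sehat near sqrt(2/500) ≈ 0.063
```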
If I had many estimates, each from an independently drawn random sample, I could estimate the mean and the standard deviation of the sampling distribution of the estimator. To obtain many estimates, I need to repeat the following process many times:
1. Draw from the DGP
2. Compute the estimate
3. Store the estimate.
I need to know how to store the many estimates to proceed with this process. I also need to know how to repeat the process many times and how to access Stata estimates, but I put these details into appendices I and II, respectively, because many readers are already familiar with these topics and I want to focus on how to store the results from many draws.
I want to put the many estimates someplace where they will become part of a dataset that I can subsequently analyze. I use the commands postfile, post, and postclose to store the estimates in memory and write all the stored estimates out to a dataset when I am done. Example 2 illustrates the process with three draws.
Example 2: Estimated means of three draws
. set seed 12345
. postfile buffer mhat using mcs, replace
. forvalues i=1/3 {
2. quietly drop _all
3. quietly set obs 500
4. quietly generate y = rchi2(1)
5. quietly mean y
6. post buffer (_b[y])
7. }
. postclose buffer
. use mcs, clear
. list
+----------+
| mhat |
|----------|
1. | .9107645 |
2. | 1.03821 |
3. | 1.039254 |
+----------+
The command
postfile buffer mhat using mcs, replace
creates a place in memory called buffer in which I can store the results that will eventually be written out to a dataset. mhat is the name of the variable that will hold the estimates in the new dataset called mcs.dta. The keyword using separates the new variable name from the name of the new dataset. I specified the option replace to replace any previous versions of mcs.dta with the one created here.
I used
forvalues i=1/3 {
to repeat the process three times. (See appendix I if you want a refresher on this syntax.) The commands
quietly drop _all
quietly set obs 500
quietly generate y = rchi2(1)
quietly mean y
drop the previous data, draw a sample of size 500 from a $$\chi^2(1)$$ distribution, and estimate the mean. (The quietly before each command suppresses the output.) The command
post buffer (_b[y])
stores the estimated mean for the current draw in buffer for what will be the next observation on mhat. The command
postclose buffer
writes the stuff stored in buffer to the file mcs.dta. The commands
use mcs, clear
list
drop the last $$\chi^2(1)$$ sample from memory, read in the mcs dataset, and list out the dataset.
Example 3 below is a modified version of example 2; I increased the number of draws and summarized the results.
Example 3: The mean of 2,000 estimated means
. set seed 12345
. postfile buffer mhat using mcs, replace
. forvalues i=1/2000 {
2. quietly drop _all
3. quietly set obs 500
4. quietly generate y = rchi2(1)
5. quietly mean y
6. post buffer (_b[y])
7. }
. postclose buffer
. use mcs, clear
. summarize
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
mhat | 2,000 1.00017 .0625367 .7792076 1.22256
The average of the $$2,000$$ estimates is an estimator for the mean of the sampling distribution of the estimator, and it is close to the true value of $$1.0$$. The sample standard deviation of the $$2,000$$ estimates is an estimator for the standard deviation of the sampling distribution of the estimator, and it is close to the true value of $$\sqrt{\sigma^2/N}=\sqrt{2/500}\approx 0.0632$$, where $$\sigma^2$$ is the variance of the $$\chi^2(1)$$ random variable.
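The whole loop above can also be sketched in Python (again assuming NumPy; the names mhat and sehat mirror the Stata variables, and the exact draws will differ from Stata's):

```python
# Hypothetical Python sketch of the 2,000-replication design: store each
# sample average (mhat) and its estimated standard error (sehat).
import numpy as np

rng = np.random.default_rng(12345)
reps, n = 2000, 500

mhat = np.empty(reps)
sehat = np.empty(reps)
for i in range(reps):
    y = rng.chisquare(df=1, size=n)       # 1. draw from the DGP
    mhat[i] = y.mean()                    # 2. compute the estimate
    sehat[i] = y.std(ddof=1) / np.sqrt(n) # 3. store its standard error

print(mhat.mean())       # estimates the mean of the sampling distribution (~1)
print(mhat.std(ddof=1))  # estimates its sd (~sqrt(2/500) ≈ 0.0632)
print(sehat.mean())      # should be close to the line above
```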
Including standard errors
The standard error of the estimator reported by mean is an estimate of the standard deviation of the sampling distribution of the estimator. If the large-sample distribution is doing a good job of approximating the sampling distribution of the estimator, the mean of the estimated standard
errors should be close to the sample standard deviation of the many mean estimates.
To compare the standard deviation of the estimates with the mean of the estimated standard errors, I modify example 3 to also store the standard errors.
Example 4: The mean of 2,000 standard errors
. set seed 12345
. postfile buffer mhat sehat using mcs, replace
. forvalues i=1/2000 {
2. quietly drop _all
3. quietly set obs 500
4. quietly generate y = rchi2(1)
5. quietly mean y
6. post buffer (_b[y]) (_se[y])
7. }
. postclose buffer
. use mcs, clear
. summarize
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
mhat | 2,000 1.00017 .0625367 .7792076 1.22256
sehat | 2,000 .0629644 .0051703 .0464698 .0819693
Mechanically, the command
postfile buffer mhat sehat using mcs, replace
makes room in buffer for the new variables mhat and sehat, and
post buffer (_b[y]) (_se[y])
stores each estimated mean in the memory for mhat and each estimated standard error in the memory for sehat. (As in example 3, the command postclose buffer writes what is stored in memory to the new dataset.)
The sample standard deviation of the $$2,000$$ estimates is $$0.0625$$, and it is close to the mean of the $$2,000$$ estimated standard errors, which is $$0.0630$$.
You may be thinking I should have written “very close”, but how close is $$0.0625$$ to $$0.0630$$? Honestly, I cannot tell if these two numbers are sufficiently close to each other because the distance between them does not automatically tell me how reliable the resulting inference will be.
Estimating a rejection rate
In frequentist statistics, we reject a null hypothesis if the p-value is below a specified size. If the large-sample distribution approximates the finite-sample distribution well, the rejection rate of the test against the true null hypothesis should be close to the specified size.
To compare the rejection rate with the size of 5%, I modify example 4 to compute and store an indicator for whether I reject a Wald test against the true null hypothesis. (See appendix III for a discussion of the mechanics.)
Example 5: Estimating the rejection rate
. set seed 12345
. postfile buffer mhat sehat reject using mcs, replace
. forvalues i=1/2000 {
2. quietly drop _all
3. quietly set obs 500
4. quietly generate y = rchi2(1)
5. quietly mean y
6. quietly test _b[y]=1
7. local r = (r(p)<.05)
8. post buffer (_b[y]) (_se[y]) (`r')
9. }
. postclose buffer
. use mcs, clear
. summarize
Variable | Obs Mean Std. Dev. Min Max
-------------+---------------------------------------------------------
mhat | 2,000 1.00017 .0625367 .7792076 1.22256
sehat | 2,000 .0629644 .0051703 .0464698 .0819693
reject | 2,000 .0475 .212759 0 1
The rejection rate of $$0.048$$ is very close to the size of $$0.05$$.
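A hedged sketch of the same rejection-rate calculation in Python (assuming NumPy; I use a normal approximation for the two-sided p-value rather than Stata's F-based test, which is a very close substitute at n = 500):

```python
# Hypothetical Python sketch of Example 5: estimate the rejection rate of a
# size-0.05 Wald-type test of the true null H0: mean = 1.
import math
import numpy as np

rng = np.random.default_rng(12345)
reps, n = 2000, 500

rejections = 0
for _ in range(reps):
    y = rng.chisquare(df=1, size=n)
    t = (y.mean() - 1.0) / (y.std(ddof=1) / math.sqrt(n))
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided normal-approximation p-value
    rejections += p < 0.05

rate = rejections / reps
print(rate)  # should be close to the nominal size, 0.05
```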
Done and undone
In this post, I have shown how to perform an MCS of an estimator in Stata. I discussed the mechanics of using the post commands to store the many estimates and how to interpret the mean of the many estimates and the mean of the many estimated standard errors. I also recommended using an estimated rejection rate to evaluate the usefulness of the large-sample approximation to the sampling distribution of an estimator for a given DGP and sample size.
The example illustrates that the sample average performs as predicted by large-sample theory as an estimator for the mean. This conclusion does not mean that my friend's concerns about outliers were entirely misplaced. Other estimators that are more robust to outliers may have better properties. I plan to illustrate some of the trade-offs in future posts.
Appendix I: Repeating a process many times
This appendix provides a quick introduction to local macros and how to use them to repeat some commands many times; see [P] macro and [P] forvalues for more details.
I can store and access string information in local macros. Below, I store the string "hello" in the local macro named value.
local value "hello"
To access the stored information, I adorn the name of the local macro. Specifically, I precede it with the single left quote (`) and follow it with the single right quote ('). Below, I access and display the value stored in the local macro value.
. display "`value'"
hello
I can also store numbers as strings, as follows:
. local value "2.134"
. display "`value'"
2.134
To repeat some commands many times, I put them in a forvalues loop. For example, the code below repeats the display command three times.
. forvalues i=1/3 {
2. display "i is now `i'"
3. }
i is now 1
i is now 2
i is now 3
The above example illustrates that forvalues defines a local macro that takes on each value in the specified list of values. In the above example, the name of the local macro is i, and the specified values are 1/3, that is, $$\{1, 2, 3\}$$.
Appendix II: Accessing estimates
After a Stata estimation command, you can access the point estimate of a parameter named y by typing _b[y], and you can access the estimated standard error by typing _se[y]. The example below illustrates this process.
Example 6: Accessing estimated values
. drop _all
. set obs 500
number of observations (_N) was 0, now 500
. set seed 12345
. generate y = rchi2(1)
. mean y
Mean estimation Number of obs = 500
--------------------------------------------------------------
| Mean Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
y | .9107644 .0548647 .8029702 1.018559
--------------------------------------------------------------
. display _b[y]
.91076444
. display _se[y]
.05486467
Appendix III: Getting a p-value computed by test
This appendix explains the mechanics of creating an indicator for whether a Wald test rejects the null hypothesis at a specific size.
I begin by generating some data and performing a Wald test against the true null hypothesis.
Example 7: Wald test results
. drop _all
. set obs 500
number of observations (_N) was 0, now 500
. set seed 12345
. generate y = rchi2(1)
. mean y
Mean estimation Number of obs = 500
--------------------------------------------------------------
| Mean Std. Err. [95% Conf. Interval]
-------------+------------------------------------------------
y | .9107644 .0548647 .8029702 1.018559
--------------------------------------------------------------
. test _b[y]=1
( 1) y = 1
F( 1, 499) = 2.65
Prob > F = 0.1045
The results reported by test are stored in r(). Below, I use return list to see them; type help return list for details.
Example 8: Results stored by test
. return list
scalars:
r(drop) = 0
r(df_r) = 499
r(F) = 2.645393485924886
r(df) = 1
r(p) = .1044817353734439
The p-value reported by test is stored in r(p). Below, I store a 0/1 indicator for whether the p-value is less than $$0.05$$ in the local macro r. (See appendix II for an introduction to local macros.) I complete the illustration by displaying that the local macro contains the value $$0$$.
. local r = (r(p)<.05)
. display "`r'"
0
## How to simulate multilevel/longitudinal data
I was recently talking with my friend Rebecca about simulating multilevel data, and she asked me if I would show her some examples. It occurred to me that many of you might also like to see some examples, so I decided to post them to the Stata Blog.
### Introduction
We simulate data all the time at StataCorp and for a variety of reasons.
One reason is that real datasets that include the features we would like are often difficult to find. We prefer to use real datasets in the manual examples, but sometimes that isn’t feasible and so we create simulated datasets.
We also simulate data to check the coverage probabilities of new estimators in Stata. Sometimes the formulae published in books and papers contain typographical errors. Sometimes the asymptotic properties of estimators don’t hold under certain conditions. And every once in a while, we make coding mistakes. We run simulations during development to verify that a 95% confidence interval really is a 95% confidence interval.
Simulated data can also come in handy for presentations, teaching purposes, and calculating statistical power using simulations for complex study designs.
And, simulating data is just plain fun once you get the hang of it.
Some of you will recall Vince Wiggins’s blog entry from 2011 entitled “Multilevel random effects in xtmixed and sem — the long and wide of it” in which he simulated a three-level dataset. I’m going to elaborate on how Vince simulated multilevel data, and then I’ll show you some useful variations. Specifically, I’m going to talk about:
1. How to simulate single-level data
2. How to simulate two- and three-level data
3. How to simulate three-level data with covariates
4. How to simulate longitudinal data with random slopes
5. How to simulate longitudinal data with structured errors
### How to simulate single-level data
Let’s begin by simulating a trivially simple, single-level dataset that has the form
$y_i = 70 + e_i$
We will assume that e is normally distributed with mean zero and variance $$\sigma^2$$.
We’d like to simulate 500 observations, so let’s begin by clearing Stata’s memory and setting the number of observations to 500.
. clear
. set obs 500
Next, let’s create a variable named e that contains pseudorandom normally distributed data with mean zero and standard deviation 5:
. generate e = rnormal(0,5)
The variable e is our error term, so we can create an outcome variable y by typing
. generate y = 70 + e
. list y e in 1/5
+----------------------+
| y e |
|----------------------|
1. | 78.83927 8.83927 |
2. | 69.97774 -.0222647 |
3. | 69.80065 -.1993514 |
4. | 68.11398 -1.88602 |
5. | 63.08952 -6.910483 |
+----------------------+
We can fit a linear regression for the variable y to determine whether our parameter estimates are reasonably close to the parameters we specified when we simulated our dataset:
. regress y
Source | SS df MS Number of obs = 500
-------------+------------------------------ F( 0, 499) = 0.00
Model | 0 0 . Prob > F = .
Residual | 12188.8118 499 24.4264766 R-squared = 0.0000
Total | 12188.8118 499 24.4264766 Root MSE = 4.9423
------------------------------------------------------------------------------
y | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
_cons | 69.89768 .221027 316.24 0.000 69.46342 70.33194
------------------------------------------------------------------------------
The estimate of _cons is 69.9, which is very close to 70, and the Root MSE of 4.9 is equally close to the error’s standard deviation of 5. The parameter estimates will not be exactly equal to the underlying parameters we specified when we created the data because we introduced randomness with the rnormal() function.
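As an aside, the same simulation can be sketched outside Stata; here is a minimal Python version (assuming NumPy; the seed is arbitrary, so the estimates will differ slightly from the output above):

```python
# Minimal Python sketch of the same DGP: y = 70 + e, with e ~ N(0, 5^2),
# then recovering the parameters from the simulated sample.
import numpy as np

rng = np.random.default_rng(1)                # arbitrary seed
e = rng.normal(loc=0.0, scale=5.0, size=500)  # like generate e = rnormal(0,5)
y = 70 + e

print(y.mean())       # plays the role of _cons, close to 70
print(y.std(ddof=1))  # plays the role of Root MSE, close to 5
```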
This simple example is just to get us started before we work with multilevel data. For familiarity, let’s fit the same model with the mixed command that we will be using later:
. mixed y, stddev
Mixed-effects ML regression Number of obs = 500
Wald chi2(0) = .
Log likelihood = -1507.8857 Prob > chi2 = .
------------------------------------------------------------------------------
y | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
_cons | 69.89768 .2208059 316.56 0.000 69.46491 70.33045
------------------------------------------------------------------------------
------------------------------------------------------------------------------
Random-effects Parameters | Estimate Std. Err. [95% Conf. Interval]
-----------------------------+------------------------------------------------
sd(Residual) | 4.93737 .1561334 4.640645 5.253068
------------------------------------------------------------------------------
The output is organized with the parameter estimates for the fixed part in the top table and the estimated standard deviations for the random effects in the bottom table. Just as previously, the estimate of _cons is 69.9, and the estimate of the standard deviation of the residuals is 4.9.
Okay. That really was trivial, wasn’t it? Simulating two- and three-level data is almost as easy.
### How to simulate two- and three-level data
I posted a blog entry last year titled “Multilevel linear models in Stata, part 1: Components of variance”. In that posting, I showed a diagram for a residual of a three-level model.
The equation for the variance-components model I fit had the form
$y_{ijk} = \mu + u_{i..} + u_{ij.} + e_{ijk}$
This model had three residuals, whereas the one-level model we just fit above had only one.
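As a sketch of what simulating this model involves, here is a hypothetical Python version (assuming NumPy; the group counts and standard deviations chosen here are purely illustrative):

```python
# Hypothetical sketch of simulating the variance-components model above:
# y_ijk = mu + u_i.. + u_ij. + e_ijk, with independent normal random
# effects at each level.
import numpy as np

rng = np.random.default_rng(2)
mu = 70
n3, n2, n1 = 30, 10, 20  # level-3 groups, level-2 groups per group, obs per cell

u3 = rng.normal(0, 3, size=n3)           # u_i..  level-3 random effects
u2 = rng.normal(0, 2, size=(n3, n2))     # u_ij.  level-2 random effects
e = rng.normal(0, 5, size=(n3, n2, n1))  # e_ijk  observation-level errors

# Broadcasting adds each random effect to every observation nested under it.
y = mu + u3[:, None, None] + u2[:, :, None] + e
print(y.mean())  # close to mu = 70
```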
• S Subramanian
Articles written in Proceedings – Mathematical Sciences
• Parabolic ample bundles III: Numerically effective vector bundles
In this continuation of [Bi2] and [BN], we define numerically effective vector bundles in the parabolic category. Some properties of the usual numerically effective vector bundles are shown to be valid in the more general context of numerically effective parabolic vector bundles.
• Principal bundles on the projective line
We classify principal G-bundles on the projective line over an arbitrary field k of characteristic ≠ 2 or 3, where G is a reductive group. If such a bundle is trivial at a k-rational point, then the structure group can be reduced to a maximal torus.
• Some Remarks on the Local Fundamental Group Scheme
We define the local fundamental group scheme and study its properties under base change of the base field.
# Solving for y
• July 16th 2010, 11:23 AM
Mike9182
Solving for y
How is y solved for in this equation?
$x=\frac{1-\sqrt{y}}{1+\sqrt{y}}$
• July 16th 2010, 11:28 AM
wonderboy1953
Multiply both sides of the equation by $1/(1 + \sqrt{y})$, then isolate $\sqrt{y}$. The rest will follow.
• July 16th 2010, 11:31 AM
Mike9182
Did you make a typo in that formula?
• July 16th 2010, 11:36 AM
wonderboy1953
Yes, should be multiplying both sides by $(1 + \sqrt{y})$
• July 16th 2010, 11:44 AM
1005
$x=\frac{1-\sqrt{y}}{1+\sqrt{y}}$
synthetically divide:
$x = \frac{2}{\sqrt{y}+1} - 1$
$x + 1= \frac{2}{\sqrt{y}+1}$
multiply by that denominator and divide by x+1:
$\sqrt{y} + 1 = \frac{2}{x + 1}$
subtract 1:
$\sqrt{y} = \frac{2}{x + 1} - 1$
$\sqrt{y} = \frac{-x + 1}{x+1}$
square both sides:
$y = \frac{(-x+1)^2}{(x+1)^2}$
factor out -1 from numerator:
$y = \frac{(-[x-1])^2}{(x+1)^2}$
distribute ^2:
$y = (-1)^2\frac{(x-1)^2}{(x+1)^2}$
-1*-1 = 1:
$y = \frac{(x-1)^2}{(x+1)^2}$
• July 16th 2010, 12:39 PM
Wilmer
Quote:
Originally Posted by Mike9182
$x=\frac{1-\sqrt{y}}{1+\sqrt{y}}$
Another way:
x(1 + SQRT[y]) = 1 - SQRT[y]
x + xSQRT[y] = 1 - SQRT[y]
xSQRT[y] + SQRT[y] = 1 - x
SQRT[y](x + 1) = -1(x - 1)
SQRT[y] / -1 = (x - 1) / (x + 1)
y = [(x - 1) / (x + 1)]^2
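Both derivations give y = [(x - 1)/(x + 1)]^2; a quick numeric check (sketched here in Python rather than by hand) confirms that this inverts the original relation wherever (1 - x)/(1 + x) >= 0, i.e. for -1 < x <= 1:

```python
# Numeric check that y = ((1 - x)/(1 + x))**2 undoes
# x = (1 - sqrt(y))/(1 + sqrt(y)) on the valid domain.
import math

for x in [-0.9, -0.5, 0.0, 0.3, 0.99, 1.0]:
    y = ((1 - x) / (1 + x)) ** 2
    x_back = (1 - math.sqrt(y)) / (1 + math.sqrt(y))
    assert abs(x_back - x) < 1e-9
print("ok")
```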
• July 17th 2010, 03:40 AM
dhiab
Hello : 1005 and wilmer
you have one solution http://www.mathhelpforum.com/math-he...10db5142e3.png
but (1-x)/(1+x)>= 0
• July 17th 2010, 07:01 AM
Wilmer
Quote:
Originally Posted by dhiab
Hello : 1005 and wilmer
you have one solution http://www.mathhelpforum.com/math-he...10db5142e3.png
but (1-x)/(1+x)>= 0
Hmmm....isn't all that's required: x <> -1 ?
# Integral by partial fractions
$$\int \frac{5x}{\left(x-5\right)^2}\,\mathrm{d}x$$ find the value of the constant when the antiderivative passes through $(6,0)$
factor out the 5, and use partial fraction
$$5 \left[\int \frac{A}{x-5} + \frac{B}{\left(x-5\right)^2}\, \mathrm{d}x \right]$$
Solve for $A$ and $B$.
$A\left(x-5\right) + B = x$, so $A$ must be $1$ and $B-5A$ must be zero, giving $B = 5$.
Resulting in
$$5 \left[\int \frac{1}{x-5} + \frac{5}{\left(x-5\right)^2}\, \mathrm{d}x \right]$$
$$\Rightarrow 5 \left[ \ln \vert x - 5 \vert -\frac{5}{x-5}\right] + C$$
However, this approach doesn't give the answer in the book.
$$\frac{5}{x-5} \left(\left(x-5\right) \ln \vert x - 5 \vert - x \right) + C$$
The value should be 30, according to the book.
-
Probably they forgot to include a constant in the book's answer. If we include a constant $k$, the book's answer becomes $$\frac{5}{x-5}((x-5)\ln|x-5|-x)+k$$ But with some algebra we get $$\frac{5}{x-5}((x-5)\ln|x-5|-x)+k=$$ $$=5(\frac{(x-5)}{x-5}\ln|x-5|-\frac{x}{x-5})+k= 5(\ln|x-5|-\frac{x}{x-5}+1)-5+k=$$ $$=5(\ln|x-5|-\frac{x}{x-5}+\frac{x-5}{x-5})-5+k=5(\ln|x-5|-\frac{5}{x-5})-5+k=$$ $$=5(\ln|x-5|-\frac{5}{x-5})+C$$ Which is your answer, where $C$ is a new constant such that $C=k-5$.
-
Distribute: $$\frac{5}{x-5}((x-5)\ln|x-5|-x)=5\left(\frac{x-5}{x-5}\ln|x-5|-\frac{x}{x-5}\right).$$ Then $$\frac{x-5}{x-5}=1.$$
-
I don't have the $x$ in the last fraction. – yiyi Nov 29 '12 at 12:20
@MaoYiyi Sorry, didn't notice that. – Joe Johnson 126 Nov 29 '12 at 16:11
@MaoYiyi RicardoCruz has the explanation. – Joe Johnson 126 Nov 29 '12 at 16:14
Your solution is correct, but the book's solution is correct as well. Differentiate both and you will see that each is an antiderivative of the integrand.
Moreover: $$\frac{5}{x-5} \left(\left(x-5\right) \ln \vert x - 5 \vert - x \right) = 5 \left(\ln \vert x - 5 \vert - \frac{x-5+5}{x-5}\right) = 5 \left(\ln \vert x - 5 \vert - \frac{5}{x-5}\right) - 5,$$ so the two answers differ only by a constant.
Optional way to get your solution: $$\int \frac{5x}{\left(x-5\right)^2}\,\mathrm{d}x= \frac{5}{2}\int\frac{2x-10}{\left(x-5\right)^2}+\frac{10}{\left(x-5\right)^2}\,\mathrm{d}x=\frac{5}{2}\left(2\ln\vert x-5\vert-\frac{10}{x-5}\right)+\mathcal{C}$$
-
how does $x-5 + 5 = 5$? Where did the $x$ goto? – yiyi Nov 30 '12 at 1:02
@MaoYiyi: It is $\frac{x}{x-5} = \frac{x-5+5}{x-5} = 1+\frac{5}{x-5}$ – user127.0.0.1 Nov 30 '12 at 7:41
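For anyone who wants to double-check the discussion above numerically, here is a small Python sketch (standard library only; F is the asker's antiderivative, G is the book's, both exactly as written above):

```python
# Numeric cross-check of the two antiderivatives of f(x) = 5x/(x-5)^2:
#   F(x) = 5*(ln|x-5| - 5/(x-5))            -- the asker's form
#   G(x) = (5/(x-5)) * ((x-5)*ln|x-5| - x)  -- the book's form
import math

def f(x):
    return 5 * x / (x - 5) ** 2

def F(x):
    return 5 * (math.log(abs(x - 5)) - 5 / (x - 5))

def G(x):
    return (5 / (x - 5)) * ((x - 5) * math.log(abs(x - 5)) - x)

h = 1e-6
for x in [6.5, 8.0, 10.0]:
    # Central differences: both derivatives agree with the integrand,
    # and the two antiderivatives differ by the constant 5.
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-4
    assert abs((G(x + h) - G(x - h)) / (2 * h) - f(x)) < 1e-4
    assert abs(F(x) - G(x) - 5) < 1e-9

# Constants making the antiderivative pass through (6, 0):
print(-F(6), -G(6))  # 25.0 with the asker's form, 30.0 with the book's
```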
• MHB
mt91
I've got a question here which I'm really unsure what the wording is asking me to do, I've calculated (5), so worked out the steady states. However question 6 has really thrown me off with it's wording, any help would be appreciated.
HOI
A "steady state" solution to a differential equation is a constant solution. Since the derivatives of a constant are 0, for a "steady state" solution du/dt= 0. For this problem that means u(1- u)(1+ u)- Eu= 0. Factoring out "u", u[(1- u)(1+u)- E]= u[1- u^2- E]= 0. Either u= 0 or u^2= 1- E, so the "steady state" solutions are u*(E)= 0 and u*(E)= sqrt(1- E). Is that what you got for (5)?
Problem (6) asks about the "yield", which is defined as Y= Eu*(E) where u* is a "steady state" solution. Since the steady state solutions are u*(E)= 0 and u*(E)= sqrt(1- E), either Y= E(0)= 0 or Y= E sqrt(1- E)= sqrt(E^2- E^3). The first is identically equal to 0 so cannot be maximized. To find the maximum of the second, set the derivative equal to 0.
Y= sqrt(E^2- E^3)= (E^2- E^3)^(1/2). Y'= (1/2)(E^2- E^3)^(-1/2)(2E- 3E^2)= 0. That is equivalent to 3E^2- 2E= E(3E- 2)= 0 so E= 0 or E= 2/3. Again, E= 0 cannot give a maximum (it gives a minimum) so E*= 2/3.
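The calculus above can be double-checked numerically; here is a short Python sketch (standard library only; the grid resolution is arbitrary) that grid-searches the yield:

```python
# Grid check that the yield Y(E) = E*sqrt(1 - E) is maximized at
# E* = 2/3 on the interval [0, 1].
import math

def Y(E):
    return E * math.sqrt(1 - E)

grid = [i / 100000 for i in range(100001)]
E_star = max(grid, key=Y)
print(E_star)  # approximately 2/3
```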
MathSciNet bibliographic data MR1205761 (94a:13003) 13A30 Noh, Sunsook; Vasconcelos, Wolmer V. The $S_2$-closure of a Rees algebra. Results Math. 23 (1993), no. 1-2, 149–162.
In general relativity, the sticky bead argument is a simple thought experiment designed to show that gravitational radiation is indeed predicted by general relativity, and can have physical effects. These claims were not widely accepted prior to about 1955, but after the introduction of the bead argument, any remaining doubts soon disappeared from the research literature.
The argument is often credited to Hermann Bondi, who popularized it,[1] but it was originally proposed anonymously by Richard Feynman.[2][3][4]
## Description
The thought experiment was first described by Feynman (under the pseudonym "Mr. Smith") in 1957 at a conference at Chapel Hill, North Carolina,[3][better source needed] and later addressed in his private letter:
Feynman’s gravitational wave detector: It is simply two beads sliding freely (but with a small amount of friction) on a rigid rod. As the wave passes over the rod, atomic forces hold the length of the rod fixed, but the proper distance between the two beads oscillates. Thus, the beads rub against the rod, dissipating heat.[2]
As the gravitational waves are mainly transverse, the rod has to be oriented perpendicular to the propagation direction of the wave.
## History of arguments on the properties of gravitational waves
### Einstein's double reversal
The creator of the theory of general relativity, Albert Einstein, argued in 1916[5] that gravitational radiation should be produced, according to his theory, by any mass-energy configuration that has a time-varying quadrupole moment (or higher multipole moment). Using a linearized field equation (appropriate for the study of weak gravitational fields), he derived the famous quadrupole formula quantifying the rate at which such radiation should carry away energy.[6] Examples of systems with time varying quadrupole moments include vibrating strings, bars rotating about an axis perpendicular to the symmetry axis of the bar, and binary star systems, but not rotating disks.
In 1922, Arthur Stanley Eddington wrote a paper expressing (apparently for the first time) the view that gravitational waves are in essence ripples in coordinates, and have no physical meaning. He did not appreciate Einstein's arguments that the waves are real.[7]
In 1936, together with Nathan Rosen, Einstein rediscovered the Beck vacuums, a family of exact gravitational wave solutions with cylindrical symmetry (sometimes also called Einstein–Rosen waves). While investigating the motion of test particles in these solutions, Einstein and Rosen became convinced that gravitational waves were unstable to collapse. Einstein reversed himself and declared that gravitational radiation was not after all a prediction of his theory. Einstein wrote to his friend Max Born
Together with a young collaborator, I arrived at the interesting result that gravitational waves do not exist, though they had been assumed a certainty to the first approximation. This shows that the nonlinear field equations can show us more, or rather limit us more, than we have believed up till now.
In other words, Einstein believed that he and Rosen had established that their new argument showed that the prediction of gravitational radiation was a mathematical artifact of the linear approximation he had employed in 1916. Einstein believed these plane waves would gravitationally collapse into points; he had long hoped something like this would explain quantum mechanical wave-particle duality.[citation needed]
Einstein and Rosen accordingly submitted a paper entitled Do gravitational waves exist? to a leading physics journal, Physical Review, in which they described their wave solutions and concluded that the "radiation" that seemed to appear in general relativity was not genuine radiation capable of transporting energy or having (in principle) measurable physical effects.[8] The anonymous referee, who—as the current editor of Physical Review recently confirmed, all parties now being deceased—was the combative cosmologist, Howard Percy Robertson, pointed out the error described below, and the manuscript was returned to the authors with a note from the editor asking them to revise the paper to address these concerns. Quite uncharacteristically, Einstein took this criticism very badly, angrily replying "I see no reason to address the, in any case erroneous, opinion expressed by your referee." He vowed never again to submit a paper to Physical Review. Instead, Einstein and Rosen resubmitted the paper without change to another and much less well known journal, The Journal of the Franklin Institute.[9] He kept his vow regarding Physical Review.
Leopold Infeld, who arrived at Princeton University at this time, later remembered his utter astonishment on hearing of this development, since radiation is such an essential element for any classical field theory worthy of the name. Infeld expressed his doubts to a leading expert on general relativity: H. P. Robertson, who had just returned from a visit to Caltech. Going over the argument as Infeld remembered it, Robertson was able to show Infeld the mistake: locally, the Einstein–Rosen waves are gravitational plane waves. Einstein and Rosen had correctly shown that a cloud of test particles would, in sinusoidal plane waves, form caustics, but changing to another chart (essentially the Brinkmann coordinates) shows that the formation of the caustic is not a contradiction at all, but in fact just what one would expect in this situation. Infeld then approached Einstein, who concurred with Robertson's analysis (still not knowing it was he who reviewed the Physical Review submission).
Since Rosen had recently departed for the Soviet Union, Einstein acted alone in promptly and thoroughly revising their joint paper. This third version was retitled On gravitational waves, and, following Robertson's suggestion of a transformation to cylindrical coordinates, presented what are now called Einstein–Rosen cylindrical waves (these are locally isometric to plane waves). This is the version that eventually appeared. However, Rosen was unhappy with this revision and eventually published his own version, which retained the erroneous "disproof" of the prediction of gravitational radiation.
In a letter to the editor of Physical Review, Robertson wryly reported that in the end, Einstein had fully accepted the objections that had initially so upset him.
### Bern and Chapel Hill conferences
In 1955, an important conference honoring the semi-centennial of special relativity was held in Bern, the Swiss capital city where Einstein was working in the famous patent office during the Annus mirabilis. Rosen attended and gave a talk in which he computed the Einstein pseudotensor and Landau–Lifshitz pseudotensor (two alternative, non-covariant, descriptions of the energy carried by a gravitational field, a notion that is notoriously difficult to pin down in general relativity). These turn out to be zero for the Einstein–Rosen waves, and Rosen argued that this reaffirmed the negative conclusion he had reached with Einstein in 1936.
However, by this time a few physicists, such as Felix Pirani and Ivor Robinson, had come to appreciate the role played by curvature in producing tidal accelerations, and were able to convince many peers that gravitational radiation would indeed be produced, at least in cases such as a vibrating spring where different pieces of the system were clearly not in inertial motion. Nonetheless, some physicists continued to doubt whether radiation would be produced by a binary star system, where the world lines of the centers of mass of the two stars should, according to the EIH approximation (dating from 1938 and due to Einstein, Infeld, and Banesh Hoffmann), follow timelike geodesics.
Inspired by conversations with Felix Pirani, Hermann Bondi took up the study of gravitational radiation, in particular the question of quantifying the energy and momentum carried off 'to infinity' by a radiating system. During the next few years, Bondi developed the Bondi radiating chart and the notion of Bondi energy to rigorously study this question in maximal generality.
In 1957, at a conference at Chapel Hill, North Carolina, appealing to various mathematical tools developed by John Lighton Synge, A. Z. Petrov and André Lichnerowicz, Pirani explained more clearly than had previously been possible the central role played by the Riemann tensor and in particular the tidal tensor in general relativity.[10] He gave the first correct description of the relative (tidal) acceleration of initially mutually static test particles that encounter a sinusoidal gravitational plane wave.
### Feynman's argument
Later in the Chapel Hill conference, Richard Feynman used Pirani's description to point out that a passing gravitational wave should in principle cause a bead on a stick (oriented transversely to the direction of propagation of the wave) to slide back and forth, thus heating the bead and the stick by friction.[4] This heating, said Feynman, showed that the wave did indeed impart energy to the bead and stick system, so it must indeed transport energy, contrary to the view expressed in 1955 by Rosen.
In two 1957 papers, Bondi and (separately) Joseph Weber and John Archibald Wheeler used this bead argument to present detailed refutations of Rosen's argument.[1][11]
### Rosen's final views
Nathan Rosen continued to argue as late as the 1970s, on the basis of a supposed paradox involving the radiation reaction, that gravitational radiation is not in fact predicted by general relativity. His arguments were generally regarded as invalid, but in any case the sticky bead argument had by then long since convinced other physicists of the reality of the prediction of gravitational radiation.[citation needed]
## Notes
1. ^ a b Bondi, Hermann (1957). "Plane gravitational waves in general relativity". Nature. 179 (4569): 1072–1073. Bibcode:1957Natur.179.1072B. doi:10.1038/1791072a0.
2. ^ a b Preskill, John; Thorne, Kip S. Foreword to Feynman Lectures on Gravitation, Feynman et al. Westview Press; 1st ed. (June 20, 2002), pp. xxv–xxvi. Link: PDF (pages 17–18)
3. ^ a b DeWitt, Cecile M. (1957). Conference on the Role of Gravitation in Physics at the University of North Carolina, Chapel Hill, March 1957; WADC Technical Report 57-216 (Wright Air Development Center, Air Research and Development Command, United States Air Force, Wright Patterson Air Force Base, Ohio) Link on www.edition-open-access.de.
4. ^ a b Dewitt, Cécile M.; Rickles, Dean (1957). "An Expanded Version of the Remarks by R.P. Feynman on the Reality of Gravitational Waves". DeWitt, Cecile M. et al. Wright-Patterson Air Force Base (edition-open-access.de). Retrieved 27 September 2016.
5. ^ Einstein, A (June 1916). "Näherungsweise Integration der Feldgleichungen der Gravitation". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften Berlin. part 1: 688–696. Bibcode:1916SPAW.......688E.
6. ^ Einstein, A (1918). "Über Gravitationswellen". Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften Berlin. part 1: 154–167. Bibcode:1918SPAW.......154E.
7. ^ Eddington 1922, pages 268–282
8. ^ Kennefick, Daniel (September 2005). "Einstein Versus the Physical Review". Physics Today. 58 (9): 43–48. Bibcode:2005PhT....58i..43K. doi:10.1063/1.2117822. ISSN 0031-9228.
9. ^ Einstein, Albert; Rosen, Nathan (January 1937). "On gravitational waves". Journal of the Franklin Institute. 223 (1): 43–54. Bibcode:1937FrInJ.223...43E. doi:10.1016/s0016-0032(37)90583-0. ISSN 0016-0032.
10. ^ Pirani, Felix A. E. (1957). "Invariant formulation of gravitational radiation theory". Phys. Rev. 105 (3): 1089–1099. Bibcode:1957PhRv..105.1089P. doi:10.1103/PhysRev.105.1089.
11. ^ Weber, Joseph & Wheeler, John Archibald (1957). "Reality of the cylindrical gravitational waves of Einstein and Rosen". Rev. Mod. Phys. 29 (3): 509–515. Bibcode:1957RvMP...29..509W. doi:10.1103/RevModPhys.29.509.
# Correction: Jood, P. and Ohta, M. Hierarchical Architecturing for Layered Thermoelectric Sulfides and Chalcogenides. Materials 2015, 8, 1124–1149
Materials (MDPI)
### Abstract
The authors wish to make the following corrections to this paper [1]. The authors regret that the lattice thermal conductivity (κ lat) values of some samples in Table 1 and thermoelectric figure of merit (ZT) values of some samples in Table 2 were not correct. The tables with correct κ lat and ZT values are shown below. The authors would like to apologize for any inconvenience caused.

Table 1. Seebeck coefficient (S), electrical resistivity (ρ), carrier mobility (μ), power factor (S²/ρ), lattice thermal conductivity (κ lat), and thermoelectric figure of merit (ZT) at room temperature in the in-plane (ab-plane) and out-of-plane (c-axis) directions for a single crystal of nearly stoichiometric TiS2 [2] and polycrystalline Ti1.008S2 [3].

| Sample | Direction | S (μV·K⁻¹) | ρ (μΩ·m) | μ (cm²·V⁻¹·s⁻¹) | S²/ρ (μW·K⁻²·m⁻¹) | κ lat (W·K⁻¹·m⁻¹) | ZT |
|---|---|---|---|---|---|---|---|
| Single crystal | In-plane | −251 | 17 | 15 | 3710 | 6.35 | 0.16 |
| Single crystal | Out-of-plane | – | 13,000 | 0.017 | – | 4.21 | – |
| Polycrystalline | In-plane | −80 | 6.2 | 2.3 | 1030 | 2.0 | 0.12 |
| Polycrystalline | Out-of-plane | −84 | 11 | 1.2 | 630 | 1.8 | 0.10 |

Table 2. Seebeck coefficient (S), electrical resistivity (ρ), total thermal conductivity (κ total), lattice thermal conductivity (κ lat), power factor (S²/ρ), and thermoelectric figure of merit (ZT) in the in-plane (ab-plane) and out-of-plane (c-axis) directions of state-of-the-art misfit layered sulfides: [MS]1+m TS2 (M = La, Yb; T = Cr, Nb) [4,5].

| Sample | Direction | T (K) | ρ (μΩ·m) | S (μV·K⁻¹) | κ total (W·K⁻¹·m⁻¹) | κ lat (W·K⁻¹·m⁻¹) | S²/ρ (μW·K⁻²·m⁻¹) | ZT | Reference |
|---|---|---|---|---|---|---|---|---|---|
| (Yb2S2)0.62NbS2 | In-plane | 300 | 19.0 | 60 | 0.80 | 0.41 | 200 | 0.1 | [5] |
| (La2S2)0.62NbS2 | In-plane | 300 | 11.5 | 22 | – | – | 50 | – | [5] |
| (LaS)1.14NbS2 a | In-plane | 300 | 7.6 | 37 | 2.5 | 1.50 | 177 | 0.02 | [4] |
| | In-plane | 950 | 22.0 | 83 | 2.00 | 0.93 | 316 | 0.15 | |
| | Out-of-plane | 300 | 13.3 | 25 | 2.04 | 1.48 | 49 | 0.01 | |
| | Out-of-plane | 950 | 32.1 | 72 | 1.62 | 0.88 | 162 | 0.09 | |
| (LaS)1.14NbS2 b | In-plane | 300 | 5.2 | 35 | 4.88 | 3.45 | 233 | 0.02 | [4] |
| | In-plane | 950 | 16.9 | 83 | 3.25 | 1.86 | 405 | 0.12 | |
| | Out-of-plane | 300 | 9.3 | 25 | 1.56 | 0.75 | 70 | 0.01 | |
| | Out-of-plane | 950 | 28.5 | 56 | 1.34 | 0.52 | 111 | 0.08 | |
| (LaS)1.2CrS2 a | In-plane | 950 | 207 | −172 | 1.16 | 1.04 | 143 | 0.11 | [4] |
| | Out-of-plane | 950 | 223 | −174 | 1.02 | 0.91 | 137 | 0.13 | |
| (LaS)1.2CrS2 b | In-plane | 950 | 171 | −172 | 1.25 | 1.11 | 174 | 0.14 | [4] |
| | Out-of-plane | 950 | 278 | −154 | 0.92 | 0.84 | 84 | 0.08 | |

a Small grains (~1 μm), weak/random orientation of grains; b large grains (>20 μm), strong orientation of grains perpendicular to the pressing direction.
### Most cited references
### Large Thermoelectric Power Factor in TiS2 Crystal with Nearly Stoichiometric Composition
(2001)
A TiS$$_{2}$$ crystal with a layered structure was found to have a large thermoelectric power factor. The in-plane power factor $$S^{2}/ \rho$$ at 300 K is 37.1~$$\mu$$W/K$$^{2}$$cm with resistivity ($$\rho$$) of 1.7 m$$\Omega$$cm and thermopower ($$S$$) of −251~$$\mu$$V/K, and this value is comparable to that of the best thermoelectric material, Bi$$_{2}$$Te$$_{3}$$ alloy. The electrical resistivity shows both metallic and highly anisotropic behavior, suggesting that the electronic structure of this TiS$$_{2}$$ crystal has a quasi-two-dimensional nature. The large thermoelectric response can be ascribed to the large density of states just above the Fermi energy and to inter-valley scattering. In spite of the large power factor, the figure of merit $$ZT$$ of TiS$$_{2}$$ is 0.16 at 300 K, because of the relatively large thermal conductivity, 68~mW/Kcm. However, most of this value comes from a reducible lattice contribution. Thus, $$ZT$$ can be improved by reducing the lattice thermal conductivity, e.g., by introducing a rattling unit into the inter-layer sites.
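As a unit-conversion sanity check on the numbers quoted in the abstract (a sketch, not part of the paper), the power factor and figure of merit follow directly from S, ρ, and κ:

```python
# Check the quoted in-plane TiS2 values (S = -251 uV/K, rho = 1.7 mOhm*cm,
# kappa = 68 mW/(K*cm), T = 300 K), after converting everything to SI units.
S = -251e-6      # Seebeck coefficient, V/K
rho = 1.7e-5     # resistivity, Ohm*m   (1.7 mOhm*cm)
kappa = 6.8      # thermal conductivity, W/(K*m)   (68 mW/(K*cm))
T = 300.0        # temperature, K

pf = S**2 / rho              # power factor, W/(K^2*m)
pf_cm = pf * 1e6 / 100       # convert to uW/(K^2*cm)
ZT = pf * T / kappa          # dimensionless figure of merit

print(round(pf_cm, 1))  # 37.1
print(round(ZT, 2))     # 0.16
```

Both rounded values reproduce the figures stated in the abstract.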
### Hierarchical Architecturing for Layered Thermoelectric Sulfides and Chalcogenides
(2015)
Sulfides are promising candidates for environment-friendly and cost-effective thermoelectric materials. In this article, we review the recent progress in all-length-scale hierarchical architecturing for sulfides and chalcogenides, highlighting the key strategies used to enhance their thermoelectric performance. We primarily focus on TiS2-based layered sulfides, misfit layered sulfides, homologous chalcogenides, accordion-like layered Sn chalcogenides, and thermoelectric minerals. CS2 sulfurization is an appropriate method for preparing sulfide thermoelectric materials. At the atomic scale, the intercalation of guest atoms/layers into host crystal layers, crystal-structural evolution enabled by the homologous series, and low-energy atomic vibration effectively scatter phonons, resulting in a reduced lattice thermal conductivity. At the nanoscale, stacking faults further reduce the lattice thermal conductivity. At the microscale, the highly oriented microtexture allows high carrier mobility in the in-plane direction, leading to a high thermoelectric power factor.
### Microstructural Control and Thermoelectric Properties of Misfit Layered Sulfides (LaS)1+mTS2 (T = Cr, Nb): The Natural Superlattice Systems
(2014)
### Author and article information

###### Journal

Materials (Basel), MDPI; ISSN 1996-1944
21 September 2015; Volume 8, Issue 9, pp. 6482–6483

###### Affiliations

Energy Technology Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568, Japan; E-Mail: p.jood@aist.go.jp

###### Author notes

* Author to whom correspondence should be addressed; E-Mail: ohta.michihiro@aist.go.jp; Tel.: +81-29-861-5663; Fax: +81-29-861-5340.

###### Article

materials-08-05315; doi:10.3390/ma8095315; 5512922
# CreateInstance and DeleteInstance¶
The creation of a CIM instance and in turn the creation of the underlying managed resource is achieved by calling the CreateInstance() method. It takes a pywbem.CIMInstance object as input, which specifies the class and the initial properties for the CIM instance to be created, and returns a pywbem.CIMInstanceName object that references the new CIM instance.
The DeleteInstance() method takes a pywbem.CIMInstanceName object and deletes the referenced CIM instance and the represented managed resource, or rejects the operation if deletion is not supported.
For some CIM classes, it makes no sense to support creation or deletion of their CIM instances. For some others, that makes sense and is defined in their usage definitions in WBEM Management Profiles (see DMTF standard DSP1001). Often, management profiles that define a semantics for the creation or deletion of managed resources, leave that optional for an implementation to support. The implementation for a CIM class in the WBEM server (aka CIM provider) thus may or may not support creation or deletion of its instances and the represented managed resources.
Note that the CIMInstance object provided as input to CreateInstance() does not specify an instance path (or if it does, it will be ignored). The determination of an instance path for the new CIM instance is completely left to the CIM provider in the WBEM server. For CIM classes with natural keys (key properties other than "InstanceID"), some CIM providers do honor initial values for some or all of the key properties provided in the input instance.
from __future__ import print_function
import sys
import pywbem

namespace = 'root/interop'
server = 'http://localhost'

conn = pywbem.WBEMConnection(server,
                             default_namespace=namespace,
                             no_verification=True)

filter_inst = pywbem.CIMInstance(
    'CIM_IndicationFilter',
    {'Name': 'pywbem_test',
     'Query': 'SELECT * FROM CIM_Indication',
     'QueryLanguage': 'WQL'})

print('Creating instance of class: %s' % filter_inst.classname)
try:
    filter_path = conn.CreateInstance(filter_inst, namespace)
except pywbem.Error as exc:
    if isinstance(exc, pywbem.CIMError) and \
            exc.status_code == pywbem.CIM_ERR_NOT_SUPPORTED:
        print('WBEM server does not support creation of dynamic filters.')
        filter_path = None
    else:
        print('CreateInstance failed: %s: %s' % (exc.__class__.__name__, exc))
        sys.exit(1)

if filter_path is not None:
    print('Created instance: %s' % filter_path)

    print('Deleting the instance again, to clean up')
    try:
        conn.DeleteInstance(filter_path)
    except pywbem.Error as exc:
        print('DeleteInstance failed: %s: %s' % (exc.__class__.__name__, exc))
        sys.exit(1)
    print('Deleted the instance')
This example has a somewhat more elaborate failure message that includes the type of exception that occurred.
This example also shows how specific CIM errors can be detected: If creation of the CIM instance and the corresponding managed resource is not supported, this example code accepts that and does not error out. All other errors, including other CIM errors, cause an error exit.
PyWBEM maps CIM operation failures to the Python exception pywbem.CIMError, and raises that in this case. The CIM status code is available as a numeric value in the status_code attribute of the exception object. See CIM status codes for a definition of the CIM status code values.
## The Annals of Mathematical Statistics
### Concentration of Random Quotients
William H. Lawton
#### Abstract
The present paper proposes a definition of relative concentration of random variables about a given constant, and studies the relationship between two stochastic denominators $Z_1$ and $Z_2$ which causes the random quotient $X/Z_1$ to be more concentrated about zero than $X/Z_2$. In this paper we shall always assume that the numerator and denominator are independent. Two necessary and sufficient conditions, and several sufficient conditions for $X/Z_1$ to be more concentrated about zero than $X/Z_2$ are given in Section 4. The results of Section 4 are used in Section 5 to obtain generalizations of a theorem due to Hajek (1957) on the generalized Student's $t$-distribution. Sections 6 and 7 use these generalized theorems to produce tests and confidence intervals for several Behrens-Fisher type problems. Finally, Section 8 contains a proof of the randomization theorem stated in Section 4. In particular, Section 6 is concerned with the extension of a result in Lawton (1965) on Lord's $u$-statistic to the case of unequal sample sizes. Section 7 gives methods for constructing confidence intervals for linear combinations of means from $k$ normal populations.
#### Article information
Source
Ann. Math. Statist., Volume 39, Number 2 (1968), 466-480.
Dates
First available in Project Euclid: 27 April 2007
https://projecteuclid.org/euclid.aoms/1177698410
Digital Object Identifier
doi:10.1214/aoms/1177698410
Mathematical Reviews number (MathSciNet)
MR225409
Zentralblatt MATH identifier
0159.48201
#### Citation
Lawton, William H. Concentration of Random Quotients. Ann. Math. Statist. 39 (1968), no. 2, 466--480. doi:10.1214/aoms/1177698410. https://projecteuclid.org/euclid.aoms/1177698410
Function Triangular(Shape, Minimum, Maximum)
# Triangular
The function Triangular draws a random value from a triangular distribution.
Triangular(
Shape, ! (input) numerical expression
Minimum, ! (optional) numerical expression
Maximum ! (optional) numerical expression
)
## Arguments
Shape
A scalar numerical expression.
Minimum
A scalar numerical expression.
Maximum
A scalar numerical expression.
## Return Value
The function Triangular returns a random value drawn from a triangular distribution with shape Shape, lower bound Minimum and upper bound Maximum. The argument Shape must satisfy the relation $$0 < Shape < 1$$.
Note
The prototype of this function has changed with the introduction of AIMMS 3.4. In order to run models that still use the original prototype, the option Distribution_compatibility should be set to Aimms_3_0. The original function Triangular(a, b, c) returns a random value drawn from a triangular distribution with a lower bound a, likeliest value b and upper bound c. The arguments must satisfy the relation $$a < b < c$$. The relation between the arguments Shape and b is given by $$Shape = (b - a)/(c - a)$$.
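The mapping between the two parameterizations can be illustrated outside AIMMS. The following Python sketch (an illustration, not AIMMS code) recovers the likeliest value b from Shape via $b = Minimum + Shape \cdot (Maximum - Minimum)$ and samples with the standard (a, b, c) triangular form:

```python
import random

def triangular_shape(shape, minimum=0.0, maximum=1.0):
    """Sample a triangular distribution given in the (Shape, Minimum, Maximum)
    form described above, where Shape = (b - minimum) / (maximum - minimum)."""
    if not 0.0 < shape < 1.0:
        raise ValueError("Shape must satisfy 0 < Shape < 1")
    b = minimum + shape * (maximum - minimum)   # likeliest value
    return random.triangular(minimum, maximum, b)

random.seed(42)
draws = [triangular_shape(0.25, 10.0, 20.0) for _ in range(100_000)]
mean = sum(draws) / len(draws)
# The mean of triangular(a, b, c) is (a + b + c) / 3 = (10 + 12.5 + 20) / 3
print(abs(mean - 14.1667) < 0.05)  # True: sample mean is close to the theory
```

Shape = 0.25 over [10, 20] corresponds to a mode of 12.5, matching the relation given in the note.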
See also
The Triangular distribution is discussed in full detail in Discrete Distributions of the Language Reference.
# External tensor product of quasi coherent sheaf
When is the following map possible?
$A\boxtimes A(\mathcal{G}\times\mathcal{G})\rightarrow A(\mathcal{G}) \otimes A(\mathcal{G})$; where $\mathcal{G}$ is a group scheme, $A$ is a quasi coherent sheaf(of algebras) over $\mathcal{G}$ and $\boxtimes$ is the external tensor product of a sheaf given by $\pi_1^*A\otimes\pi_2^* A$.
In general, for an arbitrary open set $U$, a map $A\boxtimes A(U\times U)\rightarrow A(U)\otimes A(U)$ need not exist. What conditions are required for this to hold? Will generation of $A$ by global sections suffice?
Yes, I need a non-zero map. (isomorphism even better!) I know there is always a map in other direction. But when would the inverse(non-zero) map exist? I need to know the conditions that one require for this inverse to exist. Will generation of $\mathcal{A}$ by global sections be sufficient condition for this map to exist? – Neha Oct 11 '10 at 10:56
Let $(\mathcal{G},\mathcal{O})$ be an affine scheme, and $\mathcal{A}$ a quasi coherent sheaf over $\mathcal{G}$. Is $(\mathcal{A}\boxtimes \mathcal{A})(\mathcal{G}\mathcal{G})=\mathcal{A}(\mathcal{G})\otimes \mathcal{A}(\mathcal{G})$ ? – Neha Apr 22 '11 at 16:21
I meant $\mathcal{G}\times \mathcal{G}$ in the above statement. – Neha Apr 22 '11 at 16:22
# Similarity of Triangles CBSE NCERT Solutions Chapter 6 Triangles Exercise 6.3 Question 2
CBSE NCERT Solutions Chapter 6 Triangles Exercise 6.3 Question 2
2. In fig 6.35, $\triangle ODC$ ~ $\triangle OBA$, $\angle BOC$ = 125$^\circ$ and $\angle CDO$ = 70$^\circ$. Find $\angle DOC$, $\angle DCO$ and $\angle$ OAB.
Fig 6.35
Solution:
Given: $\triangle ODC$ ~ $\triangle OBA$, $\angle BOC$ = 125$^\circ$ and $\angle CDO$ = 70$^\circ$
$\angle BOC + \angle DOC$ = 180$^\circ$ {Linear Pair}
$\Rightarrow \angle DOC$ = 180 - 125 = 55$^\circ$
In $\triangle ODC$,
$\angle ODC + \angle DCO + \angle COD$ = 180$^\circ$ {Sum of angles of a Triangle}
$\Rightarrow 70 + 55 + \angle DCO$ = 180
$\Rightarrow \angle DCO = 180 - 70 - 55 = 55^\circ$
It is given that $\triangle ODC$ ~ $\triangle OBA$.
Therefore, corresponding angles of $\triangle ODC$ and $\triangle OBA$ are equal.
Hence, $\angle OCD = \angle OAB$
Therefore, $\angle OAB = 55^\circ$
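The angle chase above can be double-checked with a few lines of arithmetic (a trivial sketch, not part of the NCERT text):

```python
# Verify the solution: linear pair, triangle angle sum, then similarity.
angle_BOC, angle_CDO = 125, 70

angle_DOC = 180 - angle_BOC                  # linear pair with angle BOC
angle_DCO = 180 - angle_CDO - angle_DOC      # angle sum in triangle ODC
angle_OAB = angle_DCO                        # corresponding angles of similar triangles

print(angle_DOC, angle_DCO, angle_OAB)  # 55 55 55
```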
# Bounding the difference of numbers between 0 and 1 with the same power
I would like to prove the following inequality (I guess it holds but I'm not able to formally do it). Consider two numbers $$x,y\in (0,1)$$ and the positive real number $$\alpha$$. Then, can I write $$|(1-x)^\alpha-(1-y)^\alpha|\leq c \cdot |x-y|$$ where $$c$$ is a constant depending on $$\alpha$$? Thank you!
For $$\alpha \geq 1$$, the function $$f$$ defined by $$f(x)=(1-x)^\alpha$$ is continuously differentiable on $$[0,1]$$, therefore for all $$x, y$$ in $$[0, 1]$$, $$|f(x)-f(y)|\leq |x-y| \cdot \sup_{t\in[0,1]} |f^\prime(t)|$$ Since $$f^\prime(t)=-\alpha(1-t)^{\alpha-1}$$, which is bounded in absolute value by $$\alpha$$ when $$\alpha \geq 1$$, the inequality you're seeking is true in that case, with $$\boxed{|(1-x)^\alpha-(1-y)^\alpha|\leq \alpha \cdot |x-y|}$$ Note that if $$\alpha\in (0,1)$$, the inequality is not true. For instance, take $$\alpha=\frac 1 2$$, $$x=1-h$$, and $$y=1-2h$$: $$\frac{|\sqrt{1 -x} -\sqrt{1-y}|}{|x-y|}=\frac{1}{\sqrt{1 -x} +\sqrt{1-y}}=\frac{1}{(1+\sqrt 2)\sqrt h}\rightarrow +\infty$$
Consider $$x,y\in[0,1]$$ and the function $$f(x)=(1-x)^\alpha$$. The inequality holds if $$f$$ is Lipschitz continuous. Because a function with bounded derivative is Lipschitz continuous (here an answer), and for $$\alpha\geq1$$ the derivative of $$f$$ is bounded on $$[0,1]$$, the inequality holds in that case.
But for $$0<\alpha<1$$, $$f$$ isn't continuously differentiable on $$[0,1]$$ (the derivative tends to $$-\infty$$ as $$x\to1$$). However, because $$x,y \neq 1$$, $$f$$ is differentiable on $$[0,\max(x,y)]$$ and so the inequality holds only there ($$c$$ depends on $$\max(x,y)$$!).
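Both conclusions can be spot-checked numerically (a sketch; the grid and the α values below are arbitrary choices):

```python
import itertools

# The bound |(1-x)^a - (1-y)^a| <= a*|x-y| should hold for every a >= 1 ...
def bound_holds(alpha, x, y):
    return abs((1 - x)**alpha - (1 - y)**alpha) <= alpha * abs(x - y) + 1e-12

grid = [i / 100 for i in range(1, 100)]
for alpha in (1.0, 1.5, 2.0, 3.7):
    assert all(bound_holds(alpha, x, y)
               for x, y in itertools.product(grid, grid))

# ... while for alpha = 1/2 the ratio blows up as x, y -> 1, exactly as in
# the counterexample with x = 1-h, y = 1-2h:
h = 1e-8
x, y = 1 - h, 1 - 2 * h
ratio = abs((1 - x)**0.5 - (1 - y)**0.5) / abs(x - y)
print(ratio > 1000)  # True: no uniform constant c exists for alpha < 1
```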
# If $\omega$ is an imaginary cube root of unity, then ${{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}$ equals(a) $128\omega$ (b) $-128\omega$ (c) $128{{\omega }^{2}}$ (d) $-128{{\omega }^{2}}$
Hint: The sum of $1,\omega$ and ${{\omega }^{2}}$ is equal to 0, where $1,\omega$ and ${{\omega }^{2}}$ are the cube roots of unity. Also, ${{\omega }^{3}}=1$.
Before proceeding with the question, we must know some properties which are related to the cube roots of unity i.e. $1,\omega ,{{\omega }^{2}}$ which will be used to solve this question. These properties are,
$1+\omega +{{\omega }^{2}}=0..........\left( 1 \right)$
${{\omega }^{3n}}=1..........\left( 2 \right)$, where $n$ is an integer.
In this question, we have to find the value of ${{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}$. From equation $\left( 1 \right)$, we have,
$1+\omega +{{\omega }^{2}}=0$
Hence, we can also write $1+\omega =-{{\omega }^{2}}.............\left( 4 \right)$
Substituting $1+\omega =-{{\omega }^{2}}$ from equation $\left( 4 \right)$ into the expression to be evaluated, we get,
\begin{align} & {{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}={{\left( -{{\omega }^{2}}-{{\omega }^{2}} \right)}^{7}} \\ & \Rightarrow {{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}={{\left( -2{{\omega }^{2}} \right)}^{7}} \\ & \Rightarrow {{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}={{\left( -2 \right)}^{7}}{{\left( {{\omega }^{2}} \right)}^{7}} \\ & \Rightarrow {{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}=-128{{\omega }^{14}} \\ & \Rightarrow {{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}=-128{{\omega }^{12+2}} \\ & \Rightarrow {{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}=-128{{\omega }^{12}}{{\omega }^{2}} \\ & \Rightarrow {{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}=-128{{\omega }^{3\left( 4 \right)}}{{\omega }^{2}}...............\left( 5 \right) \\ \end{align}
From equation $\left( 2 \right)$, we have ${{\omega }^{3n}}=1$ where $n$ is an integer. Since $4$ is an integer, we can substitute $n=4$ in equation $\left( 2 \right)$. So, substituting $n=4$ in equation $\left( 2 \right)$, we get,
${{\omega }^{3(4)}}=1.......\left( 6 \right)$
From $\left( 6 \right)$, we have ${{\omega }^{3(4)}}=1$. Substituting ${{\omega }^{3(4)}}=1$ from equation $\left( 6 \right)$ in equation $\left( 5 \right)$, we get,
\begin{align} & {{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}=-128\left( 1 \right){{\omega }^{2}} \\ & \Rightarrow {{\left( 1+\omega -{{\omega }^{2}} \right)}^{7}}=-128{{\omega }^{2}} \\ \end{align}
Note: In this question, it was easier to think of writing ${{\omega }^{14}}$ as ${{\omega }^{12+2}}$ where $12$ is a multiple of $3$ because $14$ is a comparatively smaller number and it is easier to express $14$ in the form of a multiple of $3$. But if we get a larger number, it is difficult to convert that number directly in the form of the multiple of $3$. So, in that case, we will divide that number by long division method to find the divisor, quotient and remainder. Then we express that number in the form of the multiple of $3$ using the formula, $number=\left( divisor \right)\times \left( quotient \right)+remainder$, where the $divisor=3$. For example if we have a number $149$ in the power of $\omega$, dividing $149$ by $3$ using a long division method, we will get $divisor=3,quotient=49,remainder=2$. Hence, we can express $149$ as $149=\left( 3 \right)\left( 49 \right)+2$. Therefore, we expressed $149$ in the form of multiple of $3$.
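The result can also be confirmed numerically with a primitive cube root of unity (a quick check, separate from the algebraic method above):

```python
import cmath

# omega = an imaginary cube root of unity, omega = exp(2*pi*i/3)
omega = cmath.exp(2j * cmath.pi / 3)

lhs = (1 + omega - omega**2) ** 7
rhs = -128 * omega**2

print(abs(lhs - rhs) < 1e-9)  # True: (1 + w - w^2)^7 = -128 w^2
```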
Is there an intuitive argument for the correct bounds of a confidence interval in a one-sided hypothesis test?
Assume I have a hypothesis
$$H_0: \mu \leq 0$$ $$H_1: \mu > 0$$
which corresponds to R's alternative "greater" and which I would like to check with a confidence level of $$95\%$$ ($$\alpha = 0.05$$). I thought the confidence interval (CI) must be of the form $$\left(- \infty, a\right]$$, which is wrong. Is there an intuitive argument for the opposite? Long form:
Let's assume we have $$n=10$$ measurements with $$\bar{x} = -9.603$$ and $$s_x = 2.342$$ as in the R example below. For a two-sided test (without any bias) it is clear that the CI can be calculated to be $$\left[\bar{x}-c\cdot\frac{s_x}{\sqrt{10}}, \bar{x}+c\cdot\frac{s_x}{\sqrt{10}}\right] = \left[-11.28, -7.93\right]$$
using the t-distributiuon $$F_9(c) = 1 - \frac{\alpha}{2}$$.
But in my case I want to consider a one-sided test and I thought about the two options (here $$F_9(c) = 1 - \alpha$$)
1. $$\left(- \infty, \bar{x}-c\cdot\frac{s_x}{\sqrt{10}}\right]$$
2. $$\left[\bar{x}-c\cdot\frac{s_x}{\sqrt{10}}, +\infty\right)$$
At first, my intuition for $$H_0: \bar{x} \leq 0$$ told me that the first option should be correct: "$$-\infty$$ is less than 0". But when looking at the hypothesis test, I realized I was wrong.
Our test statistic is $$t = \frac{\bar{x} - 0}{s_x/\sqrt{n}} = -12.968$$ and our cut-off value is $$c = F^{-1}(1 - \alpha) = 1.833$$, resulting in a p-value of $$1 - F_9(t) \approx 1$$. Thus, we fail to reject the null hypothesis, because $$t < c$$ or, equivalently, $$p > \alpha$$ (as expected from the values!).
If we look at the two intervals, it is clear (the result must be the same!) that the second option is correct, whereas the first is wrong:
• $$0 \notin \left(-\infty, -8.25\right]$$
• $$0 \in \left[-10.96, +\infty\right)$$
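The direction of the interval can also be seen from the duality "the CI contains exactly those $$\mu_0$$ that the test fails to reject". A small Python sketch of this (using a z-cutoff instead of the t-cutoff purely to stay within the standard library; the logic is identical, only the constant $$c$$ changes):

```python
from statistics import NormalDist

# One-sided test of H0: mu <= mu0 vs H1: mu > mu0 at level alpha.
# We reject when (xbar - mu0)/se > c, i.e. exactly when mu0 < xbar - c*se.
# Hence the set of NON-rejected mu0 is the interval [xbar - c*se, +inf).
xbar, s, n, alpha = -9.603, 2.342, 10, 0.05   # numbers from the example above
c = NormalDist().inv_cdf(1 - alpha)           # ~1.645 (z instead of t = 1.833)
se = s / n**0.5

def rejected(mu0):
    return (xbar - mu0) / se > c

lower = xbar - c * se                         # CI = [lower, +inf)
print(rejected(lower - 1e-6), rejected(lower + 1e-6))  # True False
print(0.0 >= lower)  # True: mu0 = 0 lies in the CI, so H0 is not rejected
```

So the interval is unbounded above precisely because arbitrarily large $$\mu_0$$ can never be rejected by a "greater" alternative.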
Some references:
(1), (2) , (3) and (4) (Null hypothesis for directional tests),
Working example in R:
set.seed(1)
conf_level <- 0.95
x <- rnorm(10, -10, 3)
xm <- mean(x)
sx <- sd(x)
print(paste0("Mean = ", xm))
print(paste0("Standard deviation = ", sx))
mu0 <- 0
# H_0: mean(x) < 0 (which is true)
print(t.test(x, mu = 0,alternative = "greater", conf.level = conf_level))
n <- length(x)
t <- (xm - mu0)/(sx/sqrt(n)) # tests statistics
print(paste0("t = ", t))
df <- n - 1
print(paste0("df = ", df))
alpha <- 1 - conf_level # probability type I error
c_val <- qt(1 - alpha, df = df)
p_value <- 1 - pt(t, df)
print(paste0("p_value = ", p_value))
# confidence interval
cint <- qt(1 - alpha, df = df) * sqrt(sx^2/n)
cint <- xm + c(-Inf, cint)
print(paste0("CI1 = (", paste(cint, collapse = ", "), "]"))
cint <- qt(1 - alpha, df = df) * sqrt(sx^2/n)
cint <- xm + c(-cint, Inf)
print(paste0("CI2 = [", paste0(cint, collapse = ", "), ")"))
Suppose you are testing a plan of medical care and diet that will help badly malnourished babies gain weight. You have $$n = 10$$ subjects. After a week on the plan you measure the change in weight of each child in ounces (an ounce is about 30g).
Your null hypothesis is that the population mean weight of such children stays the same or decreases, and the alternative hypothesis (research hypothesis) is that the mean weight increases: $$H_0: \mu \le 0$$ vs. $$H_a: \mu > 0.$$
One difficulty with your question is that you are mixing up the notation for population parameters with the notation for sample statistics. Notice that the null and alternative hypotheses are stated in terms of population parameters.
Upon finding actual weight changes $$X_1, X_2, \dots X_{10},$$ you find the following summary statistics for your sample of 10 children.
summary(x)
Min. 1st Qu. Median Mean 3rd Qu. Max.
-12.507 -11.639 -9.230 -9.603 -8.339 -5.214
sd(x)
[1] 2.341758
This is bad news because weight changes are mainly negative. In particular, the sample mean is $$\bar X = \frac 1n \sum_{i=1}^n X_i = -9.603$$ and the sample standard deviation is $$S = \sqrt{\frac{1}{n-1}\sum_{i=1}^n (X_i - \bar X)^2} = 2.342.$$
A 95% (2-sided) confidence interval for the population mean weight gain $$\mu$$ is of the form $$\bar X \pm t^*\frac{S}{\sqrt{n}},$$ where $$t^*$$ cuts 2.5% of the probability from the upper tail of Student's t distribution with $$\nu = n-1$$ degrees of freedom. For your data this computes to $$(-11.28,-7.93).$$ The computation of this confidence interval is included in the t.test procedure in R. (You did the computation correctly, but used the wrong notation.)
t.test(x)$conf.int
[1] -11.278584  -7.928199
attr(,"conf.level")
[1] 0.95

If you use R to test $$H_0: \mu \le 0$$ vs. $$H_a: \mu > 0,$$ then the syntax and output from R are as shown below:

t.test(x, mu=0, alte="g")  # 'g' for 'greater'

        One Sample t-test
data:  x
t = -12.968, df = 9, p-value = 1
alternative hypothesis: true mean is greater than 0
95 percent confidence interval:
 -10.96086       Inf
sample estimates:
mean of x
-9.603392

The P-value is 1. In order to reject $$H_0$$ at the 5% level (and conclude that the program is helping the children gain weight), you would need a P-value below 5%. Ordinarily, a P-value at or near 1 means that something is wrong with the model or the hypothesis. In this case, results are unexpectedly bad and the program is not working as intended. (Unless you have prior experience with such programs and subjects to warrant an expectation that children will gain weight, it is probably better to start with a two-sided alternative.)

The interpretation of the one-sided 95% confidence interval $$(-10.96, \infty)$$ in the R output just above is something like this: "Not only did the program fail to help the children as intended, the actual change in weight could be a loss of as much as $$10.96$$ ounces."

In this particular example, the data were simulated from the population $$\mathsf{Norm}(\mu = -10, \sigma = 3).$$ Usually, using real data, one does not know the population parameters for sure. We would not know that it was correct not to reject $$H_0;$$ we would not know that the 2-sided confidence interval $$(-11.28,-7.93)$$ truly does contain $$\mu = -10;$$ nor would we know that the one-sided confidence interval $$(-10.96, \infty)$$ from the one-sided test procedure also contains $$\mu = -10.$$

In summary, there are several issues with your question potentially leading to confusion: (1) Incorrect use of $$\mu$$ and $$\sigma$$ to represent sample mean and standard deviation.
(2) Unfortunate choice of a one-sided alternative for the test of hypothesis, leading to a huge P-value. (3) The one-sided confidence interval that is included in R output for a right-sided alternative may not give you the information you most wanted from a confidence interval.

You might find it instructive to go through the steps above starting with data y = rnorm(10, 10, 3); still using the right-sided alternative and presumably rejecting $$H_0$$; interpreting the accompanying one-sided confidence interval, which now gives a positive lower bound.

It is a nice idea to add print statements to the standard R code that will include German terminology, but don't use population notation to label sample statistics. Also, as you probably know, most browsers will enable you to translate this answer to German. I will give that a try in a few minutes to make sure my English is reasonably compatible with the translation algorithm.

• Translation seems OK, except for subscripts in equations. Nov 16, 2019 at 8:56
• Thanks for your input - I made an edit to my question (which took quite some time). I still wonder whether there is an intuitive argument concerning the type of the CI (the interval with "$-\infty$" or "$+\infty$"?). But perhaps it is all about your point "one-sided CIs don't provide what you are looking for". To your points: (1) Corrected now. (2) This was on purpose, as I wanted to test with an example where the result is really obvious. Do you have a reference for handling p-values near one? (3) Do you have a good reference for the use of one-sided CIs? Nov 19, 2019 at 13:12
• Thanks for the revision of your question (+1). Of the four references you found on this site, I think (2) will be the most useful. // Overall, I think it is best to build intuition based on 2-sided alternatives and confidence intervals first. In practice, it is best to do a 2-sided test unless you have advance knowledge (or strong opinion) which way results will fall. It is poor practice to decide on a one-sided hypothesis after you see experimental results. Nov 19, 2019 at 18:57
• Perhaps you have some good ideas for this issue? ;-) Nov 22, 2019 at 15:38
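For readers working in Python rather than R, the same right-sided test (and its P-value near 1) can be reproduced with SciPy; the data below are freshly simulated from the same Norm(μ = -10, σ = 3) population, so the exact numbers will differ from the R output above (the `alternative` parameter requires SciPy ≥ 1.6):

```python
import numpy as np
from scipy import stats

# Simulate the weight-change data described above: true mean -10, sd 3.
rng = np.random.default_rng(2019)
x = rng.normal(loc=-10, scale=3, size=10)

# Right-sided test of H0: mu <= 0 vs Ha: mu > 0.
res = stats.ttest_1samp(x, popmean=0, alternative='greater')
print(round(res.pvalue, 4))   # ~1.0: no evidence the program adds weight

# The corresponding two-sided 95% CI for the mean:
lo, hi = stats.t.interval(0.95, df=9, loc=x.mean(), scale=stats.sem(x))
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```

As in the R analysis, the huge P-value by itself signals that the one-sided alternative was pointed in the wrong direction for these data.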
# Heat of Sublimation
The heat (or enthalpy) of sublimation is the amount of energy that must be added to a solid at constant pressure in order to turn it directly into a gas (without passing through the liquid phase). The heat of sublimation is generally expressed as ΔHsub in units of Joules or kiloJoules per mole or kilogram of substance.
### Things to recall...
1. A "Δ" in front of a variable indicates a change of state: the value of the quantity measured in the final state of a process minus the value of the same quantity measured in the initial state. The resulting value is the change in that quantity over the course of the process (Δquantity).
2. An enthalpy change is the amount of energy that is required to induce a phase change (a change in the state of matter) at constant pressure.
3. Energy is measured in Joules or kiloJoules and is transferred either through heat (q) or work (w).
4. The phases involved in sublimation are the solid phase and the gas phase.
### Introduction to Sublimation and Heat of Sublimation
Sublimation is the process of changing a solid into a gas without allowing the solid to pass through the liquid phase. In order to sublime a substance, a certain amount of energy must be transferred to the substance via heat (q) or work (w). The energy needed to sublime a substance is particular to the substance's identity and temperature and must be enough to do all of the following:
1. Excite the substance so that it reaches its maximum heat (energy) capacity (q) in the solid state.
2. Sever all of the atomic bonds in the substance.
3. Excite the unbonded atoms of the substance so that they reach the minimum heat capacity of the gaseous state.
### The Equations
#### ΔHsub = ΔEtot
The energies involved in sublimation steps 1 through 3 (see Sublimation) compose the total amount of energy that is involved in sublimation. This can be expressed by the equation
$$\Delta H_{sub} = \Delta E_1+\Delta E_2+\Delta E_3$$
in which
$$\Delta E_1=\Delta E_{thermal,\,\text{solid}}$$
$$\Delta E_2=\Delta E_{bond,\,\text{solid}\to\text{liquid}}$$
$$\Delta E_3=\Delta E_{thermal,\,\text{liquid}}+\Delta E_{bond,\,\text{liquid}\to\text{gas}}$$
(Note that the thermal term in $$\Delta E_3$$ belongs to the liquid: the hypothetical liquid must be heated from the melting point to the boiling point before it can vaporize.)
Although a solid does not actually pass through the liquid phase during sublimation, the fact that enthalpy is a state function allows us to add the energies associated with the solid, liquid, and gas phases together. Recall that for state functions, only the initial and final states of the substance matter. Say, for example, that state A is the initial state and state B is the final state. How a substance gets from state A to state B does not matter so much as what states A and B are. Because enthalpy is a state function, the enthalpy changes of phase transitions between contiguous states of matter are additive. Though in sublimation a solid does not pass through the liquid phase on its way to the gas phase, it takes the same amount of energy as it would to first melt (fuse) and then vaporize.
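Because the phase-change enthalpies are additive, ΔHsub can be computed as ΔHfus, plus the energy to heat the intermediate liquid, plus ΔHvap. A minimal Python sketch, using approximate per-kilogram values for water starting at its melting point (the function name is my own, not from the text):

```python
# Hess's law: enthalpy is a state function, so solid -> gas costs the
# same energy as solid -> liquid -> gas.
def heat_of_sublimation(h_fus, cp_liquid, dT_liquid, h_vap):
    """Per-kg quantities: h_fus, h_vap in kJ/kg; cp_liquid in kJ/(kg*K)."""
    return h_fus + cp_liquid * dT_liquid + h_vap

# Approximate values for 1 kg of water, starting at 273 K:
dH_sub = heat_of_sublimation(h_fus=333.5, cp_liquid=4.18,
                             dT_liquid=373 - 273, h_vap=2257)
print(round(dH_sub, 1))  # 3008.5 kJ per kg
```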
##### ΔEthermal(state of matter)
A change in thermal energy is indicated by a change in temperature (in Kelvin) of a substance at any particular state of matter. Change in thermal energy is expressed by the equation
$$\Delta E_{thermal}=C_p\cdot\Delta T$$
in which
$$C_p=\text{heat capacity (of a particular state of matter)}$$
$$C_p=\text{(specific heat capacity of that state of matter)}\times\text{(mass of substance)}$$
$$\Delta T = T_{final}-T_{initial}$$
$$\Delta T = T_{\text{at phase change, going from state}_1\text{ to state}_2}-T_{initial}$$
For more information on heat capacity and specific heat capacity, see heat capacity. Specific heat capacities of common substances can easily be found online or in a reference or text book.
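As a quick sketch of this bookkeeping in Python (assuming a hypothetical 1.00 kg sample of liquid water with specific heat 4.18 kJ/(kg·K)):

```python
# Thermal energy needed to heat a sample: dE_thermal = Cp * dT,
# where Cp = specific heat capacity * mass.
specific_heat = 4.18   # kJ/(kg*K), liquid water
mass = 1.00            # kg
Cp = specific_heat * mass

dT = 373 - 273         # heat from the melting point to the boiling point
dE_thermal = Cp * dT
print(round(dE_thermal, 1))  # 418.0 kJ
```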
##### ΔEbond(going from state 1 to state 2)
Bond energy is the amount of energy that a group of atoms must absorb so that it can undergo a phase change (going from a state of lower energy to a state of higher energy). It is given by
$$\Delta E_{bond}=\Delta H_{\text{phase change}}\cdot\Delta m_{\text{substance}}$$
in which $$\Delta H_{\text{phase change}}$$ is the enthalpy associated with a specific substance at a specific phase change. Common types of enthalpies include the heat of fusion (melting) and the heat of vaporization. Recall that fusion is the phase change that occurs between the solid state and the liquid state, and vaporization is the phase change that occurs between the liquid state and the gas state. Note that if the substance has more than one type of bond (or intermolecular force), then the substance must absorb enough energy to break all the different types of bonds before the substance can sublime.1
### Graphical Representations of the Heat of Sublimation
Note that although the graph below indicates the inclusion of the liquid phase, the graph is merely a representation of how much energy is needed to sublime a solid substance. Recall that sublimation does not include the liquid phase and that the fact that enthalpy is a state function allows us to add the enthalpies of fusion and vaporization together to find the enthalpy of sublimation.
### ΔHsub > ΔHvap
Though both enthalpies involve changing a substance into its gaseous state, the change in energy associated with sublimation is generally greater than that of vaporization. This is because of the initial state of the substances and the amount of initial energy that each substance has. Particles in a solid have less energy than those in a liquid, meaning it takes more energy to excite a solid to its gaseous phase than it does to excite a liquid to its gaseous phase.
Another way to look at this phenomenon is to consider the different energies involved in the heat of sublimation: ΔEthermal(s), ΔEbond(s-l), ΔEthermal(l), and ΔEbond(l-g). Already we know that ΔEbond=ΔH(phase change)*Δm(changed substance) and ΔEbond(l-g)=ΔH(l-g)*Δm(gas created). ΔH(l-g) is essentially ΔH(vaporization), meaning that ΔHvap is actually a component of ΔHsub.
### Where does the added energy go?
Energy can be observed in many different ways. As shown above, ΔEtot can be expressed as ΔEthermal + ΔEbond. Another way in which ΔEtot can be expressed is as the change in potential energy, ΔPE, plus the change in kinetic energy, ΔKE. Potential energy is the energy associated with random movement, whereas kinetic energy is the energy associated with velocity (movement with direction). ΔEtot = ΔEthermal + ΔEbond and ΔEtot = ΔPE + ΔKE are related by the equations
ΔPE = (0.5)ΔEthermal + ΔEbond
ΔKE = (0.5)ΔEthermal
for substances in the solid and liquid states. Note that ΔEthermal is divided equally between ΔPE and ΔKE for substances in the solid and liquid states. This is because the intermolecular and intramolecular forces that exist between the atoms of the substance (e.g. atomic bonds, van der Waals forces, etc.) have not yet been dissociated and prevent the atomic particles from moving freely about the atmosphere (with velocity). Potential energy is just one way to have energy, and it generally describes the random movement that occurs when atoms are forced to be close to one another. Likewise, kinetic energy is just another way to have energy, and it describes an atom's vigorous struggle to move and to break away from the group of atoms. The thermal energy that is added to the substance is thus divided equally between the potential and the kinetic energies because all aspects of the atoms' movement must be excited equally.
However, once the intermolecular and intramolecular forces which restrict the atoms' movement are dissociated (when enough energy has been added), potential energy no longer exists (for monatomic gases) because the atoms of the substance are no longer forced to vibrate and be in contact with other atoms. When a group of atoms is in the gaseous state, its atoms can devote all of their energy to moving away from one another (kinetic energy).
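A small numeric sketch (Python, with illustrative values for 1 kg of water) showing that the two decompositions of ΔEtot agree:

```python
# In the solid/liquid states the model splits added energy as:
#   dPE = 0.5 * dE_thermal + dE_bond
#   dKE = 0.5 * dE_thermal
dE_thermal = 418.0   # kJ, e.g. heating liquid water from 273 K to 373 K
dE_bond = 2257.0     # kJ, e.g. breaking liquid-gas bonds for 1 kg of water

dPE = 0.5 * dE_thermal + dE_bond
dKE = 0.5 * dE_thermal

# The two decompositions give the same total energy:
print(dPE + dKE == dE_thermal + dE_bond)  # True
```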
### Practical Applications of the Heat of Sublimation
The heat of sublimation can be useful in determining the effectiveness of medicines. Medicine is often administered in pill (solid) form, and the substances pills contain can sublime over time if the pill absorbs too much energy. Often you may see the phrase "avoid excessive heat"2 on the bottles of common painkillers (e.g. Advil). This is because in high-temperature conditions the pills can absorb heat energy, and sublimation can occur.3
### Practice Problems
1. If the heat of fusion for H2O is 333.5 kJ/kg, the specific heat capacity of H2O(l) is 4.18 kJ/(kg*K), and the heat of vaporization for H2O is 2257 kJ/kg, then calculate the heat of sublimation for 1.00 kg of H2O(s) with an initial temperature of 273K (Hint: 273K is the solid-liquid phase change temperature and 373K is the liquid-gas phase change temperature).
2. Using the information given in question one, calculate the heat of sublimation for 1.00 mole H2O when the initial temperature of the solid is 273K. (Hint: molar mass of H2O is ~18.0 g/mol or 0.018 kg/mol)
3. Using the information given in question one, calculate the heat of sublimation for 1.00 kg H2O when the initial temperature is 200K. The specific heat capacity for H2O(s) is 2.05 kJ/(kg*K).
4. If the heat of fusion for Au is 1.24 kJ/mol, the specific heat capacity of Au(l) is 25.4 J/(mol*K), and the heat of vaporization for Au is 1701 kJ/kg, then calculate the heat of sublimation for 1.00 mol of Au(s) with an initial temperature of 1336K (Hint: 1336K is the solid-liquid phase change temperature, and 3081K is the liquid-gas phase change temperature).
5. If the heat of sublimation for Cu at 2839K is 337.8735 kJ/mol, the specific heat capacity of Cu(l) is 0.0245 kJ/(mol*K), and the heat of vaporization for Cu is 300.3 kJ/mol, then calculate the heat of fusion at 1356K for 1.00 mol of Cu(s) (Hint: 1356K is the solid-liquid phase change temperature, and 2839K is the liquid-gas phase change temperature).
### Practice Problem Solutions
1. ΔHsub for 1 kg H2O (at Ti=273K) = (333.5 kJ/kg)(1.0 kg) + (4.18 kJ/kg*K)(1.0 kg)(373-273K) + (2257 kJ/kg)(1.0 kg) = 3008.5 kJ
2. ΔHsub for 1 mol H2O (at Ti=273K) = (3008.5 kJ/kg)(0.018 kg/mol) = 54.153 kJ/mol
3. ΔHsub for 1 kg H2O (at Ti=200K) = 3008.5 kJ + (2.05 kJ/K*kg)(1.0 kg)(273-200K) = 3158.15 kJ
4. ΔHsub for 1mol Au (at Ti=1336K)= (1.24 kJ/mol)(1mol) + (.0254 kJ/mol*K)(3081-1336K) + (1701 kJ/kg)(0.197kg/mol) = 380.66 kJ/mol
5. ΔHfus for Cu (at T=1356K) = 337.8735 kJ/mol - (0.0245 kJ/mol*K)(2839-1356K) - (300.3 kJ/mol)(1mol) = 1.24 kJ/mol
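A quick Python check of the arithmetic in solutions 1-3 (values from problem 1; totals are for the stated 1.00 kg or 1.00 mol samples):

```python
# Verify the water practice-problem arithmetic (per-kg basis).
h_fus, cp_liq, h_vap = 333.5, 4.18, 2257.0   # kJ/kg, kJ/(kg*K), kJ/kg
cp_solid = 2.05                               # kJ/(kg*K)

# Problem 1: sublime 1.00 kg of ice starting at 273 K.
p1 = h_fus + cp_liq * (373 - 273) + h_vap
print(round(p1, 1))       # 3008.5 kJ

# Problem 2: same, per mole (molar mass ~0.018 kg/mol).
p2 = p1 * 0.018
print(round(p2, 3))       # 54.153 kJ/mol

# Problem 3: starting at 200 K, first warm the solid to 273 K.
p3 = p1 + cp_solid * (273 - 200)
print(round(p3, 2))       # 3158.15 kJ
```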
### Footnotes
1. Dmitry Bedrov, Oleg Borodin, Grant D. Smith, Thomas D. Sewell, Dana M. Dattelbaum, and Lewis L. Steven. "A molecular dynamics simulation study of crystalline 1,3,5-triamino-2,4,6-trinitrobenzene as a function of pressure and temperature." The Journal of Chemical Physics 131 (2009).
2. Advil bottle. 24 Ibuprofen tablets, 200mg. EXP 12/08.
3. Pascal Taulelle, Georges Sitja, Gerard Pepe, Eric Garcia, Christian Hoff, and Stephane Veesler. "Measuring Enthalpy of Sublimation for Active Pharmaceutical Ingredients: Validate Crystal Energy and Predict Crystal Habit." Crystal Growth & Design (2009): 4706–4709. Print.
### External References
1. Advil bottle. 24 Ibuprofen tablets, 200mg. EXP 12/08.
2. Dmitry Bedrov, Oleg Borodin, Grant D. Smith, Thomas D. Sewell, Dana M. Dattelbaum, and Lewis L. Steven. "A molecular dynamics simulation study of crystalline 1,3,5-triamino-2,4,6-trinitrobenzene as a function of pressure and temperature." The Journal of Chemical Physics 131 (2009): 1-4. Print.
3. Pascal Taulelle, Georges Sitja, Gerard Pepe, Eric Garcia, Christian Hoff, and Stephane Veesler. "Measuring Enthalpy of Sublimation for Active Pharmaceutical Ingredients: Validate Crystal Energy and Predict Crystal Habit." Crystal Growth & Design (2009): 4706–4709. Print.
4. Petrucci, Ralph H., William S. Harwood, F. G. Herring, and Jeffry D. Madura. General Chemistry: Principles & Modern Applications. 9th ed. Upper Saddle River, NJ: Pearson Prentice Hall, 2007. 242-248.
5. Potter, Wendell. "Applying Models to Thermal Phenomena." College Physics: A Models Approach, Part 1. Hayden McNeil Publishing: Plymouth, MI, 2010. 7-20.
### Contributors
• Kasey Nakajima (UCD)
UC Davis ChemWiki by University of California, Davis is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
# How many earths fit in the observable universe?
I was pondering our insignificance when I wondered: how much smaller is our planet than the (observable) universe? And since I don't know how to do the math, I'm asking it here.
So how many of our planet (in the space it occupies - i.e. ignoring the space between the spheres) can fit inside the known/observable universe?
I apologize for the simplicity of the question.
-
Really, to me this question is pointless. I mean, the answer is obviously going to be a ridiculously big number, so what does it change for you if it is 10^50 or 10^100?? – harogaston May 13 at 23:40
I was trying to find something to compare it to, in order to better understand it and relay it. – tryingToGetProgrammingStraight May 14 at 1:49
## 1 Answer
Without checking the numbers in detail, according to Wikipedia, the volume of the observable universe is about $3.5\cdot 10^{80} \mbox{ m}^3$, and the volume of Earth is about $1.08321\cdot 10^{21} \mbox{ m}^3$.
By dividing the two volumes we get a factor of $3.2\cdot 10^{59}$, or written as decimal number: The observable comoving volume of the universe is about 320,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000-times the volume of Earth.
-
dats alot, woah! – tryingToGetProgrammingStraight May 13 at 17:14
Your answer assumes we are pulverizing the earth to completely fill the volume of a universe-sized container. Without getting into the complicated math behind forming optimal latices of congruent spheres, you should multiply your answer by a factor of pi/(3*sqrt(2)) or about 0.74048. The Kepler Conjecture says that is the highest density that can be achieved by any arrangement of spheres. Oh, and since the observable universe is also expanding at an accelerated rate, you should also update your answer every few hundred millions years just to be safe. Just saying. – Robert Cartaino May 14 at 16:44
@RobertCartaino That's why I just provided two valid digits; so the numbers should be valid more or less next week, too. ;) Btw. sorry for pressing Earth into a cube, next time I'll be more careful. – Gerald May 15 at 9:37
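The division in the answer, together with Robert Cartaino's sphere-packing correction, can be reproduced in a few lines of Python (the volumes are the Wikipedia figures quoted above):

```python
import math

V_universe = 3.5e80    # m^3, observable universe (comoving volume)
V_earth = 1.08321e21   # m^3

ratio = V_universe / V_earth
print(f"{ratio:.2e}")       # ~3.23e+59 Earth-volumes

# If the Earths stay rigid spheres, the Kepler conjecture caps the
# packing density at pi / (3 * sqrt(2)) ~ 0.74048:
packed = ratio * math.pi / (3 * math.sqrt(2))
print(f"{packed:.2e}")      # ~2.39e+59 sphere-packed Earths
```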
# Gaussian Process on HPC | Issues and speeding up
Hello, I’m working on a Bayesian calibration problem for a computer experiment. To do so, I’m using a Gaussian process as an emulator of a computer model, and I’m training and calibrating the emulator at the same time as per Higdon et al. (2004). I’m running the calibration on a High Performance Computing (HPC) facility as a shared-memory threaded parallel job. I run 4 chains of 1000 iterations. For fewer than 600 training points (N), the model works fine with no issues (see the attached image of computation time in minutes vs. N), as judged by R-hat and n_eff > 10%. When I tried to further increase N by 140 points (to a total of 728), with 16 GB of RAM allocated per core, it took 22 hours for one chain to complete and none of the other chains finished their 1000 iterations – two of them were stuck at 250 iterations. I don’t understand what is happening, so any help would be greatly appreciated! Specifically:
1. As the number of training points (N) increases, I expect computational time to grow as N^3 and memory requirements as N^2. I assumed that 16 GB of RAM would suffice - could I be running out of RAM, or is there something else that’s wrong? Does memory grow with iterations as well as with N?
2. A small constant was not added to the diagonal of the covariance matrix, as is often recommended, since I based my code on a publication by Chong et al. (2018) that didn’t include this component. I’m more than happy to add it, but do you think that this could be the reason I’m facing these issues?
3. Do you have any suggestions on how to speed up the calibration? This would be regardless of the problem I’ve described above. I have seen a couple of suggestions in some different topics, although I don’t know whether they would apply to this specific problem.
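As a quick check on the scaling expectation in question 1, here is a back-of-the-envelope extrapolation (Python; it assumes pure N^3 time and N^2 memory growth from a measured baseline, and the baseline numbers below are made up for illustration):

```python
def scale_estimate(t_base, mem_base, n_base, n_new):
    """Extrapolate O(N^3) runtime and O(N^2) memory from a baseline run."""
    r = n_new / n_base
    return t_base * r**3, mem_base * r**2

# Hypothetical baseline: 588 points, 6 h per chain, 9 GB peak memory.
t, mem = scale_estimate(t_base=6.0, mem_base=9.0, n_base=588, n_new=728)
print(f"~{t:.1f} h, ~{mem:.1f} GB")  # ~11.4 h, ~13.8 GB
```

If observed runtimes blow past such an estimate, the cause is usually not the dense algebra itself but slower mixing (more leapfrog steps per iteration) or memory pressure.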
data {
int<lower=0> n; // number of field data
int<lower=0> m; // number of computer simulation
int<lower=0> p; // number of observable inputs x
int<lower=0> q; // number of calibration parameters t
vector[n] y; // field observations
vector[m] eta; // output of computer simulations
matrix[n, p] xf; // observable inputs corresponding to y
// (xc, tc): design points corresponding to eta
matrix[m, p] xc;
matrix[m, q] tc;
}
// Need to combine y (observations) and eta (simulations)
// into a single vector to establish statistical relation
// Chong & Menberg (2018), Hidgon et al. (2004)
transformed data {
int<lower=1> N;
vector[n+m] z; // z = [y, eta]
vector[n+m] mu; // mean vector
N = n + m;
// set mean vector to zero
for (i in 1:N) {
mu[i] = 0;
}
z = append_row(y, eta);
}
parameters {
// tf: calibration parameters
// rho_eta: reparameterization of beta_eta (the correlation parameters of the simulator)
// rho_delta: reparameterization of beta_delta (the correlation parameter of the model discrepancy)
// lambda_eta: precision parameter for eta
// lambda_delta: precision parameter for bias term
// lambda_e: precision parameter of observation error
row_vector<lower=0,upper=1>[q] tf;
row_vector<lower=0,upper=1>[p+q] rho_eta;
row_vector<lower=0,upper=1>[p] rho_delta;
real<lower=0> lambda_eta;
real<lower=0> lambda_delta;
real<lower=0> lambda_e;
}
transformed parameters {
// beta_delta: correlation parameter for bias term
// beta_eta: correlation parameter of observation error
row_vector[p+q] beta_eta;
row_vector[p] beta_delta;
beta_eta = -4.0 * log(rho_eta);
beta_delta = -4.0 * log(rho_delta);
}
// Create GP model based on the definitions from above
model {
// declare variables
matrix[N, (p+q)] xt;
matrix[N, N] sigma_eta; // simulator covariance
matrix[n, n] sigma_delta; // bias term covariance
matrix[N, N] sigma_z; // covariance matrix
matrix[N, N] L; // cholesky decomposition of covariance matrix
row_vector[p] temp_delta;
row_vector[p+q] temp_eta;
// field observation (xf) and calibration variables (tf)
// are placed together in a matrix with the
// computer observation (xc) and calibration variables (xc)
// xt = [[xf,tf],[xc,tc]]
xt[1:n, 1:p] = xf; // field observations
xt[(n+1):N, 1:p] = xc; // computer observations (assume to be the same as xf)
xt[1:n, (p+1):(p+q)] = rep_matrix(tf, n);
xt[(n+1):N, (p+1):(p+q)] = tc; // computer calibration variables
// diagonal elements of sigma_eta
sigma_eta = diag_matrix(rep_vector((1 / lambda_eta), N));
// off-diagonal elements of sigma_eta
// for the squared covariance (alpha = 2)
// xt[i] is row i and xt[j] is row j
for (i in 1:(N-1)) {
for (j in (i+1):N) {
temp_eta = xt[i] - xt[j]; // subtract row j from row i
sigma_eta[i, j] = beta_eta .* temp_eta * temp_eta';
sigma_eta[i, j] = exp(-sigma_eta[i, j]) / lambda_eta;
sigma_eta[j, i] = sigma_eta[i, j];
}
}
// diagonal elements of sigma_delta
sigma_delta = diag_matrix(rep_vector((1 / lambda_delta), n));
// off-diagonal elements of sigma_delta
for (i in 1:(n-1)) {
for (j in (i+1):n) {
temp_delta = xf[i] - xf[j];
sigma_delta[i, j] = beta_delta .* temp_delta * temp_delta';
sigma_delta[i, j] = exp(-sigma_delta[i, j]) / lambda_delta;
sigma_delta[j, i] = sigma_delta[i, j];
}
}
// computation of covariance matrix sigma_z
sigma_z = sigma_eta;
sigma_z[1:n, 1:n] = sigma_eta[1:n, 1:n] + sigma_delta;
for (i in 1:n) {
sigma_z[i, i] = sigma_z[i, i] + (1.0 / lambda_e);
}
//print(sigma_z)
// Specify hyperparameters here - based on Chong et al.(2018)
rho_eta[1:(p+q)] ~ beta(1.0, 0.3);
rho_delta[1:p] ~ beta(1.0, 0.3);
lambda_eta ~ gamma(10, 10); // gamma (shape, rate)
lambda_delta ~ gamma(10, 0.3);
lambda_e ~ gamma(10, 0.03);
// Specify priors here - these have a physical meaning in the computer software being calibrated
tf[1] ~ weibull(2.724,0.417);
tf[2] ~ uniform(1.737e-05,1);
tf[3] ~ normal(0.472,0.140);
tf[4] ~ gamma(6.90965275109803,19.158);
tf[5] ~ lognormal(-1.983,0.640);
L = cholesky_decompose(sigma_z); // cholesky decomposition
z ~ multi_normal_cholesky(mu, L);
}
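Regarding the question about adding a small constant to the diagonal: without it, `cholesky_decompose` can fail when two rows of `xt` coincide (or nearly coincide), which makes the covariance matrix numerically non-positive-definite. A numpy sketch of the failure mode and the usual fix (the 1e-8 jitter value is a common convention, not taken from Higdon et al. or Chong et al.):

```python
import numpy as np

# Duplicated inputs (as can happen when field points and simulator design
# points coincide) make a squared-exponential kernel matrix exactly singular.
x = np.array([0.0, 0.0, 1.0])
K = np.exp(-(x[:, None] - x[None, :]) ** 2)

try:
    np.linalg.cholesky(K)
    print("no jitter needed")
except np.linalg.LinAlgError:
    print("without jitter: matrix is not positive definite")

# A small "jitter" on the diagonal restores positive definiteness at a
# negligible cost in accuracy.
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(x)))
print("with jitter: Cholesky succeeds")
```

In the Stan model above, the analogous change would be adding the same small constant to each diagonal element of `sigma_z` before the decomposition.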
Edit
One of my simulations (with 750 iters) just finished and I noticed that one of the chains did not mix at all as can be seen below:
From the pairs plot attached, I’m thinking that there could be something funny going on with beta_eta1 and beta_eta6, although I might be wrong!
Any insights on how to best tackle this? Is this related to the model definition and specifically the priors used?
1 Like
Implementing a GP directly in Stan code, with the covariance matrix built element by element, makes the autodiff tree have n^2 nodes, so it’s likely that you are running out of memory. At the moment you have two options to reduce the memory usage: use the built-in cov_exp_quad, which creates fewer autodiff nodes, or use a reduced-rank basis-function approximation, which is practical only in low dimensions.
There are some changes coming in the future to Stan that will make GPs faster, but it’s difficult to predict when all the necessary pieces needed are there.
5 Likes
Thanks for your reply! I’ve tried implementing the built-in cov_exp_quad function and tested its performance on a small dataset. I found the computational time to be longer when the built-in function was used compared to my previous formulation. I’m not sure why that’s the case, but I suspect it is due to having to call it multiple times to define multiple length scales (please see my function definition below for more detail). If there is a more efficient way to use this function when defining multiple length scales, please let me know.
My problem has more than 3 dimensions, so I don’t think I would be able to use the suggested paper. However, I think I might be able to make use of the Kronecker product approach described here: https://sethrf.com/files/fast-hierarchical-GPs.pdf Since the paper is from 2017, is there a built-in Kronecker product implementation of Gaussian processes that differs from that paper? I couldn’t find one but I just thought I would check.
Computational Time for previous implementation:
warmup sample
chain:1 87.015 49.680
chain:2 92.540 48.463
Computational Time for built-in function:
warmup sample
chain:1 133.363 75.504
chain:2 141.954 73.151
functions {
real c_alpha(real lambda){
real alpha_dem = sqrt(lambda);
real alpha = 1 / alpha_dem;
return alpha;
}
real c_rho(real beta){
real beta_dem = sqrt(2.0 * beta);
real rho = 1 / beta_dem;
return rho;
}
matrix c_sigma7(matrix X, real lambda, row_vector beta, int N){
matrix[N, N] sigma;
// ... (covariance construction omitted in the post) ...
return(sigma);
}
matrix c_sigma2(matrix X, real lambda, row_vector beta, int N){
matrix[N, N] sigma;
// ... (covariance construction omitted in the post) ...
return(sigma);
}
}
data {
int<lower=0> n; // number of field data
int<lower=0> m; // number of computer simulation
int<lower=0> p; // number of observable inputs x
int<lower=0> q; // number of calibration parameters t
vector[n] y; // field observations
vector[m] eta; // output of computer simulations
matrix[n, p] xf; // observable inputs corresponding to y
// (xc, tc): design points corresponding to eta
matrix[m, p] xc;
matrix[m, q] tc;
}
// Need to combine y (observations) and eta (simulations)
// into a single vector to establish statistical relation
// Chong & Menberg (2018), Hidgon et al. (2004)
transformed data {
int<lower=1> N;
vector[n+m] z; // z = [y, eta]
vector[n+m] mu; // mean vector
N = n + m;
// set mean vector to zero
for (i in 1:N) {
mu[i] = 0;
}
z = append_row(y, eta);
}
parameters {
// tf: calibration parameters
// rho_eta: reparameterization of beta_eta (the correlation parameters of the simulator)
// rho_delta: reparameterization of beta_delta (the correlation parameter of the model discrepancy)
// lambda_eta: precision parameter for eta
// lambda_delta: precision parameter for bias term
// lambda_e: precision parameter of observation error
row_vector<lower=0,upper=1>[q] tf;
row_vector<lower=0,upper=1>[p+q] rho_eta;
row_vector<lower=0,upper=1>[p] rho_delta;
real<lower=0> lambda_eta;
real<lower=0> lambda_delta;
real<lower=0> lambda_e;
}
transformed parameters {
// beta_delta: correlation parameter for the bias term
// beta_eta: correlation parameter for the simulator
row_vector[p+q] beta_eta;
row_vector[p] beta_delta;
beta_eta = -4.0 * log(rho_eta);
beta_delta = -4.0 * log(rho_delta);
}
// Create GP model based on the definitions from above
model {
// declare variables
matrix[N, (p+q)] xt;
matrix[N, N] sigma_eta; // simulator covariance
matrix[n, n] sigma_delta; // bias term covariance
matrix[N, N] sigma_z; // covariance matrix
matrix[N, N] L; // cholesky decomposition of covariance matrix
row_vector[p] temp_delta;
row_vector[p+q] temp_eta;
// field observation (xf) and calibration variables (tf)
// are placed together in a matrix with the
// computer observations (xc) and calibration variables (tc)
// xt = [[xf,tf],[xc,tc]]
xt[1:n, 1:p] = xf; // field observations
xt[(n+1):N, 1:p] = xc; // computer observations (assumed to be the same as xf)
xt[1:n, (p+1):(p+q)] = rep_matrix(tf, n);
xt[(n+1):N, (p+1):(p+q)] = tc; // computer calibration variables
// computation of covariance matrix sigma_eta
sigma_eta = c_sigma7(xt, lambda_eta, beta_eta, N);
// computation of covariance matrix sigma_delta (bias)
sigma_delta = c_sigma2(xf, lambda_delta, beta_delta, n);
//for (j in 1:2){
// print("u[", j, "] = ", sigma_delta[j]);
//}
// computation of covariance matrix sigma_z
sigma_z = sigma_eta;
sigma_z[1:n, 1:n] = sigma_eta[1:n, 1:n] + sigma_delta;
for (i in 1:n) {
sigma_z[i, i] = sigma_z[i, i] + (1.0 / lambda_e);
}
//print(sigma_z)
// Specify hyperparameters here
rho_eta[1:(p+q)] ~ beta(1.0, 0.3);
rho_delta[1:p] ~ beta(1.0, 0.3);
lambda_eta ~ gamma(10, 10); // gamma (shape, rate)
lambda_delta ~ gamma(10, 0.3);
lambda_e ~ gamma(10, 0.03);
// Specify priors here
tf[1] ~ weibull(2.724,0.417);
tf[2] ~ uniform(1.737e-05,1);
tf[3] ~ normal(0.472,0.140);
tf[4] ~ gamma(6.90965275109803,19.158);
tf[5] ~ lognormal(-1.983,0.640);
L = cholesky_decompose(sigma_z); // cholesky decomposition
z ~ multi_normal_cholesky(mu, L);
}
Yes, it’s the multiple calls and multiple matrix pointwise multiplications that are now making the autodiff tree big and slow.
@bbbales2 can you remind us about the status of vector length scale for cov_exp_quad?
If that’s not yet in (at least document doesn’t show it), you would get better performance by scaling columns of X first using diag_post_multiply and then calling cov_exp_quad once.
It’s possible that in this approach the autodiff part will again be dominating the computation time.
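To make the suggestion concrete, here is a NumPy check (not Stan) that scaling the columns by the inverse length scales and then applying a single isotropic squared-exponential kernel reproduces a per-dimension (ARD) kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 3))
rho = np.array([0.5, 1.0, 2.0])  # one length scale per column

# ARD kernel: k(x, x') = exp(-0.5 * sum_k ((x_k - x'_k) / rho_k)^2)
diff = X[:, None, :] - X[None, :, :]
K_ard = np.exp(-0.5 * np.sum((diff / rho) ** 2, axis=-1))

# Same matrix via column scaling, i.e. diag_post_multiply(X, inv(rho))
# in Stan, followed by one isotropic squared-exponential kernel
Xs = X / rho
d2 = np.sum((Xs[:, None, :] - Xs[None, :, :]) ** 2, axis=-1)
K_scaled = np.exp(-0.5 * d2)
```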
No.
1 Like
Non-existent, I’m afraid.
Yes do this. Something like:
diag_post_multiply(X, inv(beta))
I think inv is vectorized.
Edit: updated eq
Edit2: unupdated eq. Had it right the first time whoops
2 Likes
Thank you both for the help.
After looking at the definition of cov_exp_quad, I can’t see a version that can be applied to a matrix. In my case, X is an [m, p] matrix and therefore I think I would still need to apply cov_exp_quad. After applying diag_post_multiply(X, inv(beta)) to X, the output would still be a matrix, which as far as I can tell cov_exp_quad can’t handle. I’m probably missing something obvious here, but how could I deal with that?
Oh ouch we never doc’d the functions that do this apparently.
I think something like this should work, might have to do some debugging:
matrix c_sigma7(matrix X, real lambda, row_vector beta, int N) {
matrix[N, cols(X)] scaled_X = diag_post_multiply(X, inv(beta));
matrix[N, N] Sigma;
for(i in 1:N) {
Sigma[i, i] = -0.5 * square(lambda);
for(j in (i + 1):N) {
Sigma[i, j] = -0.5 * squared_distance(scaled_X[i,], scaled_X[j,]);
Sigma[j, i] = Sigma[i, j];
}
}
return lambda * Sigma;
}
(Edit: forgot lambda)
2 Likes
Many thanks for clarifying! After a couple of changes (e.g. taking the exponential of the squared distance) I got it to work. I compared it against my original attempt (original post) and it’s faster! I added the results below for reference.
If it’s not too much trouble, could you briefly explain why this has led to a decrease in computational cost, especially in the warmup, even though it relies on a for loop?
In addition, I’m using get_elapsed_time to compare the computational time. Is there something similar for memory use?
New Function:
matrix c_sigma(matrix X, real lambda, row_vector beta, int N, int p, int q) {
matrix[N, (p+q)] scaled_X = diag_post_multiply(X, square(beta));
matrix[N, N] Sigma;
real lambda_inv = 1.0 / lambda;
for(i in 1:N) {
Sigma[i, i] = lambda_inv;
for(j in (i + 1):N) {
Sigma[i, j] = exp(-squared_distance(scaled_X[i,], scaled_X[j,])) * lambda_inv;
Sigma[j, i] = Sigma[i, j];
}
}
return Sigma;
}
Original Script:
warmup sample
chain:1 121.344 67.152
chain:2 128.246 68.578
New Script (using diag_post_multiply):
warmup sample
chain:1 72.105 41.812
chain:2 70.929 40.432
2 Likes
Hahaha, I guess this is why debugging code is important.
In terms of memory, the way automatic differentiation works in Stan is that an extra little piece of information is saved for every mathematical operation. That means every time you add two scalars together (a + b), a record of that operation is saved. For bigger operations (like the squared_distance operator), we’ve coded things up so that this information is small, but it’s still something.
If you’re building an NxN matrix, that’s O(N^2) of these things. If you build 7 O(N^2) matrices, that’ll be 7x the cost. In this rewrite we put the 7x bit inside squared_distance, which will use less memory than if you broke it into its individual operations (sum(square(x_scaled[i, ] - x_scaled[j, ]))). So that’s the trick.
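A toy way to see the bookkeeping (pure Python; the exact node counts are rough assumptions, not Stan’s real tape):

```python
def nodes_elementwise(p):
    # one subtract and one square per dimension, (p - 1) additions,
    # then a scale and an exp: all recorded as separate tape nodes
    return 2 * p + (p - 1) + 2

def nodes_fused(p):
    # squared_distance is one node regardless of p, plus scale and exp
    return 3

# for p = 7 inputs, each of the O(N^2) matrix entries stores roughly
# 7x fewer nodes when the distance computation is fused
ratio = nodes_elementwise(7) / nodes_fused(7)
```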
Anyway this back of the envelope stuff is all very inexact. @rok_cesnovar is working on some tools to debug this.
2 Likes
Hmm, it’s there in C++ and I think it’s in the language too, just not doc’d. Your comments on the docs (https://github.com/stan-dev/docs/pull/272) are good, I just haven’t gotten around to following up yet.
I thought about this but I’m just hesitant to tell people about functions that aren’t doc’d yet. Too confusing. It will hopefully soon be fixed.
Although inexact, it was useful to read - thank you for taking the time to explain this to me!
Thanks for letting me know!
1 Like
Oh yeah hey, follow up, I talked to someone and realized my comments about “Non-existent, I’m afraid” sounds really dismissive of the work you did. I didn’t mean it that way, my apologies (at some point @avehtari had asked me to look at a multiple length scale thing and I never did it – I was really thinking about that). We appreciate your contributions.
3 Likes
Hello, an update and a couple of question, if I may!
Drawing from the Flaxman paper, I implemented a Kronecker structure within my model. I have also included one more predictor, a nugget, and boundary-avoiding priors (invgamma), which allow the model to run fast and converge without any problems (the pair plots seem fine as well, to my inexperienced eyes).
The problem I’m facing is that the model is overfitting the training data, resulting in poor out-of-sample predictions. The troubling part for me is that the posteriors of the lengthscales for some predictors are very small (where I wouldn’t expect them to be, based on the prior predictive checks I ran and my understanding of the simulator being emulated), while the nugget posterior mean is also very small. I tried regularising by using stronger priors (based on my understanding of the simulator being emulated, I expect monotonic relationships for most predictors) and it makes little to no difference. In an extreme example (the code is below), the posterior mean of the nugget was 0.02, when the prior probability of being <= 0.02 was 10E-218.
Questions:
• Is my model somehow ignoring my priors? Probably not the case, but I’m lost as to why the posterior concentrates on values that are highly improbable according to the prior
• Am I justified in adding a fixed nugget instead of a parameter with a prior? This reduces the overfitting issue but I’m unsure whether that’s appropriate.
Notes:
• Model run on rstan 2.18.2, which is the only version available at my university’s HPC system
• There are 8 predictors scaled to be within [0, 1].
• I can’t increase the number of functional outputs (xc) but I can increase the number of simulation runs (i.e. the combinations of sampled simulator model inputs captured by tc), although this has made no difference so far.
functions {
// return (A \otimes B) v where:
// A is n1 x n1, B = n2 x n2, V = n2 x n1 = reshape(v,n2,n1)
matrix kron_mvprod(matrix A, matrix B, matrix V) {
return transpose(A * transpose(B * V));
}
// A is a length n1 vector, B is a length n2 vector.
// Treating them as diagonals of eigenvalue matrices, this calculates
// the eigenvalues of (A \otimes B + sigma2 * I)
// and returns them as the n1 x n2 matrix e[i,j] = A[i] * B[j] + sigma2
matrix calculate_eigenvalues(vector A, vector B, int n1, int n2, real sigma2) {
matrix[n1,n2] e;
for(i in 1:n1) {
for(j in 1:n2) {
e[i,j] = (A[i] * B[j] + sigma2);
}
}
return(e);
}
matrix c_sigma(matrix X, real lambda, row_vector rho, int N, int p, int q) {
matrix[N, (p+q)] scaled_X = diag_post_multiply(X, inv(rho));
matrix[N, N] Sigma;
real lambda_inv = 1.0 / lambda;
for(i in 1:N) {
Sigma[i, i] = lambda_inv;
for(j in (i + 1):N) {
Sigma[i, j] = exp(-0.5*squared_distance(scaled_X[i,], scaled_X[j,])) * lambda_inv;
Sigma[j, i] = Sigma[i, j];
}
}
return Sigma;
}
}
// Try to perform reparametrisation of priors
data {
int<lower=0> m2; // number of functional output points xc
int<lower=0> m1; // number of computer simulation permutations tc
int<lower=0> p; // number of observable inputs x
int<lower=0> q; // number of calibration parameters t
matrix[m2, m1] eta_m; // output of computer simulations
matrix[m2, p] xc;
matrix[m1, q] tc;
}
parameters {
row_vector<lower=0>[q] rho_eta_tc;
row_vector<lower=0>[p] rho_eta_xc;
real<lower=0> lambda_eta;
//real<lower=0> lambda_sim;
real<lower=0> nugget;
}
// Create GP model based on the definitions from above
model {
// declare variables
matrix[m1, m1] sigma_eta_tc; // simulator covariance for calibration parameters
matrix[m2, m2] sigma_eta_xc; // simulator covariance for weather parameters
matrix[m1, m1] Q1;
matrix[m2, m2] Q2;
vector[m1] L1;
vector[m2] L2;
matrix[m1,m2] eigenvalues;
sigma_eta_tc = c_sigma(tc, lambda_eta, rho_eta_tc, m1, 0, q);
sigma_eta_xc = c_sigma(xc, 1, rho_eta_xc, m2, p, 0);
// Specify hyperparameters here
rho_eta_xc[1:(p)] ~ inv_gamma(15.0, 15.0);
rho_eta_tc[1:(q)] ~ inv_gamma(5.0, 5.0);
lambda_eta ~ gamma(10, 10); // gamma (shape, rate)
//lambda_sim ~ gamma(10,0.001); // inverse of nugget component
nugget ~ inv_gamma(1.0, 10.0);
Q1 = eigenvectors_sym(sigma_eta_tc); // tc eigenvector
Q2 = eigenvectors_sym(sigma_eta_xc); // xc eigenvector
L1 = eigenvalues_sym(sigma_eta_tc); // tc eigenvalues
L2 = eigenvalues_sym(sigma_eta_xc); // xc eigenvalues
eigenvalues = calculate_eigenvalues(L1,L2,m1,m2,nugget);
target += ( -0.5 * sum(eta_m .* kron_mvprod(Q1,Q2,
kron_mvprod(transpose(Q1),transpose(Q2),eta_m) ./ transpose(eigenvalues))) //Added transpose to eigenvalues myself
- .5 * sum(log(eigenvalues)));
}
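For anyone following along, the identity behind kron_mvprod, (A ⊗ B) vec(V) = vec(B V Aᵀ) with column-major vec, can be checked numerically (NumPy, illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 3, 4
A = rng.random((n1, n1))
B = rng.random((n2, n2))
V = rng.random((n2, n1))

# Direct: form the full Kronecker product, O((n1*n2)^2) memory
lhs = np.kron(A, B) @ V.flatten(order="F")

# Structured: B V A^T, never forming A ⊗ B (what kron_mvprod does)
rhs = (B @ V @ A.T).flatten(order="F")
```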
The nugget is added in these situations to make up for what should be a positive definite matrix numerically ending up with negative or zero eigenvalues.
So it should just be a really small constant thing and not a parameter (like 1e-10, or whatever the smallest thing is so that the mathematically positive definite covariance matrix is actually positive definite when you do a decomposition).
If the model works without the nugget too, just do that.
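A minimal sketch of the fixed-jitter idea (NumPy; the 1e-8 value is an arbitrary choice, use the smallest value that makes the factorization succeed):

```python
import numpy as np

def chol_with_jitter(K, jitter=1e-8):
    # add a small constant to the diagonal so a mathematically
    # positive-definite covariance is numerically PD before Cholesky
    return np.linalg.cholesky(K + jitter * np.eye(K.shape[0]))

# a nearly singular covariance: two almost-identical inputs make
# plain cholesky fail, the jittered version succeeds
x = np.array([0.5, 0.5 + 1e-9, 1.0])
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
L = chol_with_jitter(K)
```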
If this still seems to overfit too much, then I guess: what do the predictions look like vs. the training samples? Is anything going outside the training set?
How many dimensions is the emulator on input/output?
1 Like
Because I can’t train the emulator on all of the simulator inputs (there are too many of them), I selected the subset to which the model output of interest is most sensitive. Therefore, I do believe there is some noise in the training data that can’t be fully explained by the predictors alone (given the exact same values for the eight predictors, the output could be slightly different). Hence, I thought that treating it as a noisy GP is appropriate even if it’s an emulation problem. The noise is what I called a “nugget” above.
No, I already checked and ensured that the validation data points are all within the training data point range. When trying to reproduce the training period, the trained model performs extremely well (e.g. R^2 > 0.96 and NMBE < 0.6%), but then fails to reproduce the unseen validation period. I do think that the posterior values for the lengthscale for some of the predictors are too small, leading to high-frequency relationships that physically seem counterintuitive - this is where I think the model tries to make up for the noise by overfitting.
For each output, there are 8 inputs (p = 3 and q = 5 in the stan script above).
Oh okay you’d want to estimate that then.
I guess that would mean that between data points the emulator reverts to its prior mean, doesn’t interpolate well, and gives bad predictions.
Can you make marginal predictions for your 8 inputs?
So for any single input, hold the other 7 at some value (maybe their average), and then make a plot of what happens on that input (and put data there too).
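A sketch of that marginal check (NumPy; `predict` in the comment stands for whatever posterior-predictive function you have for the emulator and is hypothetical):

```python
import numpy as np

def marginal_grid(X_train, dim, n_grid=50):
    # vary one input over its observed range while holding the
    # other inputs at their training-set means
    base = X_train.mean(axis=0)
    grid = np.linspace(X_train[:, dim].min(), X_train[:, dim].max(), n_grid)
    Xg = np.tile(base, (n_grid, 1))
    Xg[:, dim] = grid
    return grid, Xg

# usage (hypothetical predict):
#   for d in range(8):
#       grid, Xg = marginal_grid(X, d)
#       plot(grid, predict(Xg))   # overlay the training data too
rng = np.random.default_rng(2)
X = rng.random((30, 8))
grid, Xg = marginal_grid(X, dim=2)
```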
I’ll give this a try, thanks! Is this to identify whether indeed some of the lengthscales are inappropriately estimated?
With regards to the posterior of the noise term (and some of the lengthscales), isn’t it surprising to get most of the mass lying in such a low-probability region according to the prior (probability of 10E-218 for the noise term, 10E-30 for some of the lengthscales)? I would think that by using those priors, I’m telling the model that there is essentially no chance of the lengthscale/noise being that small, yet somehow the posterior still gathers there.
If the length scales are indeed getting too small then I expect what you’ll see is that at data points the GP is accurate and between them it reverts to the mean (and interpolates poorly).
Yeah, it’s telling us the data really wants to do something else. Those second numbers seem like densities not masses though (they’d be masses if you integrated over some volume), but I don’t think you need to compute anything exactly here (if the posterior mean is a few prior standard deviations away from the prior mean – that’s worth raising an eyebrow at, not that there are any hard and fast rules on N-standard deviations here).
(Edit: to -> too)
It might then be good to use monotonic functions. Monotonic functions are less likely to overfit.
You can also set a lower limit for the nugget if you have suitable prior information about the measurement accuracy.
Sorry, I don’t have time to check the model in more detail.