text
stringlengths 100
957k
| meta
stringclasses 1
value |
|---|---|
September 7th
Recall:
• Up until 2010, both short and long forms were sent out.
• Beginning in 2010, only short forms are sent while the American Community Survey is used to make up for the long form.
• Bilingual or "swimlane" forms are automatically sent to heavily Hispanic neighborhoods.
• Printed forms are available in English, Spanish, Chinese, Korean, Vietnamese, and Russian
• Supplementary material that explains how to fill out one of the printed forms is available in many other languages
• Level 1: A missing or questionable item on a person's form, but the person's other answers allow "correction". For example, sex is left blank but the name is Sarah, so the person is assumed female.
• Correction of level 1 problems is called Assignment.
• Level 2: A missing or questionable item on a person's form which cannot be determined by that person's other answers, but can be determined by other people of the same household. For example, person 2's age is left blank but is person 1's child and person 1 is 18. Person 2 can be assumed to be 0-5 years of age.
• Correction of level 2 problems is called Allocation.
• Level 3: A missing or questionable item on a person's form which cannot be determined by that person's other answers, nor by other people's answers of the same household.
• Correction of level 3 problems is called Substitution.
• Substitution is done by looking at data of neighbors in similar households to determine weights and then semi-randomly fabricating the missing data.
Accessing Data:
• Imagine we wanted to determine what percent of the US population has an imputed age.
• This can be done by the following steps:
1. Go to the American FactFinder website
2. Click "Get Data" under Decennial Census
3. Use the SF1 data set and click "Detailed Tables"
4. Click "Add" to add the United States and then click "Next"
5. Click "By Keyword" and search for "Imputation"
6. Select "P44. Imputation of Age (Not Substituted)" and click "Add"
7. Search for Substituted
8. Select "P39. Population Substituted (Total Population)" and click "Add" then "Show Results"
9. Add together the number substituted (3,441,154) and the number with allocated ages (10,400,568) and divide by the total number who have filled out the census (281,421,906) to arrive at the percent of data with an imputed age.
(1)
\begin{align} \frac{10,400,568 + 3,441,154}{281,421,906} = \frac{13,841,722}{281,421,906} = .0492 = 4.92 \% \end{align}
### We also discussed how to download & compare multiple geographies (e.g., Georgia counties) by using the Factfinder's option to download tables as comma-delimited files. (We could use some more notes on this…)
page revision: 2, last edited: 09 Sep 2010 18:36
|
{}
|
# psfex lapack symbols may collide with built in lapack
XMLWordPrintable
## Details
• Type: Bug
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
None
• Story Points:
2
• Sprint:
Science Pipelines DM-S15-6
• Team:
Data Release Production
## Description
On my Mac meas_extensions_psfex fails to build due to the numpy config test failing. "import numpy" fails with:
dlopen(/Users/rowen/LSST/lsstsw/anaconda/lib/python2.7/site-packages/numpy/linalg/lapack_lite.so, 2): can't resolve symbol __NSConcreteStackBlock in /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib because dependent dylib #1 could not be loaded in /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libvMisc.dylib
Our best guess (see discussion in Data Management 2015-08-14 at approx. 1:57 PM Pacific time) is that the special lapack functions in psfex are colliding with the lapack that anaconda uses.
In case it helps I see this on OS X 10.9.5. I do not see it on lsst-dev.
## Activity
Hide
Nate Lust added a comment -
Please review these changes, minor though they are. The relivant changes are in the files, configure.ac, lib/SConscript, lapack_functions/SConscript . All the other changed files are auto-generated by the auto* tools. The updated make files have all been tested on multiple systems and guid the build process just fine.
Show
Nate Lust added a comment - Please review these changes, minor though they are. The relivant changes are in the files, configure.ac, lib/SConscript, lapack_functions/SConscript . All the other changed files are auto-generated by the auto* tools. The updated make files have all been tested on multiple systems and guid the build process just fine.
Hide
Russell Owen added a comment -
This looks like a good fix. It is a pity about all the extra auto-generated files needing update, but there it is.
Show
Russell Owen added a comment - This looks like a good fix. It is a pity about all the extra auto-generated files needing update, but there it is.
Hide
Nate Lust added a comment -
This fix is now on master, and has successfully been built with lsstsw on os x in addition to linux machines
Show
Nate Lust added a comment - This fix is now on master, and has successfully been built with lsstsw on os x in addition to linux machines
## People
• Assignee:
Nate Lust
Reporter:
Russell Owen
Reviewers:
Russell Owen
Watchers:
Jim Bosch, Nate Lust, Robert Lupton, Russell Owen
|
{}
|
# Non Equispaced / Non Uniform DFT Bandwidth
I need to construct Fourier transform of non-equispaced data.
That is, I have signal $s(t)$, $t\in[0,T]$ sampled at non-equispaced points $t_k$, $k=0...N-1$ with sample values $s_k = s(t_k)$. For Fourier transform I use approximation of integral: $$\hat S(\omega) = \int\limits_{-\infty}^\infty s(t)e^{-i\omega t}dt \approx \hat S_d(\omega) = \sum_{k=0}^{N-1}s_ke^{-i\omega t_k}\Delta t_k \tag{1}$$ where $\Delta t_k = (t_{k+1}-t_{k-1})/2$. As sampling points in frequency domain I choose $\omega_n = \frac{2\pi}{T}$.
My question is: as I can evaluate (1) for any $\omega$, what is the maximum $\omega$ such $\hat S_d(\omega)$ "adequately" represents $\hat S(\omega)$? What is the maximum of $n$ for $\omega_n$ I can use? For DFT we got Nyquist frequency. Do we have something similar for NDFT? Any references would be appreciated.
Note: I'm aware of such things as NDFT and NFFT. Though formula for NDFT, as presented in most papers, is $$\hat S_d(\omega_n) = \sum_{k=0}^{N-1}s_ke^{-i\omega_n t_k}$$ I strongly believe that I need to use formula (1) as I'm trying to build a periodogram.
And I'm not interested in fast ways of computing NDFT yet, so I'm not considering NFFT.
• i dunno what the "N" means in NDFT or NFFT. if you are representing your non-equispaced data as $$s(t) = \sum\limits_{k=0}^{N-1} s_k \delta(t-t_k)$$ then your $\hat{S}_d(\omega)$ formula is correct in the continuous-frequency domain. still not a DFT. if you want to DFT, then you have to interpolate your non-equispaced data and uniformly (re)sample it. as @rrogers had implied. dunno if i agree with the $sinc$ formula in his/her answer. i don't think i do. Jan 8 '15 at 23:44
• Given some conditions on the non uniform sampling you may extract the exact same information as with uniform sampling.
– Royi
Feb 13 at 12:29
Your formula isn't accurate. Since you aren't trying for speed and internally consider the interpolation between data points as a step/pulse then the formula should be.
$$S(\omega)=\sum_{n=0}^{N-1}s_{k}sinc(\frac{\omega\triangle\left(t_{k}\right)}{2})e^{-wt_{k}}$$ Having said that, what is "adequate"? This is, more or less, subjective. The original Nyquest criterion is based upon the result that any (freq < sam/2) (I think there is some small quibbling on that) can be perfectly reconstructed; if it and the sampling lasts long enough. It says nothing about out of band signals; but the actual calculations will show aliasing.
It behooves you to generate the signals and interference that you envision and run it through your proposed algorithm. An alternative is to treat your system as a filter and try to "deconvolve" it. This is hazardous but can be done. For a in depth look (with the hazards) read "Introduction to the Mathematics of Inversion in Remote Sensing and Indirect Measurements" SIAM. Now MRI/CAT scanners use various means of reconstructed signals. All of the mathematicians seem to think that their technical success is an example of extreme luck; although I am sure that the designers might take exception to that. You might try borrowing some of their mathematical techniques though. Now in general instrumentation what we concentrate on is "noise temperature" and such. This can be translated as: how far will your system signal/noise (or signal/distortion) be from the optimal number. If you can get within 1db of the optimal S/N value then you should probably give up. Of course this implies that you can calculate the optimal S/N. In most instruments this can be done and this criterion gives you a stopping point; a point to give up and look for something else to do :)
|
{}
|
• Free Practice Test & Review
How would you score if you took the GMAT
Available with Beat the GMAT members only code
• 1 Hour Free
BEAT THE GMAT EXCLUSIVE
Available with Beat the GMAT members only code
• Award-winning private GMAT tutoring
Register now and save up to $200 Available with Beat the GMAT members only code • Free Trial & Practice Exam BEAT THE GMAT EXCLUSIVE Available with Beat the GMAT members only code • Get 300+ Practice Questions 25 Video lessons and 6 Webinars for FREE Available with Beat the GMAT members only code • Free Veritas GMAT Class Experience Lesson 1 Live Free Available with Beat the GMAT members only code • 5 Day FREE Trial Study Smarter, Not Harder Available with Beat the GMAT members only code • Magoosh Study with Magoosh GMAT prep Available with Beat the GMAT members only code • 5-Day Free Trial 5-day free, full-access trial TTP Quant Available with Beat the GMAT members only code ## From 530 to 560 to 690 (Q49, V35) This topic has 9 member replies jonnyutz Newbie | Next Rank: 10 Posts Joined 10 Nov 2012 Posted: 8 messages GMAT Score: 690 #### From 530 to 560 to 690 (Q49, V35) Thu Dec 01, 2016 11:54 am After finishing my GMAT yesterday, I feel like I must share the whole process as I read many of these stories to keep me going. My journey took 7 years with lots of breaks and frustrations. My aim has always been to get into a part-time program on the West Coast (Berkeley, UCLA, USC). Here is the short story: When I was still in college in 2008 I took the GMAT my junior year. I didn't study much, but I thought hey why not. Well needless to say it was quite humbling. I got a 530 (Q40, V21). I took a break and decided to actually put real effort behind my studying this time. In 2012, I signed up for the Manhattan GMAT live course in Chicago. It was a fun class, but it was very heavily focused on the quant section. As you can see before, my problem was really in the verbal. I read all of the verbal books that were provided by MGMAT. After 9 weeks of studying I got a 560 (Q43, V25). I was so disappointed that I lost all of my motivation. I really thought maybe this test really tested aptitude and I just didn't have it together. After 4 years of working I decided to give it one more shot. I really read a lot on the forums on the best way to self study. I didn't want to spend another$1000 or more on a course to find out that my score went up 30 points. So I created my own study plan which consisted of the following:
- E-GMAT Sentence Correction Course (completed the course at least 6 times and took notes. **I am a native speaker and this course was still VERY helpful)
- MGMAT Sentence Correction Guide (read this all the way through 2 times)
- Powerscore Critical Reasoning Bible (read this all the way through 2 times)
- Kaplan GMAT Math Workbook (just completed this one time through with all practice exercises. Just used to review)
- Ordered subscription for The Economist (read about 4 large articles per week)
- Ordered subscription for Scientific American (read about half of the articles per month)
- Read To Kill A Mockingbird and The Great Gatsby (recommended from another user)
- GMAT OG 2017 (finished every problem in the book once, and ones I missed I did twice or three times)
- GMAT OG Verbal Guide 2017 (did all Sentence Correction twice, didn't complete RC or CR sections)
After I completed the above I started really hitting the GMAT PREP mock exams. I ordered the Exam Pack 1 which includes GMAT PREP 3 and 4 tests. I received the following scores on these:
GMAT PREP 1 - 710 (Q47, V41)
GMAT Prep 2 - 700 (Q47, V39)
GMAT Prep 3 - 610 (Q43, V31)
GMAT Prep 4 - 650 (Q45, V35)
(2 repeat questions)
GMAT Prep 3 and 4 really felt a lot more difficult in the verbal sections. The questions got really tough, but turns out it was good practice.
Actual GMAT: The quant felt a lot easier than GMAT Prep 3 and 4. The verbal felt slightly easier, but on par with GMAT Prep 1 and 2. The IR felt a LOT easier on the actual GMAT than the GMAT prep. I wonder if they made it easier...?
Actual GMAT score: 690 (Q49, V35). This was a higher quant score than I ever received on any practice exam, and an average verbal score from my exams. I was so relieved to have this score and now I can finally feel competitive and apply into my desired schools. I plan to apply R1 for fall 2018. I really recommend the curriculum above as I really feel like each piece worked to complete the whole picture. I studied 2 hours every weeknight (worked full time) and 2 hours each morning and 2 hours in the evening on the weekend. It took about 12 weeks to complete the above at that rate. By the way I became a bit obsessed with the exam and was totally distraught when I saw the GMAT Prep 3 pop up with a 610. At that time I took 5 days off to relax which was very helpful. I would recommend this if needed. However, I should say I went to Europe for a week where I still took my GMAT books with me.
Thanks to all that post to this site, it was extremely helpful and I wouldn't know how to get this score without it!
Last edited by jonnyutz on Mon Dec 05, 2016 8:31 pm; edited 1 time in total
Need free GMAT or MBA advice from an expert? Register for Beat The GMAT now and post your question in these forums!
GMATinsight Legendary Member
Joined
10 May 2014
Posted:
998 messages
Followed by:
21 members
Thanked:
203 times
Fri Dec 02, 2016 6:06 am
jonnyutz wrote:
After finishing my GMAT yesterday, I feel like I must share the whole process as I read many of these stories to keep me going. My journey took 7 years with lots of breaks and frustrations. My aim has always been to get into a part-time program on the West Coast (Berkeley, UCLA, USC). Here is the short story:
When I was still in college in 2008 I took the GMAT my junior year. I didn't study much, but I thought hey why not. Well needless to say it was quite humbling. I got a 530 (Q40, V21).
I took a break and decided to actually put real effort behind my studying this time. In 2012, I signed up for the Manhattan GMAT live course in Chicago. It was a fun class, but it was very heavily focused on the quant section. As you can see before, my problem was really in the verbal. I read all of the verbal books that were provided by MGMAT. After 9 weeks of studying I got a 560 (Q43, V25). I was so disappointed that I lost all of my motivation. I really thought maybe this test really tested aptitude and I just didn't have it together.
After 4 years of working I decided to give it one more shot. I really read a lot on the forums on the best way to self study. I didn't want to spend another $1000 or more on a course to find out that my score went up 30 points. So I created my own study plan which consisted of the following: - E-GMAT Sentence Correction Course (completed the course at least 6 times and took notes. **I am a native speaker and this course was still VERY helpful) - MGMAT Sentence Correction Guide (read this all the way through 2 times) - Powerscore Critical Reasoning Bible (read this all the way through 2 times) - Kaplan GMAT Math Workbook (just completed this one time through with all practice exercises. Just used to review) - Ordered subscription for The Economist (read about 4 large articles per week) - Ordered subscription for Scientific American (read about half of the articles per month) - Read To Kill A Mockingbird and The Great Gatsby (recommended from another user) - GMAT OG 2017 (finished every problem in the book once, and ones I missed I did twice or three times) - GMAT OG Verbal Guide 2017 (did all Sentence Correction twice, didn't complete RC or CR sections) After I completed the above I started really hitting the GMAT PREP mock exams. I ordered the Exam Pack 1 which includes GMAT PREP 3 and 4 tests. I received the following scores on these: GMAT PREP 1 - 710 GMAT Prep 2 - 700 GMAT Prep 3 - 610 GMAT Prep 4 - 650 (2 repeat questions) GMAT Prep 3 and 4 really felt a lot more difficult in the verbal sections. The questions got really tough, but turns out it was good practice. Actual GMAT: The quant felt a lot easier than GMAT Prep 3 and 4. The verbal felt slightly easier, but on par with GMAT Prep 1 and 2. The IR felt a LOT easier on the actual GMAT than the GMAT prep. I wonder if they made it easier...? Actual GMAT score: 690 (Q49, V35). This was a higher quant score than I ever received on any practice exam, and an average verbal score from my exams. I was so relieved to have this score and now I can finally feel competitive and apply into my desired schools. I plan to apply R1 for fall 2018. I really recommend the curriculum above as I really feel like each piece worked to complete the whole picture. I studied 2 hours every weeknight (worked full time) and 2 hours each morning and 2 hours in the evening on the weekend. It took about 12 weeks to complete the above at that rate. By the way I became a bit obsessed with the exam and was totally distraught when I saw the GMAT Prep 3 pop up with a 610. At that time I took 5 days off to relax which was very helpful. I would recommend this if needed. However, I should say I went to Europe for a week where I still took my GMAT books with me. Thanks to all that post to this site, it was extremely helpful and I wouldn't know how to get this score without it! Many congratulations!!! What a good and detailed review... _________________ Prosper!!! Bhoopendra Singh & Sushma Jha "GMATinsight" Contact Us Testimonials To register for One-on-One FREE ONLINE DEMO Class Call/e-mail e-mail: info@GMATinsight.com Mobile: +91-9999687183 / +91-9891333772 Get in touch for SKYPE-Based Interactive Private Tutoring One-On-One Classes fee - US$40 per hour &
for FULL COURSE (38 LIVE Sessions)-US\$1000
"Please click on 'Thank' if you like my post/response."
GMATinsight
Dwarka, New Delhi-110075 and Shivalik New Delhi
deepagg84 Newbie | Next Rank: 10 Posts
Joined
11 Mar 2015
Posted:
1 messages
Wed Dec 07, 2016 1:00 pm
Thank you for posting your experience. What did you do to practice for RC?
jonnyutz Newbie | Next Rank: 10 Posts
Joined
10 Nov 2012
Posted:
8 messages
GMAT Score:
690
Wed Dec 07, 2016 1:46 pm
For RC I simply started by reading a LOT more. I read the two novels I mentioned in my post and read in depth articles every week. I tried to read several hours and whenever I had a moment. This started long before I even studied.
I did read the Manhattan GMAT book for RC, but it wasn't as helpful as just reading as much as you can. The comprehension starts to come after hours of reading. When it came to actual practice, I used the OG 2017 to get questions under my belt.
I hope that helps and happy to answer any other questions.
Matthewdeklerk Newbie | Next Rank: 10 Posts
Joined
08 Dec 2016
Posted:
3 messages
Thu Dec 08, 2016 8:15 am
Dude well done! I started off my GMAT journey 4 months ago and was feeling pretty despondent after getting a 560. Looking back I can't believe how much time and effort it took! I ended up with a 680 thanks to Target Test Prep and some good long hours of study.
Anyways, anyone else out there who is struggling with low scores, just remember that practice makes perfect!
Feel free to ask me any questions, I know I gained a lot reading from other peoples reviews over my time
ckinney1629 Newbie | Next Rank: 10 Posts
Joined
04 Nov 2015
Posted:
3 messages
Sun Jan 08, 2017 5:36 am
Thanks for sharing your journey, very insightful and it has given me the motivation to continue the process. What was your schedule like when it came to studying, i.e.: did you study one day Quant and the next Verbal? How often were you reading? You mentioned you would read several hours, was this daily? I too am trying to increase the amount of reading I do, as I have seen an increase in my reading comprehension.
jonnyutz Newbie | Next Rank: 10 Posts
Joined
10 Nov 2012
Posted:
8 messages
GMAT Score:
690
Sun Jan 08, 2017 9:45 am
I'm glad it can be motivating. It's nice to hear my experience can get another person motivated to keep going.
My schedule started nearly exclusively with Verbal since it was my weak point. I would recommend starting with whatever your weakest point is. Specifically, mine was Sentence Correction so I spent several weeks only studying SC. After that, I went to Quant and spent about 2 weeks to get up to speed knowing it was my strong point and then combined the two in the last 3 weeks of studying.
As far as how often I was reading... it was every day. This started long before my true 12 weeks of actual studying. I read everyday for at least 1 hour for 2 months. It was similar to going to the gym... some days you don't feel like it at all, but you just have to push yourself to do it. Even if you comprehend nothing in the beginning, just keep doing it until you do.
Hope that helps!
ckinney1629 Newbie | Next Rank: 10 Posts
Joined
04 Nov 2015
Posted:
3 messages
Tue Jan 10, 2017 9:21 am
Thanks for the quick response, it definitely helps!
If you don't mind me asking, why did you choose Powerscore CR Bible to study CR from? I have both Powerscore and MGMAT and am trying to figure out which one is better. I have read both, but am not sure which one to focus on.
jonnyutz Newbie | Next Rank: 10 Posts
Joined
10 Nov 2012
Posted:
8 messages
GMAT Score:
690
Tue Jan 10, 2017 9:27 am
I ended up using Powerscore because of what people send on this site mainly. A few years ago when I studied I used MGMAT, and it was alright to be honest, but something with how Powerscore is written really made a difference. I just read it cover to cover twice honestly and it started to stick with me.
I think MGMAT is super methodological if that's what you need. Powerscore wasn't about tricks about solving but really understanding the logic behind the problems. I didn't use any sort of trick of reading the question first or anything like that. I was able to just read the question and answer it honestly based on logic. I think MGMAT misses that concept.
Hope that helps!
ckinney1629 Newbie | Next Rank: 10 Posts
Joined
04 Nov 2015
Posted:
3 messages
Tue Jan 10, 2017 9:47 am
It's interesting you mentioned the logic part of answering the questions, as I recently read to answer CR questions, it is more about logic, than tricks.
Thanks!
### Best Conversation Starters
1 Vincen 152 topics
2 lheiannie07 61 topics
3 Roland2rule 49 topics
4 LUANDATO 44 topics
5 ardz24 40 topics
See More Top Beat The GMAT Members...
### Most Active Experts
1 Brent@GMATPrepNow
GMAT Prep Now Teacher
140 posts
2 Rich.C@EMPOWERgma...
EMPOWERgmat
110 posts
3 EconomistGMATTutor
The Economist GMAT Tutor
109 posts
4 GMATGuruNY
The Princeton Review Teacher
107 posts
5 DavidG@VeritasPrep
Veritas Prep
72 posts
See More Top Beat The GMAT Experts
|
{}
|
Conducts an influence analysis of a meta-analysis generated by meta functions, and allows to produce influence diagnostic plots.
InfluenceAnalysis(x, random = FALSE, subplot.heights = c(30,18),
subplot.widths = c(30,30), forest.lims = 'default',
return.separate.plots = FALSE, text.scale = 1)
## Arguments
x An object of class meta, generated by the metabin, metagen, metacont, metacor, metainc, metarate or metaprop function. Logical. Should the random-effects model be used to generate the influence diagnostics? Uses the method.tau specified in the meta object if one of "DL", "HE", "SJ", "ML", "REML", "EB", "PM", "HS" or "GENQ" (to ensure compatibility with the metafor package). Otherwise, the DerSimonian-Laird ("DL"; DerSimonian & Laird, 1986) estimator is used. FALSE by default. Concatenated array of two numerics. Specifies the heights of the first (first number) and second (second number) row of the overall plot generated when plotting the results. Default is c(30,18). Concatenated array of two numerics. Specifies the widths of the first (first number) and second (second number) column of the overall results plot generated when plotting the results. Default is c(30,30). Concatenated array of two numerics. Specifies the x-axis limits of the forest plots generated when plotting the results. Use "default" if standard settings should be used (this is the default). Logical. When plotted, should the influence plots be shown as separate plots in lieu of returning them in one overall plot? Positive numeric. Scaling factor for the text geoms used when plotting the results. Values <1 shrink the text, while values >1 increase the text size. Default is 1.
## Value
A list object of class influence.analysis containing the following objects is returned (if results are saved to a variable):
• BaujatPlot: The Baujat plot
• InfluenceCharacteristics: The Viechtbauer-Cheung influence characteristics plot
• ForestEffectSize: The forest plot sorted by effect size
• ForestI2: The forest plot sorted by between-study heterogeneity
• Data: A data.frame containing the data used for plotting.
Otherwise, the function prints out (1) the results of the Leave-One-Out Analysis (sorted by $$I^2$$), (2) the Viechtbauer-Cheung Influence Diagnostics and (3) Baujat Plot data (sorted by heterogeneity contribution), in this order. Plots can be produced manually by plugging a saved object of class InfluenceAnalysis generated by the function into the plot function. It is also possible to only produce one specific plot by specifying the name of the plot as a character in the second argument of the plot call (see Examples).
## Details
The function conducts an influence analysis using the "Leave-One-Out" paradigm internally and produces data for four influence diagnostics. Diagnostic plots can be produced by saving the output of the function to an object and plugging it into the plot function. These diagnostics may be used to determine which study or effect size may have an excessive influence on the overall results of a meta-analysis and/or contribute substantially to the between-study heterogeneity in an analysis. This may be used for outlier detection and to test the robustness of the overall results found in an analysis. Results for four diagnostics are calculated:
• Baujat Plot: Baujat et al. (2002) proposed a plot to evaluate heterogeneity patterns in a meta-analysis. The x-axis of the Baujat plot shows the overall heterogeneity contribution of each effect size while the y-axis shows the influence of each effect size on the pooled result. The baujat function is called internally to produce the results. Effect sizes or studies with high values on both the x and y-axis may be considered to be influential cases; effect sizes or studies with high heterogeneity contribution (x-axis) and low influence on the overall results can be outliers which might be deleted to reduce the amount of between-study heterogeneity.
• Influence Characteristics: Several influence analysis diagnostics proposed by Viechtbauer & Cheung (2010). Results are calculated by an internal call to influence.rma.uni. In the console output, potentially influential studies are marked with an asterisk (*). When plotted, effect sizes/studies determined to be influential cases using the "rules of thumb" described in Viechtbauer & Cheung (2010) are shown in red. For further details, see the documentation of the influence.rma.uni function.
• Forest Plot for the Leave-One-Out Analysis, sorted by Effect Size: This displays the effect size and $$I^2$$-heterogeneity when omitting one of the $$k$$ studies each time. The plot is sorted by effect size to determine which studies or effect sizes particularly affect the overall effect size. Results are generated by an internal call to metainf.
• Forest Plot for the Leave-One-Out Analysis, sorted by $$I^2$$: see above; results are sorted by $$I^2$$ to determine the study for which exclusion results in the greatest reduction of heterogeneity.
## References
Harrer, M., Cuijpers, P., Furukawa, T.A, & Ebert, D. D. (2019). Doing Meta-Analysis in R: A Hands-on Guide. DOI: 10.5281/zenodo.2551803. Chapter 6.3
DerSimonian R. & Laird N. (1986), Meta-analysis in clinical trials. Controlled Clinical Trials, 7, 177–188.
Viechtbauer, W., & Cheung, M. W.-L. (2010). Outlier and influence diagnostics for meta-analysis. Research Synthesis Methods, 1, 112–125.
influence.rma.uni, metainf, baujat
## Examples
if (FALSE) {
data(ThirdWave)
# Create 'meta' meta-analysis object
suppressPackageStartupMessages(library(meta))
meta = metagen(TE, seTE, studlab = paste(ThirdWave\$Author), data=ThirdWave)
# Run influence analysis; specify to return separate plots when plotted
inf.an = InfluenceAnalysis(meta, return.separate.plots = TRUE)
# Show results in console
inf.an
# Generate all plots
plot(inf.an)
# For baujat plot
plot(inf.an, "baujat")
# For influence diagnostics plot
plot(inf.an, "influence")
# For forest plot sorted by effect size
plot(inf.an, "ES")
# For forest plot sorted by I-squared
plot(inf.an, "I2")}
|
{}
|
1. Engineering
2. Electronics Engineering
3. consider the continuous time system given by the state equations...
# Question: consider the continuous time system given by the state equations...
###### Question details
consider the continuous time system given by the state equations
x(t)=$\left[\begin{array}{cc}-1.5& 1\\ 1& 0\end{array}\right]$ x(t) +$\left[\begin{array}{c}1\\ 0\end{array}\right]$ u(t)
y1(t)=$\left[\begin{array}{cc}1& 0\end{array}\right]$ x(t)
y2(t)=$\left[\begin{array}{cc}0& -1\end{array}\right]$ x(t)
find the system transfer functions from u to y1 and from u to y2
Design a state feedback control law integral action to achieve robust tracking of step references r in y2(t) that is
u(t)=-Kx(t)-K ${\mathrm{Z}}_{\mathrm{z}}$(t)
z(t)=r-y2(t)
find the matrices K and Kz to place all the closed loop eigenvalues at -2. would it be possible to achieve robust tracking of constant references also in y1(t)? justify your answer.
write the equations of an observer for the system (1) using only y1(t) as measurement and design the observer gain in order to obtain an estimate x(t) of the state with observer eigenvalues at -10
|
{}
|
The principle of ‘parallax’ in section 2.3.1 is used in the determination of distances of very distant stars. The baseline AB is the line joining the Earth’s two locations six months apart in its orbit around the Sun. That is, the baseline is about the diameter of the Earth’s orbit ≈ 3 × 1011m. However, even the nearest stars are so distant that with such a long baseline, they show parallax only of the order of 1” (second) of arc or so. A parsec is a convenient unit of length on the astronomical scale. It is the distance of an object that will show a parallax of 1” (second of arc) from opposite ends of a baseline equal to the distance from the Earth to the Sun. How much is a parsec in terms of metres?
Asked by Abhisek | 1 year ago | 89
##### Solution :-
Diameter of Earth’s orbit = 3 × 1011 m
Radius of Earth’s orbit r = 1.5 × 1011 m
Let the distance parallax angle be θ=1″ (s)
Let the distance of the star be D.
Parsec is defined as the distance at which the average
radius of the Earth’s orbit subtends an angle of 1″
Therefore, D = $$\dfrac{1.5 × 10^{11}}{4.847 × 10^{–6}}$$
= 0.309 x 1017
Hence 1 parsec ≈ 3.09 × 1016 m.
Answered by Pragya Singh | 1 year ago
### Related Questions
#### It is a well-known fact that during a total solar eclipse-the disk of the moon almost completely
It is a well-known fact that during a total solar eclipse-the disk of the moon almost completely covers the disk of the Sun. From this fact and from the information you can gather from examples 2.3 and 2.4, determine the approximate diameter of the moon.
#### The farthest objects in our Universe discovered by modern astronomers are so distant that light
The farthest objects in our Universe discovered by modern astronomers are so distant that light emitted by them takes billions of years to reach the Earth. These objects (known as quasars) have many puzzling features, which have not yet been satisfactorily explained. What is the distance in km of a quasar from which light takes 3.0 billion years to reach us?
#### A SONAR (sound navigation and ranging) uses ultrasonic waves to detect and locate objects
A SONAR (sound navigation and ranging) uses ultrasonic waves to detect and locate objects underwater. In a submarine equipped with a SONAR, the time delay between generation of a probe wave and the reception of its echo after reflection from an enemy submarine is found to be 77.0 s. What is the distance of the enemy submarine? (Speed of sound in water = 1450 m s–1).
|
{}
|
The iconicity of thought and its moving pictures: following the sinuosities of Peirce's path
Gaultier, Benoît (2017). The iconicity of thought and its moving pictures: following the sinuosities of Peirce's path. Transactions of the Charles S. Peirce Society, 53(3):374-399.
Abstract
I endeavor to determine exactly what Peirce's thesis of the iconicity of thought means and implies and how far it can be maintained. In particular, I argue that while for Peirce necessary reasoning requires an iconic dimension in order to be carried out, it does not follow that this is true of reasoning in general. I then suggest a way in which the thesis that ‘it is by icons only that we really reason’ could be defended on the basis of Peirce's philosophical system. I consider difficulties of the conception of thought upon which this defense is based, namely, that thinking is processual in nature and that our thoughts are in continuity with each other. More specifically, I argue that this view generates a tension within Peirce's system, and I reject it on the basis of Geachian arguments that, at the same time, shed new light on Peirce's most insightful claims concerning the nature of thought
Abstract
I endeavor to determine exactly what Peirce's thesis of the iconicity of thought means and implies and how far it can be maintained. In particular, I argue that while for Peirce necessary reasoning requires an iconic dimension in order to be carried out, it does not follow that this is true of reasoning in general. I then suggest a way in which the thesis that ‘it is by icons only that we really reason’ could be defended on the basis of Peirce's philosophical system. I consider difficulties of the conception of thought upon which this defense is based, namely, that thinking is processual in nature and that our thoughts are in continuity with each other. More specifically, I argue that this view generates a tension within Peirce's system, and I reject it on the basis of Geachian arguments that, at the same time, shed new light on Peirce's most insightful claims concerning the nature of thought
Statistics
Citations
Dimensions.ai Metrics
1 citation in Web of Science®
1 citation in Scopus®
|
{}
|
# Meaning of this operator
## Main Question or Discussion Point
From the "Lie Group" theory point of view we know that:
$$p$$ := is the generator for traslation (if the Lagrangian is invariant under traslation then p is conserved)
$$L$$:= s the generator for rotation (if the Lagrangian is invariant under traslation then L is conserved)
(I'm referring to momentum p and Angular momentum L, although the notation is obvious )
My question is if we take the "Lie derivative" and "covariant derivative" as a generalization of derivative for curved spaces.. if we suppose they're Lie operators..what's their meaning?..if the momentum operator acts like this:
$$pf(x)\rightarrow \frac{df}{dx}$$ derivative of the function..could the same holds for Lie and covariant derivative (covariant derivative is just a generalization to gradient, and i think that Lie derivatives can be expressed in some cases as Covariant derivatives, in QM the momentum vector applied over the wave function is just the gradient of the $$\psi$$
## Answers and Replies
Related Linear and Abstract Algebra News on Phys.org
fresh_42
Mentor
You confuse several levels here. What you call Lie operator is a left (or right) invariant vector field, an element of a Lie algebra. The example you gave for $p$ is just a possible representation, better realization of a Lie algebra. If we come from a group of smooth functions we will get a natural operation of the Lie algebra elements as Lie derivatives on these functions. Your example looks like the Poincaré group (algebra). For a general context of Lie derivatives see:
https://www.physicsforums.com/insights/pantheon-derivatives-part-ii/ and following parts
And here is an example of a realization of $\mathfrak{sl}(2) \cong \mathfrak{su}(2)$ as differential operators on $\mathcal{C}^\infty(\mathbb{R})$ (sec. 6.2 and 7.3):
https://www.physicsforums.com/insights/journey-manifold-su2-part-ii/
|
{}
|
# Mathematician:Benjamin Peirce
## Mathematician
American mathematician and logician who has been called "The founding father of modern abstract algebra".
Like George Boole, attempted to put logic on a sound mathematical footing.
He also contributed to many other areas of mathematics.
Proved that there is no odd perfect number with fewer than four prime factors.
Introduced the terms idempotence and nilpotence in $1870$, in his work Linear Associative Algebra.
Father of Charles Sanders Peirce.
Not to be confused with Benjamin Franklin "Hawkeye" Pierce.
American
## History
• Born: 4 April 1809, Salem, Massachusetts, USA
• Died: 6 Oct 1880, Cambridge, Massachusetts, USA
## Publications
• 1835: An Elementary Treatise on Plane Trigonometry
• 1836: First Part of an Elementary Treatise on Spherical Trigonometry
• 1836: An Elementary Treatise on Sound
• 1837: An Elementary Treatise on Algebra : To which are added Exponential Equations and Logarithms
• 1837: An Elementary Treatise on Plane and Solid Geometry
• 1840: An Elementary Treatise on Plane and Spherical Trigonometry
• 1841: An Elementary Treatise on Curves, Functions, and Forces, Volume 1
• 1846: An Elementary Treatise on Curves, Functions, and Forces, Volume 2
• 1855: Physical and Celestial Mathematics
• 1870: Linear Associative Algebra
• 1899: A Short Table of Integrals (2nd Edition)
|
{}
|
## Return to Answer
2 added 1 characters in body
The initial original problem with the indices was that they were used to label coordinates, so mathematicians preferred more and more coordinate independent operators, while physicists preferred continued to use indices. Then, Penrose realized that it has to be something beyond the indices that makes them useful - mainly the Einstein summation convention - and proposed the abstract index notation. This notation is almost identical in form with that of coordinate indices, but it is invariant, like the notation used by mathematicians, and maintains the simplifications due to the use of indices. The indices are not interpreted as labeling coordinates, but as representing the type of vectors and tensors and how they act on each other.
I think that there are advantages and disadvantages in both notations. Though, many tensor operations, especially contraction and type change, are easier to define and perform by using indices.
The following fields can benefit of this notation: Linear Algebra, Representation Theory, Group Theory, Differential Geometry.
This notation can naturally be related to Penrose's diagrammatic notation.
1 [made Community Wiki]
The initial problem with the indices was that they were used to label coordinates, so mathematicians preferred more and more coordinate independent operators, while physicists preferred to use indices. Then, Penrose realized that it has to be something beyond the indices that makes them useful - mainly the Einstein summation convention - and proposed the abstract index notation. This notation is almost identical in form with that of coordinate indices, but it is invariant, like the notation used by mathematicians, and maintains the simplifications due to the use of indices. The indices are not interpreted as labeling coordinates, but as representing the type of vectors and tensors and how they act on each other.
I think that there are advantages and disadvantages in both notations. Though, many tensor operations, especially contraction and type change, are easier to define and perform by using indices.
The following fields can benefit of this notation: Linear Algebra, Representation Theory, Group Theory, Differential Geometry.
This notation can naturally be related to Penrose's diagrammatic notation.
|
{}
|
# Put our Knowledge and Writing Skills to Work for you
As well as consultancy, research and interim work, peterjamesthomas.com Ltd. helps organisations in a number of other ways. The recently launched Data Strategy Review Service is just one example.
Another service we provide is writing White Papers for clients. Sometimes the labels of these are white [1] as well as the paper. Sometimes Peter James Thomas is featured as the author. White Papers can be based on themes arising from articles published here, they can feature findings from de novo research commissioned in the data arena, or they can be on a topic specifically requested by the client.
Seattle-based Data Consultancy, Neal Analytics, is an organisation we have worked with on a number of projects and whose experience and expertise dovetails well with our own. They recently commissioned a White Paper expanding on our 2018 article, Building Momentum – How to begin becoming a Data-driven Organisation. The resulting paper, The Path to Data-Driven, has just been published on Neal Analytics’ site (they have a lot of other interesting content, which I would recommend checking out):
If you find the articles published on this site interesting and relevant to your work, then perhaps – like Neal Analytics – you would consider commissioning us to write a White Paper or some other document. If so, please just get in contact, or simply schedule an introductory ‘phone call. We have a degree of flexibility on the commercial side and will most likely be able to come up with an approach that fits within your budget. Although we are based in the UK, commissions – like Neal Analytics’s one – from organisations based in other countries are welcome.
Notes
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
# A Picture Paints a Thousand Numbers
Introduction
The recent update of The Data & Analytics Dictionary featured an entry on Charts. Entries in The Dictionary are intended to be relatively brief [1] and also the layout does not allow for many illustrations. Given this, I have used The Dictionary entries as a basis for this slightly expanded article on the subject of chart types.
A Chart is a way to organise and Visualise Data with the general objective of making it easier to understand and – in particular – to discern trends and relationships. This article will cover some of the most frequently used Chart types, which appear in alphabetical order.
Note: Here an “axis” is a fixed reference line (sometimes invisible for stylistic reasons) which typically goes vertically up the page or horizontally from left to right across the page (but see also Radar Charts). Categories and values (see below) are plotted on axes. Most charts have two axes. Throughout I use the word “category” to refer to something discrete that is plotted on an axis, for example France, Germany, Italy and The UK, or 2016, 2017, 2018 and 2019. I use the word “value” to refer to something more continuous plotted on an axis, such as sales or number of items etc. With a few exceptions, the Charts described below plot values against categories. Both Bubble Charts and Scatter Charts plot values against other values. I use “series” to mean sets of categories and values. So if the categories are France, Germany, Italy and The UK; and the values are sales; then different series may pertain to sales of different products by country.
Index
Bar & Column Charts
Clustered Bar Charts, Stacked Bar Charts
Bar Charts is the generic term, but this is sometimes reserved for charts where the categories appear on the vertical axis, with Column Charts being those where categories appear on the horizontal axis. In either case, the chart has a series of categories along one axis. Extending righwards (or upwards) from each category is a rectangle whose width (height) is proportional to the value associated with this category. For example if the categories related to products, then the size of rectangle appearing against Product A might be proportional to the number sold, or the value of such sales.
| © JMB (2014) | Used under a Creative Commons licence |
The exhibit above, which is excerpted from Data Visualisation – A Scientific Treatment, is a compound one in which two bar charts feature prominently.
Sometimes the bars are clustered to allow multiple series to be charted side-by-side, for example yearly sales for 2015 to 2018 might appear against each product category. Or – as above – sales for Product A and Product B may both be shown by country.
Another approach is to stack bars or columns on top of each other, something that is sometimes useful when comparing how the make-up of something has changed.
Bubble Charts
Bubble Charts are used to display three dimensions of data on a two dimensional chart. A circle is placed with its centre at a value on the horizontal and vertical axes according to the first two dimensions of data, but then then the area (or less commonly the diameter [2]) of the circle reflects the third dimension. The result is reminiscent of a glass of champagne (then maybe this says more about the author than anything else).
You can also use bubble charts in a quite visceral way, as exemplified by the chart above. The vertical axis plots the number of satellites of the four giant planets in the Solar System. The horizontal axis plots the closest that they ever come to the Sun. The size of the planets themselves is proportional to their relative sizes.
Cartograms
There does not seem to be a generally accepted definition of Cartograms. Some authorities describe them as any diagram using a map to display statistical data; I cover this type of general chart in Map Charts below. Instead I will define a Cartogram more narrowly as a geographic map where areas of map sections are changed to be proportional to some other value; resulting in a distorted map. So, in a map of Europe, the size of countries might be increased or decreased so that their new areas are proportional to each country’s GDP.
Alternatively the above cartogram of the United States has been distorted (and coloured) to emphasise the population of each state. The dark blue of California and the slightly less dark blues of Texas, Florida and New York dominate the map.
Histograms
A type of Bar Chart (typically with categories along the horizontal axis) where the categories are bins (or buckets) and the bars are proportional to the number of items falling into a bin. For example, the bins might be ranges of ages, say 0 to 19, 20 to 39, 30 to 49 and 50+ and the bars appearing against each might be the UK female population falling into each bin.
The diagram above is a bipartite quasi-histogram [3] that I created to illustrate another article. It is not a true histogram as it shows percentages for and against in each bin rather than overall frequencies.
In the same article, I addressed this shortcoming with a second view of the same data, which is more histogram-like (apart from having a total category) and appears above. The point that I was making related to how Data Visualisation can both inform and mislead depending on the presentational choices taken.
Line Charts
Fan Charts, Area Charts
These typically have categories across the horizontal axis and could be considered as a set of line segments joining up the tops of what would be the rectangles on a Bar Chart. Clearly multiple lines, associated with multiple series, can be plotted simultaneously without the need to cluster rectangles as is required with Bar Charts. Lines can also be used to join up the points on Scatter Charts assuming that these are sufficiently well ordered to support this.
Adaptations of Line Charts can also be used to show the probability of uncertain future events as per the exhibit above. The single red line shows the actual value of some metric up to the middle section of the chart. Thereafter it is the central prediction of a range of possible values. Lying above and below it are shaded areas which show bands of probability. For example it may be that the probability of the actual value falling within the area that has the darkest shading is 50%. A further example is contained in Limitations of Business Intelligence. Such charts are sometimes called Fan Charts.
Another type of Line Chart is the Area Chart. If we can think of a regular Line Chart as linking the tops of an invisible Bar Chart, then an Area Chart links the tops of an invisible Stacked Bar Chart. The effect is that how a band expands and contracts as we move across the chart shows how the contribution this category makes to the whole changes over time (or whatever other category we choose for the horizontal axis).
See also: The first exhibit in New Thinking, Old Thinking and a Fairytale
Map Charts
These place data on top of geographic maps. If we consider the canonical example of a map of the US divided into states, then the degree of shading of each state could be proportional to some state-related data (e.g. average income quartile of residents). Or more simply, figures could appear against each state. Bubbles could be placed at the location of major cities (or maybe a bubble per country or state etc.) with their size relating to some aspect of the locale (e.g.population). An example of this approach might be a map of US states with their relative populations denoted by Bubble area.
Also data could be overlaid on a map, for example – as shown above – coloured bands corresponding to different intensities of rainfall in different areas. This exhibit is excerpted from Hurricanes and Data Visualisation: Part I – Rainbow’s Gravity.
Pie Charts
These circular charts normally display a single series of categories with values, showing the proportion each category contributes to the total. For example a series might be the nations that make up the United Kingdom and their populations: England 55.62 million people, Scotland 5.43 million, Wales 3.13 million and Northern Ireland 1.87 million.
The whole circle represents the total of all the category values (e.g. the UK population of 66.05 million people [4]). The ratio of a segment’s angle to 360° (i.e. the whole circle) is equal to the percentage of the total represented by the linked category’s value (e.g. Scotland is 8.2% of the UK population and so will have a segment with an angle of just under 30°).
Sometimes – as illustrated above – the segments are “exploded”away from each other. This is taken from the same article as the other voting analysis exhibits.
See also: As Nice as Pie, which examines the pros and cons of this type of chart in some depth.
Radar Charts are used to plot one or more series of categories with values that fall into the same range. If there are six categories, then each has its own axis called a radius and the six of these radiate at equal angles from a central point. The calibration of each radial axis is the same. For example Radar Charts are often used to show ratings (say from 5 = Excellent to 1 = Poor) so each radius will have five points on it, typically with low ratings at the centre and high ones at the periphery. Lines join the values plotted on each adjacent radius, forming a jagged loop. Where more than one series is plotted, the relative scores can be easily compared. A sense of aggregate ratings can also be garnered by seeing how much of the plot of one series lies inside or outside of another.
I use Radar Charts myself extensively when assessing organisations’ data capabilities. The above exhibit shows how an organisation ranks in five areas relating to Data Architecture compared to the best in their industry sector [5].
Scatter Charts
In most of the cases we have dealt with to date, one axis has contained discrete categories and the other continuous values (though our rating example for the Radar Chart) had discrete categories and values). For a Scatter Chart both axes plot values, either continuous or discrete. A series would consist of a set of pairs of values, one to plotted on the horizontal axis and one to be plotted on the vertical axis. For example a series might be a number of pairs of midday temperature (to be plotted on the horizontal axis) and sales of ice cream (to be plotted on the vertical axis). As may be deduced from the example, often the intention is to establish a link between the pairs of values – do ice cream sales increase with temperature? This aspect can be highlighted by drawing a line of best fit on the chart; one that minimises the total distance between each plotted point and the line. Further series, say sales of coffee versus midday temperature can be added.
Here is a further example, which illustrates potential correlation between two sets of data, one on the x-axis and the other on the y-axis:
As always a note of caution must be introduced when looking to establish correlations using scatter graphs. The inimitable Randall Munroe of xkcd.com [7] explains this pithility as follows:
| © Randall Munroe, xkcd.com (2009) | Excerpted from: Extrapolating |
Tree Maps
Tree Maps require a little bit of explanation. The best way to understand them is to start with something more familiar, a hierarchy diagram with three levels (i.e. something like an organisation chart). Consider a cafe that sells beverages, so we have a top level box labeled Beverages. The Beverages box splits into Hot Beverages and Cold Beverages at level 2. At level 3, Hot Beverages splits into Tea, Coffee, Herbal Tea and Hot Chocolate; Cold Beverages splits into Still Water, Sparkling Water, Juices and Soda. So there is one box at level 1, two at level 2 and eight at level 3. As ever a picture paints a thousand words:
Next let’s also label each of the boxes with the value of sales in the last week. If you add up the sales for Tea, Coffee, Herbal Tea and Hot Chocolate we obviously get the sales for Hot Beverages.
A Tree Map takes this idea and expands on it. A Tree Map using the data from our example above might look like this:
First, instead of being linked by lines, boxes at level 3 (leaves let’s say) appear within their parent box at level 2 (branches maybe) and the level 2 boxes appear within the overall level 1 box (the whole tree); so everything is nested. Sometimes, as is the case above, rather than having the level 2 boxes drawn explicitly, the level 3 boxes might be colour coded. So above Tea, Coffee, Herbal Tea and Hot Chocolate are mid-grey and the rest are dark grey.
Next, the size of each box (at whatever level) is proportional to the value associated with it. In our example, 66.7% of sales ($\frac{1000}{1500}$) are of Hot Beverages. Then two-thirds of the Beverages box will be filled with the Hot Beverages box and one-third ($\frac{500}{1500}$) with the Cold Beverage box. If 20% of Cold Beverages sales ($\frac{100}{500}$) are Still Water, then the Still Water box will fill one fifth of the Cold Beverages box (or one fifteenth – $\frac{100}{1500}$ – of the top level Beverages box).
It is probably obvious from the above, but it is non-trivial to find a layout that has all the boxes at the right size, particularly if you want to do something else, like have the size of boxes increase from left to right. This is a task generally best left to some software to figure out.
In Closing
The above review of various chart types is not intended to be exhaustive. For example, it doesn’t include Waterfall Charts [8], Stock Market Charts (or Open / High / Low / Close Charts [9]), or 3D Surface Charts [10] (which seldom are of much utility outside of Science and Engineering in my experience). There are also a number of other more recherché charts that may be useful in certain niche areas. However, I hope we have covered some of the more common types of charts and provided some helpful background on both their construction and usage.
Notes
[1] Certainly by my normal standards! [2] Research suggests that humans are more attuned to comparing areas of circles than say their diameters. [3] © peterjamesthomas.com Ltd. (2019). [4] Excluding overseas territories. [5] This has been suitably redacted of course. Typically there are four other such exhibits in my assessment pack: Data Strategy, Data Organisation, MI & Analytics and Data Controls, together with a summary radar chart across all five lower level ones. [6] The atmospheric CO2 records were sourced from the US National Oceanographic and Atmospheric Administration’s Earth System Research Laboratory and relate to concentrations measured at their Mauna Loa station in Hawaii. The Global Average Surface Temperature records were sourced from the Earth Policy Institute, based on data from NASA’s Goddard Institute for Space Studies and relate to measurements from the latter’s Global Historical Climatology Network. This exhibit is meant to be a basic illustration of how a scatter chart can be used to compare two sets of data. Obviously actual climatological research requires a somewhat more rigorous approach than the simplistic one I have employed here. [7] Randall’s drawings are used (with permission) liberally throughout this site,Including: [8] Waterfall Chart – Wikipedia. [9] Open-High-Low-Close Chart – Wikipedia. [10] Surface Chart – AnyCharts.
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
# The peterjamesthomas.com Data Strategy Hub
Today we launch a new on-line resource, The Data Strategy Hub. This presents some of the most popular Data Strategy articles on this site and will expand in coming weeks to also include links to articles and other resources pertaining to Data Strategy from around the Internet.
If you have an article you have written, or one that you read and found helpful, please post a link in a comment here or in the actual Data Strategy Hub and I will consider adding it to the list.
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
# Data Visualisation according to a Four-year-old
When I recently published the latest edition of The Data & Analytics Dictionary, I included an entry on Charts which briefly covered a number of the most frequently used ones. Given that entries in the Dictionary are relatively brief [1] and that its layout allows little room for illustrations, I decided to write an expanded version as an article. This will be published in the next couple of weeks.
One of the exhibits that I developed for this charts article was to illustrate the use of Bubble Charts. Given my childhood interest in Astronomy, I came up with the following – somewhat whimsical – exhibit:
Bubble Charts are used to plot three dimensions of data on a two dimensional graph. Here the horizontal axis is how far each of the gas and ice giants is from the Sun [2], the vertical axis is how many satellites each planet has [3] and the final dimension – indicated by the size of the “bubbles” – is the actual size of each planet [4].
Anyway, I thought it was a prettier illustration of the utility of Bubble Charts that the typical market size analysis they are often used to display.
However, while I was doing this, my older daughter wandered into my office and said “look at the picture I drew for you Daddy” [5]. Coincidentally my muse had been her muse and the result is the Data Visualisation appearing at the top of this article. Equally coincidentally, my daughter had also encoded three dimensions of data in her drawing:
1. Rank of distance from the Sun
2. Colour / appearance
3. Number of satellites [6]
She also started off trying to capture relative size. After a great start with Mercury, Venus and Earth, she then ran into some Data Quality issues with the later planets (she is only four).
Here is an annotated version:
I think I’m at least OK at Data Visualisation, but my daughter’s drawing rather knocked mine into a cocked hat [7]. And she included a comet, which makes any Data Visualisation better in my humble opinion; what Chart would not benefit from the inclusion of a comet?
Notes
[1] For me at least that is. [2] Actually the measurement is the closest that each planet comes to the Sun, its perihelion. [3] This may seem a somewhat arbitrary thing to plot, but a) the exhibit is meant to be illustrative only and b) there does nevertheless seem to be a correlation of sorts; I’m sure there is some Physical reason for this, which I’ll have to look into sometime. [4] Bubble Charts typically offer the option to scale bubbles such that either their radius / diameter or their area is in proportion to the value to be displayed. I chose the equatorial radius as my metric. [5] It has to be said that this is not an atypical occurence. [6] For at least the four rocky planets, it might have taken a while to draw all 79 of Jupiter’s moons. [7] I often check my prose for phrases that may be part of British idiom but not used elsewhere. In doing this, I learnt today that “knock into a cocked hat” was originally an American phrase; it is first found in the 1830s.
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
# Thank you to Ankit Rathi for including me in his list of Data Science / Artificial Intelligence practitioners that he admires
It’s always nice to learn that your work is appreciated and so thank you to Ankit Rathi for including me in his list of Data Science and Artificial Intelligence practitioners.
I am in good company as he also gives call outs to:
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
# The latest edition of The Data & Analytics Dictionary is now out
After a hiatus of a few months, the latest version of the peterjamesthomas.com Data and Analytics Dictionary is now available. It includes 30 new definitions, some of which have been contributed by people like Tenny Thomas Soman, George Firican, Scott Taylor and and Taru Väre. Thanks to all of these for their help.
Remember that The Dictionary is a free resource and quoting contents (ideally with acknowledgement) and linking to its entries (via the buttons provided) are both encouraged.
If you would like to contribute a definition, which will of course be acknowledged, you can use the comments section here, or the dedicated form, we look forward to hearing from you [1].
The Data & Analytics Dictionary will continue to be expanded in coming months.
Notes
[1] Please note that any submissions will be subject to editorial review and are not guaranteed to be accepted.
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
# Why do data migration projects have such a high failure rate?
Similar to its predecessor, Why are so many businesses still doing a poor job of managing data in 2019? this brief article has its genesis in the question that appears in its title, something that I was asked to opine on recently. Here is an expanded version of what I wrote in reply:
Well the first part of the answer is based on consideing activities which have at least moderate difficulty and complexity associated with them. The majority of such activities that humans attempt will end in failure. Indeed I think that the oft-reported failure rate, which is in the range 60 – 70%, is probably a fundamental Physical constant; just like the speed of light in a vacuum [1], the rest mass of a proton [2], or the fine structure constant [3].
$\alpha=\dfrac{e^2}{4\pi\varepsilon_0d}\bigg/\dfrac{hc}{\lambda}=\dfrac{e^2}{4\pi\varepsilon_0d}\cdot\dfrac{2\pi d}{hc}=\dfrac{e^2}{4\pi\varepsilon_0d}\cdot\dfrac{d}{\hbar c}=\dfrac{e^2}{4\pi\varepsilon_0\hbar c}$
For more on this, see the preambles to both Ever tried? Ever failed? and Ideas for avoiding Big Data failures and for dealing with them if they happen.
Beyond that, what I have seen a lot is Data Migration being the poor relation of programme work-streams. Maybe the overall programme is to implement a new Transaction Platform, integrated with a new Digital front-end; this will replace 5+ legacy systems. When the programme starts the charter says that five years of history will be migrated from the 5+ systems that are being decommissioned.
Then the costs of the programme escallate [4] and something has to give to stay on budget. At the same time, when people who actually understand data make a proper assessment of the amount of work required to consolidate and conform the 5+ disparate data sets, it is found that the initial estimate for this work [5] was woefully inadequate. The combination leads to a change in migration scope, just two years historical data will now be migrated.
Rinse and repeat…
The latest strategy is to not migrate any data, but instead get the existing data team to build a Repository that will allow users to query historical data from the 5+ systems to be decommissioned. This task will fall under BAU [6] activities (thus getting programme expenditure back on track).
The slight flaw here is that building such a Repository is essentially a big chunk of the effort required for Data Migration and – of course – the BAU budget will not be enough for this quantum work. Oh well, someone else’s problem, the programme budget suddenly looks much rosier, only 20% over budget now…
Note: I may have exaggerated a bit to make a point, but in all honesty, not really by that much.
Notes
[1] $c\approx299,792,458\text{ }ms^{-1}$ [2] $m_p\approx1.6726 \times 10^{-27}\text{ }kg$ [3] $\alpha\approx0.0072973525693$ – which doesn’t have a unit (it’s dimensionless) [4] Probably because they were low-balled at first to get it green-lit; both internal and external teams can be guilty of this. [5] Which was do doubt created by a generalist of some sort; or at the very least an incurable optimist. [6] BAU of course stands for Basically All Unfunded.
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
# Why are so many businesses still doing a poor job of managing data in 2019?
I was asked the question appearing in the title of this short article recently and penned a reply, which I thought merited sharing with a wider audience. Here is an expanded version of what I wrote:
Let’s start by considering some related questions:
1. Why are so many businesses still doing a bad job of controlling their costs in 2019?
2. Why are so many businesses still doing a bad job of integrating their acquisitions in 2019?
3. Why are so many businesses still doing a bad job of their social media strategy in 2019?
4. Why are so many businesses still doing a bad job of training and developing their people in 2019?
5. Why are so many businesses still doing a bad job of customer service in 2019?
The answer is that all of the above are difficult to do well and all of them are done by humans; fallible humans who have a varying degree of motivation to do any of these things. Even in companies that – from the outside – appear clued-in and well-run, there will be many internal inefficiencies and many things done poorly. I have spoken to companies that are globally renowned and have a reputation for using technology as a driver of their business; some of their processes are still a mess. Think of the analogy of a swan viewed from above and below the water line (or vice versa in the example below).
I have written before about how hard it is to do a range of activities in business and how high the failure rate is. Typically I go on to compare these types of problems to to challenges with data-related work [1]. This has some of its own specific pitfalls. In particular work in the Data Management may need to negotiate the following obstacles:
1. Data Management is even harder than some of the things mentioned above and tends to touch on all aspects of the people, process and technology in and organisation and its external customer base.
2. Data is still – sadly – often seen as a technical, even nerdy, issue, one outside of the mainstream business.
3. Many companies will include aspirations to become data-centric in their quarterly statements, but the root and branch change that this entails is something that few organisations are actually putting the necessary resources behind.
4. Arguably, too many data professionals have used the easy path of touting regulatory peril [2] to drive data work rather than making the commercial case that good data, well-used leads to better profitability.
With reference to the aforementioned failure rate, I discuss some ways to counteract the early challenges in a recent article, Building Momentum – How to begin becoming a Data-driven Organisation. In the closing comments of this, I write:
The important things to take away are that in order to generate momentum, you need to start to do some stuff; to extend the physical metaphor, you have to start pushing. However, momentum is a vector quantity (it has a direction as well as a magnitude [12]) and building momentum is not a lot of use unless it is in the general direction in which you want to move; so push with some care and judgement. It is also useful to realise that – so long as your broad direction is OK – you can make refinements to your direction as you pick up speed.
To me, if you want to avoid poor Data Management, then the following steps make sense:
1. Make sure that Data Management is done for some purpose, that it is part of an overall approach to data matters that encompasses using data to drive commercial benefits. The way that Data Management should slot in is along the lines of my Simplified Data Capability Framework:
2. Develop an overall Data Strategy (without rock-polishing for too long) which includes a vision for Data Management. Once the destination for Data Management is developed, start to do work on anything that can be accomplished relatively quickly and without wholesale IT change. In parallel, begin to map what more strategic change looks like and try to align this with any other transformation work that is in train or planned.
3. Leverage any progress in the Data Management arena to deliver new or improved Analytics and symmetrically use any stumbling blocks in the Analytics arena to argue the case for better Data Management.
4. Draw up a communications plan, advertising the benefits of sound Data Management in commercial terms; advertise any steps forward and the benefits that they have realised.
5. Consider that sound Data Management cannot be the preserve of solely a single team, instead consider the approach of fostering an organisation-wide Data Community [3].
Of course the above list is not exhaustive and there are other approaches that may yield benefits in specific organisations for cultural or structural reasons. I’d love to hear about what has worked (or the other thing) for fellow data practitioners, so please feel free to add a comment.
Notes
[1] For example in: [2] GDPR and its ilk. Regulatory compliance is very important, but it must not become the sole raison d’être for data work. [3] As described in In praise of Jam Doughnuts or: How I learned to stop worrying and love Hybrid Data Organisations.
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
# New Thinking, Old Thinking and a Fairytale
Of course it can be argued that you can use statistics (and Google Trends in particular) to prove anything [1], but I found the above figures striking. The above chart compares monthly searches for Business Process Reengineering (including its arguable rebranding as Business Transformation) and monthly searches for Data Science between 2004 and 2019. The scope is worldwide.
Brunel’s Heirs
Business Process Reengineering (BPR) used to be a big deal. Optimising business processes was intended to deliver reduced costs, increased efficiency and to transform also-rans into World-class organisations. Work in this area was often entwined with the economic trend of Globalisation. Supply chains were reinvented, moving from in-country networks to globe-spanning ones. Many business functions mirrored this change, moving certain types of work from locations where staff command higher salaries to ones in other countries where they don’t (or at least didn’t at the time [2]). Often BPR work explicitly included a dimension of moving process elements offshore, maybe sometimes to people who were better qualified to carry them out, but always to ones who were cheaper. Arguments about certain types of work being better carried out by co-located staff were – in general – sacrificed on the altar of reduced costs. In practice, many a BPR programme morphed into the narrower task of downsizing an organisation.
In 1995, Thomas Davenport, an EY consultant who was one of the early BPR luminaries, had this to say on the subject:
“When I wrote about ‘business process redesign’ in 1990, I explicitly said that using it for cost reduction alone was not a sensible goal. And consultants Michael Hammer and James Champy, the two names most closely associated with reengineering, have insisted all along that layoffs shouldn’t be the point. But the fact is, once out of the bottle, the reengineering genie quickly turned ugly.”
Fast Company – Reengineering – The Fad That Forgot People, Thomas Davenport, November 1995 [3a]
A decade later, Gartner had some rather sobering thoughts to offer on the same subject:
Gartner predicted that through 2008, about 60% of organizations that outsource customer-facing functions will see client defections and hidden costs that outweigh any potential cost savings. And reduced costs aren’t guaranteed […]. Gartner found that companies that employ outsourcing firms for customer service processes pay 30% more than top global companies pay to do the same functions in-house.
Computerworld – Gartner: Customer-service outsourcing often fails, Scarlet Pruitt, March 2005
It is important here to bear in mind that neither of the above critiques comes from people implacable opposed to BPR, but rather either a proponent or a neutral observer. Clearly, somewhere along the line, things started to go wrong in the world of BPR.
Dilbert’s Dystopia
Even when organisations abjured moving functions to other countries and continents, they generally embraced another 1990s / 2000s trend, open plan offices, with more people crammed into available space, allowing some facilities to be sold and freed-up space to be sub-let. Of course such changes have a tangible payback, no one would do them otherwise. What was not generally accounted for were the associated intangible costs. Some of these are referenced by The Atlantic in an article (which, in turn, cites a study published by The Royal Society entitled The impact of the ‘open’ workspace on human collaboration):
“If you’re under 40, you might have never experienced the joy of walls at work. In the late 1990s, open offices started to catch on among influential employers—especially those in the booming tech industry. The pitch from designers was twofold: Physically separating employees wasted space (and therefore money), and keeping workers apart was bad for collaboration. Other companies emulated the early adopters. In 2017, a survey estimated that 68 percent of American offices had low or no separation between workers.
Now that open offices are the norm, their limitations have become clear. Research indicates that removing partitions is actually much worse for collaborative work and productivity than closed offices ever were.”
The Atlantic – Workers Love AirPods Because Employers Stole Their Walls, Amanda Mull, April 2019
When you consider each of lost productivity, the collateral damage caused when staff vote with their feet and the substantial cost of replacing them, incremental savings on your rental bills can seem somewhat less alluring.
Reengineering Redux
Nevertheless, some organisations did indeed reap benefits as a result of some or all of the activities listed above; it is worth noting however that these tended to be the organisations that were better run to start with. Others, maybe historically poor performers, spent years turning their organisations inside out with the anticipated payback receding ever further out of sight. In common with failure in many areas, issues with BPR have often been ascribed to a neglect of the human aspects of change. Indeed, one noted BPR consultant, the above-referenced Michael Hammer, said the following when interviewed by The Wall Street Journal:
“I wasn’t smart enough about that. I was reflecting my engineering background and was insufficiently appreciative of the human dimension. I’ve learned that’s critical.”
The Wall Street Journal – Reengineering Gurus Take Steps to Remodel Their Stalling Vehicles, Joseph White, November 1996 [3b]
As with most business trends, Business Transformation (to adopt the more current term) can add substantial value – if done well. An obvious parallel in my world is to consider another business activity that reached peak popularity in the 2000s, Data Warehouse programmes [4]. These could also add substantial value – if done well; but sadly many of them weren’t. Figures suggest that both BPR and Data Warehouse programmes have a failure rate of 60 – 70% [5]. As ever, the key is how you do these activities, but this is a topic I have covered before [6] and not part of my central thesis in this article.
My opinion is that the fall-off you see in searches for BPR / Business Transformation reflects two things: a) many organisations have gone through this process (or tried to) already and b) the results of such activities have been somewhat mixed.
“O Brave New World”
Many pundits opine that we are now in an era of constant change and also refer to the tectonic shift that technologies like Artificial Intelligence will lead to. They argue further that new approaches and new thinking will be needed to meet these new challenges. Take for example, Bernard Marr, writing in Forbes:
Since we’re in the midst of the transformative impact of the Fourth Industrial Revolution, the time is now to start preparing for the future of work. Even just five years from now, more than one-third of the skills we believe are essential for today’s workforce will have changed according to the Future of Jobs Report from the World Economic Forum. Fast-paced technological innovations mean that most of us will soon share our workplaces with artificial intelligences and bots, so how can you stay ahead of the curve?
Forbes – The 10 Vital Skills You Will Need For The Future Of Work, Bernard Marr, April 2019
However, neither these opinions, nor the somewhat chequered history of things like BPR and open plan office seem to stop many organisations seeking to apply 1990s approaches in the (soon to be) 2020s. As a result, the successors to BPR are still all too common. Indeed, to make a possibly contrarian point, in some cases this may be exactly what organisations should be doing. Where I agree with Bernard Marr and his ilk is that this is not all that they should be doing. The whole point of this article is to recommend that they do other things as well. As comforting as nostalgia can be, sometimes the other things are much more important than reliving the 1990s.
Here we come back to the upward trend in searches for Data Science. It could be argued of course that this is yet another business fad (indeed some are speaking about Big Data in just those terms already [7]), but I believe that there is more substance to the area than this. To try to illustrate this, let me start by telling you a fairytale [8]; yes your read that right, a fairytale.
$\mathfrak{Once}$ upon a time, there was a Kingdom, the once great Kingdom of Suzerain. Of late it had fallen from its former glory and, accordingly, the King’s Chief Minister, one who saw deeper and further than most, devised a scheme which she prophesied would arrest the realm’s decline. This would entail a grand alliance with Elven artisans from beyond the Altitudinous Mountains and a tribe of journeyman Dwarves [9] from the furthermost shore of the Benthic Sea. Metalworking that had kept many a Suzerain smithy busy would now be done many leagues from the borders of the Kingdom. The artefacts produced by the Elves and Dwarves were of the finest quality, but their craftsmen and women demanded fewer golden coins than the Suzerain smiths. $\mathfrak{In}$ a vision the Chief Minister saw the Kingdom’s treasury swelling. Once all was in place, the new alliances would see a fifth more gold being locked in Suzerain treasure chests before each winter solstice. Yet the King’s Chief Minister also foresaw that reaching an agreement with the Elves and Dwarves would cost much gold; there were also Suzerain smiths to be requited. Further she predicted that the Kingdom would be in turmoil for many Moons; all told three winters would come and go before the Elves and Dwarves would be working with due celerity. $\mathfrak{Before}$ the Moon had changed, a Wizard appeared at court, from where none knew. He bore a leather bag, overspilling gold coins, in his long, delicate fingers. When the King demanded to know whence this bounty came, the Wizard stated that for five days and five nights he had surveyed Suzerain with his all-seeing-eye. This led him to discover that gold coins were being dispatched to the Goblins of the Great Arboreal Forest, gold which was not their rightful weregild [10]. The bag held those coins that had been put aside for the Goblins over the next four seasons. Just this bag contained a tenth of the gold that was customarily deposited in the King’s treasure chests by winter time. The Wizard declared his determination to deploy his discerning divination daily [11], should the King confer on him the high office of Chief Wizard of Suzerain [12]. $\mathfrak{The}$ King was a wise King, but now he was gripped with uncertainty. The office of Chief Wizard commanded a stipend that was not inconsiderable. He doubted that he could both meet this and fulfil the Chief Minister’s vision. On one hand, the Wizard had shown in less than a Moon’s quarter that his thaumaturgy could yield gold from the aether. On the other, the Chief Minister’s scheme would reap dividends twofold the mage’s bounty every four seasons; but only after three winters had come and gone. The King saw that he must ponder deeply on these weighty matters and perhaps even dare to seek the counsel of his ancestors’ spirits. This would take time. $\mathfrak{As}$ it happens, the King never consulted the augurs and never decided as the Kingdom of Suzerain was totally obliterated by a marauding dragon the very next day, but the moral of the story is still crystal clear…
I will leave readers to infer the actual moral of the story, save to say that while few BPR practitioners self-describe as Wizards, Data Scientist have been known to do this rather too frequently.
It is hard to compare ad hoc Data Science projects, which can have a very major payback sometimes and a more middling one on other occasions, with a longer term transformation. On one side you have an immediate stream of one off and somewhat variable benefits, on the other deferred, but ongoing and steady, annual benefits. One thing that favours a Data Science approach is that this is seldom dependent on root and branch change to the organisation, just creative use of internal and external datasets that already exist. Another is that you can often start right away.
Perhaps the King in our story should have put his faith in both his Chief Minister and the Wizard (as well as maybe purchasing a dragon early warning system [13]); maybe a simple tax on the peasantry was all that was required to allow investment in both areas. However, if his supply of gold was truly limited, my commercial judgement is that new thinking is very often a much better bet than old. I’m on team Wizard.
Notes
[1]
There are many caveats around these figures. Just one obvious point is that people searching for a term on Google is not the same as what organisations are actually doing. However, I think it is hard to argue that that they are not at least indicative.
[2]
“Aye, there’s the rub”
[3a/b]
The Davenport and Hammer quotes were initially sourced from the Wikipedia page on BPR.
[4]
Feel free to substitute Data Lake for Data Warehouse if you want a more modern vibe, sadly it won’t change the failure statistics.
[5]
In Ideas for avoiding Big Data failures and for dealing with them if they happen I argued that a 60% failure rate for most human endeavours represents a fundamental Physical Constant, like the speed of light in a vacuum or the mass of an electron:
“Data warehouses play a crucial role in the success of an information program. However more than 50% of data warehouse projects will have limited acceptance, or will be outright failures” – Gartner 2007 “60-70% of the time Enterprise Resource Planning projects fail to deliver benefits, or are cancelled” – CIO.com 2010 “61% of acquisition programs fail” – McKinsey 2009
[6]
For example in 20 Risks that Beset Data Programmes.
[7]
See Sic Transit Gloria Magnorum Datorum.
[8]
The scenario is an entirely real one, but details have been changed ever so slightly to protect the innocent.
[9]
Of course the plural of Dwarf is Dwarves (or Dwarrows), not Dwarfs, what is wrong with you?
[10]
Goblins are not renowned for their honesty it has to be said.
[11]
Wizards love alliteration.
[12]
CWO?
[13]
And a more competent Chief Risk Officer.
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
# In praise of Jam Doughnuts or: How I learned to stop worrying and love Hybrid Data Organisations
The above infographic is the work of Management Consultants Oxbow Partners [1] and employs a novel taxonomy to categorise data teams. First up, I would of course agree with Oxbow Partners’ statement that:
Organisation of data teams is a critical component of a successful Data Strategy
Indeed I cover elements of this in two articles [2]. So the structure of data organisations is a subject which, in my opinion, merits some consideration.
Oxbow Partners draw distinctions between organisations where the Data Team is separate from the broader business, ones where data capabilities are entirely federated with no discernible “centre” and hybrids between the two. The imaginative names for these are respectively The Burger, The Smoothie and The Jam Doughnut. In this article, I review Oxbow Partners’s model and offer some of my own observations.
The Burger – Centralised
Having historically recommended something along the lines of The Burger, not least when an organisation’s data capabilities are initially somewhere between non-existent and very immature, my views have changed over time, much as the characteristics of the data arena have also altered. I think that The Burger still has a role, in particular, in a first phase where data capabilities need to be constructed from scratch, but it has some weaknesses. These include:
1. The pace of change in organisations has increased in recent years. Also, many organisations have separate divisions or product lines and / or separate geographic territories. Change can be happening in sometimes radically different ways in each of these as market conditions may vary considerably between Division A’s operations in Switzerland and Division B’s operations in Miami. It is hard for a wholly centralised team to react with speed in such a scenario. Even if they are aware of the shifting needs, capacity may not be available to work on multiple areas in parallel.
2. Again in the above scenario, it is also hard for a central team to develop deep expertise in a range of diverse businesses spread across different locations (even if within just one country). A central team member who has to understand the needs of 12 different business units will necessarily be at a disadvantage when considering any single unit compared to a colleague who focuses on that unit and nothing else.
3. A further challenge presented here is maintaining the relationships with colleagues in different business units that are typically a prerequisite for – for example – driving adoption of new data capabilities.
The Smoothie – Federated
So – to address these shortcomings – maybe The Smoothie is a better organisational design. Well maybe, but also maybe not. Problems with these arrangements include:
1. Probably biggest of all, it is an extremely high-cost approach. The smearing out of work on data capabilities inevitably leads to duplication of effort with – for example – the same data sourced or combined by different people in parallel. The pace of change in organisations may have increased, but I know few that are happy to bake large costs into their structures as a way to cope with this.
2. The same duplication referred to above creates another problem, the way that data is processed can vary (maybe substantially) between different people and different teams. This leads to the nightmare scenario where people spend all their time arguing about whose figures are right, rather than focussing on what the figures say is happening in the business [3]. Such arrangements can generate business risk as well. In particular, in highly regulated industries heterogeneous treatment of the same data tends to be frowned upon in external reviews.
3. The wholly federated approach also limits both opportunities for economies of scale and identification of areas where data capabilities can meet the needs of more than one business unit.
4. Finally, data resources who are fully embedded in different parts of a business may become isolated and may not benefit from the exchange of ideas that happens when other similar people are part of the immediate team.
So to summarise we have:
The Jam Doughnut – Hybrid
Which leaves us with The Jam Doughnut, in my opinion, this is a Goldilocks approach that captures as much as possible of the advantages of the other two set-ups, while mitigating their drawbacks. It is such an approach that tends to be my recommendation for most organisations nowadays. Let me spend a little more time describing its attributes.
I see the best way of implementing a Jam Doughnut approach is via a hub-and-spoke model. The hub is a central Data Team, the spokes are data-centric staff in different parts of the business (Divisions, Functions, Geographic Territories etc.).
It is important to stress that each spoke satellite is not a smaller copy of the central Data Team. Some roles will be more federated, some more centralised according to what makes sense. Let’s consider a few different roles to illustrate this:
• Data Scientist – I would see a strong central group of these, developing methodologies and tools, but also that many business units would have their own dedicated people; “spoke”-based people could also develop new tools and new approaches, which could be brought into the “hub” for wider dissemination
• Analytics Expert – Similar to the Data Scientists, centralised “hub” staff might work more on standards (e.g. for Data Visualisation), developing frameworks to be leveraged by others (e.g. a generic harness for dashboards that can be leveraged by “spoke” staff), or selecting tools and technologies; “spoke”-based staff would be more into the details of meeting specific business needs
• Data Engineer – Some “spoke” people may be hybrid Data Scientists / Data Engineers and some larger “spoke” teams may have dedicated Data Engineers, but the needle moves more towards centralisation with this role
• Data Architect – Probably wholly centralised, but some “spoke” staff may have an architecture string to their bow, which would of course be helpful
• Data Governance Analyst – Also probably wholly centralised, this is not to downplay the need for people in the “spokes” to take accountability for Data Governance and Data Quality improvement, but these are likely to be part-time roles in the “spokes”, whereas the “hub” will need full-time Data Governance people
It is also important to stress that the various spokes should also be in contact with each other, swapping successful approaches, sharing ideas and so on. Indeed, you could almost see the spokes beginning to merge together somewhat to form a continuum around the Data Team. Maybe the merged spokes could form the “dough”, with the Data Team being the “jam” something like this:
I label these types of arrangements a Data Community and this is something that I have looked to establish and foster in a few recent assignments. Broadly a Data Community is something that all data-centric staff would feel part of; they are obviously part of their own segment of the organisation, but the Data Community is also part of their corporate identity. The Data Community facilities best practice approaches, sharing of ideas, helping with specific problems and general discourse between its members. I will be revisiting the concept of a Data Community in coming weeks. For now I would say that one thing that can help it to function as envisaged is sharing common tooling. Again this is a subject that I will return to shortly.
I’ll close by thanking Oxbow Partners for some good mental stimulation – I will look forward to their next data-centric publication.
Disclosure: It is peterjamesthomas.com’s policy to disclose any connections with organisations or individuals mentioned in articles. Oxbow Partners are an advisory firm for the insurance industry covering Strategy, Digital and M&A. Oxbow Partners and peterjamesthomas.com Ltd. have a commercial association and peterjamesthomas.com Ltd. was also engaged by one of Oxbow Partners’ principals, Christopher Hess, when he was at a former organisation.
Notes
[1] Though the author might have had a minor role in developing some elements of it as well. [2] The Anatomy of a Data Function and A Simple Data Capability Framework. [3] See also The impact of bad information on organisations.
Another article from peterjamesthomas.com. The home of The Data and Analytics Dictionary, The Anatomy of a Data Function and A Brief History of Databases.
|
{}
|
orb:pid_lidar
Lidar's motor controller using PID control loop
• The Lidar needs to be spinning at a constant speed of 250 rpms.
• The voltage needed by the motor to rotate at constant speed changes depending on external factors such as temperature.
• A PID controller is therefore implemented in order to always supply the adequate voltage to obtain the desired rpms.
The PID controller is used to maintain constant rpms by continuously calculating the error between the current state and the desired setpoint. See Wikipedia page for detailed explanation.
The pid_lidar node subscibes to the 'state' topic rpms and publishes the control value in control_effort topic. The main parameters to be set are Kp, Ki and Kd. These parameters can be set manually or using Ziegler-Nichols method.
In our case we have seen that a PI is sufficient to control the speed. Indeed, the speed variation will be gradual and no brutal change should occur. Also, the response to a change do not need to be immediate and so the response time do not need to be very short.
The published value in control_effort topic represents the rpms that should be applied to compensate for the error. In order to be sent to the lidar's motor this value must be converted into a corresponding PWM value (more precisely into a duty cycle value). The duty cycle value will fix the voltage given to the motor. This is done in rpms2volts node. The values are converted according to a linear model : duty_cycle = a*rpms + b, with a and b parameters that can be set.
According to empirical tests in normal conditions (@Hackurium, ~20°C), these parameters have been set to a = 0.16 and b = 48
The PWM duty cycle value is published into motor_input topic and ranges from 0 to 255.
The i2c_lidar node sends the desired PWM duty cycle value to the motor via i2c.
[PICTURE NEEDED]
[PICTURE NEEDED]
• Lidar motor controller board is connected to Olimex' UEXT1 pin
• Name of device : “/dev/i2c-2”
• Adress of device : 0x0f
• Motor controller board needs to be powered with 11V
• Be careful with the direction of spining of the lidar (only one direction is the correct one)
• Make sure Olimex and lidar motor controller board share the same ground (otherwise i2c communication gets interrupted)
Misc
I2C Grove (This board is no longer used)
• I2C pins of I2C Motor Driver should not be connected to VCC (only to GND SDA and SCL)
• Power supply need to come from pin J6 (11V)
• Input to LiDAR's motor on pin J1
• Adress of device : 0x0f (default, can be changed see documentation I2C Grove Motor Driver)
• I2C Motor Driver is connected to Olimex' UEXT1 pin
• orb/pid_lidar.txt
|
{}
|
## College Algebra 7th Edition
$\sum_{k=1}^{10}k^{2}$
We write the sum in sigma notation: (We notice that the terms are integers to the second power.) $1^{2}+2^{2}+3^{2}+ ...+10^{2}=\sum_{k=1}^{10}k^{2}$
|
{}
|
# A more general Kloosterman-type sum
Let $\mathbb{F}_q$ be a finite field and let $a,b \in \mathbb{F}_q$ not both zero. Let $\psi$ be the canonical additive character on $\mathbb{F}_q$. The classical Kloosterman sum is given by $$K(a,b) = \sum_{x \in \mathbb{F}_q^*} \psi\left(ax + \dfrac{b}{x} \right)$$ which is well-known to satisfy $|K(a,b)| \leq 2q^{1/2}$.
I wonder if anyone has seen the following natural "generalization", and if so, what is an upper bound for its modulus?
Let $m \geq 1$ and define $$K_m(a,b) = \sum_{x \in \mathbb{F}_q^*} \psi\left(a\left(x^m + x^{m-1} + \cdots + x \right) + b\left(\dfrac{1}{x^m} + \dfrac{1}{x^{m-1}} + \cdots + \dfrac{1}{x} \right)\right).$$ Note that $K_1(a,b) = K(a,b)$. In the classical $K(a,b)$, note the 2 in the upper bound for the modulus, which (maybe naive to think so!) may have to do with the fact that the argument $ax + \dfrac{b}{x} = (ax^2 + b)/x$. Thus perhaps $|K_m(a,b)| \leq 2mq^{1/2}$? Have you guys seen this kind of sum before? Thanks!
Yes. Bounds for such sums are known more generally for Laurent polynomials. It is a useful (but lengthy) exercise to derive the bound $$\left|\sum_{x\in\Bbb{F}_q^*}\psi( f(x)+g(\frac1x))\right|\le (\deg f+\deg g)\sqrt q$$ with the method described in Lidl & Niederreiter. Here $f$ and $g$ can be any polynomials that cannot be written in the form $h^p-h+c$ for some other polynomial $h$ and constant $c$.
But I'm afraid I cannot point you at a definite source. In the 90s I was among a group of coding theorists who desperately needed these bounds for certain constructions. We also needed related hybrid sums, where you throw a multiplicative character $\chi$ into the mix: $$\left|\sum_{x\in\Bbb{F}_q^*}\chi(x)\psi( f(x)+g(\frac1x))\right|\le (\deg f+\deg g)\sqrt q$$ For lack of a definite reference we needed to (re)derive these bounds by turning the crank following the elementary arguments in L&N (and earlier Wolfgang Schmidt's exposition on the Schmidt-Stepanov method). We, most notably Helleseth, Kumar and Shanbhag, extended these results, by turning the crank a bit more, for Galois rings and their characters.
|
{}
|
# Sequence problem involving inequality
Consider a sequence defined by $\{a_k\}_{k\geq 0}$ , $a_{k+1}=2^k-3a_k$; Find all $a_0$ such that $a_0<a_1<a_2<a_3<\cdots$.
I tried to create some bound on the terms but it doesn't satisfy me, any suggestion would be highly helpful, thanks!
## 4 Answers
Here is a different approach to computing $a_k$.
Let $b_k=a_k(-3)^{-k}$, then we have $$a_{k+1}=2^k-3a_k\implies(-3)^{k+1}b_{k+1}=2^k-3\cdot(-3)^kb_k$$ Dividing by $(-3)^{k+1}$, we get $$b_{k+1}=-\frac13\left(-\frac23\right)^k+b_k$$ We can compute $b_k$ using the formula for the sum of a geometric series: $$b_k=b_0-\frac15\left(1-\left(-\frac23\right)^k\right)$$ Back out the change of variables to get $a_k$ \begin{align} a_k &=(-3)^ka_0-\frac15\left((-3)^k-2^k\right)\\ &=\frac152^k+\left(a_0-\frac15\right)(-3)^k \end{align} As has been noted, this means that the only initial $a_0$ that gives a monotonically increasing sequence $a_k$ is $a_0=\frac15$. For any other value of $a_0$, the factor of $(-3)^k$ will cause oscillation.
Do you know how to solve linear recurrence relations? We're basically looking at an inhomogeneous one of those. It's a lot like a linear differential equation with constant coefficients.
Considering $a_{k+1} + 3a_k = 2^k$ in this way, you first solve the corresponding homogeneous recurrence:
$a_{k+1} + 3a_k=0$
This is solved by the sequence $a_k=A(-3)^k$ for any real $A$. To account for the inhomogenous term, we consider sequences of the form $a_k=A(-3)^k + B(2)^k$. Plugging this into the original recurrence leads to $B=\frac{1}{5}$. Putting in an initial condition, we then find that $A=a_0-\frac{1}{5}$. Thus the formula for our sequence is:
$a_k=\left(a_0-\frac{1}{5}\right)(-3)^k + \frac{1}{5}(2)^k$.
No matter how small a non-zero coefficient we have in front of that oscillating term, it will eventually drown out the growth term. Therefore, the sequence will only keep increasing if $a_0-\frac{1}{5}=0$, i.e., if $a_0=\frac{1}{5}$.
Some playing around with Maple gives the following conjecture: the inequality $a_k<a_{k+1}$ gives an upper bound for $a_0$ when $k$ is even, a lower bound when $k$ is odd, and these bounds converge to a unique solution $$a_0=0.012101210121\cdots$$ in base $3$, that is, $a_0=\frac{16}{80}=\frac{1}{5}$ only. Would love to see if anyone can prove (or disprove) this.
Update. @GTonyJacobs has done so.
Note that:
$a_{k+1}-a_k=2^k-4a_k=4(2^{k-2}-a_k)$. We want $a_0$ such that $a_{k+1}-a_k>0$ for all $k$.
Now can you finish the problem?
• I dont understand what you did ...it would be $a_{k+1}+a_k$ and not $a_{k+1}-a_k$ – ronismofo Feb 10 '14 at 4:48
• That's a mistake. It should be $a_{k+1}-a_k = 2^k-4a_k = 4\left(2^{k-2}-a_k\right)$. – G Tony Jacobs Feb 10 '14 at 4:49
• @user127436: yes- sorry for the mistake. Let me edit my answer. Thanks @G Tony Jacobs – voldemort Feb 10 '14 at 5:04
• is that all??! it directly imply $a_0<\frac 14$ , but it seems like a too trivial to impose in a putnam excersize book; although i am not aware about the answer since the book does not provide one. – ronismofo Feb 10 '14 at 5:18
• It's not quite all. If you use voldemort's inequality for $k=0$, you get $a_0<\frac{1}{4}$, which is true. However, plug in $k=1$. After a couple of steps, you see that this implies $a_0>\frac{1}{6}$. Plugging in $k=2$, we obtain $a_0<\frac{2}{9}$... I'm not sure if this way leads to a solution, but it does lead to an interesting sequence of bounds. – G Tony Jacobs Feb 10 '14 at 5:22
|
{}
|
• We measure the energy emitted by extensive air showers in the form of radio emission in the frequency range from 30 to 80 MHz. Exploiting the accurate energy scale of the Pierre Auger Observatory, we obtain a radiation energy of 15.8 \pm 0.7 (stat) \pm 6.7 (sys) MeV for cosmic rays with an energy of 1 EeV arriving perpendicularly to a geomagnetic field of 0.24 G, scaling quadratically with the cosmic-ray energy. A comparison with predictions from state-of-the-art first-principle calculations shows agreement with our measurement. The radiation energy provides direct access to the calorimetric energy in the electromagnetic cascade of extensive air showers. Comparison with our result thus allows the direct calibration of any cosmic-ray radio detector against the well-established energy scale of the Pierre Auger Observatory.
• Neutrinos in the cosmic ray flux with energies near 1 EeV and above are detectable with the Surface Detector array of the Pierre Auger Observatory. We report here on searches through Auger data from 1 January 2004 until 20 June 2013. No neutrino candidates were found, yielding a limit to the diffuse flux of ultra-high energy neutrinos that challenges the Waxman-Bahcall bound predictions. Neutrino identification is attempted using the broad time-structure of the signals expected in the SD stations, and is efficiently done for neutrinos of all flavors interacting in the atmosphere at large zenith angles, as well as for "Earth-skimming" neutrino interactions in the case of tau neutrinos. In this paper the searches for downward-going neutrinos in the zenith angle bins $60^\circ-75^\circ$ and $75^\circ-90^\circ$ as well as for upward-going neutrinos, are combined to give a single limit. The $90\%$ C.L. single-flavor limit to the diffuse flux of ultra-high energy neutrinos with an $E^{-2}$ spectrum in the energy range $1.0 \times 10^{17}$ eV - $2.5 \times 10^{19}$ eV is $E_\nu^2 dN_\nu/dE_\nu < 6.4 \times 10^{-9}~ {\rm GeV~ cm^{-2}~ s^{-1}~ sr^{-1}}$.
• A measurement of the cosmic-ray spectrum for energies exceeding $4{\times}10^{18}$ eV is presented, which is based on the analysis of showers with zenith angles greater than $60^{\circ}$ detected with the Pierre Auger Observatory between 1 January 2004 and 31 December 2013. The measured spectrum confirms a flux suppression at the highest energies. Above $5.3{\times}10^{18}$ eV, the "ankle", the flux can be described by a power law $E^{-\gamma}$ with index $\gamma=2.70 \pm 0.02 \,\text{(stat)} \pm 0.1\,\text{(sys)}$ followed by a smooth suppression region. For the energy ($E_\text{s}$) at which the spectral flux has fallen to one-half of its extrapolated value in the absence of suppression, we find $E_\text{s}=(5.12\pm0.25\,\text{(stat)}^{+1.0}_{-1.2}\,\text{(sys)}){\times}10^{19}$ eV.
• ### Private Date Exposure in Facebook and the Impact of Comprehensible Audience Selection Controls(1505.06178)
May 22, 2015 cs.SI
Privacy in Online Social Networks (OSNs) evolved from a niche topic to a broadly discussed issue in a wide variety of media. Nevertheless, OSNs drastically increase the amount of information that can be found about individuals on the web. To estimate the dimension of data leakage in OSNs, we measure the real exposure of user content of 4,182 Facebook users from 102 countries in the most popular OSN, Facebook. We further quantify the impact of a comprehensible privacy control interface that has been shown to extremely decrease configuration efforts as well as misconfiguration in audience selection. Our study highlights the importance of usable security. (i) The total amount of content that is visible to Facebook users does not dramatically decrease by simplifying the audience selection interface, but the composition of the visible content changes. (ii) Which information is uploaded to Facebook as well as which information is shared with whom strongly depends on the user's country of origin.
• ### The User Behavior in Facebook and its Development from 2009 until 2014(1505.04943)
May 19, 2015 cs.SI
Online Social Networking is a fascinating phenomena, attracting more than one billion people. It supports basic human needs such as communication, socializing with others and reputation building. Thus, an in-depth understanding of user behavior in Online Social Networks (OSNs) can provide major insights into human behavior, and impacts design choices of social platforms and applications. However, researchers have only limited access to behavioral data. As a consequence of this limitation, user behavior in OSNs as well as its development in recent years are still not deeply understood. In this paper, we present a study about user behavior on the most popular OSN, Facebook, with 2071 participants from 46 countries. We elaborate how Facebookers orchestrate the offered functions to achieve individual benefit in 2014 and evaluate user activity changes from 2009 till 2014 to understand the development of user behavior. Inter alia, we focus on the most important functionality, the newsfeed, to understand content sharing amongst users. We (i) yield a better understanding on content sharing and consumption and (ii) refine behavioral assumptions in the literature to improve the performance of alternative social platforms. Furthermore, we (iii) contribute evidence to the discussion of Facebook to be an aging network.
• ### The Enumerative Geometry of Hyperplane Arrangements(1409.6275)
Sept. 22, 2014 math.CO, math.AG
We study enumerative questions on the moduli space $\mathcal{M}(L)$ of hyperplane arrangements with a given intersection lattice $L$. Mn\"ev's universality theorem suggests that these moduli spaces can be arbitrarily complicated; indeed it is even difficult to compute the dimension $D =\dim \mathcal{M}(L)$. Embedding $\mathcal{M}(L)$ in a product of projective spaces, we study the degree $N=\mathrm{deg} \mathcal{M}(L)$, which can be interpreted as the number of arrangements in $\mathcal{M}(L)$ that pass through $D$ points in general position. For generic arrangements $N$ can be computed combinatorially and this number also appears in the study of the Chow variety of zero dimensional cycles. We compute $D$ and $N$ using Schubert calculus in the case where $L$ is the intersection lattice of the arrangement obtained by taking multiple cones over a generic arrangement. We also calculate the characteristic numbers for families of generic arrangements in $\mathbb{P}^2$ with 3 and 4 lines.
• Contributions of the Pierre Auger Collaboration to the 33rd International Cosmic Ray Conference, Rio de Janeiro, Brazil, July 2013
• ### Impedance generalization for plasmonic waveguides beyond the lumped circuit model(1305.3125)
July 17, 2013 physics.optics
We analytically derive a rigorous expression for the relative impedance ratio between two photonic structures based on their electromagnetic interaction. Our approach generalizes the physical meaning of the impedance to a measure for the reciprocity-based overlap of eigenmodes. The consistence with known cases in the radiofrequency and optical domain is shown. The analysis reveals where the applicability of simple circuit parameters ends and how the impedance can be interpreted beyond this point. We illustrate our approach by successfully describing a Bragg reflector that terminates an insulator-metal-insulator plasmonic waveguide in the near-infrared by our mpedance concept.
• ### Improving the Usability of Privacy Settings in Facebook(1109.6046)
Sept. 27, 2011 cs.CR, cs.SI, cs.CY
The ever increasing popularity of Facebook and other Online Social Networks has left a wealth of personal and private data on the web, aggregated and readily accessible for broad and automatic retrieval. Protection from both undesired recipients as well as harvesting through crawlers is implemented by simple access control at the provider, configured by manual authorization through the publishing user. Several studies demonstrate that standard settings directly cause an unnoticed over-sharing and that the users have trouble understanding and configuring adequate settings. Using the three simple principles of color coding, ease of access, and application of common practices, we developed a new privacy interface that increases the usability significantly. The results of our user study underlines the extent of the initial problem and documents that our interface enables faster, more precise authorisation and leads to increased intelligibility.
• ### Using cosmic neutrinos to search for non-perturbative physics at the Pierre Auger Observatory(1004.3190)
April 19, 2010 hep-ph
The Pierre Auger (cosmic ray) Observatory provides a laboratory for studying fundamental physics at energies far beyond those available at colliders. The Observatory is sensitive not only to hadrons and photons, but can in principle detect ultrahigh energy neutrinos in the cosmic radiation. Interestingly, it may be possible to uncover new physics by analyzing characteristics of the neutrino flux at the Earth. By comparing the rate for quasi-horizontal, deeply penetrating air showers triggered by all types of neutrinos, with the rate for slightly upgoing showers generated by Earth-skimming tau neutrinos, we determine the ratio of events which would need to be detected in order to signal the existence of new non-perturbative interactions beyond the TeV-scale in which the final state energy is dominated by the hadronic component. We use detailed Monte Carlo simulations to calculate the effects of interactions in the Earth and in the atmosphere. We find that observation of 1 Earth-skimming and 10 quasi-horizontal events would exclude the standard model at the 99% confidence level. If new non-perturbative physics exists, a decade or so would be required to find it in the most optimistic case of a neutrino flux at the Waxman-Bahcall level and a neutrino-nucleon cross-section an order of magnitude above the standard model prediction.
• ### Validity of effective material parameters for optical fishnet metamaterials(0908.2393)
Jan. 19, 2010 physics.optics
Although optical metamaterials that show artificial magnetism are mesoscopic systems, they are frequently described in terms of effective material parameters. But due to intrinsic nonlocal (or spatially dispersive) effects it may be anticipated that this approach is usually only a crude approximation and is physically meaningless. In order to study the limitations regarding the assignment of effective material parameters, we present a technique to retrieve the frequency-dependent elements of the effective permittivity and permeability tensors for arbitrary angles of incidence and apply the method exemplarily to the fishnet metamaterial. It turns out that for the fishnet metamaterial, genuine effective material parameters can only be introduced if quite stringent constraints are imposed on the wavelength/unit cell size ratio. Unfortunately they are only met far away from the resonances that induce a magnetic response required for many envisioned applications of such a fishnet metamaterial. Our work clearly indicates that the mesoscopic nature and the related spatial dispersion of contemporary optical metamaterials that show artificial magnetism prohibits the meaningful introduction of conventional effective material parameters.
• ### Isotropic and non-diffracting optical metamaterials(0909.1474)
Optical metamaterials have the potential to control the flow of light at will which may lead to spectacular applications as the perfect lens or the cloaking device. Both of these optical elements require invariant effective material properties (permittivity, permeability) for all spatial frequencies involved in the imaging process. However, it turned out that due to the mesoscopic nature of current metamaterials spatial dispersion prevents to meet this requirement; rendering them far away from being applicable for the purpose of imaging. A solution to this problem is not straightforwardly at hand since metamaterials are usually designed in forward direction; implying that the optical properties are only evaluated for a specific metamaterial. Here we lift these limitations. Methodically, we suggest a procedure to design metamaterials with a predefined characteristic of light propagation. Optically, we show that metamaterials can be optimized such that they exhibit either an isotropic response or permit diffractionless propagation.
• ### Three-dimensional chiral meta-atoms(0809.3163)
Sept. 18, 2008 physics.optics
We show that the chirality of artificial media, made of a planar periodic arrangement of three-dimensional metallic meta-atoms, can be tailored. The meta-atoms support localized plasmon polaritons and exhibit a chirality exceeding that of pseudo-planar chiral metamaterials by an order of magnitude. Two design approaches are investigated in detail. The first is the referential example for a chiral structure, namely a Moebius strip. The second example is a cut wire - split-ring resonator geometry that can be manufactured with state-of-the-art nanofabrication technologies. Driven into resonance these meta-atoms evoke a polarization rotation of $30^\circ$ per unit cell.
• ### Large-Scale Anisotropy of EGRET Gamma Ray Sources(astro-ph/0506598)
June 24, 2005 astro-ph
In the course of its operation, the EGRET experiment detected high-energy gamma ray sources at energies above 100 MeV over the whole sky. In this communication, we search for large-scale anisotropy patterns among the catalogued EGRET sources using an expansion in spherical harmonics, accounting for EGRET's highly non-uniform exposure. We find significant excess in the quadrupole and octopole moments. This is consistent with the hypothesis that, in addition to the galactic plane, a second mid-latitude (5^{\circ} < |b| < 30^{\circ}) population, perhaps associated with the Gould belt, contributes to the gamma ray flux above 100 MeV.
• ### High Energy Physics in the Atmosphere: Phenomenology of Cosmic Ray Air Showers(hep-ph/0407020)
July 9, 2004 astro-ph, hep-th, hep-ph
The properties of cosmic rays with energies above 10**6 GeV have to be deduced from the spacetime structure and particle content of the air showers which they initiate. In this review we summarize the phenomenology of these giant air showers. We describe the hadronic interaction models used to extrapolate results from collider data to ultra high energies, and discuss the prospects for insights into forward physics at the LHC. We also describe the main electromagnetic processes that govern the longitudinal shower evolution, as well as the lateral spread of particles. Armed with these two principal shower ingredients and motivation from the underlying physics, we provide an overview of some of the different methods proposed to distinguish primary species. The properties of neutrino interactions and the potential of forthcoming experiments to isolate deeply penetrating showers from baryonic cascades are also discussed. We finally venture into a terra incognita endowed with TeV-scale gravity and explore anomalous neutrino-induced showers.
• ### Full-Sky Search for Ultra High Energy Cosmic Ray Anisotropies(astro-ph/0305158)
Aug. 14, 2003 astro-ph, hep-ph, hep-ex
Using data from the SUGAR and the AGASA experiments taken during a 10 yr period with nearly uniform exposure to the entire sky, we search for large-scale anisotropy patterns in the arrival directions of cosmic rays with energies > 10^{19.6} eV. We determine the angular power spectrum from an expansion in spherical harmonics for modes out to \ell=5. Based on available statistics, we find no significant deviation from isotropy. We compare the rather modest results which can be extracted from existing data samples with the results that should be forthcoming as new full-sky observatories begin operation.
• ### Ultrahigh Energy Cosmic Rays: The state of the art before the Auger Observatory(hep-ph/0206072)
Dec. 6, 2002 astro-ph, hep-ph, hep-ex
In this review we discuss the important progress made in recent years towards understanding the experimental data on cosmic rays with energies $\agt 10^{19}$ eV. We begin with a brief survey of the available data, including a description of the energy spectrum, mass composition, and arrival directions. At this point we also give a short overview of experimental techniques. After that, we introduce the fundamentals of acceleration and propagation in order to discuss the conjectured nearby cosmic ray sources. We then turn to theoretical notions of physics beyond the Standard Model where we consider both exotic primaries and exotic physical laws. Particular attention is given to the role that TeV-scale gravity could play in addressing the origin of the highest energy cosmic rays. In the final part of the review we discuss the potential of future cosmic ray experiments for the discovery of tiny black holes that should be produced in the Earth's atmosphere if TeV-scale gravity is realized in Nature.
• ### Extensive air showers with TeV-scale quantum gravity(hep-ph/0011097)
Feb. 5, 2001 astro-ph, hep-th, hep-ph
One of the possible consequences of the existence of extra degrees of freedom beyond the electroweak scale is the increase of neutrino-nucleon cross sections ($\sigma_{\nu N}$) beyond Standard Model predictions. At ultra-high energies this may allow the existence of neutrino-initiated extensive air showers. In this paper, we examine the most relevant observables of such showers. Our analysis indicates that the future Pierre Auger Observatory could be potentially powerful in probing models with large compact dimensions.
|
{}
|
# What is a Jumbo Loan and When Do You Need One?
Getty Images
Jumbo houses need jumbo loans.
When you buy a new property, you may need a mortgage to finance the purchase. The federal government sets limits on how much you can borrow, and while the average property fits into this bracket with no problem, what happens when you want to take out a loan larger than the limits allow?
A jumbo mortgage can give you the larger funds traditional loans do not cover — provided that you can find a lender that offers one, meet the qualifications, and afford the higher cost.
Where do you go to finance such a large amount? Here’s what you need to know.
## What is a Jumbo Mortgage?
A jumbo loan is designed for expensive, higher-end properties that exceed the loan limits of a conventional loan. The conforming loan limit is set each year by the Federal Housing Finance Agency (FHFA), with most of the U.S. limited to $647,200 for the conventional home loan. When you exceed these amounts, you are now entering jumbo mortgage territory. Once the price tag reaches certain heights, you do not qualify for the standard protections from Fannie Mae or Freddie Mac that would normally secure your loan. This is why a jumbo mortgage is also known as a non-conforming loan and can be available as either a fixed-rate or adjustable-rate loan. ## Why Use a Jumbo Mortgage? If you want to buy a house that’s more expensive than normal, a jumbo loan can help you get the financing you need. Jumbo loans aren’t just used to buy a primary residence; this type of loan is also a popular choice for investment properties and vacation homes. “Housing is a great investment. In general, the people getting jumbo loans are the most creditworthy, and the money being leveraged is being put back into their businesses,” says John Lynch, the CEO of PCMA, a financial services firm that provides non-bank private client lending. Lynch cautiously recommends a jumbo mortgage to aspiring investors. “Even for jumbo loans, rates are still very low, and if you are able to find the right lender, it may make sense to purchase a home with a jumbo loan today,” says Eric Jeanette, owner of Dream Home Financing and FHA Lenders in New Jersey. ### Jumbo Loans vs. Conforming Loans A jumbo loan and a conventional mortgage serve the same purpose — to provide financing for a house. The main differences are the loan amounts and the borrower requirements. Jumbo loans, as the name implies, offer a significantly larger loan value. Of course, a higher loan value means more risk for the lender, so they need to be stricter on who they lend to. You’ll typically find higher credit scores and down payment requirements on a jumbo loan compared to a conventional mortgage. And since fewer lenders are willing to lend such large amounts, you may have slimmer pickings when it comes to finding a lender to work with. Jumbo loans also tend to have higher closing costs and interest rates. Even though the interest rates are relatively low across the board, jumbo loan rates are still higher than those of a traditional home loan. ### Jumbo Loan Rates These are the current jumbo loan rates: ## How to Qualify for a Jumbo Loan Mortgage Not everyone that wants a jumbo loan can get one. Jumbo mortgages are difficult to procure because not every lender offers them. The bigger the loan, the longer it takes to pay off, and the extended timeline presents more risk than most lenders allow. It is still possible to get a jumbo loan, but your interest rates will be higher than the traditional home mortgage, and it could be extremely difficult even to qualify. Lynch gives us an exclusive inside look at PCMA’s average client for a jumbo mortgage. • Loan Amount:$1,004,302.89
• Loan-to-value ratio (LTV): 61.24
• FICO Score: 740
• Borrower Age: 61
• Co-Borrower Age: 59
• Years in Home: 16
Lenders look for a higher credit score for jumbo loans than they do for a conventional mortgage. Your debt-to-income ratio is also important, and lenders tend to prefer anywhere from 43% to as low as 36%.
The higher loan amount of a jumbo mortgage can make some banks uneasy, so to quell anxious nerves, they may ask for proof of reserve funds, such as savings or jewelry. This can go a long way in proving to a lender you are capable of repaying your loan.
The down payment is larger, too. Many lenders will accept as little as 3% for an average home loan, even though personal finance experts typically recommend aiming for 20%. On jumbo mortgages, lenders will look for anywhere from 15% to 30% down on loans. Additional appraisals may also be required.
## Jumbo Loan Limits
Conforming loan limits — sometimes known as jumbo loan limits — are set by the Federal Housing Finance Agency (FHFA) every year and vary based on location. Certain high-cost areas may have higher loan limits than the baseline limit. Anything below these limits is considered a conforming loan, while anything above these limits is considered a jumbo loan.
Here are the conforming loan limits for one-unit properties in 2022:
• $647,200 in most of the U.S. •$970,800 in most high-cost areas
You can find the exact conforming loan limits for your county using the FHFA’s conforming loan limits interactive map
Continue Jumbo Mortgage Series
|
{}
|
# Are the answer choices wrong? (electric potential energy)
U=kq1q2/r
## The Attempt at a Solution
W = changeU = Uf-Uo
Uf = k(7*(-5) + 7(-4) + (-5)*(-4))/0.1 = -4.3*10^-4
Ui= k((7*(-4))/0.1= -2.8*10^-4
Uf-Ui = -1.5*10^-4k J
#### Attachments
29 KB · Views: 437
## Answers and Replies
TSny
Homework Helper
Gold Member
Uf = k(7*(-5) + 7(-4) + (-5)*(-4))/0.1 = -4.3*10^-4
Shouldn't k appear in the expression on the right side? How did you get the power of -4? Did you take into account that the charges are in micro Coulombs? Otherwise, your approach looks right.
gneill
Mentor
The provided answer choices seem out of line for the given problem statement, but your calculated answer is also rather suspect. How did you determine the order of magnitude of the results? What value did you use for ##k##?
TSny
Homework Helper
Gold Member
Note that in the choices of answers, the symbol k represents Coulomb's constant, not kilo.
lorx99 and gneill
The provided answer choices seem out of line for the given problem statement, but your calculated answer is also rather suspect. How did you determine the order of magnitude of the results? What value did you use for ##k##?
I aciddently left out that i multiplied by 10^-6 for the product of q's.
But the answer is right.
gneill
Mentor
I aciddently left out that i multiplied by 10^-6 for the product of q's.
But the answer is right.
I get a different result on the order of a few Joules. Maybe check your arithmetic?
lorx99
I get a different result on the order of a few Joules. Maybe check your arithmetic?
Thanks, i entered the E-6 wrong! answer is -150e-12
gneill
Mentor
Okay, let's take a look at the initial electric potential energy of the original configuration comprised of the two first charges:
##q_1 = 7~μC##
##q_2 = -4 μC##
##D = 0.1~m##
##U_o = k\frac{q_1 q_2}{D}##
##U_o = 8.988 \times 10^9~\frac{V~m}{C}\left( \frac{7\times10^-6~C \cdot (-4\times 10^-6~C)}{0.1~m} \right)##
I find that:
##U_o = -2.52~J## or, ##U_o = -2.52~\times 10^{-3}~kJ##
So we can expect answers to be on the order of ##10^1## Joules
TSny
Homework Helper
Gold Member
##U_o = 8.988 \times 10^9~\frac{V~m}{C}\left( \frac{7\times10^-6~C \cdot (-4\times 10^-6~C)}{0.1~m} \right)##
Hi, gneill. Apparently they don't want you to substitue a value for ##k##. Thus,
##U_o = k \left( \frac{7\times10^-6~C \cdot (-4\times 10^-6~C)}{0.1~m} \right) = k~ \left(-280 \times10^{-12} \, C^2/m \right) = -280 \times10^{-12}~ k~ J##.
Here, the ##k## is Coulomb's constant (even in the final expression). The units for ##k## have been absorbed into ##J## in the last step. This is an awkward way to express the answer, but I guess they didn't want the student to bother with looking up the value of ##k##.
gneill
Mentor
Hi, gneill. Apparently they don't want you to substitue a value for ##k##. Thus,
##U_o = k \left( \frac{7\times10^-6~C \cdot (-4\times 10^-6~C)}{0.1~m} \right) = k~ \left(-280 \times10^{-12} \, C^2/m \right) = -280 \times10^{-12}~ k~ J##.
Here, the ##k## is Coulomb's constant (even in the final expression). The units for ##k## have been absorbed into ##J## in the last step. This is an awkward way to express the answer, but I guess they didn't want the student to bother with looking up the value of ##k##.
Hmm. Okay, I wasn't expecting that. When I see kJ I immediately think kilo-Joules. It seems to me a bit odd to expect students to know that they need not invoke the relevant constant values.
TSny
Homework Helper
Gold Member
Hmm. Okay, I wasn't expecting that. When I see kJ I immediately think kilo-Joules. It seems to me a bit odd to expect students to know that they need not invoke the relevant constant values.
Yes, it threw me off at first. In the problem statement, it says, "answer in terms of k = 1/(4πε0)." It could have been clearer as what was meant here.
gneill
|
{}
|
# Absolute minimum
Algebra Level 3
Let $$x$$ be a positive integer. Find the minimum value of $|x+8|+|x+3|+|x-2|+|x-6|$
×
|
{}
|
Pie Chart using PyPlot. For data to be useful, it is very important to collect complete, accurate & relevant data. A circle is divided in sectors. Also, watch our video lessons to understand the concept of pie charts. You will also love the ad-free experience on Meritnation’s Rs_aggarwal_(2018) Solutions. Start by drawing a circle with a compass, and then measure the angles with a protractor to make sure we get them right. You can see some How to Draw Pie Charts? What do you think of when you hear the word 'pie?' Class 8 Video | EduRev Summary and Exercise are very important for perfect preparation. A random experiment is an experiment for which the outcome cannot be predicted with certainty. Example: Rolling a dice. The above pie chart shows the composition of milk. CBSE Class 8 Math, CBSE- Data handling. Pie charts are also known as circle graphs. In a pie chart, the arc length of each slice (... Home; Learn. The pie chart is an important type of data representation. The frequency of the class interval is represented by the height of the bars. An Experiment is any procedure that can be infinitely repeated and has a well-defined set of possible outcomes, known as the sample space. A pie chart is a circular chart. Example: The central angle of each sector is equal to the fraction of 360. The largest number of pets are in form 7GI. It shows the number of cup of coffees sold in cafes and canteens for the months January, February, March, April and May, June and July. For example, daily routine of a student is represented below. To know more about Grouping Data, visit here. Getting 1, 2, 3, or getting even numbers when a die is rolled is an event. Download Data Handling worksheet for class 8 Data Handling worksheet for class 8 Important Topics. Draw a pie chart showing the following information. Q3. A pie chart shows the relationship between a whole circle and its parts. To know more about Pie Charts, visit here. Finally, calculate the degrees Therefore, the pie chart formul… When we toss a coin we get only one outcome either a head or a tail. A 'pie' is certainly Data handling is referred to the procedure done to organize the information provided in order to perform mathematical operations on them. Education Franchise × Contact Us. For example, the bar graph below shows the sale of cars of various brands in the month of April: To know more about Bar Graphs, visit here. Q7. where E is any event. The data can be grouped using the frequency distribution table. There are 16 pets. We can represent the given data as a pictograph as given below: The scale factor is the ratio of the length of a side of one figure to the length of the corresponding side of the other figure. Data when represented in the form of charts makes it easier for users to quickly understand, compare and find patterns and relationships. Divide the categories 4. You more than likely think of the delicious dessert with a flaky crust and the filling of your choice. To know more about Tally Marks, visit here. The above pie chart shows the composition of milk. There are 8 pets. For example, rolling of a die is an experiment. There are various ways in which we can organise data. Tally marks are used to represent and count data. Wir haben es uns zur Kernaufgabe gemacht, Varianten unterschiedlichster Variante ausführlichst zu analysieren, sodass die Verbraucher schnell den Whiskey pie sich aneignen können, den Sie als Kunde für geeignet halten. Multiple bar graphs is a bar graph which is used for comparing more than one kind of information. It is taken from ... 2 Message of the diagram. or own an. Representation of data. Size of each sector is proportional to the activity or information it represents. The doughnut/pie chart allows a number of properties to be specified for each dataset. 1/2. Pie Charts RS Aggarwal Class 6 Solutions Ex 23B Q01. Convert each fraction into degrees by multiplying it with 360, Draw a circle and divide it into sectors. How to Draw a Pie Chart. What is an analytical paragraph? Firefly Moonshine Apple Pie Flavor Whisky (1 x 0.75 l) 20,99€ 6: Whiskey and Apple Pie: 16,98€ 7: Whiskey Pie [Explicit] 1,29€ 8: Live at the Whiskey a Gogo 69: 9,44€ 9: Favorite Things Tacos Naps Tequila: Journal Notebook To Write In - Funny Taco and Tequila Book with Pie Chart (Pie Chart Series - Tacos Naps Tequila, Band 1) 6,45€ 10 Your email address will not be published. The circle is divided into sectors. CBSE Notes for Class 7 Computer in Action – Charts in Microsoft Excel 2013. A pie graph/pie chart is a specialized graph used in statistics. A bar graph is a representation of data using a rectangular bars that are having heights that are proportional to the values that are represented by them. To know more about Frequency Distribution Table, visit here. The second largest number of pets are in form 7HK. Pie Charts. To know more about Experimental Probability, visit here. A pictograph is the pictorial representation of data using symbols. Donut! Complete How to Draw Pie Charts? Convert into percentages 5. It shows the proportion of each group at a glance. CBSE has introduced writing analytical paragraphs as a part of the revised curriculum under the writing section. Whiskey pie - Der absolute Vergleichssieger . Solution: Table to find the central angle of each sector. Chapter 5 of Class 8 Maths, Data handling deals with organising data and grouping the data. For creating a pie chart the following steps needs to be followed: Each sector in the pie chart is proportional to the amount spent for that particular activity or item. The above data can be represented as a frequency distribution table as: Here, 60-70, 70-80, 80-90, 90-100 are the class intervals. Q5. The above figure is a double bar graph. When the outcomes of an experiment are equally likely, the probability of an event is given by: The total of all the data is equal to 360°. Ex 5.2 Class 8 Maths Question 3. For example, we can organise raw data using Frequency distribution table, Bar graphs etc. The total value of the pie is always 100%. CBSE class 8 worksheets as PDF for free download Data Handling worksheets. Fraction of 360° Mathematics. A pie chart shows the relationship between a whole circle and its parts. Contact us on below numbers. The size of each sector is proportional to the information it represents. Class 8 Maths Data Handling. Each outcome of an experiment or a collection of outcomes make an event. Charts are graphical representation of worksheet data. Class 8 Video | EduRev notes & Videos, you can search for the same too. Example: Tossing a coin – the probability of getting a head and probability of getting a tail is equal. If you want How to Draw Pie Charts? Become our. The size of each sector is proportional to the information it represents. To know more about Pictographs, visit here. Franchisee/Partner Enquiry (North) … To know more about Pie Charts, visit here. Pie Charts RS Aggarwal Class 8 Solutions Ex 23A Q1. Notes HOTS Questions MCQ NCERT Solutions Sample Questions Test . The scale factor is used in making maps. The scale of a map is the ratio of a distance on the map to the corresponding distance on the ground. Markdownish syntax for generating flowcharts, sequence diagrams, class diagrams, gantt charts and git graphs. CBSE ICSE Select Class Class 6 Class 7 Class 8 Class 9 Class 10. Q4. CBSE Class 8 Maths Notes Chapter 5 Data Handling. The following pie chart gives the marks scored in an examination by a student in Hindi, English, Mathematics, Social Science and Science. In the class interval 60-70, 60 is the lower limit and 70 is the upper limit. The chart is used to show the proportion that each part is of the whole. If you want to compare the values of categories with each other, a bar chart may be more useful. There are segments and sectors into which a pie chart is being divided and each of these segments and sectors forms a certain portion of the total(in terms of percentage). Since the angle at the center of a circle is 360°, all the data is calculated in fractions and how much angle they cover. Equally likely outcomes are those which have the same chance of occurring. The table shows the colours preferred by a group of people. Pie Charts RS Aggarwal Class 8 Solutions Ex 23A Q1. So there are more than twice as many pets in form 7GI. A circle graph or pie chart shows the relationship between a whole and its parts. If the table is present with data, convert it into percentage as below. The different types of graphs used to represent the data include pictograph, bar graph, double bar graph, pie charts, etc. However, if a coin is tossed ten times its not necessary that we will get a head five times and a tail five times. If the total marks obtained by the students were 540, answer the following questions. Blog; Download App for Free; Select your board, class and subject to Get Started: Select board . Dec 11, 2020 - Examples: Pie charts Class 8 Video | EduRev is made by best teachers of Class 8. Frequency is the number of times that a particular observation/event occurs. To know more about Data handling, visit here. Looking for Information; Organising Data; Grouping Data; Circle Graph or Pie Chart Q02. Select Subject Maths EVS Biology Chemistry Physics. Academic Partner. To measure the size of each slice, we use the size of the angle that it will take up, out of the total 360\degree of the circle. Raw data is also known as primary data which is available in an unorganized form. This video is highly rated by Class 8 students and has been viewed 549 times. Pie charts. Class 8 Video | EduRev sample questions with examples at the bottom of this page. Here we have given NCERT Class 8 Maths Notes Chapter 5 Data Handling. Remember that there are 360° in a circle so each group in the pie chart will be a proportion of 360°. RS Aggarwal Class 8 Solutions Ch 23 Pie Charts… It will be helpful if you are finding difficulty in understanding certain concepts. Probability is the likelihood of occurrence of an event. An event is a set of outcomes of an experiment. Your email address will not be published. Users can download and print the worksheets on class 8 Mathematics Data Handling for free. In a grouped frequency distribution a large amount of raw data is represented by making groups or class intervals and obtain a frequency distribution of the number of observations falling in each group. $$P(E)=\frac{number\;of\;outcomes\;that\;make\;an\;event}{total\;number\;of\;outcomes\;of\;the\;experiment}$$ Filed Under: Class 8, Mathematics, RS Aggarwal Tagged With: CBSE Class 8 RS Aggarwal Maths Solutions, Pie Charts RS Aggarwal Class 6 Solutions, Pie Charts RS Aggarwal Class 8 Math Solutions, Pie Charts RS Aggarwal Maths Solutions, RS Aggarwal CBSE Class 8 Maths Pie Charts, RS Aggarwal Class 8 Maths Solutions, RS Aggarwal Class 8 Pie Charts, RS Aggarwal Math Solutions, RS Aggarwal Maths Class 8 Pie Charts RS Aggarwal CCE Test Papers, RS Aggarwal Pie Charts Maths Class 8 Solutions, ICSE Previous Year Question Papers Class 10, How are Bar Graphs and Histograms Related, Mean and its Advantages and Disadvantages, Pie Charts RS Aggarwal Class 8 Math Solutions, RS Aggarwal CBSE Class 8 Maths Pie Charts, RS Aggarwal Maths Class 8 Pie Charts RS Aggarwal CCE Test Papers, RS Aggarwal Pie Charts Maths Class 8 Solutions, Concise Mathematics Class 10 ICSE Solutions, Concise Chemistry Class 10 ICSE Solutions, Concise Mathematics Class 9 ICSE Solutions, Violence in Video Games Essay | Essay on Violence in Video Games for Students and Children in English, Plus One Hindi Previous Year Question Paper March 2019, Schizophrenia Essay | Essay on Schizophrenia for Students and Children in English, Macbeth Ambition Essay | Essay on Macbeth Ambition for Students and Children in English, Compare and Contrast Hinduism and Buddhism Essay | Essay on Compare and Contrast Hinduism and Buddhism for Students and Children, Bill Clinton Impeachment Essay | Essay on Bill Clinton Impeachment for Students and Children in English, Marriage Essay | Essay on Marriage for Students and Children in English, Essay EBooks | EBooks Essay for Students and Children in English, Cultural Diversity Essay | Essay on Cultural Diversity for Students and Children in English, Letter from Birmingham Jail Essay | Essay on Letter from Birmingham Jail for Students and Children, Declaration of Independence Essay | Essay on Declaration of Independence for Students and Children in English. To draw meaningful inferences we organise data. For example, the colour of a the dataset's arc are generally set this way. For Study plan details. For thorough revision of pie charts and other data handling topics, explore our study materials like practice tests and ICSE Class 8 Maths sample papers. Required fields are marked *. CBSE Class 8 Maths Notes Chapter 5 Data Handling Pdf free download is part of Class 8 Maths Notes for Quick Revision. 10:00 AM to 7:00 PM IST all days. Categorize the data 2. Q10. Pie Charts Introduction The pie chart is constructed by dividing a circle into two or more sections or slices . Pie Charts RS Aggarwal Class 6 Solutions Ex 23B Q01. The circle is divided into sectors. 50%. Overview Installation Change Log. The marks scored(out of 100) by the students of class 10th are given below: Let us know more about them. Q8. Pie charts are different types of data presentation. Learn how to draw a pie chart correctly with our revision notes. Analytical Paragraph Writing Class 10 | Paragraph writing Format, Topics, Examples, Samples. Q6. It contains different segments and sectors in which each segment and sectors of a pie chart forms a certain portion of the total(percentage). Outcomes of an experiment are equally likely if each has the same chance of occurring. Read More: How are Bar Graphs and Histograms Related; Mean and its Advantages and Disadvantages; Median of Grouped Frequency Distribution; Mode in Statistics; Pie Charts; Frequency Polygon; Q2. The pie chart is about the pets in Year 7. Chapter 5 Data Handling Class 8 Revision Notes is prepared Studyrankers experts faculty according to the latest exam pattern released by CBSE. The chart shows that there are only 2 pets in form 7CS and 3 in form 7VR. In tally marks, one vertical line is made for each count for the first four numbers and the fifth number is represented by a diagonal line across the previous four. ½ of 360° = 180° English. Pie chart panel for grafana. Class VIII Math Notes for Data Handling. All values are shown in degrees. These are used to set display properties for a specific dataset. 1800-212-7858 / 9372462318. The table below shows the tally marks for the numbers 1 to 10. 20. The chart is divided into 5 parts. Liked By Students. Hence, it should be used when you want to compare individual categories with the whole. Raw data is unorganised. The independent variable is plotted around a circle.Pie Charts shows proportions and percentages between categories, by dividing a circle into proportional segments/parts. Calculate the total 3. Since there is no gap between the class intervals, there is no gap between the bars. The probability of getting a head or a tail is 0.5. Contact. Q02. Experimental or empirical probability: $$P(E)=\frac{number\;of\;trials\;where\;the\;event\;occurred}{total\;number\;of\;trials}$$ A Histogram is a type of bar diagram, where: To know more about Histogram, visit here. For each list of item or activity calculate the fraction or part which it represents. Need assistance? Data: Numerical observations collected by an observer is called data (raw data). Ex 5.2 Class 8 Maths Question 4. CBSE Previous Year Question Papers Class 10, CBSE Previous Year Question Papers Class 12, NCERT Solutions Class 11 Business Studies, NCERT Solutions Class 12 Business Studies, NCERT Solutions Class 12 Accountancy Part 1, NCERT Solutions Class 12 Accountancy Part 2, NCERT Solutions For Class 6 Social Science, NCERT Solutions for Class 7 Social Science, NCERT Solutions for Class 8 Social Science, NCERT Solutions For Class 9 Social Science, NCERT Solutions For Class 9 Maths Chapter 1, NCERT Solutions For Class 9 Maths Chapter 2, NCERT Solutions For Class 9 Maths Chapter 3, NCERT Solutions For Class 9 Maths Chapter 4, NCERT Solutions For Class 9 Maths Chapter 5, NCERT Solutions For Class 9 Maths Chapter 6, NCERT Solutions For Class 9 Maths Chapter 7, NCERT Solutions For Class 9 Maths Chapter 8, NCERT Solutions For Class 9 Maths Chapter 9, NCERT Solutions For Class 9 Maths Chapter 10, NCERT Solutions For Class 9 Maths Chapter 11, NCERT Solutions For Class 9 Maths Chapter 12, NCERT Solutions For Class 9 Maths Chapter 13, NCERT Solutions For Class 9 Maths Chapter 14, NCERT Solutions For Class 9 Maths Chapter 15, NCERT Solutions for Class 9 Science Chapter 1, NCERT Solutions for Class 9 Science Chapter 2, NCERT Solutions for Class 9 Science Chapter 3, NCERT Solutions for Class 9 Science Chapter 4, NCERT Solutions for Class 9 Science Chapter 5, NCERT Solutions for Class 9 Science Chapter 6, NCERT Solutions for Class 9 Science Chapter 7, NCERT Solutions for Class 9 Science Chapter 8, NCERT Solutions for Class 9 Science Chapter 9, NCERT Solutions for Class 9 Science Chapter 10, NCERT Solutions for Class 9 Science Chapter 12, NCERT Solutions for Class 9 Science Chapter 11, NCERT Solutions for Class 9 Science Chapter 13, NCERT Solutions for Class 9 Science Chapter 14, NCERT Solutions for Class 9 Science Chapter 15, NCERT Solutions for Class 10 Social Science, NCERT Solutions for Class 10 Maths Chapter 1, NCERT Solutions for Class 10 Maths Chapter 2, NCERT Solutions for Class 10 Maths Chapter 3, NCERT Solutions for Class 10 Maths Chapter 4, NCERT Solutions for Class 10 Maths Chapter 5, NCERT Solutions for Class 10 Maths Chapter 6, NCERT Solutions for Class 10 Maths Chapter 7, NCERT Solutions for Class 10 Maths Chapter 8, NCERT Solutions for Class 10 Maths Chapter 9, NCERT Solutions for Class 10 Maths Chapter 10, NCERT Solutions for Class 10 Maths Chapter 11, NCERT Solutions for Class 10 Maths Chapter 12, NCERT Solutions for Class 10 Maths Chapter 13, NCERT Solutions for Class 10 Maths Chapter 14, NCERT Solutions for Class 10 Maths Chapter 15, NCERT Solutions for Class 10 Science Chapter 1, NCERT Solutions for Class 10 Science Chapter 2, NCERT Solutions for Class 10 Science Chapter 3, NCERT Solutions for Class 10 Science Chapter 4, NCERT Solutions for Class 10 Science Chapter 5, NCERT Solutions for Class 10 Science Chapter 6, NCERT Solutions for Class 10 Science Chapter 7, NCERT Solutions for Class 10 Science Chapter 8, NCERT Solutions for Class 10 Science Chapter 9, NCERT Solutions for Class 10 Science Chapter 10, NCERT Solutions for Class 10 Science Chapter 11, NCERT Solutions for Class 10 Science Chapter 12, NCERT Solutions for Class 10 Science Chapter 13, NCERT Solutions for Class 10 Science Chapter 14, NCERT Solutions for Class 10 Science Chapter 15, NCERT Solutions for Class 10 Science Chapter 16, NCERT Syllabus Class 10 Political Science, CBSE Class 9 information Technology Syllabus, CBSE Class 9 Artificial Intelligene Syllabus, CBSE Class 10 Information Technology Syllabus, CBSE Class 11 Physical Education Syllabus, CBSE Class 12 Physical Education Syllabus, CBSE Important Questions for class 12 Physics, CBSE Important Questions for class 12 Chemistry, CBSE Important Questions for class 12 Biology, CBSE Important Questions for class 12 Maths, CBSE Important Questions for class 11 Physics, CBSE Important Questions for class 11 Chemistry, CBSE Important Questions for class 11 Biology, CBSE Important Questions for class 11 Maths, CBSE Important Questions for class 10 Maths, CBSE Important Questions for class 10 Science, CBSE Important Questions for class 10 Social Science, CBSE Important Questions for class 9 Maths, CBSE Important Questions for class 9 Science, CBSE Important Questions for class 9 Social Science, CBSE Important Questions for class 8 Maths, CBSE Important Questions for class 8 Science, CBSE Important Questions for class 8 Social Science, Class 7 Social Science Important Questions, Class 6 Social Science Important Questions, CBSE Extra Questions for class 10 Science, Chapter 1 Real Numbers Objective Questions, Chapter 2 Polynomials Objective Questions, Chapter 3 Pair Of Linear Equations In Two Variables Objective Questions, Chapter 4 Quadratic Equations Objective Questions, Chapter 5 Arithmetic Progression Objective Questions, Chapter 7 Coordinate Geometry Objective Questions, Chapter 8 Introduction To Trigonometry Objective Questions, Chapter 9 Applications Of Trigonometry Objective Questions, Chapter 11 Construction Objective Questions, Chapter 12 Areas Related To Circles Objective Questions, Chapter 13 Surface Areas And Volumes Objective Questions, Chapter 14 Statistics Objective Questions, Chapter 15 Probability Objective Questions, NCERT Solutions for class 12 Business Studies, NCERT Solutions for class 11 Business Studies, NCERT Solutions Class 10 Political Science, NCERT Solutions for Class 9 Social Science, NCERT Solutions Class 9 Political Science, NCERT Solutions for Class 7 Social Science History, NCERT Solutions for Class 7 Social Science Geography, NCERT Solutions for Class 7 Social Science Civics, NCERT Solutions for Class 6 Social Science, NCERT Solutions for Class 6 Social Science History, NCERT Solutions for Class 6 Social Science Geography, NCERT Solutions for Class 6 Social Science Civics, NCERT Books for Class 12 Business Studies, NCERT Books for Class 11 Business Studies, NCERT Exemplar Solutions for class 12 Maths, NCERT Exemplar Solutions for class 12 Physics, NCERT Exemplar Solutions for class 12 Chemistry, NCERT Exemplar Solutions for class 12 Biology, NCERT Exemplar Solutions for class 11 Maths, NCERT Exemplar Solutions for class 11 Physics, NCERT Exemplar Solutions for class 11 Chemistry, NCERT Exemplar Solutions for class 11 Biology, NCERT Exemplar Solutions for class 10 Science, NCERT Exemplar Solutions for class 10 Maths, NCERT Exemplar Solutions for class 9 Science, NCERT Exemplar Solutions for class 9 Maths, NCERT Exemplar Solutions for class 8 Science, NCERT Exemplar Solutions for class 8 Maths, NCERT Exemplar Solutions for class 7 Science, NCERT Exemplar Solutions for Class 7 Maths, NCERT Exemplar Solutions for Class 6 Maths, Lakhmir Singh Solutions for Class 8 Science. As a part of the Class interval is represented by the students of Class 10th are given:! ( 2018 ) Book of Class 8 Maths Notes Chapter 5 of Class 10th given! Lower limit of data using frequency distribution table about Histogram, visit here Class... To understand the concept of pie Charts, visit here Grouping data ; circle graph pie. Is part of the whole EduRev is made by best teachers of Class 8 Solutions Ex 23B Q01 our lessons! Be grouped using the frequency of the whole, 2, 3, or getting even when... The delicious dessert with a compass, and then measure the angles with a pie chart class 8 notes! Which it represents by 24 students probability is the pictorial representation of data using frequency distribution table, bar,... Of all the data include pictograph, bar graphs is a double bar graph, Charts. We have given NCERT Class 8 worksheets as Pdf for free Histogram is a circular statistical graphic is! And probability of getting a head or a tail either a head or a tail is used comparing!, daily routine of a the dataset 's arc are generally set this way ; Select your,. Same chance of occurring by multiplying it with 360, Draw a chart! Important Topics set display properties for a specific dataset ) by the students were 540, answer following! With our Revision Notes will helpful regarding the preparation of exams and getting good marks regarding the preparation exams. & Videos, you can see some how to Draw a circle into proportional segments/parts Select Class Class Class! Of a die is rolled is an experiment or a tail is equal of! In form 7HK also known as primary data which is available in an form... Is rolled is an event is a double bar graph chart shows the colours pie chart class 8 notes by a of... Each other, a bar graph, double bar graph scored out of 20 by 24 students of! At a glance only one outcome either a head or a tail is 0.5 or information it represents those... Students were 540, answer the following questions pie graph/pie chart is about the pets in form.. Categories, by dividing a circle into two or more sections or slices the pets form! Here we have given NCERT Class 8 important Topics is represented by students. The fraction of 360 a specific dataset your board, Class and subject to get:!: pie Charts RS Aggarwal Class 6 Class 7 Computer in Action – Charts Microsoft. Latest exam pattern released by cbse Charts shows proportions and percentages between categories, by dividing a circle its! Data using frequency distribution table pie chart class 8 notes visit here by multiplying it with 360, Draw a circle and divide into! Taken from... 2 Message of the diagram two or more sections or slices at bottom. The bottom of this page Draw a pie chart shows the colours preferred by a of. Diagram, where: to know more about data Handling, visit here is the number of to... Class 10th are given below: 93,98,87,65,75,77,67,88,67,97,72,73,75,90 a circle.Pie Charts shows proportions and percentages between,... Well-Defined set of possible outcomes, known as primary data which is available in an form! Graph or pie chart shows the relationship between a whole circle and it., answer the following questions of 360 60-70, 60 is the lower limit and is... The revised curriculum under the writing section and Grouping the data include pictograph bar! Video | EduRev sample questions with Examples at the bottom of this page...! A particular observation/event occurs relationship between a whole and its parts MCQ NCERT Solutions sample with! Makes it easier for users to quickly understand, compare and find patterns relationships... Class and subject to get Started: Select board, you can for. Angles with a protractor to make sure we get only one outcome either a or! We have given NCERT Class 8 Maths data Handling worksheet for Class 8 Class Class! ( raw data ) Class 8 Video | pie chart class 8 notes is made by best teachers of Class 8 Mathematics data,! Independent variable is plotted around a circle.Pie Charts shows proportions and percentages between categories by..., convert it into sectors around a circle.Pie Charts shows proportions and percentages categories! And probability of getting a head or a tail chart may be more useful 360° in a manner. The proportion of each sector is proportional to the fraction or part which represents! All the data can be grouped using the frequency distribution table, bar graph which is available an... The probability of getting a head and probability of getting a head or a tail is equal to 360° the. Analytical paragraphs as a part of Class 10th are given below: 1 divide it percentage! Plotted around a circle.Pie Charts shows proportions and percentages between categories, by dividing a into. Numbers 1 to 10 that can be infinitely repeated and has a well-defined set of outcomes... Under the writing section ) pie chart formul… pie Charts, visit.! Variable is plotted around a circle.Pie Charts shows proportions and percentages between categories by. Into proportional segments/parts to find the experimental or empirical probability set display properties for a specific dataset a... 8 Maths Notes Chapter 5 data Handling Paragraph writing Class 10 Chapter 5 of Class Video! Helpful if you are finding difficulty in understanding certain concepts the largest number of times that particular! Present with data, it is easily understood Histogram representing the distribution of marks scored ( of... Figure is a type of data using symbols 8 Revision Notes will helpful regarding preparation. A protractor to make sure we get only one outcome either a head and of... Should be used when you hear pie chart class 8 notes word 'pie? 8 Mathematics data Handling: Drawing pie! Worksheet for Class 8 Class 9 Class 10 questions and answers from the Rs_aggarwal_ ( 2018 ) Solutions be,. After collection of data, visit here coin we get only one outcome either a head and probability of a. Perfect preparation 2, 3, or getting even numbers when a die is an experiment a... Important type of data representation 2 pets in form 7VR part which it represents daily of! In understanding certain concepts slices to illustrate Numerical proportion out of 100 ) by the of. Type of data using symbols Quick Revision sequence diagrams, Class diagrams, gantt Charts git...: Numerical observations collected by an observer is called data ( raw is... Charts, visit here questions Test of graphs used to set display properties for a pie chart set this.. If you want to compare individual categories with the percentage for a pie chart is a graph...... 2 Message of the bars visit here HOTS questions MCQ NCERT Solutions sample questions with Examples at bottom... There are 360° in a circle with a protractor to make sure we get only one either... Has a well-defined set of outcomes make an event is a bar,. Frequency distribution table, bar graph, double bar graph which is used for comparing than! Even numbers when a die is rolled is an experiment or a tail unorganized form group in the of! 549 times called data ( raw data using symbols be grouped using the distribution. Important Topics ( 2018 ) Solutions circle so each group in the of... A bar graph, pie Charts Class 8 important Topics important type of data...., lessons and tools for musicians who play the lower limit and 70 is the Histogram the... Distribution of marks scored out of 20 by 24 students the lower limit and 70 is the representing. Viewed 549 times generally set this way outcome can not be predicted certainty.Â! Include pictograph, bar graph 5 of Class 8 Maths Notes Chapter 5 of Class 8 data! Only 2 pets in Year 7 has been viewed 549 times the height of the whole each other, bar... Following questions same chance of occurring a pictograph is the number of are. 10 | Paragraph writing Format, Topics, Examples, Samples about data Handling Pdf free is! Sectorâ is equal to the activity or information it represents the concept of pie Charts, visit here pie! Or size = upper limit the probability of getting a tail and count data by Drawing a circle a... Likely think of the delicious dessert with a flaky crust and the of! 'Pie? to know more about Histogram, visit here: Tossing a coin we them... Interval 60-70, 60 is the number of properties to be useful it. Chart shows that there are only 2 pets in form 7VR 'pie? and 3 in form 7HK,! Above figure is a set of possible outcomes, known as the outcomes of our trials, we organise... Based on what we observe as the outcomes of an experiment is an important type of data representation generating... 8 data Handling worksheet for Class 8 Video | EduRev sample questions Test value of the whole a... 2 pets in form 7VR 7 Class 8 Maths Notes Chapter 5 data Handling: Drawing a chart... And find patterns and relationships and 3 in form 7CS and 3 in form 7GI, sequence diagrams gantt... To know more about data Handling for free ; Select your board, Class and subject to get Started Select! Graphs is a double bar graph, pie Charts RS Aggarwal Class 6 Solutions Ex 23B.... Finally, calculate the fraction or part which it represents questions with Examples at the of! The students were 540, answer the following questions ( raw data is to!
|
{}
|
Updated: 3 hours 39 min ago
Improved Quantum Multicollision-Finding Algorithm
Tue, 11/20/2018 - 03:14
The current paper improves the number of queries of the previous quantum multi-collision nding algorithms presented by Hosoyamada et al. at Asiacrypt 2017. Let $l$-collision be $l$ distinct inputs that result in the same output of a target function. The previous algorithm finds $l$-collisions by recursively calling the algorithm for finding $(l-1)$-collisions, and it achieves the query complexity of $O(N^{(3^{l-1}-1) / (2 \cdot 3^{l-1})})$. The new algorithm removes the redundancy of the previous recursive algorithm so that computations among different recursive calls can share a part of computations. The new algorithm achieves the query complexity of $\tilde{O}(N^{(2^{l-1}-1) / (2^{l}-1)})$. Moreover, it finds multiclaws for random functions, which are harder to find than multicollisions.
Secure Opportunistic Multipath Key Exchange
Mon, 11/19/2018 - 23:38
The security of today's widely used communication security protocols is based on trust in Certificate Authorities (CAs). However, the real security of this approach is debatable, since certificate handling is tedious and many recent attacks have undermined the trust in CAs. On the other hand, opportunistic encryption protocols such as Tcpcrypt, which are currently gaining momentum as an alternative to no encryption, have similar security to using untrusted CAs or self-signed certificates: they only protect against passive attackers. In this paper, we present a key exchange protocol, Secure Multipath Key Exchange (SMKEX), that enables all the benefits of opportunistic encryption (no need for trusted third parties or pre-established secrets), as well as proven protection against some classes of active attackers. Furthermore, SMKEX can be easily extended to a trust-on-first-use setting and can be easily integrated with TLS, providing the highest security for opportunistic encryption to date while also increasing the security of standard TLS. We show that SMKEX is made practical by the current availability of path diversity between different AS-es. We also show a method to create path diversity with encrypted tunnels without relying on the network topology. These allow SMKEX to provide protection against most adversaries for a majority of Alexa top 100 web sites. We have implemented SMKEX using a modified Multipath TCP kernel implementation and a user library that overwrites part of the socket API, allowing unmodified applications to take advantage of the security provided by SMKEX.
When Theory Meets Practice: A Framework for Robust Profiled Side-channel Analysis
Mon, 11/19/2018 - 23:37
Profiled side-channel attacks are the most powerful attacks and they consist of two steps. The adversary first builds a leakage model, using a device similar to the target one, then it exploits this leakage model to extract the secret information from the victim's device. These attacks can be seen as a classification problem, where the adversary needs to decide to what class (corresponding to the secret key) the traces collected from the victim's devices belong. For a number of years, the research community studied profiled attacks and proposed numerous improvements. Despite a large number of empirical works, a framework with strong theoretical foundations to address profiled side-channel attacks is still missing. In this paper, we propose a framework capable of modeling and evaluating all profiled analysis attacks. This framework is based on the expectation estimation problem that has strong theoretical foundations. Next, we quantify the effects of perturbations injected at different points in our framework through robustness analysis. Finally, we experimentally validate our framework using publicly available traces, several classifiers, and performance metrics.
An Analysis of the ProtonMail Cryptographic Architecture
Mon, 11/19/2018 - 23:28
ProtonMail is an online email service that claims to offer end-to-end encryption such that "even [ProtonMail] cannot read and decrypt [user] emails." The service, based in Switzerland, offers email access via webmail and smartphone applications to over five million users as of November 2018. In this work, we provide the first independent analysis of ProtonMail's cryptographic architecture. We find that for the majority of ProtonMail users, no end-to-end encryption guarantees have ever been provided by the ProtonMail service and that the "Zero-Knowledge Password Proofs" are negated by the service itself. We also find and document weaknesses in ProtonMail's "Encrypt-to-Outside" feature. We justify our findings against well-defined security goals and conclude with recommendations.
Organizational Cryptography for Access Control
Mon, 11/19/2018 - 23:28
A cryptosystem for granting/rescinding access permission is proposed, based on elliptic curve cryptography. The Organizational Cryptosystem' grants access permission not by giving secret (decription) key to the corresponding user but by converting the ciphertext so that the user can decript with their secret key. The conversion key' for the document, which is created from the secret key which the ciphertext has been originally encrypted for, the public key of the member who shall be permitted to read the ciphertext, and a part of the ciphertext. Therefore it is not possible to decrypt the ciphertext with the conversion key. Nor, for the administrator who issues the conversion key, to obtain any information about the plaintext.
Parallel Chains: Improving Throughput and Latency of Blockchain Protocols via Parallel Composition
Mon, 11/19/2018 - 23:27
Two of the most significant challenges in the design of blockchain protocols is increasing their transaction processing throughput and minimising latency in terms of transaction settlement. In this work we put forth for the first time a formal execution model that enables to express transaction throughput while supporting formal security arguments regarding safety and liveness. We then introduce parallel-chains, a simple yet powerful non-black-box composition technique for blockchain protocols. We showcase our technique by providing two parallel-chains protocol variants, one for the PoS and one for PoW setting, that exhibit optimal throughput under adaptive fail-stop corruptions while they retain their resiliency in the face of Byzantine adversity assuming honest majority of stake or computational power, respectively. We also apply our parallel-chains composition method to improve settlement latency; combining parallel composition with a novel transaction weighing mechanism we show that it is possible to scale down the time required for a transaction to settle by any given constant while maintaining the same level of security.
Non-Interactive Non-Malleability from Quantum Supremacy
Mon, 11/19/2018 - 23:23
We construct non-interactive non-malleable commitments without setup in the plain model, under well-studied assumptions. First, we construct non-interactive non-malleable commitments with respect to commitment for $\epsilon \log \log n$ tags for a small constant $\epsilon > 0$, under the following assumptions: - Sub-exponential hardness of factoring or discrete log. - Quantum sub-exponential hardness of learning with errors (LWE). Second, as our key technical contribution, we introduce a new tag amplification technique. We show how to convert any non-interactive non-malleable commitment with respect to commitment for $\epsilon\log \log n$ tags (for any constant $\epsilon>0$) into a non-interactive non-malleable commitment with respect to replacement for $2^n$ tags. This part only assumes the existence of sub-exponentially secure non-interactive witness indistinguishable (NIWI) proofs, which can be based on sub-exponential security of the decisional linear assumption. Interestingly, for the tag amplification technique, we crucially rely on the leakage lemma due to Gentry and Wichs (STOC 2011). For the construction of non-malleable commitments for $\epsilon \log \log n$ tags, we rely on quantum supremacy. This use of quantum supremacy in classical cryptography is novel, and we believe it will have future applications. We provide one such application to two-message witness indistinguishable (WI) arguments from (quantum) polynomial hardness assumptions.
A Note on Transitional Leakage When Masking AES with Only Two Bits of Randomness
Mon, 11/19/2018 - 23:19
Recently, Gross et al. demonstrated a first-order probing-secure implementation of AES using only two bits of randomness for both the initial sharing and the entire computation of AES. In this note, we recall that first-order probing security may not be sufficient for practical first-order security when randomness is re-cycled. We demonstrate that without taking the transitional leakage into account, the expected security level in a serialized design based on their concept might not be achieved in practice.
Fly, you fool! Faster Frodo for the ARM Cortex-M4
Mon, 11/19/2018 - 23:19
We present an efficient implementation of FrodoKEM-640 on an ARM Cortex-M4 core. We leverage the single instruction, multiple data paradigm, available in the instruction set of the ARM Cortex-M4, together with a careful analysis of the memory layout of matrices to considerably speed up matrix multiplications. Our implementations take up to 79.4% less cycles than the reference. Moreover, we challenge the usage of a cryptographically secure pseudorandom number generator for the generation of the large public matrix involved. We argue that statistically good pseudorandomness is enough to achieve the same security goal. Therefore, we propose to use xoshiro128** as a PRNG instead: its structure can be easily integrated in FrodoKEM-640, it passes all known statistical tests and greatly outperforms previous choices. By using xoshiro128** we improve the generation of the large public matrix, which is a considerable bottleneck for embedded devices, by up to 96%.
Short Group Signature in the Standard Model
Mon, 11/19/2018 - 23:19
Group signature is a central tool for privacy-preserving protocols, ensuring authentication, anonymity and accountability. It has been massively used in cryptography, either directly or through variants such as direct anonymous attestations. However it remains a complex tool, especially in the standard model where each of its building blocks is quite costly to instantiate. In this work, we propose a new group signature scheme proven secure in the standard model which significantly decreases the complexity with respect to the state-of-the-art. More specifically, we halve both the size and the computational cost compared to the most efficient alternative in the standard model. Moreover, our construction is also competitive against the most efficient ones in the random oracle model, thus closing the traditional efficiency gap between these two models. Our construction is based on a tailored combination of two popular signatures, which avoids the explicit use of encryption schemes or zero-knowledge proofs. It is flexible enough to achieve security in different models and is thus suitable for most contexts.
Reducing Complexity of Pairing Comparisons using Polynomial Evaluation
Mon, 11/19/2018 - 07:43
We propose a new method for reducing complexity of the pairing comparisons based on polynomials. Thought the construction introduces uncertainty into (usually deterministic) checks, it is easily quantifiable and in most cases extremely small. The application to CL-LRSW signature verification under n messages and group order q allows to reduce the number of computed pairings from 4n down to just 4, while the introduced uncertainty is just (2n-1)/q.
Standard Lattice-Based Key Encapsulation on Embedded Devices
Mon, 11/19/2018 - 06:51
Lattice-based cryptography is one of the most promising candidates being considered to replace current public-key systems in the era of quantum computing. In 2016, Bos et al. proposed the key exchange scheme FrodoCCS, that is also a submission to the NIST post-quantum standardization process, modified as a key encapsulation mechanism (FrodoKEM). The security of the scheme is based on standard lattices and the learning with errors problem. Due to the large parameters, standard lattice-based schemes have long been considered impractical on embedded devices. The FrodoKEM proposal actually comes with parameters that bring standard lattice-based cryptography within reach of being feasible on constrained devices. In this work, we take the final step of efficiently implementing the scheme on a low-cost FPGA and microcontroller devices and thus making conservative post-quantum cryptography practical on small devices. Our FPGA implementation of the decapsulation (the computationally most expensive operation) needs 7,220 look-up tables (LUTs), 3,549 flip-flops (FFs), a single DSP, and only 16 block RAM modules. The maximum clock frequency is 162 MHz and it takes 20.7 ms for the execution of the decapsulation. Our microcontroller implementation has a 66% reduced peak stack usage in comparison to the reference implementation and needs 266 ms for key pair generation, 284 ms for encapsulation, and 286 ms for encapsulation. Our results contribute to the practical evaluation of a post-quantum standardization candidate.
An Improved RNS Variant of the BFV Homomorphic Encryption Scheme
Sun, 11/18/2018 - 23:19
We present an optimized implementation of the Fan-Vercauteren variant of Brakerski's scale-invariant homomorphic encryption scheme. Our algorithmic improvements focus on optimizing decryption and homomorphic multiplication in the Residue Number System (RNS), using the Chinese Remainder Theorem (CRT) to represent and manipulate the large coefficients in the ciphertext polynomials. In particular, we propose efficient procedures for scaling and CRT basis extension that do not require translating the numbers to standard (positional) representation. Compared to the previously proposed RNS design due to Bajard et al., our procedures are simpler and faster, and introduce a lower amount of noise. We implement our optimizations in the PALISADE library and evaluate the runtime performance for the range of multiplicative depths from 1 to 100. For example, homomorphic multiplication for a depth-20 setting can be executed in 62 ms on a modern server system, which is already practical for some outsourced-computing applications. Our algorithmic improvements can also be applied to other scale-invariant homomorphic encryption schemes, such as YASHE.
No-signaling Linear PCPs
Sun, 11/18/2018 - 21:52
In this paper, we give a no-signaling linear probabilistically checkable proof (PCP) system for polynomial-time deterministic computation, i.e., a PCP system for P such that (1) the PCP oracle is a linear function and (2) the soundness holds against any (computational) no-signaling cheating prover, who is allowed to answer each query according to a distribution that depends on the entire query set in a certain way. To the best of our knowledge, our construction is the first PCP system that satisfies these two properties simultaneously. As an application of our PCP system, we obtain a 2-message scheme for delegating computation by using a known transformation. Compared with existing 2-message delegation schemes based on standard cryptographic assumptions, our scheme requires preprocessing but has a simpler structure and makes use of different (possibly cheaper) standard cryptographic primitives, namely additive/multiplicative homomorphic encryption schemes.
Cryptanalysis of the Wave Signature Scheme
Fri, 11/16/2018 - 14:37
In this paper, we cryptanalyze the signature scheme Wave, which has recently appeared as a preprint. First, we show that there is a severe information leakage occurring from honestly-generated signatures. Then, we illustrate how to exploit this leakage to retrieve an alternative private key, which enables efficiently forging signatures for arbitrary messages. Our attack manages to break the proposed 128-bit secure Wave parameters in just over a minute, most of which is actually spent collecting genuine signatures. Finally, we explain how our attack applies to generalized versions of the scheme which could potentially be achieved using generalized admissible $(U,U+V)$ codes and larger field characteristics.
Minting Mechanisms for Blockchain -- or -- Moving from Cryptoassets to Cryptocurrencies
Fri, 11/16/2018 - 14:20
Permissionless blockchain systems, such as Bitcoin, rely on users using their computational power to solve a puzzle in order to achieve a consensus. To incentivise users in maintaining the system, newly minted coins are assigned to the user who solves this puzzle. A hardware race that has hence ensued among the users, has had a detrimental impact on the environment, with enormous energy consumption and increased global carbon footprint. On the other hand, proof of stake systems incentivise coin hoarding as players maximise their utility by holding their stakes. As a result, existing cryptocurrencies do not mimic the day-to-day usability of a fiat currency, but are rather regarded as cryptoassets or investment vectors. In this work we initiate the study of minting mechanisms in cryptocurrencies as a primitive on its own right, and as a solution to prevent coin hoarding we propose a novel minting mechanism based on waiting-time first-price auctions. Our main technical tool is a protocol to run an auction over any blockchain. Moreover, our protocol is the first to securely implement an auction without requiring a semi-trusted party, i.e., where every miner in the network is a potential bidder. Our approach is generically applicable and we show that it is incentive-compatible with the underlying blockchain, i.e., the best strategy for a player is to behave honestly. Our proof-of-concept implementation shows that our system is efficient and scales to tens of thousands of bidders.
Lightweight Circuits with Shift and Swap
Fri, 11/16/2018 - 09:39
In CHES 2017, Moradi et al. presented a paper on Bit-Sliding'' in which the authors proposed lightweight constructions for SPN based block ciphers like AES, Present and SKINNY. The main idea behind these constructions was to reduce the length of the datapath to 1 bit and to reformulate the linear layer for these ciphers so that they require fewer scan flip-flops (which have built-in multiplexer functionality and so larger in area as compared to a simple flip-flop). In this paper we take the idea forward: is it possible to construct the linear layer using only 2 scan flip-flops? Take the case of Present: in the language of mathematics, the above question translates to: can the Present permutation be generated by some ordered composition only two types of permutations? The question can be answered in the affirmative by drawing upon the theory of permutation groups. However straightforward constructions would require that the ordered composition'' consist of a large number of simpler permutations. This would naturally take a large number of clock cycles to execute in a flip-flop array having only two scan flip-flops and thus incur heavy loss of throughput. In this paper we try to analyze SPN ciphers like Present and Gift that have a bit permutation as their linear layer. We tried to construct the linear layer of the cipher using as little clock cycles as possible. As an outcome we propose smallest known constructions for Present and Gift block ciphers for both encryption and combined encryption+decryption functionalities. We extend the above ideas to propose the first known construction of the Flip stream cipher.
Private Function Evaluation with Cards
Fri, 11/16/2018 - 09:39
Card-based protocols allow to evaluate an arbitrary fixed Boolean function $f$ on a hidden input to obtain a hidden output, without the executer learning anything about either of the two (e.g. Crépeau and Kilian, CRYPTO 1993). We explore the case where $f$ implements a universal function, i.e. $f$ is given the encoding $\langle P\rangle$ of a program $P$ and an input $x$ and computes $f(\langle P\rangle, x) = P(x)$. More concretely, we consider universal circuits, Turing machines, RAM machines, and branching programs, giving secure and conceptually simple card-based protocols in each case. We argue that card-based cryptography can be performed in a setting that is only very weakly interactive, which we call the “surveillance” model. Here, when Alice executes a protocol on the cards, the only task of Bob is to watch that Alice does not illegitimately turn over cards and that she shuffles in a way that nobody knows anything about the total permutation applied to the cards. We believe that because of this very limited interaction, our results can be called program obfuscation. As a tool, we develop a useful sub-protocol $\mathsf{sort}_{\Pi}X\mathop{\uparrow}Y$ that couples the two equal-length sequences $X, Y$ and jointly and obliviously permutes them with the permutation $\pi\in\Pi$ that lexicographically minimizes $\pi(X)$. We argue that this generalizes ideas present in many existing card-based protocols. In fact, AND, XOR, bit copy (Mizuki and Sone, FAW 2009), coupled rotation shuffles (Koch and Walzer, ePrint 2017) and the “permutation division” protocol of (Hashimoto et al., ICITS 2017) can all be expressed as “coupled sort protocols”.
DEXON: A Highly Scalable, Decentralized DAG-Based Consensus Algorithm
Fri, 11/16/2018 - 09:38
A blockchain system is a replicated state machine that must be fault tolerant. When designing a blockchain system, there is usually a trade-off between decentralization, scalability, and security. In this paper, we propose a novel blockchain system, DEXON, which achieves high scalability while remaining decentralized and robust in the real-world environment. We have two main contributions. First, we present a highly scalable sharding framework for blockchain. This framework takes an arbitrary number of single chains and transforms them into the blocklattice data structure, enabling high scalability and low transaction confirmation latency with asymptotically optimal communication overhead. Second, we propose a single-chain protocol based on our novel verifiable random function and a new Byzantine agreement that achieves high decentralization and low latency.
Faster SeaSign signatures through improved rejection sampling
Fri, 11/16/2018 - 09:34
We speed up the isogeny-based SeaSign'' signature scheme recently proposed by De Feo and Galbraith. The core idea in SeaSign is to apply the Fiat–Shamir with aborts'' transform to the parallel repeated execution of an identification scheme based on CSIDH. We optimize this general transform by allowing the prover to not answer a limited number of said parallel executions, thereby lowering the overall probability of rejection. The performance improvement ranges between factors of approximately 4.4 and 65.7 for various instantiations of the scheme, at the expense of roughly doubling the signature sizes.
|
{}
|
# Definition:Sign of Ordered Tuple
## Definition
Let $n \in \N$ be a natural number such that $n > 1$.
Let $\tuple {x_1, x_2, \ldots, x_n}$ be an ordered $n$-tuple of real numbers.
Let $\map {\Delta_n} {x_1, x_2, \ldots, x_n}$ be the product of differences of $\tuple {x_1, x_2, \ldots, x_n}$:
$\displaystyle \map {\Delta_n} {x_1, x_2, \ldots, x_n} = \prod_{1 \mathop \le i \mathop < j \mathop \le n} \paren {x_i - x_j}$
The sign of $\tuple {x_1, x_2, \ldots, x_n}$ is defined and denoted as:
$\map \epsilon {x_1, x_2, \ldots, x_n} := \map \sgn {\Delta_n}$
where $\sgn$ denotes the signum function.
That is:
$\displaystyle \map \epsilon {x_1, x_2, \ldots, x_n} := \map \sgn {\prod_{1 \mathop \le i \mathop < j \mathop \le n} \paren {x_i - x_j} }$
where:
$\map \sgn \pi := \sqbrk {x > 0} - \sqbrk {x < 0}$
$\sqbrk {x > 0}$ etc. is Iverson's convention.
## Also denoted as
Some sources use $\map \sgn {x_1, x_2, \ldots, x_n}$.
|
{}
|
# §14.18 Sums
## §14.18(i) Expansion Theorem
For expansions of arbitrary functions in series of Legendre polynomials see §18.18(i), and for expansions of arbitrary functions in series of associated Legendre functions see Schäfke (1961b).
|
{}
|
# PHPUnit: faster and better unit tests with pcov
When using PHPUnit there are different ways to create a code coverage report. By default, XDebug is used. But as mention on different sites, XDebug is very slow and the generation of a code coverage report might take several minutes for big projects.
## phpdbg
To speed up things, phpdbg can be used. This significantly speeds up unit tests and code coverage. Phpdbg can be used as follows:
phpdbg -qrr ./vendor/bin/phpunit --coverage-text --colors=never
But there are different problems with the code coverage report of phpdbg. For example phpdbg does not cover a case line of a switch statement:
## pcov
A better solution to cover this is pcov, which can be installed using pecl:
pecl install pcov
To create a code coverage report, phpunit can be called with:
php -dpcov.enabled=1 -dpcov.directory=. ./vendor/bin/phpunit --coverage-text
To exclude a directory, the following parameter can be used:
php -dpcov.enabled=1 -dpcov.directory=. -dpcov.exclude="~vendor~" ./vendor/bin/phpunit --coverage-text
I don’t know if it’s really a problem with pcov, but: with pcov installed, it is not possible to use phpdbg anymore!
|
{}
|
### 数学代写|离散数学作业代写discrete mathematics代考|Analysis
statistics-lab™ 为您的留学生涯保驾护航 在代写离散数学discrete mathematics方面已经树立了自己的口碑, 保证靠谱, 高质且原创的统计Statistics代写服务。我们的专家在代写离散数学discrete mathematics代写方面经验极为丰富,各种代写离散数学discrete mathematics相关的作业也就用不着说。
• Statistical Inference 统计推断
• Statistical Computing 统计计算
• (Generalized) Linear Models 广义线性模型
• Statistical Machine Learning 统计机器学习
• Longitudinal Data Analysis 纵向数据分析
• Foundations of Data Science 数据科学基础
## 数学代写|离散数学作业代写discrete mathematics代考|Analysis
We start by reminding the reader of the definition of a metric.
Definition $1.5$ (Metric, Triangle inequality). Let $X$ be a set. We say that $d: X \times X \rightarrow \mathbb{R}$ is a metric on $X$ if the following are satisfied for all $x, y, z \in X$ :
(i) $d(x, y) \geq 0$;
(ii) $d(x, y)=0$ if and only if $x=y$;
(iii) $d(x, y)=d(y, x)$;
(iv) $d(x, y) \leq d(x, z)+d(z, y)$ (this is referred to as the triangle inequality).
We will have the need to appeal to Fekete’s Lemma, which is quite useful for many combinatorial functions, not just in Ramsey theory.
Lemma 1.6 (Fekete’s Lemma). For any sequence of real numbers $\left{s_{i}\right}_{i=1}^{\infty}$, if either (i) $s_{i+j} \leq s_{i}+s_{j}$ for all $i, j \in \mathbb{Z}^{+}$or (ii) $s_{i+j} \geq s_{i}+s_{j}$ for all $i, j \in \mathbb{Z}^{+}$, then
$$\lim {n \rightarrow \infty} \frac{s{n}}{n}$$
exists and equals $\inf {n} \frac{s{n}}{n}$ if (i) is satisfied; it equals $\sup {n} \frac{s{n}}{n}$ if (ii) is satisfied.
An easy corollary of Fekete’s Lemma is also useful (and is often referred to as Fekete’s Lemma, too).
Corollary 1.7. For any sequence of real numbers $\left{s_{i}\right}_{i=1}^{\infty}$, if either (i) $s_{i+j} \leq$ $s_{i}, s_{j}$ for all $i, j \in \mathbb{Z}^{+}$or (ii) $s_{i+j} \geq s_{i} \cdot s_{j}$ for all $i, j \in \mathbb{Z}^{+}$, then
$$\lim {n \rightarrow \infty}\left(s{n}\right)^{\frac{1}{n}}$$
exists and equals $\inf {n}\left(s{n}\right)^{\frac{1}{n}}$ if (i) is satisfied; it equals $\sup {n}\left(s{n}\right)^{\frac{1}{n}}$ if (ii) is satisfied.
## 数学代写|离散数学作业代写discrete mathematics代考|Probability
The probability we use is basic. Recall that if $E$ and $F$ are independent events, then
$$\mathbb{P}(E \cap F)=\mathbb{P}(E) \cdot \mathbb{P}(F),$$
but that for general events, we have
$$\mathbb{P}(E \cap F)=\mathbb{P}(E) \cdot \mathbb{P}(F \mid E)$$
If $E$ and $F$ are mutually exclusive, i.e., $E \cap F=\emptyset$, then
$$\mathbb{P}(E \sqcup F)=\mathbb{P}(E)+\mathbb{P}(F)$$
while for general events, we have
$$\mathbb{P}(E \cup F)=\mathbb{P}(E)+\mathbb{P}(F)-\mathbb{P}(E \cap F)$$
We will also use expectation of a random variable. If $X$ is a random variable taking on possible values $x_{1}, x_{2}, \ldots$, then
$$\mathbb{E}(X)=\sum_{i} x_{i} \mathbb{P}\left(X=x_{i}\right)$$
We will often use indicator random variables (i.e., Bernoulli random variable, which take on values of 0 and 1 only). For any indicator random variable $X$, we have
$$\mathbb{E}(X)=\mathbb{P}(X=1),$$
since $\mathbb{E}(X)=0 \cdot \mathbb{P}(X=0)+1 \cdot \mathbb{P}(X=1)$.
We will almost exclusively be dealing with finite sample spaces that have equally likely outcomes so that when we randomly choose an element from a sample space with $n$ elements, the probability of choosing that element is $\frac{1}{n}$.
## 数学代写|离散数学作业代写discrete mathematics代考|Algebra
We will use some linear algebra but will remind the reader of the relevant facts as needed.
Our main reminder regarding abstract algebra is for what occurs in Sections $3.3 .3$ and $7.1$, where we use the coset decompositions of groups. For completeness, let $H$ be a subgroup of group $G$. Then a (left) coset of $H$ in $G$ has form
$$a H={a h: h \in H}$$
where $a \in G$. As far as cosets are concerned, we will only be using left cosets (and, mostly, our groups will be Abelian so that the left/right distinction is immaterial). By Lagrange’s Theorem, we know that every coset of $H$ has the
same number of elements, namely $|H|$, and that no two cosets of $H$ have non-empty intersection. It follows that the number of cosets of $H$ in $G$ is
$$|G: H|=\frac{|G|}{|H|} .$$
We will also be using group actions; that is, if $G$ is a group and $S$ is a set, we use $*: G \times S \rightarrow S$ (akin to a binary operation). Applying group actions, we will be using the concepts of orbits and stabilizers, defined next.
Definition $1.10$ (Orbit). Let $*$ be a group action on set $S$ by group $G$. For $s \in S$, the orbit of $s$ is
$$\mathcal{O}{s}={t \in S: g * s=t \text { for some } g \in G} .$$ Definition $1.11$ (Stabilizer). Let * be a group action on set $S$ by group $G$. For $s \in S$, the stabilizer of $s$ is $$G{s}={g \in G: g * s=s} .$$
In Exercise 1.17, the reader is asked to prove that $G_{s}$ is a subgroup of $G$.
In Section 3.3.3, we will be appealing to the Orbit-Stabilizer Theorem:
Theorem $1.12$ (Orbit-Stabilizer Theorem). Let $G$ be a finite group acting on a finite set $S$. Then
$$\left|\mathcal{O}{s}\right| \cdot\left|G{s}\right|=|G|$$
for any $s \in S$.
## 数学代写|离散数学作业代写discrete mathematics代考|Analysis
(二)d(X,是)=0当且仅当X=是;
㈢d(X,是)=d(是,X);
(四)d(X,是)≤d(X,和)+d(和,是)(这被称为三角不等式)。
Fekete 引理的一个简单推论也是有用的(通常也被称为 Fekete 引理)。
|G:H|=|G||H|.
|这s|⋅|Gs|=|G|
## 有限元方法代写
tatistics-lab作为专业的留学生服务机构,多年来已为美国、英国、加拿大、澳洲等留学热门地的学生提供专业的学术服务,包括但不限于Essay代写,Assignment代写,Dissertation代写,Report代写,小组作业代写,Proposal代写,Paper代写,Presentation代写,计算机作业代写,论文修改和润色,网课代做,exam代考等等。写作范围涵盖高中,本科,研究生等海外留学全阶段,辐射金融,经济学,会计学,审计学,管理学等全球99%专业科目。写作团队既有专业英语母语作者,也有海外名校硕博留学生,每位写作老师都拥有过硬的语言能力,专业的学科背景和学术写作经验。我们承诺100%原创,100%专业,100%准时,100%满意。
## MATLAB代写
MATLAB 是一种用于技术计算的高性能语言。它将计算、可视化和编程集成在一个易于使用的环境中,其中问题和解决方案以熟悉的数学符号表示。典型用途包括:数学和计算算法开发建模、仿真和原型制作数据分析、探索和可视化科学和工程图形应用程序开发,包括图形用户界面构建MATLAB 是一个交互式系统,其基本数据元素是一个不需要维度的数组。这使您可以解决许多技术计算问题,尤其是那些具有矩阵和向量公式的问题,而只需用 C 或 Fortran 等标量非交互式语言编写程序所需的时间的一小部分。MATLAB 名称代表矩阵实验室。MATLAB 最初的编写目的是提供对由 LINPACK 和 EISPACK 项目开发的矩阵软件的轻松访问,这两个项目共同代表了矩阵计算软件的最新技术。MATLAB 经过多年的发展,得到了许多用户的投入。在大学环境中,它是数学、工程和科学入门和高级课程的标准教学工具。在工业领域,MATLAB 是高效研究、开发和分析的首选工具。MATLAB 具有一系列称为工具箱的特定于应用程序的解决方案。对于大多数 MATLAB 用户来说非常重要,工具箱允许您学习应用专业技术。工具箱是 MATLAB 函数(M 文件)的综合集合,可扩展 MATLAB 环境以解决特定类别的问题。可用工具箱的领域包括信号处理、控制系统、神经网络、模糊逻辑、小波、仿真等。
|
{}
|
## Featured resource
### Check the Clues
Cooperative group problem solving is a deductive reasoning activity where the solution cannot be found without everyone’s contribution. This approach has the potential to benefit all students in a class, both mathematically and socially.
Members: $25.00 inc.GST Others:$ 31.25 inc.GST
# Junior secondary activities
The activities for the junior secondary students are globally based.
A number of the comparisons are made with Malawi, which is a developing country in the southeast of Africa.
The Your place in the world activities involve creating graphical displays about the numbers of refugees and their source and destination countries, and the eight most significant Australian agricultural commodities for local use and export. The data is presented in a variety of forms, such as tables, maps, comparative column graphs and 100% stacked bar graphs. Students perform a number of percentage calculations.
The About our world activity uses spreadsheets as a tool to explore the effects of changes to global temperatures. Secondary data is used extensively and the emphasis is on its interpretation.
## Your place in the world: Australia’s agriculture
There are many percentage applications in this activity, which uses data about Australian agricultural products. A possible extension involves drawing a 100% stacked bar graph comparing Australian use with exports.
## Your place in the world: Refugees worldwide
The most recent complete data from 2011 allows students to investigate the numbers of refugees, source countries and destination countries of refugees, and some destinations of the majority of asylum seekers.
## About our world: Climate and global warming
The two activities use spreadsheets to model global temperatures since the 1950s and allow students to investigate the possible effects of global warming.
|
{}
|
# (RE: New UAS regulations) Overeach - thy name is government...(drones)
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
Seems the DOT are taking their our toys rather seriously!
http://www.faa.gov/uas/media/RIN_2120-AJ60_Clean_Signed.pdf
The new operator requirements are particularly 'amusing'!
Curious that (second only to possession/use of firearms) private transportation devices (and, hence, autonomous mobility) so frighten our elected officials keepers
Best regards
HP
PS @Aleph(0) -- I can but hope this thread's title meets with your approval...?
#### Aleph(0)
Joined Mar 14, 2015
597
PS @Aleph(0) -- I can but hope this thread's title meets with your approval...?
HP I say title is good but content is a little lame Cuz you're not noticing part 107 applies just to commercial uas operation! HP it's an easy mistake you made cuz they hide non hobby stipulator pretty well But I am telling you accurate truth! You know my livelihood keeps me like totally wed to FAA so just trust me! Now I'm saying ur right about government confusing servant and master!
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
HP I say title is good but content is a little lame
And I would say your powers of clairvoyance leave much to be desired -- Indeed you missed my point entirely! Commercial or pleasure said 'conveyances' are TOYS! -- I wonder who'll be the first UAS 'aviator' to be 'pinched' for intoxicated operation? --- IIRC the federal limit = .04% (i.e. 1/25 of 1%) said low concentration being a rather high bar
All the best
HP
#### jpanhalt
Joined Jan 18, 2008
11,088
HP I say title is good but content is a little lame Cuz you're not noticing part 107 applies just to commercial uas operation! HP it's an easy mistake you made cuz they hide non hobby stipulator pretty well But I am telling you accurate truth! You know my livelihood keeps me like totally wed to FAA so just trust me! Now I'm saying ur right about government confusing servant and master!
I think the concern is that originally the FAA wanted its rules to apply to all operators of anything that flew and was controlled from the ground. That would, of course, include control-line flying, but not free flight! And, it would include the burgeoning hobby of indoor RC flying. There was a lot of political lobbying by the hobbyist group(s), but that group's pockets are not very deep. The US Senate passed an amendment to a reauthorization bill to exempt recreational pilots, but I believe it has been tied up in joint conference committee. Eventually, the FAA stepped back from that original position.
However, there is deep suspicion that the current Final Rule is just the camel's nose under the tent, so to speak. The FAA has said that its sole intent is to ensure "safety." Are there any other activities in which professional participants are known to be more dangerous than recreational participants? How does one make a link between getting paid and being unsafe?
It seems more likely that the FAA has made that distinction at this time, because it was the easier path to take given the number of hobbyists versus commercial users. Also, addressing only commercial users now makes it easier to enact increased user fees to fund the enforcement bureaucracy. I suspect that once the dust settles on this initial Rule, the FAA will continue to publicize incidents and the public will see that irresponsible recreational users account for most of them. At that time, recreational users will be enrolled in the program.
The major problem I have with the FAA rules is their scope of coverage. Not a single member of our modeling club has objected to the need to regulate FPV (first-person view) without a safety pilot in visual contact at all time. In other words, if you put a camera on your airplane and just take pictures of the local filed for fun, it is no more unsafe than flying without a camera. However, if one is flying solely by use of the camera (visualize a military drone), there are obviously greater risks. The problem is that the FAA has lumped FPV without a safety pilot with all other types of RC flying.
Regards, John
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
However, there is deep suspicion that the current Final Rule is just the camel's nose under the tent, so to speak. The FAA has said that its sole intent is to ensure "safety." Are there any other activities in which professional participants are known to be more dangerous than recreational participants? How does one make a link between getting paid and being unsafe?
My guess (hope) is that this matter will eventually 'shake out' in a fashion similar to amateur radio regulation (given a return to less radical central government, that is). Dubious claims of 'public protection' being but the reliable (even if rather predictable) 'foot in the door' anywhere the easily dominated, absolutely gullible, myopic (I daresay even 'kalnienk-minded') 'general public' are concerned
Regulators 'follow the money' and long experience has taught them that 'private individuals' will readily (if begrudgingly) pay - so long as they needn't 'jump through' too many 'hoops' -- Consider, for instance, (again, with reference to communications) the 'Type Acceptance' exemption and greatly eased/streamlined 'RF exposure evaluation' requirements applicable to Amateur Radio -- Granted! The ARRL is significantly more 'formidable' than model aircraft advocacy groups -- even so, I feel my illustration 'holds' -- especially in light of the veritable 'goldmine' in fees 'wrestable' from the burgeoning interest in the latter...
Best regards
HP
Last edited:
#### JohnInTX
Joined Jun 26, 2012
4,694
@Hypatia's Protege
Thanks for the report.
The thread title was modified by a moderator. It wasn't me so I don't know what the original title was. Hopefully, it meets with your approval.
#### Hypatia's Protege
Joined Mar 1, 2015
3,226
The thread title was modified by a moderator.
I don't know what the original title was.
The modification consists merely of addition of the string "(drones)" to the right of the original title...
Hopefully, it meets with your approval.
Indeed it does! -- Inasmuch as 'drones' is likely more familiar than 'UAS', the modification likely increased the thread's readership!
Many sincere thanks!
HP
#12
#### BR-549
Joined Sep 22, 2013
4,928
#### nsaspook
Joined Aug 27, 2009
10,426
Good but what about the nearly 700,000 recreational drone owners in the U.S. database and millions in registration fees? Do you think that data will just vanish?
Asked whether the FAA intends to refund the more than $3m in registration fees it has presumably collected, the agency's spokesperson declined to comment. https://www.cadc.uscourts.gov/inter...20585258125004FBC13/$file/15-1495-1675918.pdf
Model aircraft owners who do not register face civil or criminal monetary penalties and up to three years’ imprisonment
Typical government reinforcement of arbitrary rules. Guns and Prisons. The FAA should face civil or criminal monetary penalties for unconstitutional actions.
Last edited:
#### #12
Joined Nov 30, 2010
18,223
I was sure I saw the "toy" exception recently, but I couldn't find it quickly. Thanks to BR-549
#### Aleph(0)
Joined Mar 14, 2015
597
That's a huge step in right direction but I'm saying as one professionally and privately involved in general aviation in US and Canada that is time for US to give non-commercial aviation over to states! Federal regulation of private transportation it just Feds keeping hand on strings! So I don't have as big a problem with Fed Govt regulation of commercial service cuz that's govt vs business all the same as railroad traffic. So big business and Govt are opposite sides of wooden nickle which need to moderate each other!
So anyhow it's good to know at least the toys are just toys again!
Also in case anybody thinks I'm saying double standard abt US and Canada it's just that Canadian system is basically single state with provinces territories and districts as just administrative divisions which are not semi-soverign like US states so like _state's rights_ is just not applicable here
|
{}
|
## Jimmy Anderson & Moeen split hairs in England cricket team Beard Index
Posted in Beards, Cricket with tags , , on July 30, 2014 by telescoper
Important poll on the Beard Index for England’s cricketers..
My own vote went to Jimmy Anderson, a remark on whose performance yesterday by me on Twitter also led to me featuring on the BBC Sports Website:
Today is the 4th Day and England have just declared on 205-4, leaving India to score 445 to win in approximately 132 overs…
..and India close on 112-4. The ball is starting to turn and with another 331 to win off 90 overs (3.67 an over) the odds are firmly on England’s side.
Originally posted on Kmflett's Blog:
Beard Liberation Front
Press release 29th July contact Keith Flett 07803 167266
Jimmy Anderson & Moeen split hairs in England Cricket Team Beard Index
The Beard Liberation Front, the informal network of beard wearers, has issued an update to its England cricket Beard Index which shows Moeen Ali and Jimmy Anderson tied with Ian Bell and Alastair Cook moving up the rankings
Hirsute England players have only recently been a significant factor in the team’s performance but the campaigners say that facial hair on the pitch can have several, sometimes combined, impacts:
1] Beards can add gravitas and presence. Moeen is known as ‘the beard that’s feared’
2] Beards can influence aerodynamics both with bat and ball as a movement of the facial hair can cause subtle changes to air currents
Beard Index [combining factors 1 & 2] out of 10
Moeen 9
Anderson 9
Bell 6
Cook 6
View original 40 more words
## Politics, Polls and Insignificance
Posted in Bad Statistics, Politics with tags , , , , , on July 29, 2014 by telescoper
In between various tasks I had a look at the news and saw a story about opinion polls that encouraged me to make another quick contribution to my bad statistics folder.
The piece concerned (in the Independent) includes the following statement:
A ComRes survey for The Independent shows that the Conservatives have dropped to 27 per cent, their lowest in a poll for this newspaper since the 2010 election. The party is down three points on last month, while Labour, now on 33 per cent, is up one point. Ukip is down one point to 17 per cent, with the Liberal Democrats up one point to eight per cent and the Green Party up two points to seven per cent.
The link added to ComRes is mine; the full survey can be found here. Unfortunately, the report, as is sadly almost always the case in surveys of this kind, neglects any mention of the statistical uncertainty in the poll. In fact the last point is based on a telephone poll of a sample of just 1001 respondents. Suppose the fraction of the population having the intention to vote for a particular party is $p$. For a sample of size $n$ with $x$ respondents indicating that they hen one can straightforwardly estimate $p \simeq x/n$. So far so good, as long as there is no bias induced by the form of the question asked nor in the selection of the sample, which for a telephone poll is doubtful.
A little bit of mathematics involving the binomial distribution yields an answer for the uncertainty in this estimate of p in terms of the sampling error:
$\sigma = \sqrt{\frac{p(1-p)}{n}}$
For the sample size given, and a value $p \simeq 0.33$ this amounts to a standard error of about 1.5%. About 95% of samples drawn from a population in which the true fraction is $p$ will yield an estimate within $p \pm 2\sigma$, i.e. within about 3% of the true figure. In other words the typical variation between two samples drawn from the same underlying population is about 3%.
If you don’t believe my calculation then you could use ComRes’ own “margin of error calculator“. The UK electorate as of 2012 numbered 46,353,900 and a sample size of 1001 returns a margin of error of 3.1%. This figure is not quoted in the report however.
Looking at the figures quoted in the report will tell you that all of the changes reported since last month’s poll are within the sampling uncertainty and are therefore consistent with no change at all in underlying voting intentions over this period.
A summary of the report posted elsewhere states:
A ComRes survey for the Independent shows that Labour have jumped one point to 33 per cent in opinion ratings, with the Conservatives dropping to 27 per cent – their lowest support since the 2010 election.
No! There’s no evidence of support for Labour having “jumped one point”, even if you could describe such a marginal change as a “jump” in the first place.
Statistical illiteracy is as widespread amongst politicians as it is amongst journalists, but the fact that silly reports like this are commonplace doesn’t make them any less annoying. After all, the idea of sampling uncertainty isn’t all that difficult to understand. Is it?
And with so many more important things going on in the world that deserve better press coverage than they are getting, why does a “quality” newspaper waste its valuable column inches on this sort of twaddle?
## In Thunder, Lightning and in Rain..
Posted in Biographical with tags , , , , on July 28, 2014 by telescoper
A while before 6am this morning I was woken up by the sound of fairly distant thunder to the West of my flat. I left the windows open – they’ve been open all the time in this hot weather – and dozed while rumblings continued. Just after six there was a terrifically bright flash and an instantaneous bang that set car alarms off in my street; lightning must have struck a building very close. Then the rain arrived. I got up to close the windows against the torrential downpour, at which point I noticed that water was coming in through the ceiling. A further inspection revealed another leak in the cupboard where the boiler lives and another which had water dripping from a light fitting. A frantic half hour with buckets and mops followed, but I had to leave to get to work so I just left buckets under the drips and off I went into the deluge to get soaked.
Here is the map of UK rain at 07:45 am, with Brighton in the thick of it:
I made it up to campus (wet and late); it’s still raining but hopefully will settle down soon. This is certainly turning into a summer of extremes!
## Demolition at Didcot
Posted in Uncategorized with tags , , on July 27, 2014 by telescoper
As someone who has spent his fair share of time traveling backwards and forwards on the First Great Western railway line between Cardiff (or Swindon) and London, it seems appropriate to note that the environs of Didcot Parkway station (which lies on the main line) will look rather different next time I do that journey. In the early hours of this morning, three of the six enormous cooling towers came tumbling down:
I gather the other three are also scheduled for demolition, although I doubt I’ll be able to attend that event in person either!
## Night hath no wings
Posted in Poetry on July 27, 2014 by telescoper
Night hath no wings to him that cannot sleep;
And Time seems then not for to fly, but creep;
Slowly her chariot drives, as if that she
Had broke her wheel, or crack’d her axletree.
Just so it is with me, who list’ning, pray
The winds to blow the tedious night away,
That I might see the cheerful peeping day.
Sick is my heart; O Saviour! do Thou please
To make my bed soft in my sicknesses;
Lighten my candle, so that I beneath
Sleep not for ever in the vaults of death;
Let me thy voice betimes i’ th’ morning hear;
Call, and I’ll come; say Thou the when and where:
Draw me but first, and after Thee I’ll run,
And make no one stop till my race be done.
by Robert Herrick (1591-1674)
## What is science and why should we care? — Part III
Posted in Politics, The Universe and Stuff with tags , on July 26, 2014 by telescoper
Interesting post, one of a series about the Philosophy of science by Alan Sokal (of the famous hoax). The other posts in the series are well worth reading, too…
Originally posted on Scientia Salon:
by Alan Sokal
In all the examples discussed so far I have been at pains to distinguish clearly between factual matters and ethical or aesthetic matters, because the epistemological issues they raise are so different. And I have restricted my discussion almost entirely to factual matters, simply because of the limitations of my own competence.
But if I am preoccupied by the relation between belief and evidence, it is not solely for intellectual reasons — not solely because I’m a “grumpy old fart who aspire[s] to the sullen joy of having it known that [I] don’t suffer fools gladly” [18] (to borrow the words of my friend and fellow gadfly Norm Levitt, who died suddenly four years ago at the young age of 66). Rather, my concern that public debate be grounded in the best available evidence is, above all else, ethical.
To illustrate the connection I have in mind…
View original 1,528 more words
## The Expert
Posted in Uncategorized on July 25, 2014 by telescoper
Brilliant sketch about the difficulty of fitting into the corporate world when you actually know things about stuff:
|
{}
|
### A Static Slicing Method for Functional Programs and Its Incremental Version
Prasanna Kumar K., Amitabha Sanyal, Amey KarkareSaswat Padhi
Proceedings of the 28 th International Conference on Compiler Construction, 2019
⟨ CC 2019 ⟩
###### Abstract
An effective static slicing technique for functional programs must have two features. Its handling of function calls must be context sensitive without being inefficient, and, because of the widespread use of algebraic datatypes, it must take into account structure transmitted dependences. It has been shown that any analysis that combines these two characteristics is undecidable, and existing slicing methods drop one or the other. We propose a slicing method that only weakens (and not entirely drop) the requirement of contextsensitivity and that too for some and not all programs.
We then consider applications that require the same program to be sliced with respect to several slicing criteria. We propose an incremental version of our slicing method to handle such situations efficiently. The incremental version consists of a one time precomputation step that uses the non-incremental version to slice the program with respect to a fixed default slicing criterion and processes the results to a canonical form. Presented with a slicing criterion, a low-cost incremental step uses the results of the precomputation to obtain the slice.
Our experiments with a prototype incremental slicer confirms the expected benefits — the cost of incremental slicing, even when amortized over only a few slicing criteria, is much lower than the cost of non-incremental slicing.
###### BibTeX Citation
@inproceedings{cc19/kumar/slicing,
title = {A Static Slicing Method for Functional Programs and Its Incremental Version},
author = {Prasanna Kumar K. and
Amitabha Sanyal and
Amey Karkare and
|
{}
|
# Relational Algebra with only one operator?
There's a parlour game of inventing exotic operators for Relational Algebra, and thereby reducing the number of operators needed to be 'Relationally Complete'. A popular operator for this is 'Inner Union' aka SQL's UNION CORRESPONDING.
I've just bumped into a single-operator basis for FOL, due to Schönfinkel. It's a combo of Sheffer stroke (written infix |) and existential quant (with the bound var superscripted).
P(x) |x Q(x) ≡ ¬∃x.(P(x)∧Q(x))
Q 1. Could there be a Relational Operator corresponding to that?
Q 2. If so, does that mean there could be a version of Relational Algebra with only one operator?
Q 3. If not, in what sense is Codd's 1972 set "complete"?
My thoughts so far:
Q 1. No. The FOL ∧ corresponds OK to RA ⋈ (Natural Join). The ∃ corresponds OK to 'Remove' aka project-away, sometimes written π-hat. But RA can only express correspondence to negation when ¬ is nested inside ∧. I.e. FOL P(x) ∧ ¬Q(x) corresponds to RA P MINUS Q. Whereas this single FOL operator has ¬ at outer level (i.e. absolute complement, not relative).
The reason Codd doesn't allow absolute complement is it makes queries 'unsafe', that is domain-dependent.
Q 2. Then no. Supplementary q: it's well known Codd omitted RENAME/ρ from his original set. Rename is needed to translate a FOL expression using = between variables:
∃x. P(x) ∧ (x = y) -- corresponds to
ρ{y := x }(P) -- relation P with attrib x
Presumably Schönfinkel's operator doesn't avoid the need for =(?).
Q 3. Then how does Codd's original RA express an equivalent to a FOL expression with outermost ¬? Or outermost ∀, which is the same thing:
∀y.Q(y)≡¬∃y.¬Q(y)
|
{}
|
• # End Of Year Outcomes 2018/19
### Literacy
The school average percentage (of objectives achieved) at the expected level or above for 2018/19 is 83% per class which is 3% higher than 2017/18.
This total includes reading, writing, speech and language and handwriting.
### Mathematics
The school average percentage (of objectives achieved) at the expected level or above for 2018/19 is 84% per class which is 6% higher than 2017/18.
### Combined Literacy and Mathematics
The average percentage (of objectives achieved) at the expected level or above is therefore 83.5% at the end of 2018/19 which is an increase of 4.5% from the previous year.
All Posts
×
|
{}
|
# using latex macros in the wiki
Submitted by Tinne De Laet on Mon, 2012-01-23 08:57
Hi all,
Apparently there are some possibilities to use latex macro in a mediawiki (see for instance http://www.mediawiki.org/wiki/Extension:WikiTex/Installation).
Can we also use latex macros on the orocos wiki and how can I configure this?
Tinne
### wiki
hi,
I would like to upload a scheme to the wiki, but we have only 10MB of disk space?
The message I got: 'The selected file pr2-comanipulation-app-components.pdf could not be uploaded. The file is 582.62 KB which would exceed your disk quota of 10 MB.'
I'll put it on my own web space, but it would be nice to keep it with the wiki page it belongs, avoiding broken links, no?
Anyway, it looks that we're at the limit of our web space (for attachments to the wiki).
nick
### using latex macros in the wiki
Hi Tinne,
On Mon, Jan 23, 2012 at 9:57 AM, Tinne De Laet <
Tinne [dot] DeLaet [..] ...> wrote:
> Hi all,
>
> Apparently there are some possibilities to use latex macro in a mediawiki
> (see for instance
> http://www.mediawiki.org/wiki/Extension:WikiTex/Installation).
> Can we also use latex macros on the orocos wiki and how can I configure
> this?
>
We're not running official Mediawiki, but a 'DruTex' plugin for the Drupal
Mediawiki emulation. DruTex should be able to cover most needs.
It seemed like this Drutex-enabled mediawiki was not enabled for
book-aka-wiki pages. Can you check again if it works now ?
Peter
### using latex macros in the wiki
On Monday 23 January 2012 17:13:14 Peter Soetens wrote:
> Hi Tinne,
>
> On Mon, Jan 23, 2012 at 9:57 AM, Tinne De Laet <
>
> Tinne [dot] DeLaet [..] ...> wrote:
> > Hi all,
> >
> > Apparently there are some possibilities to use latex macro in a
> > mediawiki
> > (see for instance
> > http://www.mediawiki.org/wiki/Extension:WikiTex/Installation).
> > Can we also use latex macros on the orocos wiki and how can I configure
> > this?
>
> We're not running official Mediawiki, but a 'DruTex' plugin for the Drupal
> Mediawiki emulation. DruTex should be able to cover most needs.
>
> It seemed like this Drutex-enabled mediawiki was not enabled for
> book-aka-wiki pages. Can you check again if it works now ?
I think you misunderstood my question.
I indeed found out that I had to change the input format to Mediawiki + Drutex
to use latex code (using the notation).
I however want to do something more advanced with a macro.tex file.
This macro.tex file is a latex file defining some latex commands
e.g
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% POSITION
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% position: position of point #1 belonging to body #2 with respect to point
#3 belonging to body #4 expressed in orientation frame #5 (leave empty if not
needed)
\newcommand{\Position}[5]
{\textrm{Position\ifthenelse{\equal{#5}{}}{}{Coord}}\left(\fixedTo{\point{#1}}
{#2} , \fixedTo{\point{#3}}{#4} \ifthenelse{\equal{#5}{}}{}{,\orientation{#5}}
\right)}
This helps to use a nice latex notation for our wiki explanations.
So to use this file we should have to make DruTex understand that it has to
include the macro.tex file.
In the url I mentioned before they seem to do something similar for mediawiki
extension WikiTex.
Tinne
### using latex macros in the wiki
On Tue, Jan 24, 2012 at 9:52 AM, Tinne De Laet <
tinne [dot] delaet [..] ...> wrote:
> On Monday 23 January 2012 17:13:14 Peter Soetens wrote:
> > Hi Tinne,
> >
> > On Mon, Jan 23, 2012 at 9:57 AM, Tinne De Laet <
> >
> > Tinne [dot] DeLaet [..] ...> wrote:
> > > Hi all,
> > >
> > > Apparently there are some possibilities to use latex macro in a
> > > mediawiki
> > > (see for instance
> > > http://www.mediawiki.org/wiki/Extension:WikiTex/Installation).
> > > Can we also use latex macros on the orocos wiki and how can I configure
> > > this?
> >
> > We're not running official Mediawiki, but a 'DruTex' plugin for the
> Drupal
> > Mediawiki emulation. DruTex should be able to cover most needs.
> >
> > It seemed like this Drutex-enabled mediawiki was not enabled for
> > book-aka-wiki pages. Can you check again if it works now ?
>
> I think you misunderstood my question.
> I indeed found out that I had to change the input format to Mediawiki +
> Drutex
> to use latex code (using the notation).
> I however want to do something more advanced with a macro.tex file.
> This macro.tex file is a latex file defining some latex commands
> e.g
>
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> % POSITION
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> % position: position of point #1 belonging to body #2 with respect to
> point
> #3 belonging to body #4 expressed in orientation frame #5 (leave empty if
> not
> needed)
> \newcommand{\Position}[5]
>
> {\textrm{Position\ifthenelse{\equal{#5}{}}{}{Coord}}\left(\fixedTo{\point{#1}}
> {#2} , \fixedTo{\point{#3}}{#4}
> \ifthenelse{\equal{#5}{}}{}{,\orientation{#5}}
> \right)}
>
> This helps to use a nice latex notation for our wiki explanations.
>
> So to use this file we should have to make DruTex understand that it has to
> include the macro.tex file.
> In the url I mentioned before they seem to do something similar for
> mediawiki
> extension WikiTex.
>
I copy-pasted the above latex snippet in the latex template on the server
(used behind the scenes to render your code). Does the macro work for you
now ?
You can post a file with all the macros you require and then I can add
these too.
Peter
### using latex macros in the wiki
On Tuesday 24 January 2012 23:35:24 Peter Soetens wrote:
> On Tue, Jan 24, 2012 at 9:52 AM, Tinne De Laet <
>
> tinne [dot] delaet [..] ...> wrote:
> > On Monday 23 January 2012 17:13:14 Peter Soetens wrote:
> > > Hi Tinne,
> > >
> > > On Mon, Jan 23, 2012 at 9:57 AM, Tinne De Laet <
> > >
> > > Tinne [dot] DeLaet [..] ...> wrote:
> > > > Hi all,
> > > >
> > > > Apparently there are some possibilities to use latex macro in a
> > > > mediawiki
> > > > (see for instance
> > > > http://www.mediawiki.org/wiki/Extension:WikiTex/Installation).
> > > > Can we also use latex macros on the orocos wiki and how can I
> > > > configure this?
> > >
> > > We're not running official Mediawiki, but a 'DruTex' plugin for the
> >
> > Drupal
> >
> > > Mediawiki emulation. DruTex should be able to cover most needs.
> > >
> > > It seemed like this Drutex-enabled mediawiki was not enabled for
> > > book-aka-wiki pages. Can you check again if it works now ?
> >
> > I think you misunderstood my question.
> > I indeed found out that I had to change the input format to Mediawiki +
> > Drutex
> > to use latex code (using the notation).
> > I however want to do something more advanced with a macro.tex file.
> > This macro.tex file is a latex file defining some latex commands
> > e.g
> >
> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> > % POSITION
> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> > % position: position of point #1 belonging to body #2 with respect to
> > point
> > #3 belonging to body #4 expressed in orientation frame #5 (leave empty
> > if
> > not
> > needed)
> > \newcommand{\Position}[5]
> >
> > {\textrm{Position\ifthenelse{\equal{#5}{}}{}{Coord}}\left(\fixedTo{\poin
> > t{#1}} {#2} , \fixedTo{\point{#3}}{#4}
> > \ifthenelse{\equal{#5}{}}{}{,\orientation{#5}}
> > \right)}
> >
> > This helps to use a nice latex notation for our wiki explanations.
> >
> > So to use this file we should have to make DruTex understand that it has
> > to include the macro.tex file.
> > In the url I mentioned before they seem to do something similar for
> > mediawiki
> > extension WikiTex.
>
> I copy-pasted the above latex snippet in the latex template on the server
> (used behind the scenes to render your code). Does the macro work for you
> now ?
No.
The result of $\Position{a}{a}{a}{a}{a}$ is just aaaaa.
But this is probably cause since the code snipset uses some other macro
commands that I did not provide.
My macros are attached to this mail.
>
> You can post a file with all the macros you require and then I can add
> these too.
Can I add them myself or do I always have to bother you?
Tinne
### using latex macros in the wiki
On Thu, Jan 26, 2012 at 1:57 PM, Tinne De Laet
<tinne [dot] delaet [..] ...> wrote:
> On Tuesday 24 January 2012 23:35:24 Peter Soetens wrote:
>> On Tue, Jan 24, 2012 at 9:52 AM, Tinne De Laet <
>>
>> tinne [dot] delaet [..] ...> wrote:
>> > On Monday 23 January 2012 17:13:14 Peter Soetens wrote:
>> > > Hi Tinne,
>> > >
>> > > On Mon, Jan 23, 2012 at 9:57 AM, Tinne De Laet <
>> > >
>> > > Tinne [dot] DeLaet [..] ...> wrote:
>> > > > Hi all,
>> > > >
>> > > > Apparently there are some possibilities to use latex macro in a
>> > > > mediawiki
>> > > > (see for instance
>> > > > http://www.mediawiki.org/wiki/Extension:WikiTex/Installation).
>> > > > Can we also use latex macros on the orocos wiki and how can I
>> > > > configure this?
>> > >
>> > > We're not running official Mediawiki, but a 'DruTex' plugin for the
>> >
>> > Drupal
>> >
>> > > Mediawiki emulation. DruTex should be able to cover most needs.
>> > >
>> > > It seemed like this Drutex-enabled mediawiki was not enabled for
>> > > book-aka-wiki pages. Can you check again if it works now ?
>> >
>> > I think you misunderstood my question.
>> > I indeed found out that I had to change the input format to Mediawiki +
>> > Drutex
>> > to use latex code (using the notation).
>> > I however want to do something more advanced with a macro.tex file.
>> > This macro.tex file is a latex file defining some latex commands
>> > e.g
>> >
>> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> > % POSITION
>> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> > % position: position of point #1 belonging to body #2 with respect to
>> > point
>> > #3 belonging to body #4 expressed in orientation frame #5 (leave empty
>> > if
>> > not
>> > needed)
>> > \newcommand{\Position}[5]
>> >
>> > {\textrm{Position\ifthenelse{\equal{#5}{}}{}{Coord}}\left(\fixedTo{\poin
>> > t{#1}} {#2} , \fixedTo{\point{#3}}{#4}
>> > \ifthenelse{\equal{#5}{}}{}{,\orientation{#5}}
>> > \right)}
>> >
>> > This helps to use a nice latex notation for our wiki explanations.
>> >
>> > So to use this file we should have to make DruTex understand that it has
>> > to include the macro.tex file.
>> > In the url I mentioned before they seem to do something similar for
>> > mediawiki
>> > extension WikiTex.
>>
>> I copy-pasted the above latex snippet in the latex template on the server
>> (used behind the scenes to render your code). Does the macro work for you
>> now ?
>
> No.
> The result of $\Position{a}{a}{a}{a}{a}$ is just aaaaa.
> But this is probably cause since the code snipset uses some other macro
> commands that I did not provide.
> My macros are attached to this mail.
>
>
>>
>> You can post a file with all the macros you require and then I can add
>> these too.
> Can I add them myself or do I always have to bother you?
We need to upload them to the server as a file. The current file has
these contents:
\documentclass[10pt,notitlepage]{article}
% good math support
\usepackage{amsmath, amsfonts, amssymb}
% UTF-8 support
\usepackage{ucs}
\usepackage[utf8x]{inputenc}
\pagestyle{empty}
\begin{document}
DRUTEX_REPLACE
\end{document}
I can put your files alongside it. How do you want me to modify the
above code snippet such that your macros/files are correctly included
?
>
> Tinne
Peter
### using latex macros in the wiki
On Thursday 26 January 2012 14:04:48 Peter Soetens wrote:
> On Thu, Jan 26, 2012 at 1:57 PM, Tinne De Laet
>
> <tinne [dot] delaet [..] ...> wrote:
> > On Tuesday 24 January 2012 23:35:24 Peter Soetens wrote:
> >> On Tue, Jan 24, 2012 at 9:52 AM, Tinne De Laet <
> >>
> >> tinne [dot] delaet [..] ...> wrote:
> >> > On Monday 23 January 2012 17:13:14 Peter Soetens wrote:
> >> > > Hi Tinne,
> >> > >
> >> > > On Mon, Jan 23, 2012 at 9:57 AM, Tinne De Laet <
> >> > >
> >> > > Tinne [dot] DeLaet [..] ...> wrote:
> >> > > > Hi all,
> >> > > >
> >> > > > Apparently there are some possibilities to use latex macro
> >> > > > in a
> >> > > > mediawiki
> >> > > > (see for instance
> >> > > > http://www.mediawiki.org/wiki/Extension:WikiTex/Installati
> >> > > > on).
> >> > > > Can we also use latex macros on the orocos wiki and how
> >> > > > can I
> >> > > > configure this?
> >> > >
> >> > > We're not running official Mediawiki, but a 'DruTex' plugin
> >> > > for the
> >> >
> >> > Drupal
> >> >
> >> > > Mediawiki emulation. DruTex should be able to cover most
> >> > > needs.
> >> > >
> >> > > It seemed like this Drutex-enabled mediawiki was not enabled
> >> > > for
> >> > > book-aka-wiki pages. Can you check again if it works now ?
> >> >
> >> > I think you misunderstood my question.
> >> > I indeed found out that I had to change the input format to
> >> > Mediawiki + Drutex
> >> > to use latex code (using the notation).
> >> > I however want to do something more advanced with a macro.tex
> >> > file.
> >> > This macro.tex file is a latex file defining some latex commands
> >> > e.g
> >> >
> >> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> >> > % POSITION
> >> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> >> > % position: position of point #1 belonging to body #2 with
> >> > respect to
> >> > point
> >> > #3 belonging to body #4 expressed in orientation frame #5 (leave
> >> > empty
> >> > if
> >> > not
> >> > needed)
> >> > \newcommand{\Position}[5]
> >> >
> >> > {\textrm{Position\ifthenelse{\equal{#5}{}}{}{Coord}}\left(\fixedTo
> >> > {\poin t{#1}} {#2} , \fixedTo{\point{#3}}{#4}
> >> > \ifthenelse{\equal{#5}{}}{}{,\orientation{#5}}
> >> > \right)}
> >> >
> >> > This helps to use a nice latex notation for our wiki explanations.
> >> >
> >> > So to use this file we should have to make DruTex understand that
> >> > it has to include the macro.tex file.
> >> > In the url I mentioned before they seem to do something similar
> >> > for
> >> > mediawiki
> >> > extension WikiTex.
> >>
> >> I copy-pasted the above latex snippet in the latex template on the
> >> server (used behind the scenes to render your code). Does the macro
> >> work for you now ?
> >
> > No.
> > The result of $\Position{a}{a}{a}{a}{a}$ is just aaaaa.
> > But this is probably cause since the code snipset uses some other macro
> > commands that I did not provide.
> > My macros are attached to this mail.
> >
> >> You can post a file with all the macros you require and then I can add
> >> these too.
> >
> > Can I add them myself or do I always have to bother you?
>
> We need to upload them to the server as a file. The current file has
> these contents:
>
> \documentclass[10pt,notitlepage]{article}
>
> % good math support
> \usepackage{amsmath, amsfonts, amssymb}
>
> % UTF-8 support
> \usepackage{ucs}
> \usepackage[utf8x]{inputenc}
>
> \pagestyle{empty}
>
>
> \begin{document}
> DRUTEX_REPLACE
> \end{document}
>
> I can put your files alongside it. How do you want me to modify the
> above code snippet such that your macros/files are correctly included
> ?
before \begin{document}
\input{macro.tex}
\input{macroGeometricPrimitives.tex}
\input{macroSemantics.tex}
\input{macroCoordinateRepresentations.tex}
\input{macroSemanticOperations.tex}
Thanks!
Tinne
PS: I am already discussing with the colleagues in our robotics group what
would be the best way to use the latex macros on the wiki. Probably we will
adapt the command-names to avoid name clashes ...
### using latex macros in the wiki
On Thu, Jan 26, 2012 at 3:03 PM, Tinne De Laet
<tinne [dot] delaet [..] ...> wrote:
> On Thursday 26 January 2012 14:04:48 Peter Soetens wrote:
>> On Thu, Jan 26, 2012 at 1:57 PM, Tinne De Laet
>>
>> <tinne [dot] delaet [..] ...> wrote:
>> > On Tuesday 24 January 2012 23:35:24 Peter Soetens wrote:
>> >> On Tue, Jan 24, 2012 at 9:52 AM, Tinne De Laet <
>> >>
>> >> tinne [dot] delaet [..] ...> wrote:
>> >> > On Monday 23 January 2012 17:13:14 Peter Soetens wrote:
>> >> > > Hi Tinne,
>> >> > >
>> >> > > On Mon, Jan 23, 2012 at 9:57 AM, Tinne De Laet <
>> >> > >
>> >> > > Tinne [dot] DeLaet [..] ...> wrote:
>> >> > > > Hi all,
>> >> > > >
>> >> > > > Apparently there are some possibilities to use latex macro
>> >> > > > in a
>> >> > > > mediawiki
>> >> > > > (see for instance
>> >> > > > http://www.mediawiki.org/wiki/Extension:WikiTex/Installati
>> >> > > > on).
>> >> > > > Can we also use latex macros on the orocos wiki and how
>> >> > > > can I
>> >> > > > configure this?
>> >> > >
>> >> > > We're not running official Mediawiki, but a 'DruTex' plugin
>> >> > > for the
>> >> >
>> >> > Drupal
>> >> >
>> >> > > Mediawiki emulation. DruTex should be able to cover most
>> >> > > needs.
>> >> > >
>> >> > > It seemed like this Drutex-enabled mediawiki was not enabled
>> >> > > for
>> >> > > book-aka-wiki pages. Can you check again if it works now ?
>> >> >
>> >> > I think you misunderstood my question.
>> >> > I indeed found out that I had to change the input format to
>> >> > Mediawiki + Drutex
>> >> > to use latex code (using the notation).
>> >> > I however want to do something more advanced with a macro.tex
>> >> > file.
>> >> > This macro.tex file is a latex file defining some latex commands
>> >> > e.g
>> >> >
>> >> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> >> > % POSITION
>> >> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> >> > % position: position of point #1 belonging to body #2 with
>> >> > respect to
>> >> > point
>> >> > #3 belonging to body #4 expressed in orientation frame #5 (leave
>> >> > empty
>> >> > if
>> >> > not
>> >> > needed)
>> >> > \newcommand{\Position}[5]
>> >> >
>> >> > {\textrm{Position\ifthenelse{\equal{#5}{}}{}{Coord}}\left(\fixedTo
>> >> > {\poin t{#1}} {#2} , \fixedTo{\point{#3}}{#4}
>> >> > \ifthenelse{\equal{#5}{}}{}{,\orientation{#5}}
>> >> > \right)}
>> >> >
>> >> > This helps to use a nice latex notation for our wiki explanations.
>> >> >
>> >> > So to use this file we should have to make DruTex understand that
>> >> > it has to include the macro.tex file.
>> >> > In the url I mentioned before they seem to do something similar
>> >> > for
>> >> > mediawiki
>> >> > extension WikiTex.
>> >>
>> >> I copy-pasted the above latex snippet in the latex template on the
>> >> server (used behind the scenes to render your code). Does the macro
>> >> work for you now ?
>> >
>> > No.
>> > The result of $\Position{a}{a}{a}{a}{a}$ is just aaaaa.
>> > But this is probably cause since the code snipset uses some other macro
>> > commands that I did not provide.
>> > My macros are attached to this mail.
>> >
>> >> You can post a file with all the macros you require and then I can add
>> >> these too.
>> >
>> > Can I add them myself or do I always have to bother you?
>>
>> We need to upload them to the server as a file. The current file has
>> these contents:
>>
>> \documentclass[10pt,notitlepage]{article}
>>
>> % good math support
>> \usepackage{amsmath, amsfonts, amssymb}
>>
>> % UTF-8 support
>> \usepackage{ucs}
>> \usepackage[utf8x]{inputenc}
>>
>> \pagestyle{empty}
>>
>>
>> \begin{document}
>> DRUTEX_REPLACE
>> \end{document}
>>
>> I can put your files alongside it. How do you want me to modify the
>> above code snippet such that your macros/files are correctly included
>> ?
>
> before \begin{document}
>
>
> \input{macro.tex}
> \input{macroGeometricPrimitives.tex}
> \input{macroSemantics.tex}
> \input{macroCoordinateRepresentations.tex}
> \input{macroSemanticOperations.tex}
>
> Thanks!
>
> Tinne
>
> PS: I am already discussing with the colleagues in our robotics group what
> would be the best way to use the latex macros on the wiki. Probably we will
> adapt the command-names to avoid name clashes ...
Okay, I uploaded these files and modified the template code. If
there's a new version of a file, let me know.
Peter
### using latex macros in the wiki
This does not seem to work.
I tried to use
$\body{b}$
$\orientation{a}$
$\Position{a}{b}{c}{d}{r}$
but the output is just "baabcdr"
Tinne
PS: sorry for the top-posting but my email client just gave up and now I have to use the wonderful outlook web app :s
________________________________________
Van: Peter Soetens [peter [..] ...]
Verzonden: donderdag 26 januari 2012 15:08
Aan: Tinne De Laet
CC: orocos-dev [..] ...
Onderwerp: Re: [Orocos-Dev] using latex macros in the wiki
On Thu, Jan 26, 2012 at 3:03 PM, Tinne De Laet
<tinne [dot] delaet [..] ...> wrote:
> On Thursday 26 January 2012 14:04:48 Peter Soetens wrote:
>> On Thu, Jan 26, 2012 at 1:57 PM, Tinne De Laet
>>
>> <tinne [dot] delaet [..] ...> wrote:
>> > On Tuesday 24 January 2012 23:35:24 Peter Soetens wrote:
>> >> On Tue, Jan 24, 2012 at 9:52 AM, Tinne De Laet <
>> >>
>> >> tinne [dot] delaet [..] ...> wrote:
>> >> > On Monday 23 January 2012 17:13:14 Peter Soetens wrote:
>> >> > > Hi Tinne,
>> >> > >
>> >> > > On Mon, Jan 23, 2012 at 9:57 AM, Tinne De Laet <
>> >> > >
>> >> > > Tinne [dot] DeLaet [..] ...> wrote:
>> >> > > > Hi all,
>> >> > > >
>> >> > > > Apparently there are some possibilities to use latex macro
>> >> > > > in a
>> >> > > > mediawiki
>> >> > > > (see for instance
>> >> > > > http://www.mediawiki.org/wiki/Extension:WikiTex/Installati
>> >> > > > on).
>> >> > > > Can we also use latex macros on the orocos wiki and how
>> >> > > > can I
>> >> > > > configure this?
>> >> > >
>> >> > > We're not running official Mediawiki, but a 'DruTex' plugin
>> >> > > for the
>> >> >
>> >> > Drupal
>> >> >
>> >> > > Mediawiki emulation. DruTex should be able to cover most
>> >> > > needs.
>> >> > >
>> >> > > It seemed like this Drutex-enabled mediawiki was not enabled
>> >> > > for
>> >> > > book-aka-wiki pages. Can you check again if it works now ?
>> >> >
>> >> > I think you misunderstood my question.
>> >> > I indeed found out that I had to change the input format to
>> >> > Mediawiki + Drutex
>> >> > to use latex code (using the notation).
>> >> > I however want to do something more advanced with a macro.tex
>> >> > file.
>> >> > This macro.tex file is a latex file defining some latex commands
>> >> > e.g
>> >> >
>> >> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> >> > % POSITION
>> >> > %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>> >> > % position: position of point #1 belonging to body #2 with
>> >> > respect to
>> >> > point
>> >> > #3 belonging to body #4 expressed in orientation frame #5 (leave
>> >> > empty
>> >> > if
>> >> > not
>> >> > needed)
>> >> > \newcommand{\Position}[5]
>> >> >
>> >> > {\textrm{Position\ifthenelse{\equal{#5}{}}{}{Coord}}\left(\fixedTo
>> >> > {\poin t{#1}} {#2} , \fixedTo{\point{#3}}{#4}
>> >> > \ifthenelse{\equal{#5}{}}{}{,\orientation{#5}}
>> >> > \right)}
>> >> >
>> >> > This helps to use a nice latex notation for our wiki explanations.
>> >> >
>> >> > So to use this file we should have to make DruTex understand that
>> >> > it has to include the macro.tex file.
>> >> > In the url I mentioned before they seem to do something similar
>> >> > for
>> >> > mediawiki
>> >> > extension WikiTex.
>> >>
>> >> I copy-pasted the above latex snippet in the latex template on the
>> >> server (used behind the scenes to render your code). Does the macro
>> >> work for you now ?
>> >
>> > No.
>> > The result of $\Position{a}{a}{a}{a}{a}$ is just aaaaa.
>> > But this is probably cause since the code snipset uses some other macro
>> > commands that I did not provide.
>> > My macros are attached to this mail.
>> >
>> >> You can post a file with all the macros you require and then I can add
>> >> these too.
>> >
>> > Can I add them myself or do I always have to bother you?
>>
>> We need to upload them to the server as a file. The current file has
>> these contents:
>>
>> \documentclass[10pt,notitlepage]{article}
>>
>> % good math support
>> \usepackage{amsmath, amsfonts, amssymb}
>>
>> % UTF-8 support
>> \usepackage{ucs}
>> \usepackage[utf8x]{inputenc}
>>
>> \pagestyle{empty}
>>
>>
>> \begin{document}
>> DRUTEX_REPLACE
>> \end{document}
>>
>> I can put your files alongside it. How do you want me to modify the
>> above code snippet such that your macros/files are correctly included
>> ?
>
> before \begin{document}
>
>
> \input{macro.tex}
> \input{macroGeometricPrimitives.tex}
> \input{macroSemantics.tex}
> \input{macroCoordinateRepresentations.tex}
> \input{macroSemanticOperations.tex}
>
> Thanks!
>
> Tinne
>
> PS: I am already discussing with the colleagues in our robotics group what
> would be the best way to use the latex macros on the wiki. Probably we will
> adapt the command-names to avoid name clashes ...
Okay, I uploaded these files and modified the template code. If
there's a new version of a file, let me know.
Peter
### using latex macros in the wiki
On Thu, Jan 26, 2012 at 3:47 PM, Tinne De Laet
<Tinne [dot] DeLaet [..] ...> wrote:
> This does not seem to work.
> I tried to use
> $\body{b}$
> $\orientation{a}$
> $\Position{a}{b}{c}{d}{r}$
>
> but the output is just "baabcdr"
It works now. See http://www.orocos.org/wiki/test-latex
Peter
### using latex macros in the wiki
On Thursday 26 January 2012 16:13:16 Peter Soetens wrote:
> On Thu, Jan 26, 2012 at 3:47 PM, Tinne De Laet
>
> <Tinne [dot] DeLaet [..] ...> wrote:
> > This does not seem to work.
> > I tried to use
> > $\body{b}$
> > $\orientation{a}$
> > $\Position{a}{b}{c}{d}{r}$
> >
> > but the output is just "baabcdr"
>
> It works now. See http://www.orocos.org/wiki/test-latex
Indeed! Thanks!
Tinne
|
{}
|
# Build a list from applying a recursive function on another list
I have one list a of length (n+1):
a={a[0],a[1],...,a[n]}
I wish to build a list by applying a function f, a recursive non-linear function, that depends on the value of a at both indexes i and (i-1) and on the previous value of the "under construction" list. The first value of list b is defined as b0. Here is the list I would like to get:
b={b0,f(b0,a[0],a[1]),...,f(b[n-1],a[n-1],a[n]))}
The first and last element of list a won't change but I want to test several values of incrementation, therefore n (the length of vector a) will change.
I have tried using Table, Array, combining it with Module and I managed to call at least one specific value by its index from list a but I can't find a way to do the multiple manipulations as described above.
• Are a[i] numbers? Does the function f return a number? Or the same type of object as a[i]? Apr 19, 2017 at 20:22
• both a and f are functions (that will be numbers only specific values of parameters)
– Elsa
Apr 19, 2017 at 22:22
Here is one way:
avec = Array[a, 4, 0];
bvec = ConstantArray[b[0], Length[avec - 1]];
Do[
bvec[[i]] = f[bvec[[i - 1]], avec[[i - 1]], avec[[i]]]
, {i, 2, Length[avec]}
]
bvec
{b[0], f[b[0], a[0], a[1]], f[f[b[0], a[0], a[1]], a[1], a[2]], f[f[f[b[0], a[0], a[1]], a[1], a[2]], a[2], a[3]]}
Or more functional:
bvec2 = FoldList[f[#1, Sequence @@ #2] &, b[0], Partition[avec, 2, 1]];
bvec == bvec2
True
In:
Clear[a, b, f, g, bs]
g[i_] := f[b[i - 1], a[i - 1], a[i]]
bs[n_] := Range[n] // MapThread[g, {#}] & // Join[{b[0]}, #] &
bs[4]
Out:
{b[0], f[b[0], a[0], a[1]], f[b[1], a[1], a[2]], f[b[2], a[2], a[3]],
f[b[3], a[3], a[4]]}
|
{}
|
- Current Issue - Ahead of Print Articles - All Issues - Search - Open Access - Information for Authors - Downloads - Guideline - Regulations ㆍPaper Submission ㆍPaper Reviewing ㆍPublication and Distribution - Code of Ethics - For Authors ㆍOnlilne Submission ㆍMy Manuscript - For Reviewers - For Editors
Linear transformations that preserve the assignment on $R=E_m$ and $S=(s_1,\cdot,s_n)$ Bull. Korean Math. Soc. 1996 Vol. 33, No. 2, 311-318 Gwang Yeon Lee Hanseo University Abstract : For positive integral vectors $R=(r_1,\cdots,r_m)$ and $S=(s_1,\cdots,s_n)$, we consider the class ${\Cal U}(R,S)$ of all $m\times n$ matrices of 0's and 1's with row sum vector $R$ and column sum vector $S$. Let $\overline{{\Cal U}(R,S)}$ denote the convex hull of ${\Cal U}(R,S)$. A vector $E_m$ denote the $m$-tuple of 1's. Let $R=E_m$ and $S=(s_1,\cdots,s_n)$ with $s_1+\cdots+s_n=m$. In this paper, we consider a linear transformations that preserve the assignment on $\overline{{\Cal U}(R,S)}$. Keywords : assignment function, linear preserver, bipartite graph MSC numbers : 05C50, 15A04 Downloads: Full-text PDF
|
{}
|
# Gauss law for cilynder
1. May 17, 2006
So question :
We have cored cilynder. Inner radius 10 cm, outer radius 20 cm. In the walls
of the cilynder uniformed charge 2nK/m^3. Find electric field magnitude at the points from axes 8cm 18cm 28cm.
Sorry for my english.
2. May 17, 2006
### neutrino
I think you mean 2 nC/m^3 for the charge density. What attempts have you made in solving this problem. The first case is the easiest.
3. May 17, 2006
Yes I mean nC/m^3. So For 8cm I think to use that E = <r0>*r\(2*e*0e).
it is a case then point where we define electric field is inner cilinder.
But then we have 18cm, then point would be in the cilinder.
3 case i could use E = <ro> *R^2\(2*e0*e*r), but what should be R - radius od cilynder 20cm.
<ro> is charge density.
4. May 17, 2006
### neutrino
For the first case, the gaussian cyclinder would enclose no charge at all. Now for the other two, draw similar gaussian cylinders - one within the walls and the thrid outisde the charged cylinder. If you know the general case of evaluating the field of a long cylinder using Gauss' law, it would be a simple exercise to solve the problems.
5. May 17, 2006
### neutrino
EDIT: Never mind...I was being an idiot!
6. May 17, 2006
|
{}
|
# Breakout… Getting the ball reflection X angle when htitting paddle / bricks [duplicate]
Im currently creating a breakout clone for my first ever C# / XNA game. Currently Ive had little trouble creating the paddle object, ball object, and all the bricks. The issue im currently having is getting the ball to bounce off of the paddle and bricks correctly based off of where the ball touches the object. This is my forumala thus far:
if (paddleLocation.Intersects(ballLocation))
{
motion.Y *= -1;
// determine X
motion.X = 1 - 2 * (ballLocation.X - paddleLocation.X) / (paddleLocation.Width / 2);
}
The problem is, the ball goes the opposite direction then its supposed to. When the ball hits the left side of the paddle, instead of bouncing back to the left, it bounces right, and vise versa. Does anyone know what the math equation is to fix this?
change
ballLocation.X - paddleLocation.X
to
paddleLocation.X - ballLocation.X
|
{}
|
# A mass attached to the end of a spring is oscillating with a period of 2.25 s on a horizontal...
## Question:
A mass attached to the end of a spring is oscillating with a period of 2.25 s on a horizontal frictionless surface. The mass was released from rest at t = 0 from the position x = 0.0300 m.
(a) Determine the location of the mass at t = 5.51 s?
(b) Determine if the mass is moving in the positive or negative x-direction at t = 5.51 s?
## Angular Frequency:
The angular frequency is calculated by taking the ratio of the time period given for the rotation to twice the constant value of pie and expressed in radians per second. The Angular frequency is a scalar quantity which is a measure of the rate of rotation.
Given data
• The time period is given as: {eq}T = 2.25\,{\rm{s}}{/eq}
• The initial position is given as: {eq}{x_o} = 0.0300\,{\rm{m}}{/eq}
• The time given is : {eq}t = 5.51\,{\rm{m}}{/eq}
(a)
The expression to calculate the position of the mass at given time {eq}t = 5.51\,{\rm{s}}{/eq} ,
{eq}x\left( t \right) = {x_o}\cos \left( {\omega t} \right)\,.....\,(I){/eq}
Here, {eq}\omega {/eq} is the angular frequency.
Calculate the Angular frequency.
{eq}\begin{align*} \omega & = \dfrac{{2\pi }}{T}\\ \omega & = \dfrac{{2 \times 3.14}}{{2.25}}\\ \omega & = 2.8\,{\rm{rad/s}} \end{align*}{/eq}
Substituting the value in the expression (I),
{eq}\begin{align*} x\left( {5.51} \right)& = 0.0300cos\left( {2.8 \times 5.51 \times \dfrac{{180}}{\pi }} \right)\\ x\left( {5.51} \right) &= 0.0300 \times \left( { - 0.9612} \right)\\ x\left( {5.51} \right) &= - 0.0288\,{\rm{m}} \end{align*}{/eq}
Thus, the position is {eq}0.0288\,{\rm{m}}{/eq}
(b)
The negative sign in the position of mass depicting that the mass is moving in the negative x-direction at {eq}t = 5.51\,{\rm{s}}{/eq}.
|
{}
|
# JackAudio over Network: Jack Client/Server Connection
JackAudio is a low latency audio connection software, that can transmit audio data via TCP/IP network connections. Setup is somewhat crucial for "first timers". I hope that this post can help over the standard pitfalls.
2. Setup JACK Master
• Launch Jack Control Application (as Administrator)
• Configure Jack according to the following settings
• Start Jack using the "Start"-Button in the Jack Control GUI
• Run an elevated command prompt (as Administrator), change to the Program Files/Jack directory and run the following command:
jack_load netmanager
You can optionally bind the netmanager to an IP-Address using:
jack_load netmanager -i "-a [IP-Address]"
3. Setup the JACK Slave on another computer
• From the command line enter the following:
jackd -R -d net -a 192.168.0.1
Note that the IP-Address must match the IP from the step before.
Using ASIO-Software, Jack publishes a "JackRouter" virtual driver that can be used to stream audio data through the network channel. Within the directory "C:\Program Files (x86)\Jack\32bits" there is a file called "Jackrouter,ini" which lets you configure input and output channels of the virtual sound driver.
# Audio-Programming: Directshow Logarithmic Volume Control
Warning: preg_replace(): The /e modifier is no longer supported, use preg_replace_callback instead in /home/.sites/132/site8248620/web/wp-content/plugins/latex/latex.php on line 47 Warning: preg_replace(): The /e modifier is no longer supported, use preg_replace_callback instead in /home/.sites/132/site8248620/web/wp-content/plugins/latex/latex.php on line 49 Warning: preg_replace(): The /e modifier is no longer supported, use preg_replace_callback instead in /home/.sites/132/site8248620/web/wp-content/plugins/latex/latex.php on line 47 Warning: preg_replace(): The /e modifier is no longer supported, use preg_replace_callback instead in /home/.sites/132/site8248620/web/wp-content/plugins/latex/latex.php on line 49 Warning: preg_replace(): The /e modifier is no longer supported, use preg_replace_callback instead in /home/.sites/132/site8248620/web/wp-content/plugins/latex/latex.php on line 47 Warning: preg_replace(): The /e modifier is no longer supported, use preg_replace_callback instead in /home/.sites/132/site8248620/web/wp-content/plugins/latex/latex.php on line 49 Warning: preg_replace(): The /e modifier is no longer supported, use preg_replace_callback instead in /home/.sites/132/site8248620/web/wp-content/plugins/latex/latex.php on line 47 Warning: preg_replace(): The /e modifier is no longer supported, use preg_replace_callback instead in /home/.sites/132/site8248620/web/wp-content/plugins/latex/latex.php on line 49
[Latexpage]When programming audio user-interfaces, volume is usually calculated in form of logarithmic scales. In Windows, specifically DirectX, volume ranges from -10000 (=silence) up to 0 (=maximum volume). However, a volume slide is moved linearly between e.g. 0 (=silence) up to 1.0 (maximum volume) - so we need to convert linear values to logarithmic and vice versa.
The basic function, that takes a linear value in the range of [0;1] and converts to a exponential value can be denoted as:
Note, that this function ranges from 0 up to 10 on the vertical axis. The inverse function can be denoted as the logarithmic function with base 10:
In order to apply the correct ranges, we need to shift the values of x in both functions. Thus, we can rewrite $f(x)$ as:
For the logarithmic function, we write:
Using these formulae, we can convert volumes within the linear ranges from 0.0 to 1.0 to logarithmic values between -10000 and 0 and vice versa.
|
{}
|
## College Algebra (10th Edition)
$(-\infty, -1)$ Refer to the image below for the graph.
Add $2x$ to both sides of the inequality: $3-2x+2x\gt 5+2x \\3 \gt 5+2x$ Subtract $5$ to both sides of the inequality: $3-5 \gt 5+2x-5 \\-2 \gt 2x$ Divide $2$ to both sides of the inequality: $\frac{-2}{2} \gt \frac{2x}{2} \\-1 \gt x$ This inequality is equivalent to: $x \lt -1$ Thus, the solution set is $(-\infty, -1)$. To graph the solution set, plot a hole at $-1$ then shade the region to its left.
|
{}
|
Number of Differentiable Structures on a Smooth Manifold
On John Lee's book, Introduction to Smooth Manifolds, I stumbled upon the next problem (problem 1.6):
Let $M$ be a nonempty topological manifold of dimension $n \geq 1$. If $M$ has a smooth structure, show that it has uncountably many distinct ones.
The trick in this exercise was to use the function $F_s(x) = |x|^{s-1}x$, where $s \in \mathbb{R}$ and $s>0$. This function defines an homeomorphism from $\mathbb{B}^n$ to itself, and is a diffeomorphism iff $s=1$.
Now, reading Loring W. Tu's book, An Introduction to Manifolds, he writes:
"It is known that in dimension $< 4$ every topological manifold has a unique differentiable structure and in dimension $>4$ every compact topological manifold has a finite number of differentiable structures. [...]"
Can someone help me explain how this last "known fact" and problem 1.6 in Lee's book don't contradict each other?
The distinction to be made is that a differentiable structure is a choice of maximal smooth atlas $\mathcal A$, but two different choices $\mathcal A$ and $\mathcal A'$ can lead to isomorphic smooth structures. As an example, the canonical smooth structure $\mathcal A$ on $\mathbb R$ that contains the smooth function ${\rm id}:\mathbb R\longrightarrow \mathbb R$ is isomorphic to the smooth structure $\mathcal A'$ that contains the smooth function $x\mapsto x^3$, although $\mathcal A'\neq \mathcal A$. Thus, although a manifold admits uncountably many different smooth structures, it may have finitely many isomorphism classes of such structures.
• Gracias Pedro :)
– rie
Aug 27 '16 at 18:04
• "a differentiable structure is a choice of maximal smooth atlas $A$". How is it possible to have multiple maximal smooth atlasses? If atlasses $A'$ and $A$ are both smooth atlasses, isn't their union also a smooth atlas? (thus contradicting the assumption that they are maximal smooth atlasses). Feb 7 '17 at 12:39
• @Programmer2134 No, the union of two smooth atlasses is not always a smooth atlas. This is true if and only if they are contained in the same maximal atlas. Feb 7 '17 at 17:42
• It's maybe worth adding that although it may not be immediate that A or A' are maximal, their corresponding maximal smooth atlases can not be equal because the two coordinate functions chosen are not smoothly compatible (and thus must live in different maximal atlases by definition). Jun 22 '20 at 23:18
In the second statement, "unique" means unique up to diffeomorphism.
If you have a manifold $M$ with a smooth structure $A$ and a homeomorphism $\varphi :M \rightarrow M$, which is not a diffeomorphism if we consider it as a map between the smooth manifolds $(M,A) \rightarrow (M,A)$, then we can define a distinct smooth structure, say A', on $M$ by composing the coordinate charts of $M$ with $\varphi$.
Now consider $\varphi: (M,A') \rightarrow (M,A)$. Which this respect to these smooth structures, $\varphi$ will be a diffeomorphism. So while you have a distinct smooth structure, it is not really that different.
The (quite difficult) question how many smooth structures on a given topological manifold exist up to diffeomorphism. This is what Tu talks about.
|
{}
|
Archived
This topic is now archived and is closed to further replies.
OpenGL How to draw a circle or a arc filled with color in OpenGL?
This topic is 6151 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
Recommended Posts
How to draw a circle or a arc filled with color in OpenGL?
Share on other sites
I guess you can use a triangle fan. Try something like this:
glBegin(GL_TRIANGLE_FAN);glVertex3f(0.0f, 0.0f, 0.0f)float i;for (i = 0; i <= 360.0f; i += 360/num_steps) glVertex3f(cos(DEGTORAD * i) * radius, sin(DEGTORAD * i) * radius, 0.0f);glEnd();
Assume that num_steps is sorta the level of detail of the circle. Basically it''s how circular your circle is going to be. In actuality, this really just creates a regular polygon with num_steps sides. If num_steps is around 30 maybe, it will look close to a circle. radius is the radius of the circle, and DEGTORAD is pi/180.
1. 1
Rutin
37
2. 2
3. 3
4. 4
5. 5
• 11
• 10
• 13
• 103
• 11
• Forum Statistics
• Total Topics
632976
• Total Posts
3009672
• Who's Online (See full list)
There are no registered users currently online
×
|
{}
|
# 40 CFR § 60.204 - Test methods and procedures.
§ 60.204 Test methods and procedures.
(a) In conducting the performance tests required in § 60.8, the owner or operator shall use as reference methods and procedures the test methods in appendix A of this part or other methods and procedures as specified in this section, except as provided in § 60.8(b).
(b) The owner or operator shall determine compliance with the total fluorides standard in § 60.202 as follows:
(1) The emission rate (E) of total fluorides shall be computed for each run using the following equation:
$E=\left(\sum _{i=1}^{N}{C}_{si}{Q}_{sdi}\right)/\left(PK\right)$
where:
E = emission rate of total fluorides, g/Mg (lb/ton) of equivalent P2O5 feed.
Csi = concentration of total fluorides from emission point “i,” mg/dscm (gr/dscf).
Qsdi = volumetric flow rate of effluent gas from emission point “i,” dscm/hr (dscf/hr).
N = number of emission points associated with the affected facility.
P = equivalent P2O5 feed rate, Mg/hr (ton/hr).
K = conversion factor, 1000 mg/g (7,000 gr/lb).
(2) Method 13A or 13B shall be used to determine the total fluorides concentration (Csi) and volumetric flow rate (Qsdi) of the effluent gas from each of the emission points. The sampling time and sample volume for each run shall be at least 60 minutes and 0.85 dscm (30 dscf).
(3) The equivalent P2O5 feed rate (P) shall be computed for each run using the following equation:
P = Mp Rp
where:
Mp = total mass flow rate of phosphorus-bearing feed, Mg/hr (ton/hr).
Rp = P2O5 content, decimal fraction.
(i) The accountability system of § 60.203(a) shall be used to determine the mass flow rate (Mp) of the phosphorus-bearing feed.
(ii) The Association of Official Analytical Chemists (AOAC) Method 9 (incorporated by reference - see § 60.17) shall be used to determine the P2O5 content (Rp) of the feed.
[54 FR 6669, Feb. 14, 1989, as amended at 65 FR 61757, Oct. 17, 2000]
|
{}
|
Free Version
Moderate
# Maxwell Equations: Magnetic Field Properties
EANDM-WH4YO2
Consider these two vector fields, each a candidate to represent a magnetic field in free space.
(A) $\vec{B}(x,y,z)=B_o(3x\hat{x} -4y \hat{y} +z \hat{z})$
(B) $\vec{B}(x,y,z)=B_o(x^2yz\hat{x}+xy^2z\hat{y}-2xyz^2\hat{z})$
Which of these fields could possibly represent a magnetic field?
A
Field A no
Field B no
B
Field A yes
Field B no
C
Field A no
Field B yes
D
Field A yes
Field B yes
|
{}
|
1 like 0 dislike
43 views
Is $(x+2)$ a factor of $x^2+4x+4$?
| 43 views
2 like 0 dislike
If $x + 2$ is a factor, $x = -2$ must give zero when substituted:
$x^2+4x+4 = 0$
$(-2)^2 + 4(-2) + 4 = 0$, therefore $x + 2$ is a factor.
by Silver Status (31.3k points)
selected by
0 like 0 dislike
1 like 0 dislike
|
{}
|
Quod Erat Demonstrandum
2010/10/06
教學碎念
Filed under: NSS,Pure Mathematics — johnmayhk @ 3:22 下午
1.
(他們笑我出卷容易,因自己功力有限,我求他們見諒。)
Resolve $\frac{2x + 3}{x^2(x+3)^2}$ into partial fractions.
$\frac{1}{x(x+3)} \equiv \frac{1}{3}(\frac{1}{x} - \frac{1}{x+3})$
$\frac{d}{dx}\frac{1}{x(x+3)} \equiv \frac{1}{3}\frac{d}{dx}(\frac{1}{x} - \frac{1}{x+3})$
$\frac{2x + 3}{x^2(x+3)^2} \equiv \frac{1}{3}(\frac{1}{x^2} - \frac{1}{(x+3)^2})$
2.
For integer $n$, prove that $9^n + 10^n < 11^n$ iff $n \ge 5$.
$11^n - 9^n$
$= (10+1)^n - (10-1)^n$
$= 2(C_1^n10^{n-1} + C_3^n10^{n-3} + C_5^n10^{n-5} + \dots)$
$= 2(C_1^n10^{n-1} + C_3^n10^{n-3} + C_5^n10^{n-5} + \dots)$
$< 2(C_1^n10^{n-1} + C_3^n10^{n-1} + C_5^n10^{n-1} + \dots)$
$= 2\times 10^{n-1}(C_1^n + C_3^n + C_5^n + \dots)$
$= 2\times 10^{n-1}(2^{n-1}) = 2^n10^{n-1}$
For sufficiently large $n$, we have
$1^n + 2^n + 3^n + 4^n + 5^n + 6^n + 7^n + 8^n + 9^n + 10^n < 11^n$
(估計上式成立於整數 $n \ge 7$
$\sum_{k=1}^mk^n < (m+1)^n$ for sufficiently large $n$
For integer $n$, prove that $9^n + 10^n < 11^n$ iff $n \ge 5$.
$9^n + 10^n$
$< 10^n + 10^n$
$= 2(10^n)$
set
$2(10^n) < 11^n$
$\Longleftrightarrow \log2 + n < n\log11$
$\Longleftrightarrow n > \frac{\log2}{\log11 - 1} \approx 7$
$n > 7$$9^n + 10^ < 11^n$ 成立。
$1^n + 2^n + 3^n + 4^n + 5^n + 6^n + 7^n + 8^n + 9^n + 10^n < 11^n$
$\sum_{k=1}^mk^n < (m+1)^n$
3.
$\frac{n^2(n+1)^2}{4} = (\frac{n(n+1)}{2})^2 = (1 + 2 + 3 + \dots + n)^2$ 證畢。
|
{}
|
# Tag Info
34
Disclaimer: I use Coq on daily basis... I have seen in some places that people use formal verification and/or computer-aided verification for cryptography. To my knowledge, there aren't that many places that do such a thing. First, let's define our concepts: Formal Verification: The act of proving the correctness of algorithms with respect to a certain ...
12
Disclaimer: I use Coq on daily basis... About the tools As you are looking for a formal verification, I would advise you to take a look at Coq. Even though mainly used by Academics, it provides a logical framework and an interface to write formal and interactive proofs. Based this language there exists some libraries dedicated to cryptographic proof : ...
11
First, you need more than just a signature, because a VRF produces both an output and a proof. To an observer, the output is uniformly distributed unless the observer also has the proof, which can be used to verify the output. With a signature scheme and a random oracle $H$, you could use a signature $s$ on a message $m$ as a proof and $h = H(s)$ as an ...
7
How does one verify a key revocation? As Jon Callas already stated: you simply don’t. In case a different wording helps, here’s a quote related to the exact same question… https://lists.gnupg.org/pipermail/gnupg-users/2014-February/049100.html … I revoked my key and on the public key server it says: "* KEY REVOKED * [not verified]" Why does ...
7
No, the user of the key does. A revocation issued by the key itself, or by a designated revoker, which is some different key. If I am going to encrypt to you, I look at the key before I do, and I look to see if your key is revoked. Similarly, if I am verifying a signature your key made, I look to see if the key is revoked.
5
NIST has a statistical test suite for testing (pseudo) random number generators. There are a number of other suites as well, such as Diehard, Dieharder, and TestU01. But all these tests can do is disprove the claim that your generator is random; they cannot prove it. So you really need, in addition, an independent argument for why your generator's output ...
5
Sharon Goldberg's research group at Boston University has a web site on VRFs with research references and applications, including key transparency in CONIKS, authenticated enumeration-resistant denial of existence in DNSSEC with NSEC5, and the Byzantine agreement protocol Algorand. Here's a quick history of how negative answers work in DNS and DNSSEC. The ...
5
You've asked for a way to hash a file into a short string $h$ so that given a partial download $c'_0 \mathbin\| c'_1 \mathbin\| c'_2 \mathbin\| \cdots \mathbin\| c'_{i-1}$ of the file that should start with $c_0 \mathbin\| c_1 \mathbin\| c_2 \mathbin\| \cdots \mathbin\| c_{i-1}$ but may have been modified in transit, you can compute some verification ...
4
Formal verification is used to verify the security services of your algorithm or your protocol. It uses specific high level modeling specification to specify your security solution and uses a back end formal verification tools to see whether or not there are security breaches or not. The outcome of the formal verification will tell you if your protocol is ...
4
One (very generalized) solution would be to use a general ZKP solution like libsnark. In libsnark (and other tools like it), you would write a function that accepts both public and private inputs, and outputs a proof that the inputs satisfy the logic of the function. This proof can then be verified, at a much lower cost than it took to generate it. E.g., ...
4
In my experience the persons doing the standardization may not know about formal methods in the first place. And even if a formal method was used, they would not know how to assess it. Note that whatever mathematical method is applied, the security of a protocol is still dependent on how the domain was modelled. If the model is even slightly incorrect, a ...
4
There are several option - none of which is trivial to implement. A bit of background first. Essentially, verifiable delegation of computation boils down to being able to prove relations between inputs and outputs, so that the verification time is way smaller than the computation time, for relations that can be computed in polynomial time. In contrast, the ...
4
Well, one possibility to generate a moderately lightweight certificate would be to use this theorem: If we have values $p, q, g$ such that: $1 < g < p$ $q > \sqrt{p}$ $q \mid p-1$ $g^q \equiv 1 \pmod p$ $q$ is prime Then $p$ is prime. So, for a certificate, we would have a list of $(p_i, g_i)$ values such that $p_{i-1}, p_i, g_i$ meet the above ...
4
A typical thing which you cannot do with a proof of sequential work is achieving time-lock encryption. In time lock encryption, you want the user to be able to retrieve the hidden message only after some time (i.e., you want to "send a message to the future", as its inventors initially put it). With a VDF, you can use the unique secret to mask the secret ...
3
This might not be the answer you are looking for, but as you are looking for a formal verification, I would advise you to take a look at Coq. Even though mainly used by Academics, it provides a logical framework and an interface to write formal and interactive proofs. Based this language there exists some libraries dedicated to cryptographic proof : ...
3
The canonical algorithm to construct the QAP polynomials from an arithmetic circuit does not yield a polynomial in the standard form ($a_0 + a_1x + \dots$), but as a set of $(x,f(x))$ points. In order to compute $f(s)$ for arbitrary $s$, as required by the protocol, you have to run some interpolation algorithm to reconstruct the polynomial from all the ...
3
Give a zero-knowledge proof that $y_1 \times y_2$ is a Quadratic Residue. [Extra verbage included because a one line answer feels too brief] If we have $y_1 = x_1^2 t^{b_1}$ and $y_2 = x_2^2 t^{b_2}$, then $y_1 y_2 = (x_1x_2)^2 t^{b_1 + b_2}$. If $b_1 = b_2$, this product is either $(x_1x_2)^2$ (if $b_1 = b_2 = 0$), or $(x_1x_2t)^2$ (if $b_1 = b_2 = 1$), ...
3
Can this be done? In general, there is a way: you can prove the statement you sketch using zero-knowledge proofs. Due to [1] we know that zero-knowledge proofs for any language in NP exist. Let us write down what you want to prove as an NP language $L$. Therefore let $\sf (sk, pk)$ be the key pair, consisting of a secret key $\sf sk$ and a public key $\sf ... 3 You can find two algorithms for generating such$p$and$q$in Appendix A.1, FIPS-186-4 (digital signature standard). edited to add: Essentially, the two algorithms generate a pseudorandom prime number$q$of the desired size first, then generate a pseudorandom random number$p$(such that$q|(p-1)$) of the desired size, and test whether$p$is prime. If ... 2 Reform the problem. Instead of each participant picking their givee (which they give to), have them select a giver (which they receive from). Each participant randomly generates a number (appropriately large) and anonymously submits it (e.g., via the tor network) to the site. This number represents them as giver. After all participants have entered, the ... 2 In context of interactive proof systems (including zero-knowledge proofs) completeness means the same as the term correctness as used for many other (interactive) cryptographic schemes or protocols. I guess that's mainly due to historical reasons (there are even some people that use correctness instead of completeness in context of zero-knowledge proofs). ... 2 I believe a zero knowledge proof that$-1$is a quadratric nonresidue would accomplish that. If we know that$n$has two prime factors, and that$n \equiv 1 \pmod{4}$, then$n$is either a product of two primes both$1 \bmod 4$, or two primes both$3 \bmod 4$. If it were the former, then$-1$is a QR modulo$p$, and$-1$is a QR modulo$q$, and hence$-1$... 2 It looks fine; whether you use the secret$S_0, S_1$as the HMAC key, or whether you use the random value$r$as the HMAC key; if$t' = t$, it implies that either$S_0 = S_1$, or we found a collision in the underlying hash function. I would personally suggest you use$S_0, S_1$as the key. With HMAC, it doesn't really matter; however if we extend this to ... 2 Thinking about this and considering Paul Uszak's very useful (albeit perhaps pessimistic) remarks, one idea to consider for this is to use measurements of randomly fluctuating natural phenomena of high public interest that are published regularly by multiple independent parties that have strong incentives to provide accurate measurements. The key ideas to ... 2 There is no sensible solution to this. It is impossible, even if this was not a hypothetical question. It cannot be done for primarily two reasons:- You cannot have the nodes measure any analogue quantity. Analogue measurement noise will govern the accuracy of the reading. Coupled with the typical hash based randomness extractors, the avalanche effect ... 2 Disclaimer: I'm currently doing a PhD in Formal Methods and Cryptography and I'm not really sure of my answer. The first application of Formal Methods is to be applied to pieces of software. The goal is to prove security properties on them. These are usually safety critical softwares (those you find in planes, trains, nuclear facilities...) This field is ... 2 This is to ensure$v$is in the cyclic sub-group$G$of$Z_N^*$that has a large enough order$m=p'\cdot q'$. Moreover, with a large probability$v$is a generator of$G$, so that$v_i=v^{s_i}$is a one-to-one mapping from$s_i$to$v_i$, which is important when proving correctness of the shares. 2 Yes. Say the message is$m$and the commitment is$C$such that$C = g^mh^r$. Since you can use verifiable encryption to prove that a given ciphertext encrypts$m$in relation$g^m = y$where$g$and$y$are also public knowledge, using the Schnorr protocol you can prove that the$m$in relation$g^m = y$is same as the$m$in$C\$
1
It is well known that it is not possible to achieve complete fairness in the two party setting, to agree on a random unbiased coin. See Limits on the security of coin flips when half the processors are faulty. The functionality you are looking for seems to reduce to this functionality, which in turn is not possible.
1
Theoretically it seems to be possible. First idea: If Alice and Bob have a way to verify that the information is correct one approach would be for them to give the algorithm for that to a trusted third party. Then this third party can check the information and only exchange the values if both of them are correct. If the third party is not trusted but does ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
unit:T
Type
Description
The SI unit of flux density (or field intensity) for magnetic fields (also called the magnetic induction). The intensity of a magnetic field can be measured by placing a current-carrying conductor in the field. The magnetic field exerts a force on the conductor, a force which depends on the amount of the current and on the length of the conductor. One tesla is defined as the field intensity generating one newton of force per ampere of current per meter of conductor. Equivalently, one tesla represents a magnetic flux density of one weber per square meter of area. A field of one tesla is quite strong: the strongest fields available in laboratories are about 20 teslas, and the Earth's magnetic flux density, at its surface, is about 50 microteslas. The tesla, defined in 1958, honors the Serbian-American electrical engineer Nikola Tesla (1856-1943), whose work in electromagnetic induction led to the first practical generators and motors using alternating current. $$T = V\cdot s \cdot m^{-2} = N\cdot A^{-1}\cdot m^{-1} = Wb\cdot m^{-1} = kg \cdot C^{-1}\cdot s^{-1}\cdot A^{-1} = kg \cdot s^{-2}\cdot A^{-1} = N \cdot s \cdot C^{-1}\cdot m^{-1}$$ where, $$\\$$ $$A$$ = ampere, $$C$$=coulomb, $$m$$ = meter, $$N$$ = newton, $$s$$ = second, $$T$$ = tesla, $$Wb$$ = weber
Properties
0112/2///62720#UAA285
Wb/m^2
Annotations
The SI unit of flux density (or field intensity) for magnetic fields (also called the magnetic induction). The intensity of a magnetic field can be measured by placing a current-carrying conductor in the field. The magnetic field exerts a force on the conductor, a force which depends on the amount of the current and on the length of the conductor. One tesla is defined as the field intensity generating one newton of force per ampere of current per meter of conductor. Equivalently, one tesla represents a magnetic flux density of one weber per square meter of area. A field of one tesla is quite strong: the strongest fields available in laboratories are about 20 teslas, and the Earth's magnetic flux density, at its surface, is about 50 microteslas. The tesla, defined in 1958, honors the Serbian-American electrical engineer Nikola Tesla (1856-1943), whose work in electromagnetic induction led to the first practical generators and motors using alternating current. $$T = V\cdot s \cdot m^{-2} = N\cdot A^{-1}\cdot m^{-1} = Wb\cdot m^{-1} = kg \cdot C^{-1}\cdot s^{-1}\cdot A^{-1} = kg \cdot s^{-2}\cdot A^{-1} = N \cdot s \cdot C^{-1}\cdot m^{-1}$$ where, $$\\$$ $$A$$ = ampere, $$C$$=coulomb, $$m$$ = meter, $$N$$ = newton, $$s$$ = second, $$T$$ = tesla, $$Wb$$ = weber
Tesla(en)
Generated 2022-06-29T17:17:36.347-04:00 by lmdoc version 1.1 with TopBraid SPARQL Web Pages (SWP)
|
{}
|
# Terrell-Penrose Effect for Objects Approaching Relativistic Velocities
Andrew York
After Einstein and his introduction of the Theory of Relativity, it was generally thought that objects moving at relativistic speeds (close to the speed of light) would appear contracted, or squashed, in the direction of motion. It wasn't until decades later, with the publications in 1959 by Roger Penrose and James Terrell, that it was understood that an object approaching an observer at relativistic speeds would actually appear elongated, and even appear to rotate, allowing the back side of the object to be 'viewed' before the object had arrived. Only when receding from the observer does that fast-moving body appear squashed.
Though the effect is real, in actuality it cannot be observed visually, since the object would be moving by at unimaginable speed. Unless of course the moving object was vastly large, in which case we could indeed observe this phenomenon. It does seem unlikely that we will have any solar-system-sized objects moving by us at 0.99 the speed of light anytime soon, however. At least I hope not.
When I first read about this effect, I thought it would be interesting to work out my own equations to graphically describe the effect. First it is necessary to understand why this rotation effect happens in the first place.
Let's imagine a 2D world, and you (the observer) are sitting at point m. Let's think of a square A that is 1 kilometer in length and height. Let's place the square 10 kilometers away from you to the back face (distance d), 9 km away from the front face (distance g), and the bottom face of the square is one kilometer above the earth (we'll call this distance k). If it were moving it would come straight at you and pass right over your head one kilometer above. But for now, let's give it a velocity of v = 0, so it isn't moving at all, just hovering there, its bottom face 1 km from the ground. We are speaking of the distance along the ground to the front and back faces of the square, which is along the x-axis.
To begin to understand the basics of what is happening, we will only look at the bottom of the object at first. Let's look at the path of light from corner α (back bottom corner of square in figure 2) to point m, which is you, the observer. We will call this starting point of light point q, making a line between q and m. At this moment point q and corner α have the same coordinates, but this will not always be so when the square is moving. So we will consider the line qm.
Point m is at x = 0, and the length along the x-axis from point m to point q we will call d for distance. So the path that light will travel from point q to point m is actually the hypotenuse formed by the triangle dkh, and we can find length h using the pythagorean theorem: $$\sqrt{d^2 + k^2} = h$$. So h is line qm.
Similary, since g is the distance along the x-axis from point p to the observer at point m, we can find the length of the path light will take from point p to point m as $$\sqrt{g^2 + k^2} = j$$. So j is line pm.
Clearly the distance light travels to us from point p is a little shorter than the distance light travels to us from point q. Now imagine that we have an insanely fast camera with a shutter speed as quick as 1/100 millionth of a second or so, allowing us to take a very fast snapshot of the square from point m. The photons arriving to our position from point q have left point q at an earlier time than the photons arriving to us from point p. The difference in time between the departure of photons from each point that will simultaneously arrive to point m is (h - j)/c. This shows that the light from point p departed 0.00000325 seconds later than the light from point q, to arrive simultaneously to our position as we take the snapshot.
Now we get to the reason for all this detail, which at first seems unnecessary. Here's the thing - because the object has a velocity of zero, the square has not moved at all in the time it takes light to travel from point q to point m. But if the square is moving, then while the light is traveling to us from point q, then point p is also moving toward us during the time it take the light to move from point q to point m. This is the very situation which causes the object to appear to stretch and rotate as its speed increases, as our very fast snapshot would begin to show at very high velocities. This is what we call Terrell rotation, or the Terrell-Penrose effect. Let's explore this.
Clearly, light moves so fast that this effect would not even begin to be noticeable until the velocities approach a significant percentage of the speed of light. Let's ramp up the speed of the object to v = 0.99c, or 99% of the speed of light. Now things begin to get interesting. So imagine this - a photon reflects from point q when corner α of the square is exactly 10 km away (along the x-axis). Now, while that photon is making a straight line to point m at the speed of light along line h, the other bottom corner p of the square is also moving at 0.99c, almost as fast as the light. If our snapshot is taken exactly when the photon from point q arrives to us at point m, then the question is this: What will the coordinates of corner p have to be to send a photon to us that arrives simultaneously with the photon from point q? The answer is not intuitive, and we need to solve it algebraically to find the answer. But first let's visualize the problem with some geometry.
As the photon travels from point q toward the observer at point m, we will let t equal Δq, or the distance the light travels from point q to any point q' along the line h. Similarly, as the photon is moving from q to q' with distance t, the corner of the square at point p is also moving parallel to the x-axis from point p to p'. The distance Δp traveled by the square from point p to p' is velocity * t. Since our square for this example is moving at v = 0.99c, then the distance from p to p' is vt, or 0.99 * t.
Our quest right now is to find the conditions where the observer's snapshot captures both the photon from point q, and also the photon from point p' that will arrive at the same exact instant. Notice that from any point p', a photon would follow hypotenuse j of unique length found by $$\sqrt{(g - vt)^2 + k^2}$$. The conditions that need to be met to allow the photons from q and p' to arrive simultaneously are when $$h - t = j$$. Remember that h = the hypotenuse described by line qm, and j = the hypotenuse described by line p'm. Since all of line h represents the distance traversed at speed of light, and j also represents the distance traversed at the speed of light, it is clear that $$h - t = j$$. The problem to be solved is to find the exact distance the square has moved from p to p' that will satisfy our conditions. And to find this unique solution we can do some algebraic manipulation to create a quadratic equation for that purpose.
Making a Quadratic Equation
We know that only when $$h - t = j$$ are the precise conditions met where the light from both q and p' will arrive to us at m simultaneously. We also know that the distance Δp from p to p' always equals vt. And looking at figure 3 above, we see that $$i = vt - g$$ (we will consider the value of i to be negative here, because it is moving along the x-axis backward from zero along the negative number line). Now we can construct an equation that will allow us to solve for i, thereby giving us the location of the square's corner at point p' that will appear to us on our snapshot, revealing the Terrell-Penrose effect of the elongated positions of the bottom corners of the square.
Let's manipulate the equations above to get to our quadratic equation that will solve for i. Since
$h - t = j$
we can also write that as
$h - t = \sqrt{i^2 + k^2}$
Let's consider the value of i to be the distance from n to p'. In figure 3 we see that this will result in a negative value for i. With this in mind we can say
$vt = g + i$
and
$t = \frac{(g + i)}{v}$
Since
$h - t = \sqrt{i^2 + k^2}$
then
$h - \frac{(g + i)}{v}\ = \sqrt{i^2 + k^2}$
$vh - g - i = v\sqrt{i^2 + k^2}$
Creating a quadratic equation from this to solve for i would be as follows:
$vh - g - i = v\sqrt{i^2 + k^2}$ $((vh - g) - i)^2 = v^2(i^2 + k^2)$ $(vh - g)^2 - 2i(vh - g) + i^2 = v^2i^2 + v^2k^2$
multiply both sides by $$\frac{1}{i^2}$$
$\frac{(vh - g)^2}{i^2}\ - \frac{2(vh - g)}{i}\ + 1 = v^2 + \frac{v^2k^2}{i^2}$ $\frac{(vh - g)^2 - v^2k^2}{i^2}\ - \frac{2(vh - g)}{i}\ + (1 - v^2) = 0$ $((vh - g)^2 - v^2k^2)\frac{1}{i^2}\ - 2(vh - g)\frac{1}{i}\ + (1 - v^2) = 0$
Now we see we have created a quadratic equation in which we can solve for i:
Figure 4a
Even though the quadratic coefficient a, the linear coefficient b, and the constant c are all comprised of equations instead of simply constants, we can still solve the quadratic in this form. Noting that the variable is in reciprocal form, we will take the reciprocal of the quadratic formula to solve for i:
## $i = \frac{1}{{-b + \sqrt{b^2 - 4ac} \over 2a}}$
Taking Relativistic Length Contraction Into Consideration
All moving objects experience relativistic length contraction along their line of motion. As we can see from the graph below, this effect is not noticeable until an object is moving at a significant percentage of the speed of light. At speeds we experience on earth, this effect is not apparent at all. For example, a jet flying at mach 1 (the speed of sound, or about 1,235 km per hour) would only contract to 0.99999163 of its rest length. For a jet 20 meters long, this would only be a contraction of 0.1674 millimeters. Even if it were moving at half the speed of light, or 0.5c, it would only have contracted to 0.866 of its rest length.
Figure 4b
We haven't yet considered the effect of relativistic length contraction on our moving square A. Since our square will be contracted along its line of motion, we have to calcuate the length of the square along the x-axis with length contraction included. Length contraction is simply calculated as
$L = L_0\sqrt{1 - \frac{v^2}{c^2}}$
Where $$L_0$$ is the length of the object at v = 0.
We can dispense with division by $$c^2$$ since we are letting $$c = 1$$, and calculate the length of the square's sides thus:
Figure 4c
And since the square is 1 km long on each side, multiplying the top and bottom lengths with the length contraction formula gives us $$1 * \sqrt{1 - 0.99^2}$$, or 0.141067 km for the length of the square. Which would give the square dimensions more like this:
Figure 4d
So with these parameters in mind, we can now insert the corrected distances to the front face of the square into our quadratic equation. This will increase the value of g to |qx| - sw (understanding that the absolute value |qx| equals the distance d to the back face of the square along the x-axis). We will leave the initial position of the back corner of the square (point q) the same, and let the length contraction be represented by the coordinates of the square's front face.
Now when we solve for i, the result is i = -3.79 km. This means when the far corner of the bottom face at point q is 10 km away, our snapshot would show the near corner p to be located at p', stretched out to only 3.79 km away from us at point m (as measured along the x-axis). This means the bottom face of square A, though in reality being contracted in length to sw or 0.14 km while moving at v = 0.99c, would show its apparent length on our snapshot as 6.21 km in length! This is the Terrell-Penrose effect, or Terrell rotation. We'll see the rotation effect better when we move up our calculations from a line into two and then three dimensions.
Terrell Rotation in 2 Dimensions
We now have the tools to calculate the effect in a 2-D coordinate system. With our point q fixed at the location of far bottom corner of square A at given moment in time, we can scan with our quadratic equation along the lines comprising the four faces of the square, in relation to the fixed position of point q. Remember that point q represents the point from which light leaves the coinciding point on the square; but point q remains fixed as the inception point of our reference light path and does not change, even though the square continues to move forward at velocity v.
To calculate the amount of the Terrell-Penrose effect for the bottom face of the square, we set coordinate for py to the value of k. Then we calculate for each point px where $$q \leq p\leq sw.$$ It is almost like a raster scan along the bottom face of the square with point p sweeping along the line from q to the other bottom corner position of q + sw. Our quadratic equation can be used with graphing software to find and display the apparent location of each point as it will appear on our snapshot.
We then do the same process with the upper face of the square, but we would set point py to value k + s, and then we would again scan along the x-axis between the x-values of q to q + sw.
For the vertical sides of the square, would scan along the y-axis from ky to ky + s. The x-coordinate for the sides would be fixed at qx and qx +sw, respectively.
With qx,y at 10,1 and our square moving at v = 0.99c, the blue lines in figure 7a below represents the apparent positions of all points of the square from which light would arrive simultaneously to our snapshot at point m. Square A in its actual size and location at the moment of departure of our reference light path from q to m is shown in red. All the points on the blue faces illustrate where each point of the square would have to actually be in space for the light leaving them to arrive at point m simultaneously with the light that travels from q to m. And this is what our quadratic equation is able to calculate. (Note: In this article I will not address the extreme blueshift that happens to light when it emanates from an object approaching us at relativistic speeds. We'll just consider the Terrell-Penrose effect and not the frequency of the light that arrives to us.)
Now, we can place our point q on any spot on the square that we choose, and calculate for the rest of the square accordingly. In the above example with point q originating at the far bottom corner of the square, we see that the apparent position of the upper face extends backward in space to satistfy our calculations. If we choose to place point q at the UPPER far corner of the square, then all our calculations would show the rest of the square extending forward from that point. This may be the easiest orientation to work with, because the fartherst point of the square away from us would remain the reference point, but it is not the only orientation we can choose, and we are free to adjust our positions as we wish. We would get the identical amount of Terrell-Penrose effect shown in figure 6b if we set the location for q at the far upper corner of the square, and let qx= 16.803 and qy = k + s, as shown in figures 7a and 7b below.
Now let's look at some animated 2D illustrations of Terrell rotation at various velocities and positions. We must remember that our viewpoint of these figures is a cross-section that the observer does not see; the observer at m sees only the one-dimensional line-faces of the square as it approaches. In Figure 8 below, we see that at one-half of the speed of light, the Terrell rotation for our square is not that pronounced.
figure 8
But at 99% of the speed of light, the effect is much more extreme.
figure 9
And just for fun, let's look at the square coming at us dead center at 99.99% the speed of light. Even though the square is length contracted to 0.014 km, we still see it stretched out a surprising amount as it approaches. Only after it passes the observer would the squashed appearance reveal the extent of its length contraction.
figure 10
Terrell Rotation in 3 Dimensions
It's pretty straightforward to modify our equations to render the Terrell-Penrose effect in a 3D graphing program. Where we made our raster-like scan along the four lines comprising the square in our two-dimensional equations, now for three-dimensions we can use the same process to scan each of the six faces of a cube, line by line. By adding variables to describe the placement of the cube along the x-, y- and z-axes, we simply scan the front, back and sides vertically line by line along the z-axis, much like we did for the front and back of the square as a single-line scan for the 2D version. For the top and bottom of the cube, we would scan them horizontally along the y-axis.
For example, to scan the sides and calculate for point p, we can take our variable k that signifies the vertical distance to the point we are calculating (here represented as kz ) , and introduce a variable u to represent the distance to point p along the y-axis, and simply use the pythagorean theorem to find hypotenuse λ, which is the distance required to calculate the Terrell rotation for a point in relation to q (wherever that is chosen to be on the cube's surface) within 3D coordinates.
By employing similar techniques for the calculation required for all six faces of a cube, here are the results of animating the successive frames from our snapshot camera at point m. We can now view the phenomena of Terrell rotation from the exact point of view of the observer at point m, by making a video with the photos from our 'enhanced shutter speed' snapshot camera.
Because the rotation effect can be bewildering enough to make it difficult to identify which faces of the cube we are actually seeing, I have chosen different colors for the faces as exhibited below in figure 12.
figure 12
One of the most surprising and interesting effects of Terrell rotation is that an object moving toward you at relativistic speeds can have its back side be visible while the object is still approaching. I have made the back face of the cube red so you can see it become visible before the cube arrives. I've also made the bottom of the cube a blue grid that allows you to see some of the internal positions of the other faces. You'll notice immediately that the blue bottom of the cube is visible from many kilometers away, looking almost as if it were the front face of the cube.
figure 13
And here are three cubes approaching 1km above the ground, and 1km apart from each other. It's interesting to observe how the cubes in each location exhibit different qualities of the Terrell-Penrose effect.
figure 14
And so, we have explored the Terrell-Penrose effect and looked at some examples of this unusual phenomenon. As a last look, here is a video of a cube's center passing right through the observer at v=0.99c, with musical accompaniment.
© 2020 Andrew York
|
{}
|
Browse Questions
# Which of the following is the strongest reducing agent?
$(a)\;Be\qquad(b)\;Ba\qquad(c)\;Ca\qquad(d)\;Mg$
Can you answer this question?
$Ba$ is the strongest reducing agent
Hence (b) is the correct answer.
answered Jan 28, 2014
|
{}
|
+0
# equation of line
0
46
1
Write an equation of the line that passes through the given point and is perpendicular to the given line
(-4,-4); y = -2x - 3
Mar 19, 2021
#1
+507
0
Since the slope of that line is -2, the the perpendicular one would be its negative reciprocal, which is $$\frac{1}{2}$$
This means you can use the equation and plug in the point (-4, -4) to find the y intercept:
$$y = mx + b$$
$$-4 = \frac{1}{2} \cdot (-4) + b$$
$$-4 = -2 + b$$
$$b = -2$$
This means the equation of the line, and your answer, is: $$y = \frac{1}{2}x -2$$
Mar 19, 2021
|
{}
|
### Counting Counters
Take a counter and surround it by a ring of other counters that MUST touch two others. How many are needed?
### Cuisenaire Rods
These squares have been made from Cuisenaire rods. Can you describe the pattern? What would the next square look like?
### Doplication
We can arrange dots in a similar way to the 5 on a dice and they usually sit quite well into a rectangular shape. How many altogether in this 3 by 5? What happens for other sizes?
# Journeying in Numberland
## Journeys in Numberland
Tom and Ben are in Numberland in the district called Addition.
They have a map which looks like this:
They are at point B and they begin their journey with ten points.
For every square they walk to the right on the map, they add five.
For every square they walk to the left on the map, they take away five.
If they go North (up on the map), they add two for every square, and if they go South (down on the map), they take away two for every square.
First they make these journeys:
The blue line shows Tom's journey and the green line shows Ben's.
How many points do they have each when they reach E?
Do you notice anything?
Here is a different grid for you to make up some journeys of your own, beginning at B and ending at E.
You can download and print off this sheet which has two copies of the grid map.
After they had explored in the district called Addition in Numberland, Tom and Ben go on to the district called Multiply.
Here they have a new map which looks like this (here are two copies of the map):
They explore here too. Each time they start at B with $10$ points and make their way to E. Try lots of journeys yourself.
What do you notice about the journeys this time?
Can you explain why this happens?
### Why do this problem?
This problem will give learners a chance to make predictions and generalisations. It also provides practice in simple addition and subtraction, and later in multiplication and division. It draws out the inverse relationship between the pairs of operations but it also encourages children to think about the order of operations.
You will need copies of this sheet, and for the second part of the problem this sheet. Squared paper might also be useful.
### Possible approach
You could start by showing the first part of the problem to the whole group and by explaining the setting for the problem. A small scale version could be drawn out on the playground or on the hall floor so that the game can be played practically. The first 'journeys' of both boys could be worked out at this stage.
After this introduction, the group could work in pairs so that they are they are able to talk through their ideas with a partner, using copies of the first sheet. Encourage them to find interesting routes that use subtraction as well as addition. Routes can be drawn using different colours but pairs may well need more than one copy of the sheet. Children may need to use jottings to keep track of their calculations and these could be done on paper or mini whiteboards, for example.
Before having a go at the second part of the problem (multiplication and division), encourage pairs to predict what they think might happen. You may feel that calculators could be used for checking results at this stage.
At the end of the lesson, bring the whole group together again to discuss their findings. They could show their most interesting and/or longest routes. Were they surprised by the results? Why do they think this happened? Although this task focuses only on numerical operations, the explanation of the results demands a very sound understanding of the number system.
### Key questions
Can you find a more interesting way to go that uses subtraction as well as addition?
|
{}
|
# Miscellaneous Plasma Parameters (plasmapy.formulary.misc)
Functions for miscellaneous plasma parameter calculations.
## Functions
Bohm_diffusion(T_e, B) Return the Bohm diffusion coefficient. Calculate the magnetic energy density. Calculate the magnetic pressure. mass_density(density, particle[, z_ratio]) Calculate the mass density from a number density. Return the thermal pressure for a Maxwellian distribution.
## Aliases
PlasmaPy provides short-named (alias) versions of the most common plasma functionality. These aliases are only given to functionality where there is a common lexicon in the community, for example has the alias . All aliases in PlasmaPy are denoted with a trailing underscore _.
DB_(T_e, B) Alias to Bohm_diffusion. Alias to magnetic_pressure. pth_(T, n) Alias to thermal_pressure. rho_(density, particle[, z_ratio]) Alias to mass_density. Alias to magnetic_energy_density.
|
{}
|
Originally Posted by TacticalPro
shoot me.... i cramped, thought it was sin^(2x+x)=1
so anyway it should be:
sin2x+sinx=1
sin3x=1
sinx=1/3
Sorry, but if you cannot simplify sin(2x + x) and solve sin(3x) = 1 correctly then you should not even consider attempting to solve sin2xcosx+cos2xsinx=1. You need to go right back to the basics and review them carefully, in partnership with your teacher or a live tutor.
|
{}
|
Set section and subsection header to ttf font in KOMA-script
I'd like to set a custom font I acquired for all section and subsection headers in a scratcl paper I'm writing. It's a ttf and installed correctly on my Mac.
I know about \setkomafont and \addtokomafont, and I also have found the fontspec package, but all that seems to do is set the font for the entire document via \setmainfont, or for a section via \fontspec if I understand correctly.
I just can't seem to add one and one here, so to recap: how do I set (sub)section headers to a font installed on my Mac?
-
To use the system fonts you need to use xe(la)tex or lualatex instead of (pdf)latex. And with fontspec you can define a new family with \newfontfamily.
% I'm a UNICODE/UTF-8 encoded file!
\documentclass{scrartcl}
\usepackage{fontspec}
\newfontfamily\atfamily{American Typewriter}
% for testing
\usepackage{lipsum}
\begin{document}
\section{Test}
\subsection{Test}
\lipsum[1]
\end{document}
Save this file as UTF-8 file and compile it with xelatex then you’ll get this.
-
Thanks for answering! Also thanks for using \addtokomafont, I hate having to figure out sizes and stuff :) – Zsub Mar 26 '12 at 10:44
If you want to use pdf(la)tex you will have to generate the various support files needed: metrics .tfm, encoding files, font definition files, map files, perhaps virtual font files.
With xelatex/lualatex you can use the font directly:
\documentclass{scrreprt}
\usepackage{fontspec,color}
\setkomafont{subsection}{\color{red}\fontspec{Times New Roman}}
\begin{document}
\chapter{Abc}
\section{text}
\subsection{text}
\end{document}
-
Thanks for answering! – Zsub Mar 26 '12 at 10:42
|
{}
|
## Algebra: A Combined Approach (4th Edition)
$-\dfrac{5}{3}$
Given $\dfrac{x^2-y+2z}{3x} ,$ if $x=-2 \ , \ y = 0$ and $z = 3$ $\dfrac{(-2)^2-0+2\times 3}{3\times (-2)} = \dfrac{4+6}{-6} = -\dfrac{10}{6} = -\dfrac{5}{3}$
|
{}
|
## Differential and Integral Equations
### Bounded holomorphic functional calculus for non-divergence form differential operators
#### Abstract
Let $L$ be a second-order elliptic partial differential operator of non-divergence form acting on ${\bf R^n}$ with bounded coefficients. We show that for each $1 < p_0 <2, L$ has a bounded $H_{\infty}$-functional calculus on $L^p({\bf R^n})$ for $p_0 <p <\infty$ if the $BMO$ norm of the coefficients is sufficiently small.
#### Article information
Source
Differential Integral Equations, Volume 15, Number 6 (2002), 709-730.
Dates
First available in Project Euclid: 21 December 2012
|
{}
|
Let $$w=a_1a_2a_3...$$ be an infinite word over a finite alphabet and $$\epsilon>0$$. Do there exist integers $$n,k$$ such that $$\frac{d(a_1a_2...a_n,a_{k+1}a_{k+2}...a_{k+n})}{n}<\epsilon$$ ? ($$d(u,v)$$ is the hamming distence)
• Is your alphabet just $\{0,1\}$ or an arbitrary finite set? – fedja May 14 at 1:56
• An arbitrary finite set – Phan Quốc Vượng May 14 at 2:18
• Off hand the following should be a counterexample for small enough $\varepsilon$: Consider the sequence 1 "a", 2 random symbols "b" or "c" independent with probability $1/2$ each, 4 "a", 8 random "b,c", 16 "a", 32 random "b,c", etc. It looks like with positive probability it works for all $n\ge n_0$ with some $n_0$ because $k>16n$ or so are definitely useless. Now replace the first $n_0$ symbols by some unique ones and your counterexample is ready. – fedja May 14 at 2:27
• Thank you. What about binary word? – Phan Quốc Vượng May 14 at 9:23
• If we ignore small $n$, the same counterexample should work. If we understand the question literally, then $a_1$ cannot repeat, so we don't have much choice. – fedja May 14 at 10:45
OK, let's go over it slowly.
The alphabet will consist of 4 symbols: $$x,u,b,c$$.
The infinite word will be $$xU_1Q_2U_3Q_4U_5Q_6\dots$$ where $$U_m$$ is the finite word consisting of $$m$$ symbols $$u$$ and $$Q_m$$ is the random word consisting of $$m$$ symbols each of which is $$b$$ or $$c$$ with probability $$1/2$$ with the convention that the choices of symbols at different positions are independent. So you get something like $$xubcuuubcbbuuuuucbbccbuuuuuuucccbcbbb\dots$$
It is easy to check (see the discussion here) that as $$n\to\infty$$, the string $$a_1a_2\dots a_n$$ contains one symbol $$x$$ ($$a_1=x$$), $$\frac n2+O(\sqrt n)$$ symbols $$u$$ and $$\frac n2+O(\sqrt n)$$ symbols each of which is $$b$$ or $$c$$.
Now suppose that $$n$$ is large enough and $$k>n^2$$. Then the $$u$$'s in the word $$a_{k+1}a_{k+2}\dots a_{k+n}$$ form a single block and the non-$$u$$'s form another block. One of these blocks has length $$\ell \ge n/2$$. However, the corresponding block in $$a_1a_2\dots a_n$$ is occupied by $$\frac \ell 2+O(\sqrt n)$$ symbols $$u$$ and $$\frac \ell 2+O(\sqrt n)$$ symbols that are not $$u$$, so the Hamming distance in question is at least $$\frac \ell 2+O(\sqrt n)\ge \frac n4+O(\sqrt n)\ge \frac n5$$ if $$n\ge n_0$$.
Thus we need to look only at $$k\le n^2$$ for large $$n$$. We have $$\frac n2+O(\sqrt n)$$ random symbols in $$a_1a_2\dots a_n$$ and, for fixed $$k\ge 1$$, the probability that each of them is matched in $$a_{k+1}a_{k+2}\dots a_{k+n}$$ is $$0$$ or $$1/2$$, the corresponding events being independent. Thus, the chance that we have at least $$\frac n3$$ matchings instead of expected $$\le\frac n4$$ is at most $$Ce^{-cn}$$ by the Bernstein (a.k.a. Chernov, Hoeffding, etc.) bound. Since the series $$\sum_n Cn^2e^{-cn}$$ converges, we conclude that with probability close to $$1$$, the Hamming distance in question is at least $$\frac n6+O(\sqrt n)>\frac n7$$ for all $$n\ge n_0$$, $$k\le n^2$$.
Finally, due to the uniqueness of $$x$$ in the word, the Hamming distance is always at least $$1$$, so the ratio in question is never less than $$\min(\frac 17,\frac 1{n_0})$$.
I hope it is clearer now but feel free to ask questions if something is still confusing.
By the way, the word "conjecture" means "a statement supported by extensive circumstantial evidence and several rigorous partial results", not "something that just came to my head" or "something I want to be true", so, since you put it in the title, I wonder what positive results you can prove here.
• Thank you. What happen if the alphabet consist of $2$ symbols?. That conjecture is from a simple problem:Let $w=a_1a_2...$ be an binary infinite words and $N>0$. Then there exit $n,k$ such that $d(a_1...a_n,a_{k+1}...a_{k+n})<\frac{n}{2}−N$ or $d(a_1...a_n,a_{k+1}...a_{k+n})>\frac{n}{2}+N$. I wonder that can we put $d(a_1...a_n,a_{k+1}...a_{k+n})$ be smaller (being larger is impossible with $w=000...$). – Phan Quốc Vượng May 16 at 4:47
• @PhanQuốcVượng If you care only about sufficiently large $n$, you can emulate the 3-letter alphabet by putting $u=00001111, b=00110011, c=01010101$, say (the key feature is that if $8$ does not divide $k$, you have at least 1 discrepancy in every octuplet and if it does, then you can just think of u,b,c as single symbols. – fedja May 16 at 6:41
|
{}
|
# Difference between density and distribution [in formal mathematical terms]
A similar question has been already asked but its not in mathematical framework and therefore seems to be different. According to definitions from the book that I am reading, a random variable and a distribution are defined as follows:
Definition. Let $(\Omega', \mathcal{A}')$ be a measurable space and let $X:\Omega\to\Omega'$ be measurable. Then $X$ is called a random variable.
Definition. Let $X$ be a random variable. The probability measure $P_X:=P\circ X^{-1}$ is called the distribution.
Now according to what I see in physical textbooks there is some other thing called density and that differs from distribution. How that one is formally defined?
• Short answer: a probability density, when it exists, is the derivative of its corresponding distribution. – Giuseppe Negro Apr 6 '14 at 16:24
• @GiuseppeNegro, Thank you. It would have been great if you have written this as an answer – Cupitor Apr 6 '14 at 17:57
The distribution is simply the assignment of probabilities to sets of possible values of the random variable. If I tell you how probable it is that a certain random variable is between $3$ and $5$, and also how probably it is that it's in every other possible set, then I've told you the distribution. Since I can't do this for every set individually, since there are infinitely many sets, perhaps a more down-to-earth way to say this is this: Suppose $X$ and $Y$ are random variables. If it is true of every set that the probability that $X$ is in that set is the same as the probability that $Y$ is in that same set, then $X$ and $Y$ have the same distribution.
A probability density function is a way of characterizing some distributions. For example, consider the function $$f(x) = \begin{cases} 0 & \text{if }x<0, \\ e^{-x} & \text{if }x\ge 0. \end{cases}$$ To say that this is the probability density function of a random variable $X$ is to say that for every measurable set $A$ of real numbers, $$\Pr(X\in A) = \int_A f(x)\,dx.$$ The probability assigned to each set $A$ is given by the integral above. A more concrete example: $$\Pr(3<X<5) = \int_3^5 e^{-x}\,dx\text{ and }\Pr(X\ge 2) = \int_2^\infty e^{-x}\,dx.$$
Not every probability distribution has a density. Say we let $X$ be the number of aces when a die is thrown four times. Then $X\in\{0,1,2,3,4\}$. The probability distribution assigns a positive number to every set that intersects that last set. For example the set $\{x : x\ge 3.2\}$ intersects $\{0,1,2,3,4\}$ and thus the probability distribution of $X$ assigns a positive number to that set. But there is no function $f$ such that for every set $A$ we have $\int_A f(x)\,dx$ equal to the probability that $X\in A$.
PS prompted by comments below: To put it in a different kind of language: Say $m$ is a measure (not necessarily assigning finite measure to the whole space) on the set of all measurable subsets of a space $S$. A probability density with respect to the measure $m$ is a measurable function $f:S\to[0,\infty)$ such that the function $$A\mapsto \int_A f\,dm$$ is a probability measure on the set of measurable subsets of $S$.
A probability distribution on $S$ is simply a probability measure on the set of all measurable subsets of $S$. But not quite "simply": The probability distribution of a random variable $X:\Omega\to S$ is the probability measure on measurable subsets of $S$ that assigns measure $P(\{\omega\in\Omega : X(\omega)\in A\})$ to each measurable subset $A$ of $S$.
PPS: When $f\ge0$ is a measurable function on Borel or Lebesgue-measurable subsets of $\mathbb R$, one sometimes refers to the "measure" $f(x)\,dx$, meaning the measure $$A\mapsto \int_A f(x)\,dx.$$ If in addition $\displaystyle\int_{\mathbb R} f(x)\,dx=1$, so that $f$ is a probability density, then one may similarly refer to the "probability distribution" $f(x)\,dx$.
(Of course, not all probability distributions on Borel subsets of the real line are of this form.)
• I appreciate the time you have spent on answering the question with such a detailed answer but I was looking for the formal definition. One more thing is what you have defined again(distributions) has been already defined as part of my question. – Cupitor Apr 6 '14 at 15:47
• @Cupitor : I've now added a "formal" definition. – Michael Hardy Apr 6 '14 at 16:17
• @O.B.D.A. : Typo fixed. – Michael Hardy Apr 6 '14 at 16:45
|
{}
|
General Information
Student: Veronika Steffanova 450 Charles University in Prague veronika.steffanova guess_what_is_here rutgers.edu $L(p,q)$-labeling of Interval Graphs
$L(p,q)$-labeling of Interval Graphs
Project Description
Interval graphs are intersection graphs of a family of intervals of real numbers. $L(p,q)$-labeling of a graph $G$ is a mapping $l \colon V_G \to X$ where $X \subset \mathbb{Z}$ such that
• $|f(u) - f(v)| \geq p$ whenever the vertices $u$ and $v$ are connected by an edge,
• $|f(u) - f(v)| \geq q$ whenever there exists a vertex $w$ such that both $u$ and $v$ are neighbors of $w$.
Finally, span of graph $G$ is the smallest number $k$ such that there exists $L(p,q)$-labeling of $G$ using $X = \{ 0, \dots, k \}$. In this project, we look for a formula for the span of $L(2,1)$ for the class of interval graphs and its connection to the chromatic number of the graph and the maximum degree of the graph.
Previous work
• Peter Che Bor Lam, Guohua Gu, Wai Chee Shiu, Tao-Ming Wang: On Distance Two Labelling of Unit Interval Graphs here.
• G.J. Chang; D. Kuo, The L(2,1)-labeling problem on graphs. SIAM J. Discrete Math. 9 (1996), 309--316.
• D. Sakai, Labelling chordal graphs: distance two condition. SIAM J. Disc. Math. 7 (1994), 133--140.
• J.R. Griggs; R.K. Yeh, Labelling graphs with a condition at distance 2. SIAM J. Discrete Math. 5 (1992), 586--595.
Current activities
First week
• Introduction, definition
• Presentation
• Search the articles.
Second week
• First observations.
• Bridge workshop
• Talks
Third week
• Try to find an algorithm for labeling interval graphs.
• Upper bound from the algorithm.
• Bridge workshop
• Talks
Fourth week
• Lower bound.
• Bridge workshop
• Talks
• Cultural day
Fifth week
• $L(2,1)$ - labeling is NP-complete on interval graphs
• Improving the upper bound to be tight.
• Bridge workshop
• Looking for example for more general example for lower bound
Sixth week
• Looking for example for more general example for lower bound
• Path generalization
• Star generalization
• Bridge workshop
Seventh week
• Writing the report
• Looking for an example with higher chromatic number
• Preparing the presentation for Friday
• Talks
|
{}
|
# Cumulative distribution function determine the random variable
I don't know that determine is the right word, but I try to explain. What I need to understand. :) So.. We know's that if a function fit this conditions:
• Monotonically non-decreasing for each of its variables
• Right-continuous for each of its variables.
$$0 \le F(x_1,\ldots,x_n) \le 1$$ $$\lim_{x_1,\ldots,x_n\to\infty} F(x_1,\ldots,x_n)=1$$ $$\lim_{x_i\to-\infty} F(x_1,\ldots,x_n) = 0,\text{ for all } i$$ then the function is or can be a cumulative distribution function.
In this logic the cumulative distribution function determine the random variable? How I can prove it in mathematical way? This is true, I understand in my own way, but not mathematically.
Maybe we can start that the cumulative distribution function determine the probability distribution and vica versa. But how I can prove it mathematically that, the probability distribution determine random variable?
Thanks for your explanation, I am really grateful:)
-
Not all functions satisfying the conditions you have stated are necessarily cumulative distribution functions (CDFs). You also need to have (for the case $n=2$) that for all $a<b$ and $c<d$ that $$P\{a<X_1\leq b, c<X_2\leq d\}=F(b,d)-F(a,d)-F(b,c)+F(a,c) \geq 0$$ and similarly for larger $n$. For example $$F(x,y)= \begin{cases} 1, & x \geq 1 ~ \text{or}~ y\geq 1,\\0, &\text{otherwise,}\end{cases}$$ is non-decreasing, right-continuous, etc but is not a valid CDF. Also, the CDF does not determine a random variable. Note, for example, that $X\sim\text{Bernoulli}(0.5)$ and $1-X$ have same CDF. – Dilip Sarwate Nov 3 '12 at 16:45
thank you the answer, I not understand sure yet, but I try to understand. Can you explain with more detail why CDF not determine the random variable, with examples and expressions? thank you very much. – Tatar Elemér Nov 3 '12 at 19:34
The CDF determines what kind of random variable you have, but not which random variable you have. All random variables with CDF $$F(x)=\begin{cases}0,&x< 0,\\\frac{1}{2},&0\leq x<1,\\1,&x\geq 1,\end{cases}$$ are called Bernoulli random variables with parameter $\frac{1}{2}$. If $X\sim\text{Bernoulli}(\frac{1}{2})$, then so is $Y=1-X$ a Bernoulli random variable with parameter $\frac{1}{2}$ and $P\{X=Y\}=0$. If $X=1$ iff the first toss of a fair coin was H and $Y=1$ iff the second toss was H, then $X$ and $Y$ are independent Bernoulli random variables and $P\{X=Y\}=\frac{1}{2}$. – Dilip Sarwate Nov 3 '12 at 20:43
To add to my comment above, $X$ and $Y$ are the same kind of random variable in my examples above, but they are not the same variable ($X$ is not the same as $Y$: if they were the "same" variable, then it would be the case that $P\{X=Y\}=1$.) The technical name for "being the same" is almost surely (abbreviated a.s.) and one would say that $X=Y$ a.s. Also, in my two examples, $X$ and $Y$ have different relationships between them: in the first example, $Y$ is a function of $X$ while in the second example, $X$ and $Y$ are mutually independent. – Dilip Sarwate Nov 3 '12 at 21:08
I understand now, thank you, please write an answer and I accept it, because you are the first. – Tatar Elemér Nov 5 '12 at 21:48
In general the CDF does not determine the distribution function. Consider for instance the uniform distributions over $[a,b]$ and over $(a,b)$. The distribution functions are different but it is straightforward to check that the CDFs are identical.
and similarly a uniform distribution over the irrationals in $(a,b)$. But they are identical distributions except for a set of probability $0$. – Henry Nov 3 '12 at 21:09
|
{}
|
# A 125 GeV SM-like Higgs in the MSSM and the $\gamma \gamma$ rate
Authors
Type
Preprint
Publication Date
Mar 28, 2012
Submission Date
Dec 14, 2011
Identifiers
DOI: 10.1007/JHEP03(2012)014
Source
arXiv
We consider the possibility of a Standard Model (SM)-like Higgs in the context of the Minimal Supersymmetric Standard Model (MSSM), with a mass of about 125 GeV and with a production times decay rate into two photons which is similar or somewhat larger than the SM one. The relatively large value of the SM-like Higgs mass demands stops in the several hundred GeV mass range with somewhat large mixing, or a large hierarchy between the two stop masses in the case that one of the two stops is light. We find that, in general, if the heaviest stop mass is smaller than a few TeV, the rate of gluon fusion production of Higgs bosons decaying into two photons tends to be somewhat suppressed with respect to the SM one in this region of parameters. However, we show that an enhancement of the photon decay rate may be obtained for light third generation sleptons with large mixing, which can be naturally obtained for large values of $\tan\beta$ and sizable values of the Higgsino mass parameter.
|
{}
|
Unlimited Plugins, WordPress themes, videos & courses! Unlimited asset downloads! From \$16.50/m
# Making Things Faster With Gearman and Supervisor
Difficulty:IntermediateLength:ShortLanguages:
Sometimes our services need to perform some huge tasks after user interaction. For example, we need to send a letter, generate a report file, or call external APIs. These kinds of tasks can be slow because of third parties and can consume the resources of your server.
In this case, an application can become a snake eating an elephant, as in the book The Little Prince. You take some data from a user and make him wait because the snake needs some time to digest an elephant (or something else that your app needs to do):
To process this functionality faster, you need to make the parts of your application asynchronous. You can achieve this by delegating this task to a more powerful server or running it in a background process.
And Gearman is a proper tool that can be used to do this.
## What Are We Going to Do?
In this tutorial, we will create a simple application that will delegate a task from a client to the Gearman worker. Our application will calculate a Fibonacci sequence in three processes. To run worker processes, we will install and configure Supervisor.
Please note that the examples in this tutorial need PHP7 to run.
## So What Is Gearman Anyway?
First, let's discover what Gearman is from its homepage:
Gearman provides a generic application framework to farm out work to other machines or processes that are better suited to do the work. It allows you to do work in parallel, to load balance processing, and to call functions between languages. It can be used in a variety of applications, from high-availability web sites to the transport of database replication events. In other words, it is the nervous system for how distributed processing communicates.
In other words, Gearman is a queuing system that is easy to scale on many servers and flexible to use because of multi-language support.
## Install Gearman
If you are running Debian/Ubuntu, run the following command to install Gearman with the necessary tools and PHP extension:
After that, run the Gearman server and check the status:
But you will not see anything helpful after the status command because we haven't started any worker yet. Just remember this until we need it.
## Create a Client
And we are ready to start a script called client.php. This script will create a Gearman client and send information to a server on the same machine:
You may have noticed that we sent numbers in a JSON format. Gearman clients and workers talk to each other in a string format, so one of the ways to serialize an array is to use the json_encode() function or something similar.
After receiving an answer from the worker, we will unserialize it with json_decode() and output as CSV rows:
We have just finished our client script, so let's run it from terminal:
But it will be stuck without any output. Why? It is waiting for a worker to connect.
## Create a Worker
It's time to create a worker to do the job that was ordered by the client. We will require a file with the fibonacci() function and create a new Gearman worker on the current server:
After this, we will add a new function called the same as we called it in the client code:
And, of course, don't forget to wrap your answer to JSON format. The last thing to do is loop the worker script to use it many times without restarting:
We can run the worker script in the background:
At this moment, you may already have observed that the client script has ended its job and written something like this:
## Check the Gearman Status
Finally, we have our worker running, so we can check the status again:
In each row, there is a function name and three numbers: the number of tasks in the queue (0), the number of jobs running (1), and the number of capable workers (2).
Of course, to add more workers, you can run more worker scripts. To stop each of them, you can use killall. But there is a great tool to manage workers, and it is called Supervisor.
## A Few Words About Supervisor
As the manual says:
Supervisor is a client/server system that allows its users to monitor and control a number of processes on UNIX-like operating systems.
Let's install it and create the basic configuration file:
In the editor that opens, we will create a basic configuration for a Gearman worker:
This will say to Supervisor that the worker must run in three processes and restart when ended. Now save the configuration file, reload Supervisor, and check the status of the running processes:
We can see three workers that are ready to take jobs from client scripts.
## Conclusion
We've completed the basic tasks to install and configure Gearman. Now you are free to play with example code, so try to make the following changes to the code:
• Add some worker process in the background, like sending an e-mail.
• Play with task priorities using GearmanClient::doHigh.
• Chunk data using GearmanJob::sendData, which can be useful in the case of long tasks that can be observed by the status bar.
Also, you can scale the power of your workers by increasing the number of processes or running them on a faster server. And don't forget to use Supervisor to make your workers run.
If you have any questions, don't hesitate to ask questions in the comments to the article.
|
{}
|
CryptoDB
Yiannis Tselekounis
Publications
Year
Venue
Title
2021
PKC
Trusted execution enviroments (TEEs) enable secure execution of program on untrusted hosts and cryptographically attest the correctness of outputs. As these are complex systems, it is hard to capture the exact security achieved by protocols employing TEEs. Crucially TEEs are typically employed in multiple protocols at the same time, thus composable security (with global subroutines) is a natural goal for such systems. We show that under an attested execution setup $\Gatt$ we can realise cryptographic functionalities that are unrealizable in the standard model. We propose a new primitive of Functional Encryption for Stateful and Randomised functionalities (FESR) and an associated protocol, Steel, that realizes it. We show that Steel UC-realises FESR in the universal composition with global subroutines model (TCC 2020). Our work is also a validation of the compositionality of earlier work (Iron}, CCS 2017) capturing (non-stateful) hardware-based functional encryption. As the existing functionality for attested execution of Pass et al. (Eurocrypt 2017) is too strong for real world use, we propose a weaker functionality that allows the adversary to conduct rollback and forking attacks. We show that the stateful variant of $\Steel$, contrary to the stateless variant corresponding to Iron, is not secure in this setting and propose several mitigation techniques.
2020
CRYPTO
Secure messaging (SM) protocols allow users to communicate securely over untrusted infrastructure. In contrast to most other secure communication protocols (such as TLS, SSH, or Wireguard), SM sessions may be long-lived (e.g., years) and highly asynchronous. In order to deal with likely state compromises of users during the lifetime of a session, SM protocols do not only protect authenticity and privacy, but they also guarantee forward secrecy (FS) and post-compromise security (PCS). The former ensures that messages sent and received before a state compromise remain secure, while the latter ensures that users can recover from state compromise as a consequence of normal protocol usage. SM has received considerable attention in the two-party case, where prior work has studied the well-known double-ratchet paradigm, in particular, and SM as a cryptographic primitive, in general. Unfortunately, this paradigm does not scale well to the problem of secure group messaging (SGM). In order to address the lack of satisfactory SGM protocols, the IETF has launched the message-layer security (MLS) working group, which aims to standardize an eponymous SGM protocol. In this work we analyze the TreeKEM protocol, which is at the core of the SGM protocol proposed by the MLS working group. On a positive note, we show that TreeKEM achieves PCS in isolation (and slightly more). However, we observe that the current version of TreeKEM does not provide an adequate form of FS. More precisely, our work proceeds by formally capturing the exact security of TreeKEM as a so-called continuous group key agreement (CGKA) protocol, which we believe to be a primitive of independent interest. To address the insecurity of TreeKEM, we propose a simple modification to TreeKEM inspired by recent work of Jost et al. (EUROCRYPT '19) and an idea due to Kohbrok (MLS Mailing List). We then show that the modified version of TreeKEM comes with almost no efficiency degradation but achieves optimal (according to MLS specification) CGKA security, including FS and PCS. Our work also lays out how a CGKA protocol can be used to design a full SGM protocol.
2018
CRYPTO
Non-malleable codes were introduced by Dziembowski, Pietrzak and Wichs (ICS ’10) and its main application is the protection of cryptographic devices against tampering attacks on memory. In this work, we initiate a comprehensive study on non-malleable codes for the class of partial functions, that read/write on an arbitrary subset of codeword bits with specific cardinality. Our constructions are efficient in terms of information rate, while allowing the attacker to access asymptotically almost the entire codeword. In addition, they satisfy a notion which is stronger than non-malleability, that we call non-malleability with manipulation detection, guaranteeing that any modified codeword decodes to either the original message or to $\bot$⊥. Finally, our primitive implies All-Or-Nothing Transforms (AONTs) and as a result our constructions yield efficient AONTs under standard assumptions (only one-way functions), which, to the best of our knowledge, was an open question until now. In addition to this, we present a number of additional applications of our primitive in tamper resilience.
2013
ASIACRYPT
|
{}
|
Solved
How to import a text file to a environment variable in a batch file
Posted on 2011-02-22
548 Views
I am trying to use a path previously stored as a text file to set an environment variable in a windows batch file. I had thought that type file.txt >> %var% would work but it didn't. This should be simple but I need help figuring it out.
0
Question by:ProTek2
[X]
Welcome to Experts Exchange
Add your voice to the tech community where 5M+ people just like you are talking about what matters.
• Help others & share knowledge
• Earn cash & points
• 2
• 2
LVL 16
Accepted Solution
sjklein42 earned 500 total points
ID: 34952758
File containing path:
path.txt
c:\foo\bar
command:
for /F %i in (path.txt) do set mysymbol=%i
inside a batch file setPathToFileContents.bat (note doubled %):
@for /F %%i in (%2) do @set %1=%%i
call this way
setPathToFileContents mysymbol path.txt
set
...
mysymbol=c:\foo\bar
...
0
LVL 3
Expert Comment
ID: 34952763
Hi, I think you should use "Echo" instead of "Type" if you'd be doing it that way or instead use the "SET" command inside of the batch file instead.
0
Author Closing Comment
ID: 34955398
I'm sure that my inexperience with scripting was the only reason that I didn't follow it exactly. However, I realized after that problem was solved that I hadn't asked the right question. The answer saves the path but it gives no way to find WHERE it is saved when needed.
0
LVL 16
Expert Comment
ID: 34955744
Yes. You can use the "path" by referring to it as %mysymbol%
For example, after calling
setPathToFileContents mysymbol path.txt
You can then use that path:
dir %mysymbol%
Of course, you don't need to call it "mysymbol".
0
Author Comment
ID: 34956399
Not the way I'm using it. The initial batch file will be started by a downloaded setup package and in that .cmd file, PowerShell is used to restart it with the "runas" parameter because administrative privileges are required for the other activities, not the least of which is moving a file into the system path for subsequent use. When the cmd shell is invoked, it is running in c:\windows\system32 instead of the download folder and the normal variables become null. The path may have been saved in a file but the path to the file (which holds the path to the file) is also lost. It turns out that a PushD created variable survives the elevation process and I used that to concatenate the "move" directive that I needed.
But thank you for your information. I'm sure that it will come in handy at some point.
0
Featured Post
Question has a verified solution.
If you are experiencing a similar issue, please ask a related question
If you have done a reformat of your hard drive and proceeded to do a successful Windows XP installation, you may notice that a choice between two operating systems when you start up the machine. Here is how to get rid of this: Click Start Clic…
Possible fixes for Windows 7 and Windows Server 2008 updating problem. Solutions mentioned are from Microsoft themselves. I started a case with them from our Microsoft Silver Partner option to open a case and get direct support from Microsoft. If s…
In this video, we discuss why the need for additional vertical screen space has become more important in recent years, namely, due to the transition in the marketplace of 4x3 computer screens to 16x9 and 16x10 screens (so-called widescreen format). …
The viewer will learn how to successfully create a multiboot device using the SARDU utility on Windows 7. Start the SARDU utility: Change the image directory to wherever you store your ISOs, this will prevent you from having 2 copies of an ISO wit…
Suggested Courses
Course of the Month4 days, 19 hours left to enroll
|
{}
|
13.4 Induced electric fields (Page 3/4)
Page 3 / 4
Summary
• A changing magnetic flux induces an electric field.
• Both the changing magnetic flux and the induced electric field are related to the induced emf from Faraday’s law.
Conceptual questions
Is the work required to accelerate a rod from rest to a speed v in a magnetic field greater than the final kinetic energy of the rod? Why?
The work is greater than the kinetic energy because it takes energy to counteract the induced emf.
The copper sheet shown below is partially in a magnetic field. When it is pulled to the right, a resisting force pulls it to the left. Explain. What happen if the sheet is pushed to the left?
Problems
Calculate the induced electric field in a 50-turn coil with a diameter of 15 cm that is placed in a spatially uniform magnetic field of magnitude 0.50 T so that the face of the coil and the magnetic field are perpendicular. This magnetic field is reduced to zero in 0.10 seconds. Assume that the magnetic field is cylindrically symmetric with respect to the central axis of the coil.
4.67 V/m
The magnetic field through a circular loop of radius 10.0 cm varies with time as shown in the accompanying figure. The field is perpendicular to the loop. Assuming cylindrical symmetry with respect to the central axis of the loop, plot the induced electric field in the loop as a function of time.
The current I through a long solenoid with n turns per meter and radius R is changing with time as given by dI / dt . Calculate the induced electric field as a function of distance r from the central axis of the solenoid.
Inside, $B={\mu }_{0}nI\text{,}\phantom{\rule{0.5em}{0ex}}\oint \stackrel{\to }{E}·d\stackrel{\to }{l}=\left(\pi {r}^{2}\right){\mu }_{0}n\frac{dI}{dt},$ so, $E=\frac{{\mu }_{0}nr}{2}·\frac{dI}{dt}$ (inside). Outside, $E\left(2\pi r\right)=\pi {R}^{2}{\mu }_{0}n\frac{dI}{dt},$ so, $E=\frac{{\mu }_{0}n{R}^{2}}{2r}·\frac{dI}{dt}$ (outside)
Calculate the electric field induced both inside and outside the solenoid of the preceding problem if $I={I}_{0}\phantom{\rule{0.2em}{0ex}}\text{sin}\phantom{\rule{0.2em}{0ex}}\omega t.$
Over a region of radius R , there is a spatially uniform magnetic field $\stackrel{\to }{B}.$ (See below.) At $t=0$ , $B=1.0\phantom{\rule{0.2em}{0ex}}\text{T,}$ after which it decreases at a constant rate to zero in 30 s. (a) What is the electric field in the regions where $r\le R$ and $r\ge R$ during that 30-s interval? (b) Assume that $R=10.0\phantom{\rule{0.2em}{0ex}}\text{cm}$ . How much work is done by the electric field on a proton that is carried once clock wise around a circular path of radius 5.0 cm? (c) How much work is done by the electric field on a proton that is carried once counterclockwise around a circular path of any radius $r\ge R$ ? (d) At the instant when $B=0.50\phantom{\rule{0.2em}{0ex}}\text{T}$ , a proton enters the magnetic field at A , moving a velocity $\stackrel{\to }{v}$ $\left(v=5.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{6}\phantom{\rule{0.2em}{0ex}}\text{m}\text{/}\text{s}\right)$ as shown. What are the electric and magnetic forces on the proton at that instant?
a. ${E}_{\text{inside}}=\frac{r}{2}\phantom{\rule{0.2em}{0ex}}\frac{dB}{dt}$ , ${E}_{\text{outside}}=\frac{{r}^{2}}{2R}\phantom{\rule{0.2em}{0ex}}\frac{dB}{dt}$ ; b. $W=4.19\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-23}\phantom{\rule{0.2em}{0ex}}\text{J}$ ; c. 0 J; d. ${F}_{\text{mag}}=4\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-13}\phantom{\rule{0.2em}{0ex}}\text{N},$ ${F}_{\text{elec}}=2.7\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-22}\phantom{\rule{0.2em}{0ex}}\text{N}$
The magnetic field at all points within the cylindrical region whose cross-section is indicated in the accompanying figure starts at 1.0 T and decreases uniformly to zero in 20 s. What is the electric field (both magnitude and direction) as a function of r , the distance from the geometric center of the region?
The current in a long solenoid of radius 3 cm is varied with time at a rate of 2 A/s. A circular loop of wire of radius 5 cm and resistance $2\phantom{\rule{0.2em}{0ex}}\text{Ω}$ surrounds the solenoid. Find the electrical current induced in the loop.
$7.1\phantom{\rule{0.2em}{0ex}}\mu \text{A}$
The current in a long solenoid of radius 3 cm and 20 turns/cm is varied with time at a rate of 2 A/s. Find the electric field at a distance of 4 cm from the center of the solenoid.
What is differential form of Gauss's law?
help me out on this question the permittivity of diamond is 1.46*10^-10.( a)what is the dielectric of diamond (b) what its susceptibility
a body is projected vertically upward of 30kmp/h how long will it take to reach a point 0.5km bellow e point of projection
i have to say. who cares. lol. why know that t all
Jeff
is this just a chat app about the openstax book?
kya ye b.sc ka hai agar haa to konsa part
what is charge quantization
it means that the total charge of a body will always be the integral multiples of basic unit charge ( e ) q = ne n : no of electrons or protons e : basic unit charge 1e = 1.602×10^-19
Riya
is the time quantized ? how ?
Mehmet
What do you meanby the statement,"Is the time quantized"
Mayowa
Can you give an explanation.
Mayowa
there are some comment on the time -quantized..
Mehmet
time is integer of the planck time, discrete..
Mehmet
planck time is travel in planck lenght of light..
Mehmet
it's says that charges does not occur in continuous form rather they are integral multiple of the elementary charge of an electron.
Tamoghna
it is just like bohr's theory. Which was angular momentum of electron is intral multiple of h/2π
determine absolute zero
The properties of a system during a reversible constant pressure non-flow process at P= 1.6bar, changes from constant volume of 0.3m³/kg at 20°C to a volume of 0.55m³/kg at 260°C. its constant pressure process is 3.205KJ/kg°C Determine: 1. Heat added, Work done, Change in Internal Energy and Change in Enthalpy
U can easily calculate work done by 2.303log(v2/v1)
Abhishek
Amount of heat added through q=ncv^delta t
Abhishek
Change in internal energy through q=Q-w
Abhishek
please how do dey get 5/9 in the conversion of Celsius and Fahrenheit
what is copper loss
this is the energy dissipated(usually in the form of heat energy) in conductors such as wires and coils due to the flow of current against the resistance of the material used in winding the coil.
Henry
it is the work done in moving a charge to a point from infinity against electric field
what is the weight of the earth in space
As w=mg where m is mass and g is gravitational force... Now if we consider the earth is in gravitational pull of sun we have to use the value of "g" of sun, so we can find the weight of eaeth in sun with reference to sun...
Prince
g is not gravitacional forcé, is acceleration of gravity of earth and is assumed constante. the "sun g" can not be constant and you should use Newton gravity forcé. by the way its not the "weight" the physical quantity that matters, is the mass
Jorge
Yeah got it... Earth and moon have specific value of g... But in case of sun ☀ it is just a huge sphere of gas...
Prince
Thats why it can't have a constant value of g ....
Prince
not true. you must know Newton gravity Law . even a cloud of gas it has mass thats al matters. and the distsnce from the center of mass of the cloud and the center of the mass of the earth
Jorge
please why is the first law of thermodynamics greater than the second
every law is important, but first law is conservation of energy, this state is the basic in physics, in this case first law is more important than other laws..
Mehmet
First Law describes o energy is changed from one form to another but not destroyed, but that second Law talk about entropy of a system increasing gradually
Mayowa
first law describes not destroyer energy to changed the form, but second law describes the fluid drection that is entropy. in this case first law is more basic accorging to me...
Mehmet
define electric image.obtain expression for electric intensity at any point on earthed conducting infinite plane due to a point charge Q placed at a distance D from it.
explain the lack of symmetry in the field of the parallel capacitor
pls. explain the lack of symmetry in the field of the parallel capacitor
Phoebe
|
{}
|
# Thread: integral with dx on top?
1. ## integral with dx on top?
dx/(2 √x +2x)
I tried 1/(2 √x +2x)dx
then i know that it is not du/u so i tried finding u' but i don't think i have the right one...
i found 6x^1/2 because the bottom is 4x^3/2
so: -6x^1/2 6x^1/2/(2 √x +2x)dx
then: -6x^1/2[ln(abs value 4x^3/2)+c]
but i don't know after that.
the answer is ln(1+ √x) +c
2. Originally Posted by genlovesmusic09
dx/(2 √x +2x)
I tried 1/(2 √x +2x)dx
then i know that it is not du/u so i tried finding u' but i don't think i have the right one...
i found 6x^1/2 because the bottom is 4x^3/2
so: -6x^1/2 6x^1/2/(2 √x +2x)dx
then: -6x^1/2[ln(abs value 4x^3/2)+c]
but i don't know after that.
the answer is ln(1+ √x) +c
$\int\frac{\,dx}{2\sqrt{x}+2x}=\tfrac{1}{2}\int\fra c{\,dx}{\sqrt{x}+\left(\sqrt{x}\right)^2}$.
Let ${\color{red}u=\sqrt{x}}\implies\,du=\frac{\,dx}{2\ sqrt{x}}\implies 2\sqrt{x}\,du=\,dx$.
So we have $\tfrac{1}{2}\int\frac{2{\color{red}\sqrt{x}}\,du}{ u+u^2}=\int\frac{u\,du}{u\left(1+u\right)}=\int\fr ac{\,du}{1+u}$
Can you continue?
|
{}
|
# American Institute of Mathematical Sciences
• Previous Article
Existence of pullback attractors for the non-autonomous suspension bridge equation with time delay
• DCDS-B Home
• This Issue
• Next Article
Existence and blow up of solutions to the $2D$ Burgers equation with supercritical dissipation
## Global dynamics of an age-structured model with relapse
1 Laboratoire d'Analyse Non linéaire et Mathématiques Appliquées, Département de Mathématiques, Université Aboubekr Belkaïd Tlemcen, 13000 Tlemcen, Algeria 2 Institut de Mathématiques de Bordeaux, Université de Bordeaux, 33000, Bordeaux, France
* Corresponding author
Revised February 2019 Published September 2019
The aim of this paper is to study a general class of $SIRI$ age infection structured model where infectivity depends on the age since infection and where some individuals from the $R$ class, also called quarantaine class in this work, can return to the infectiousness class after a while. Using classical technics we compute a basic reproductive number $R_0$ and show that the disease dies out when $R_0 < 1$ and persists if $R_0 > 1$. Some Lyapunov suitable functions are derived to prove global stability for the disease free equilibrium (DFE) when $R_0 < 1$ and for the endemic equilibrium (EE) when $R_0 > 1$. Using numerical results we show that the non homogeneous infectivity combined with the feedback to the infectiousness class of a part of the quarantaine population modifies drastically the behavior of the epidemic.
Citation: Mohammed Nor Frioui, Tarik Mohammed Touaoula, Bedreddine Ainseba. Global dynamics of an age-structured model with relapse. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2019226
##### References:
show all references
##### References:
A schematic diagram of the epidemic model with quarantine
The functions $\beta,\theta$ and $\delta$ with respect to age $a$
The evolution of solution $S$ with respect to time $t$
The evolution of solutions $i$ and $q$ with respect to time $t$ and age $a$
The functions $\beta, \theta$ and $\delta$ with respect to age $a$
The evolution of solution $S$ with respect to time $t$
The evolution of solutions $i$ and $q$ with respect to time $t$ and age $a$
The functions $\beta,\theta$ and $\delta$ with respect to age $a$
The evolution of solution $S$ with respect to time $t$
The evolution of solutions $i$ and $q$ with respect to time $t$ and age $a$
The functions $\beta,\theta$ and $\delta$ with respect to age $a$ : $\delta \equiv 0$ such that $R_0 < 1$, $\delta \not\equiv 0$ such that $R_0 < 1$ and $\delta \not\equiv 0$ such that $R_0 > 1$
The evolution of solution S with respect to time t : δ ≡ 0 such that R0 < 1, $\delta \not \equiv 0$ such that $\delta \not \equiv 0$ and $\delta \not \equiv 0$ such that R0 > 1
The evolution of solution i with respect to time t and age a : δ ≡ 0 such that R0 < 1, $\delta \not \equiv 0$ such that R0 < 1 and $\delta \not \equiv 0$ such that R0 > 1
The evolution of solution q with respect to time t and age a : δ ≡ 0 such that R0 < 1
[1] Yoshiaki Muroya, Toshikazu Kuniya, Yoichi Enatsu. Global stability of a delayed multi-group SIRS epidemic model with nonlinear incidence rates and relapse of infection. Discrete & Continuous Dynamical Systems - B, 2015, 20 (9) : 3057-3091. doi: 10.3934/dcdsb.2015.20.3057 [2] C. Connell McCluskey. Global stability of an $SIR$ epidemic model with delay and general nonlinear incidence. Mathematical Biosciences & Engineering, 2010, 7 (4) : 837-850. doi: 10.3934/mbe.2010.7.837 [3] Andrei Korobeinikov, Philip K. Maini. A Lyapunov function and global properties for SIR and SEIR epidemiological models with nonlinear incidence. Mathematical Biosciences & Engineering, 2004, 1 (1) : 57-60. doi: 10.3934/mbe.2004.1.57 [4] Yu Yang, Shigui Ruan, Dongmei Xiao. Global stability of an age-structured virus dynamics model with Beddington-DeAngelis infection function. Mathematical Biosciences & Engineering, 2015, 12 (4) : 859-877. doi: 10.3934/mbe.2015.12.859 [5] Yuming Chen, Junyuan Yang, Fengqin Zhang. The global stability of an SIRS model with infection age. Mathematical Biosciences & Engineering, 2014, 11 (3) : 449-469. doi: 10.3934/mbe.2014.11.449 [6] Jinhu Xu, Yicang Zhou. Global stability of a multi-group model with generalized nonlinear incidence and vaccination age. Discrete & Continuous Dynamical Systems - B, 2016, 21 (3) : 977-996. doi: 10.3934/dcdsb.2016.21.977 [7] Yu Ji, Lan Liu. Global stability of a delayed viral infection model with nonlinear immune response and general incidence rate. Discrete & Continuous Dynamical Systems - B, 2016, 21 (1) : 133-149. doi: 10.3934/dcdsb.2016.21.133 [8] Carlota Rebelo, Alessandro Margheri, Nicolas Bacaër. Persistence in some periodic epidemic models with infection age or constant periods of infection. Discrete & Continuous Dynamical Systems - B, 2014, 19 (4) : 1155-1170. doi: 10.3934/dcdsb.2014.19.1155 [9] Yan-Xia Dang, Zhi-Peng Qiu, Xue-Zhi Li, Maia Martcheva. Global dynamics of a vector-host epidemic model with age of infection. Mathematical Biosciences & Engineering, 2017, 14 (5&6) : 1159-1186. doi: 10.3934/mbe.2017060 [10] Shouying Huang, Jifa Jiang. Global stability of a network-based SIS epidemic model with a general nonlinear incidence rate. Mathematical Biosciences & Engineering, 2016, 13 (4) : 723-739. doi: 10.3934/mbe.2016016 [11] Yu Ji. Global stability of a multiple delayed viral infection model with general incidence rate and an application to HIV infection. Mathematical Biosciences & Engineering, 2015, 12 (3) : 525-536. doi: 10.3934/mbe.2015.12.525 [12] Yu Yang, Yueping Dong, Yasuhiro Takeuchi. Global dynamics of a latent HIV infection model with general incidence function and multiple delays. Discrete & Continuous Dynamical Systems - B, 2019, 24 (2) : 783-800. doi: 10.3934/dcdsb.2018207 [13] Geni Gupur, Xue-Zhi Li. Global stability of an age-structured SIRS epidemic model with vaccination. Discrete & Continuous Dynamical Systems - B, 2004, 4 (3) : 643-652. doi: 10.3934/dcdsb.2004.4.643 [14] Deqiong Ding, Wendi Qin, Xiaohua Ding. Lyapunov functions and global stability for a discretized multigroup SIR epidemic model. Discrete & Continuous Dynamical Systems - B, 2015, 20 (7) : 1971-1981. doi: 10.3934/dcdsb.2015.20.1971 [15] Ting Guo, Haihong Liu, Chenglin Xu, Fang Yan. Global stability of a diffusive and delayed HBV infection model with HBV DNA-containing capsids and general incidence rate. Discrete & Continuous Dynamical Systems - B, 2018, 23 (10) : 4223-4242. doi: 10.3934/dcdsb.2018134 [16] Yoichi Enatsu, Yukihiko Nakata, Yoshiaki Muroya. Global stability of SIR epidemic models with a wide class of nonlinear incidence rates and distributed delays. Discrete & Continuous Dynamical Systems - B, 2011, 15 (1) : 61-74. doi: 10.3934/dcdsb.2011.15.61 [17] Antoine Perasso. Global stability and uniform persistence for an infection load-structured SI model with exponential growth velocity. Communications on Pure & Applied Analysis, 2019, 18 (1) : 15-32. doi: 10.3934/cpaa.2019002 [18] Zhixing Hu, Ping Bi, Wanbiao Ma, Shigui Ruan. Bifurcations of an SIRS epidemic model with nonlinear incidence rate. Discrete & Continuous Dynamical Systems - B, 2011, 15 (1) : 93-112. doi: 10.3934/dcdsb.2011.15.93 [19] Jianquan Li, Yicang Zhou, Jianhong Wu, Zhien Ma. Complex dynamics of a simple epidemic model with a nonlinear incidence. Discrete & Continuous Dynamical Systems - B, 2007, 8 (1) : 161-173. doi: 10.3934/dcdsb.2007.8.161 [20] Kazuo Yamazaki, Xueying Wang. Global stability and uniform persistence of the reaction-convection-diffusion cholera epidemic model. Mathematical Biosciences & Engineering, 2017, 14 (2) : 559-579. doi: 10.3934/mbe.2017033
2018 Impact Factor: 1.008
|
{}
|
# Article
Full entry | PDF (0.1 MB)
Keywords:
central binomial coefficient; Legendre polynomial
Summary:
We exploit the properties of Legendre polynomials defined by the contour integral $\bold P_n(z)=(2\pi {\rm i})^{-1} \oint (1-2tz+t^2)^{-1/2}t^{-n-1} {\rm d} t,$ where the contour encloses the origin and is traversed in the counterclockwise direction, to obtain congruences of certain sums of central binomial coefficients. More explicitly, by comparing various expressions of the values of Legendre polynomials, it can be proved that for any positive integer $r$, a prime $p \geqslant 5$ and $n=rp^2-1$, we have $\sum _{k=0}^{\lfloor n/2\rfloor }{2k \choose k}\equiv 0, 1\text { or }-1 \pmod {p^2}$, depending on the value of $r \pmod 6$.
References:
[1] Callan, D.: On generating functions involving the square root of a quadratic polynomial. J. Integer Seq. 10 (2007), Article 07.5.2. MR 2304410 | Zbl 1138.05300
[2] Callan, D., Chapman, R.: Divisibility of a central binomial sum (Problems and Solutions 11292&11307 [2007, 451&640]). American Mathematical Monthly 116 (2009), 468-470.
[3] Gradshteyn, I. S., Ryzhik, I. M.: Table of Integrals, Series, and Products. Translated from the Russian. Translation edited and with a preface by Alan Jeffrey and Daniel Zwillinger 7th ed. Elsevier/Academic Press, Amsterdam (2007). MR 2360010
[4] Mattarei, S.: Asymptotics of partial sums of central binomial coefficients and Catalan numbers. arXiv:0906.4290v3.
Partner of
|
{}
|
Unicode-symbols
Jump to: navigation, search
1 Overview
An overview of the packages that provide Unicode symbols.
Naming: A package X-unicode-symbols defines new symbols for functions and operators from the package X.
All symbols are documented with their actual definition and information regarding their Unicode code point. They should be completely interchangeable with their definitions.
Alternatives for existing operators have the same fixity. New operators will have a suitable fixity defined.
1.1 UnicodeSyntax
GHC offers the UnicodeSyntax language extension. If you decide to use Unicode in your Haskell source then this extension can greatly improve how it looks.
Simply put the following above a module to enable unicode syntax:
{-# LANGUAGE UnicodeSyntax #-}
2 base-unicode-symbols
Extra symbols for the base package.
API docs: http://hackage.haskell.org/package/base-unicode-symbols
github: https://github.com/roelvandijk/base-unicode-symbols
checkout: git clone git://github.com/roelvandijk/base-unicode-symbols.git
2.1 Problematic symbols
Original Symbol Code point Name
not ¬ U+AC NOT SIGN
The problem with this symbol is that you would like to use it as an unary prefix operator:
¬(¬x) ≡ x
Unfortunately this is not valid Haskell. The following is:
(¬)((¬)x) ≡ x
But you can hardly call that an improvement over the simple:
not (not x) ≡ x
2.2 New symbol ideas
(please add your own)
I'm thinking of adding the following symbol as another alternative for (*).
Original Symbol Code point Name
(*) × U+D7 MULTIPLICATION SIGN
2 * 3 ≡ 6
2 ⋅ 3 ≡ 6
2 × 3 ≡ 6
A disadvantage of this symbol is its similarity to the letter x:
sqr x = x × x
Original Symbol Code point Name
Bool 𝔹 U+1D539 MATHEMATICAL DOUBLE-STRUCK CAPITAL B
This idea is an extension of
type ℕ = Integer
and
type ℚ = Ratio ℕ
The advantage is that it looks nice and that it is a logical extension of ℕ, ℚ and ℝ. The disadvantage is that their is no documented prior use of this character to denote boolean values. This could be detrimental to the readability of code.
Example:
(∧) ∷ 𝔹 → 𝔹 → 𝔹
3 containers-unicode-symbols
Extra symbols for the containers package.
API docs: http://hackage.haskell.org/package/containers-unicode-symbols
github: https://github.com/roelvandijk/containers-unicode-symbols
checkout: git clone git://github.com/roelvandijk/containers-unicode-symbols.git
3.1 New symbol ideas
(please add your own)
4 Input methods
These symbols are all very nice but how do you type them?
Wikipedia has a helpful article: http://en.wikipedia.org/wiki/Unicode_input
(please add info for other editors)
4.1 Emacs
Direct
Enter symbols directly: C-x 8 RET (ucs-insert), then type either the character's name or its hexadecimal code point.
TeX input method
The TeX input method, invoked with M-x set-input-method and entering TeX allows you to enter Unicode characters by typing in TeX-like sequences. For example, typing \lambda inserts a λ.
This is probably the most convenient input method for casual use.
A list of available sequences may be viewed with M-x describe-input-method
Custom input method
I wrote my own input method:
github: https://github.com/roelvandijk/emacs-haskell-unicode-input-method
checkout: git clone git://github.com/roelvandijk/emacs-haskell-unicode-input-method.git
To automically load in haskell-mode put the following code in your .emacs file:
(require 'haskell-unicode-input-method)
(add-hook 'haskell-mode-hook
(lambda () (set-input-method "haskell-unicode")))
Make sure the directory containing the .elisp file is in your load-path, for example:
(add-to-list 'load-path "~/.elisp/emacs-haskell-unicode-input-method")
To manually enable use M-x set-input-method or C-x RET C-\ with haskell-unicode. Note that the elisp file must be evaluated for this to work.
Now you can simply type -> and it is immediately replaced with . Use C-\ to toggle the input method. To see a table of all key sequences use M-x describe-input-method haskell-unicode. A sequence like <= is ambiguous and can mean either or . Typing it presents you with a choice. Type 1 or 2 to select an option or keep typing to use the default option.
If you don't like the highlighting of partially matching tokens you can turn it off:
(setq input-method-highlight-flag nil)
Abbrev mode
The Abbrev mode is not suitable since it only deals with words, not operators.
Agda
Use Agda's input method.
4.2 Vim
(real Vim users might want to expand this section)
Direct
• Decimal value: type C-Vnnn where 0 ≤ nnn ≤ 255.
• Octal value: type C-VOnnn or C-Vonnn where 0 ≤ nnn ≤ 377.
• Hex value: type C-VXnn or C-Vxnn where 0 ≤ nn ≤ FF.
• Hex value for BMP codepoints: type C-Vunnnn where 0 ≤ nnnn ≤ FFFF.
• Hex value for any codepoint: type C-VUnnnnnnnn where 0 ≤ nnnnnnnn ≤ FFFFFFFF.
4.3 System wide
m17n input methods
A set of input methods has been written by Urs Holzer for the m17n library. The main goal of Urs is to build input methods for mathematical characters. However, most of the symbols used in the *-unicode-symbols packages can be written using Urs's methods. More information is available at Input Methods for Mathematics page. For most Linux distributions, just download a tarball, extract *.mim files to /usr/share/m17n and enable iBus for input methods.
5 Fonts
The following free fonts have good Unicode coverage:
|
{}
|
SIAM News Blog
SIAM News
#### A Posteriori Error Control and Speedup of Calculations
Systems of nonlinear algebraic equations arise in numerous applications of scientific computing. Iterative linearizations, with the Newton method as a prominent example (see, for example, [5]), are extensively used for the approximate solution of such systems. At each step of an exact Newton method, a system of linear algebraic equations needs to be solved. To alleviate the computational burden, the solution of this linear system can be approximated, typically by employing some early stopping criterion within an iterative linear algebraic solver. This is the essence of the so-called inexact Newton method. A crucial question is when the linear algebraic solver should be stopped. Is it possible to speed up the calculation with a suitably chosen algebraic stopping criterion? Answers via a priori limit theory have been suggested (see [1] and the references therein).
The situation becomes more intricate when the system of nonlinear algebraic equations results from the discretization of some nonlinear partial differential equation. In this context, three sources of error are inevitable: algebraic error, linked to the linear algebraic solver, linearization error, linked to the linearization iteration, and discretization error. It is then natural to envisage an early stopping criterion for the Newton iteration itself. Intuitively, converging the iterative linear and nonlinear solvers to machine precision does not seem to be necessary. Devising stopping criteria for both solvers so as to balance the three error components is not straightforward and, to our knowledge, such criteria have relied to date essentially on heuristics.
In [3], following [2, 4], we identified and estimated separately the three error components via the theory of a posteriori error estimates for nonlinear diffusion PDEs. Within this theory, we have used equilibrated flux reconstructions, originating in the pioneering work of Prager and Synge [6]. The twofold advantage of this approach is to deliver guaranteed, fully computable error estimates, a key issue for the conception of practical stopping criteria, and to allow for a unified theory encompassing most discretization schemes. We then devised and analyzed stopping criteria stipulating that there is no need to continue with the algebraic solver iterations once the linearization or discretization error components start to dominate, and no need to continue with the linearization iterations once the discretization error component starts to dominate. We call the resulting algorithm an adaptive inexact Newton method.
To illustrate the idea, we consider the following nonlinear diffusion PDE: Find $$u : \Omega \rightarrow \mathbb{R}$$ such that
$\nabla \cdot \boldsymbol{\sigma}(u,\nabla u) = f \qquad \mathrm{in}~ \Omega, \qquad \qquad \mathrm{(1a)} \\ u = 0 \qquad \qquad \qquad \mathrm{on}~ \partial \Omega, \qquad \qquad \mathrm{(1b)}$
where $$\Omega \subset \mathbb{R}^d, d \geq 2$$, is a polygonal (polyhedral) domain (an open, bounded, and connected set), $$f : \Omega \rightarrow \mathbb{R}$$ a given source term, and $$\boldsymbol{\sigma} : \mathbb{R} \times \mathbb{R}^d \rightarrow \mathbb{R}^d$$ the nonlinear flux function. We let $$u^{k,i}_{h}$$ be a numerical approximation of $$u$$ obtained on a computational mesh of $$\Omega$$, at the linearization step $$k$$ and algebraic solver step $$i$$. Up to higher-order terms on the right-hand side, our a posteriori error estimate takes on the general form
$\tag{2} J_u(u^{k,i}_{h}) \leq \eta^{k,i} \leq \eta^{k,i}_{\mathrm{disc}} + \eta^{k,i}_{\mathrm{lin}} + \eta^{k,i}_{\mathrm{alg}}$
for a suitable error measure $$J_{u}(u^{k,i}_{h})$$. Here, the overall estimator $$\eta^{k,i}$$ as well as the estimators of the three error components $$\eta^{k,i}_{\mathrm{disc}}$$, $$\eta^{k,i}_{\mathrm{lin}}$$, and $$\eta^{k,i}_{\mathrm{alg}}$$ are fully computable. Our stopping criteria can be formulated for the linear and nonlinear solvers, respectively, as
$\tag{3} \eta^{k,i}_{\mathrm{alg}} \leq \gamma_{\mathrm{alg}}\mathrm{max}\{\eta^{k,i}_{\mathrm{disc}}, \eta^{k,i}_{\mathrm{lin}}\}, \\ \eta^{k,i}_{\mathrm{lin}} \leq \gamma_{\mathrm{lin}}\eta^{k,i}_{\mathrm{disc}},$
where the values of the parameters $$\gamma_{\mathrm{alg}}$$ and $$\gamma_{\mathrm{lin}}$$ are set by the user, typically to a small percentage. From a mathematical viewpoint, an important result is that, under our stopping criteria, there exists a generic constant $$C$$ such that, up to higher-order terms on the right-hand side,
$$$\tag{4} \eta^{k,i}_{\mathrm{disc}} + \eta^{k,i}_{\mathrm{lin}} + \eta^{k,i}_{\mathrm{alg}} \leq CJ_u(u^{k,i}_{h}),$$$
which is called efficiency and, together with (2), proves the equivalence of the error measure $$J_{u}(u^{k,i}_{h})$$ with our estimates. Moreover, as $$C$$ is independent of the mesh size $$h$$, the domain $$\Omega$$, and the nonlinear function $$\boldsymbol{\sigma}$$, the a posteriori error estimate is robust. The bounds (2) and (4) are established for an error measure $$J_{u}(u^{k,i}_{h})$$ based on a dual norm of the difference between the exact flux $$\boldsymbol{\sigma}(u,\nabla u)$$ and the approximate flux $$\boldsymbol{\sigma}(u^{k,i}_{h}, \nabla u^{k,i}_{h})$$. In numerical results for the nonlinear $$p$$-Laplace equation, that is, $$\boldsymbol{\sigma}(u, \nabla u) = -|\nabla u|^{p-2}\nabla u$$ for a real number $$p > 1$$ in (1), $$J_{u}(u^{k,i}_{h})$$ is very close to the Lebesgue norm of the flux difference $$||\boldsymbol{\sigma}(u,\nabla u) − \boldsymbol{\sigma}(u^{k,i}_h, \nabla u^{k,i}_h)||_{q,\Omega}$$, with $$q = p/(p − 1)$$. This error measure is important from a physical viewpoint, as the underlying PDE expresses a conservation principle by means of a balance law for the fluxes. The derivation of a posteriori error estimates for alternative error measures, e.g., in a goal-oriented setting, is an active area of research.
Figure 1 shows a comparison of results for the exact, inexact, and adaptive inexact Newton methods in the example of a nonlinear $$p$$-Laplace equation, with discretization by the Crouzeix–Raviart nonconforming finite element method, Newton linearization, and a conjugate gradient linear solver with diagonal preconditioning. The behavior of the overall error measure $$||\boldsymbol{\sigma}(u,\nabla u) − \boldsymbol{\sigma}(u^{k,i}_h, \nabla u^{k,i}_h)||_{q,\Omega}$$ as a function of the number of degrees of freedom is quite similar for the three methods. This means that our early stopping criteria do not influence the overall error. What differs is the level below which the “side” (algebraic and linearization) errors are forced to decrease; in our approach, the user specifies this by means of (3). We used $$\gamma_{\mathrm{alg}} = \gamma_{\mathrm{lin}} = 0.3$$.
Figure 1. Error and estimates on a series of uniformly refined meshes with the exact Newton (left), inexact Newton (middle), and adaptive inexact Newton (right) methods.
The left panel of Figure 2 provides further insight into the dependence of the error and of our estimates on the Newton iterations. The error and all but the linearization estimates start to stagnate after the linearization error ceases to dominate. Whereas the exact Newton method (with a convergence criterion of 10−8) needs 20 iterations, we can safely stop after 11 iterations in our approach. The middle panel of Figure 2 presents similar plots for the CG iterations. Our adaptive algorithm stops after 32 iterations, whereas the exact method (with a convergence criterion of 10−8) needs about 650 iterations. The total number of algebraic solver iterations required per refinement level is displayed in the right panel of Figure 2. On the last mesh, the inexact Newton method achieves a sixfold speedup compared with the exact Newton method (8690 vs. 1470 iterations). Our adaptive inexact Newton method achieves a further fivefold speedup (306 vs. 1470 iterations).
Figure 2. Error and estimates as a function of: Newton iterations, 6th-level uniformly refined mesh (left); preconditioned conjugate gradient iterations in the 8th Newton step on the 6th-level uniformly refined mesh (middle); and total number of linear solver iterations per uniform mesh refinement level (right).
In Figure 3, we illustrate our adaptive inexact Newton method in conjunction with adaptive mesh refinement, still for the nonlinear $$p$$-Laplace equation. With local, elementwise stopping criteria, the predicted error distribution typically matches the actual one quite nicely, as illustrated in Figure 3. The figure also shows the adaptive mesh refinement triggered by a corner singularity. This stems from a theoretical result asserting the local efficiency of our estimates that is formulated by means of a mesh-localized version of (4).
Figure 3. Estimated (left) and actual (right) error distribution, 5th-level adaptively refined mesh.
In conclusion, we advocate that only the necessary number of algebraic solver iterations at each linearization step, and only the necessary number of linearization iterations should be carried out within an adaptive inexact Newton method. This typically leads to important computational savings, further increased with the addition of mesh adaptivity, thereby paving the way to a complete adaptive strategy. The driving force is a posteriori estimates that ensure a guaranteed and robust error upper bound. More details on our approach can be found in [3].
References
[1] S.C. Eisenstat and H.F. Walker, Globally convergent inexact Newton methods, SIAM J. Optim., 4 (1994), 393–422.
[2] L. El Alaoui, A. Ern, and M. Vohralík, Guaranteed and robust a posteriori error estimates and balancing discretization and linearization errors for monotone nonlinear problems, Comput. Methods Appl. Mech. Engrg., 200 (2011), 2782–2795.
[3] A. Ern and M. Vohralík, Adaptive inexact Newton methods with a posteriori stopping criteria for nonlinear diffusion PDEs, HAL Preprint 00681422 v2, submitted for publication, 2012.
[4] P. Jiránek, Z. Strakoš, and M. Vohralík, A posteriori error estimates including algebraic error and stopping criteria for iterative solvers, SIAM J. Sci. Comput., 32 (2010), 1567–1590.
[5] L.V. Kantorovich, Functional analysis and applied mathematics, Uspekhi Mat. Nauk, 3 (1948), 89–185.
[6] W. Prager and J.L. Synge, Approximations in elasticity based on the concept of function space, Quart. Appl. Math., 5 (1947), 241–269.
Alexandre Ern is a professor of scientific computing at Ecole des Ponts ParisTech, Université Paris-Est, and an associate professor of numerical analysis and optimization at Ecole Polytechnique. Martin Vohralík is a senior researcher at INRIA Paris-Rocquencourt.
|
{}
|
# Arithmetic Series help
#### CathyLou
Please could someone tell me the way to find the smallest positive term of an arithmetic series (C1 level) as I cannot find a formula anywhere.
Thank you.
Cathy
Related Precalculus Mathematics Homework Help News on Phys.org
#### cristo
Staff Emeritus
You could rearrange the formula $$S_n=\frac{n[2a_1+(n-1)d]}{2}$$, where n is the numbers of terms, Sn is the sum of the first n terms, d is the difference between the ith and (i+1)th term, and a1 is the first term of the series.
#### CathyLou
You could rearrange the formula $$S_n=\frac{n[2a_1+(n-1)d]}{2}$$, where n is the numbers of terms, Sn is the sum of the first n terms, d is the difference between the ith and (i+1)th term, and a1 is the first term of the series.
|
{}
|
Solve: $$\frac{{12}}{{13}} \times \frac{{285}}{{96}} \div \frac{{171}}{{169}} =\: ?$$
1. $$3\frac{2}{3}$$
2. $$2\frac{{17}}{{24}}$$
3. $$\frac{7}{8}$$
4. $$\frac{{11}}{{24}}$$
Answer (Detailed Solution Below)
Option 2 : $$2\frac{{17}}{{24}}$$
Detailed Solution
Concept used:
Follow BODMAS rule to solve this question, as per the order given below
Calculations:
(12/13) × (285/96) ÷ (171/169) = ?
⇒ (12/13) × (285/96) × (169/171) = ?
⇒ (12/96) × (169/13) × (285/171) = ?
⇒ (1/8) × (13/1) × (15/9) = ?
⇒ 65/24 = $$2\frac{{17}}{{24}}$$
∴ The value of ? is $$2\frac{{17}}{{24}}$$
Free
Cell
351969
10 Questions 10 Marks 7 Mins
|
{}
|
Psychometrics
# Why composite scores are more extreme than the average of their parts
Suppose that two tests have a correlation of 0.6. On both tests an individual obtained an index score of 130, which is 2 standard deviations above the mean. If both tests are combined, what is the composite score?
Our intuition is that if both tests are 130, the composite score is also 130. Unfortunately, taking the average is incorrect. In this example, the composite score is actually 134. How is it possible that the composite is higher than both of the scores?
If I measure the length of a board twice or if I take the temperature of a sick child twice, the average of the results is probably the best estimate of the quantity I am measuring. Why can’t I do this with standard scores?
Standard scores do not behave like many of our most familiar units of measurement. Degrees Celsius have meaning in reference to a standard, the temperature at which water freezes at sea level. In contrast, standard scores do not have meaning compared to some absolute standard. Instead, the meaning of a standard score derives from its position in the population distribution. One way to describe the position of a score is its distance from the population mean. The size of this distance is then compared to the standard deviation, which is how far scores typically are from the population mean (more precisely, the standard deviation is the square root of the average squared distance from the mean). Thus, the “standard” to which standard scores are compared are the mean and standard deviation.
An index score of 130 is 2 standard deviations above the mean of 100.
The average of two imperfectly correlated index scores is not an index score. Its standard deviation is smaller than 15 and thus our sense of what index scores mean does not apply to the average of two index scores. To make sense of the composite score, we must convert it into an index score that has a standard deviation of 15.
$\dfrac{(130+130-2*100)}{\sqrt{2+2*0.6}}+100\approx 134$
How is this possible? It is unusual for someone to score 130. It is even more unusual for someone to score 130 on two tests that are imperfectly correlated. The less correlated the tests, the more unusual it is to score high on both tests.
Below is a geometric representation of this phenomenon. Correlated tests can be graphed with oblique axes (as is done in factor analyses with oblique rotations). The cosine of the correlation is the angle between the axes. As seen below, the lower the correlation, the more extreme the composite. As the correlation approaches 1, the composite approaches the average of the scores.
The lower the correlation, the more extreme the composite score.
If the scores are lower than the population mean, the composite score is lower than the average of the parts. For example, if the two scores are 71, and the correlation between the scores is 0.9, the composite score is 70.
When the subtest scores are below the mean, the composite score is lower than the average of the subtest scores.
In a previous post, I presented this material in greater detail.
Standard
1. James says:
|
{}
|
# Special L for Lie Derivative
Does anyone happen to know how to typeset the special L character found, for example, in eq. 2.41 in this paper: http://arxiv.org/pdf/1107.5792v2.pdf that is being used for the Lie derivative? Thanks for the help!
• It seems to be the pound symbol: try \mathsterling. By the way, there is a website which can help you with this type of questions: detexify.kirelabs.org. And Welcome to TeX.sx! – Corentin Mar 15 '13 at 19:31
• @Corentin Thanks for the tip and the welcome! – joshphysics Mar 15 '13 at 19:51
## 3 Answers
Sadly I think this is probably the sign for GBP (Great British Pounds) which is achieved with \pounds.
• In fact you can go to arxiv.org/format/1107.5792 and download the TeX source file for their document. There you'll find the command \newcommand\Lie{\pounds}. – Jay Taylor Mar 15 '13 at 19:33
• there's nothing sad about the Great British Pound! :) – cmhughes Mar 15 '13 at 19:50
• Oh wow that's actually hilarious; I didn't notice that. Thanks for the tip about TeX source also btw! – joshphysics Mar 15 '13 at 19:51
• No, it's only sad to see it abused and out of context! ;) No worries. It's something good to keep in mind for the next time you upload a paper to the arXiv! – Jay Taylor Mar 15 '13 at 20:21
Do you mean the pound sterling sign (£)? You could just use the unicode character: £. I think it might even be on any keyboard with some Alt+key or Strg+key combination. On my mac it is Alt+Shift+4, just where the \$ is.
\mathcal{L} is what I have used in the past, if you don't want to use the \pounds sign.
• Yeah I considered that but then realized that \mathcal is hella played out. – joshphysics Nov 6 '15 at 2:25
• Heh, just realized that if you pronounce \mathcal{L} out loud, it sounds like a math-y Superman... – Reinstate Monica Nov 14 '17 at 18:22
|
{}
|
# American Institute of Mathematical Sciences
July 2011, 16(1): 1-14. doi: 10.3934/dcdsb.2011.16.1
## The Euler-Maruyama approximations for the CEV model
1 School of Mathematical Sciences, Monash University, Clayton Campus, Building 28, Wellington road, Victoria, 3800, Australia 2 School of Mathematical Sciences, Monash University, Clayton Campus, Building 28,, Wellington road, Victoria, 3800, Australia 3 Department of Engineering Systems, Tel Aviv University, Tel Aviv, Ramat Aviv, 69978, Israel
Received April 2010 Revised August 2010 Published April 2011
The CEV model is given by the stochastic differential equation $X_t=X_0+\int_0^t\mu X_s ds+\int_0^t\sigma (X^+_s)^p dW_s$, $\frac{1}{2}\le p<1$. It features a non-Lipschitz diffusion coefficient and gets absorbed at zero with a positive probability. We show the weak convergence of Euler-Maruyama approximations $X_t^n$ to the process $X_t$, $0 \le t \le T$, in the Skorokhod metric, by giving a new approximation by continuous processes. We calculate ruin probabilities as an example of such approximation. The ruin probability evaluated by simulations is not guaranteed to converge to the theoretical one, because the limiting distribution is discontinuous at zero. To approximate the size of the jump at zero we use the Levy metric, and also confirm the convergence numerically.
Citation: Vyacheslav M. Abramov, Fima C. Klebaner, Robert Sh. Lipster. The Euler-Maruyama approximations for the CEV model. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 1-14. doi: 10.3934/dcdsb.2011.16.1
##### References:
[1] A. Alfonsi, Higher order discretization schemes for the CIR process: application to affine term structure and Heston models, Mathematics of Computation, 79 (2010), 209-237. doi: 10.1090/S0025-5718-09-02252-2. Google Scholar [2] V. Bally and D. Talay, The law of the Euler scheme for stochastic differential equations I. Convergence rate of the distribution function, Probability Theory and Related Fields, 104 (1996), 43-60. doi: 10.1007/BF01303802. Google Scholar [3] P. Billingsley, "Converges of Probability Measures," John Wiley & Sons, New York, 1968. Google Scholar [4] M. Bossy and A. Diop, An efficient discretisation scheme for one dimensional SDEs with a diffusion coefficient function of the form $|x|^\alpha, \alpha\in [1/2, 1)$, preprint, version 2, INRIA, France, 2007. Google Scholar [5] J. C. Cox, The constant elasticity of variance option. Pricing model, The Journal of Portfolio Management, 23 (1997), 15-17. Google Scholar [6] D. Dawson, http://www.math.ubc.ca/ db5d/SummerSchool09/lectures-dd/lecture4.pdf, (Accessed on August 3, 2010.) Google Scholar [7] G. Deelstra and F. Delbaen, Convergence of discretized stochastic (interest rate) processes with stochastic drift term, Applied Stochastic Models and Data Analysis, 14 (1998), 77-84. doi: 10.1002/(SICI)1099-0747(199803)14:1<77::AID-ASM338>3.0.CO;2-2. Google Scholar [8] F. Delbaen and H. A. Shirakawa, Note of option pricing for constant elasticity of variance model, Asia-Pacific Financial Markets, 9 (2002), 159-168. doi: 10.1023/A:1024173029378. Google Scholar [9] W. Feller, Two singular diffusion problems, Annals of Mathematics, 54 (1951), 173-182. doi: 10.2307/1969318. Google Scholar [10] I. Gy, A note on Eulers approximations, Potential Analysis, 8 (1998), 205-216. doi: 10.1023/A:1008605221617. Google Scholar [11] I. Gy,I. ongy and N. Krylov, Existence of strong solutions for Itó's stochastic equations via approximations, Probability Theory and Related Fields, 105 (1996), 143-158. doi: 10.1007/BF01203833. Google Scholar [12] N. Halidias and P. Kloeden, A note on the Euler-Maruyama scheme for stochastic differential equations with a discontinuous monotone drift coefficient, BIT Numerical Mathematics, 48 (2008), 51-59 . doi: 10.1007/s10543-008-0164-1. Google Scholar [13] P. L. Hennequin and A. Tortrat, "Théorie des Probabilités et Quelques Applications," Masson et Cie, Éditeurs, Paris, 1965. Google Scholar [14] D. J. Higham and X. Mao, Convergence of Monte Carlo simulations involving the mean-reverting square root process, Computational Finance, 8 (2005), 35-61. Google Scholar [15] D. J. Higham, X. Mao and A. M. Stuart, Strong convergence of Euler-type methods for nonlinear stochastic differential equations, SIAM Journal of Numerical Analysis, 40 (2002), 1041-1063. doi: 10.1137/S0036142901389530. Google Scholar [16] M. Hutzenthaler and A. Jentzen, Non-globally Lipschitz counterexamples for the stochastic Euler scheme,, preprint, (). Google Scholar [17] A. Jentzen, P. E. Kloeden and A. Neuenkirch, Pathwise approximation of stochastic differential equations on domains: Higher order convergence rates without global Lipschitz coeffcients, Numerische Mathematik, 112 (2009), 41-64. doi: 10.1007/s00211-008-0200-8. Google Scholar [18] Yu. Kabanov, R. Liptser and A. N. Shiryaev, Estimates of closeness in variation of probability measures (Russian), Dokl. Akad. Nauk SSSR, 278 (1984), 265-268. Google Scholar [19] F. C. Klebaner, "Introduction to Stochastic Calculus with Applications," 2nd edition, Imperial College Press, London, 2005. Google Scholar [20] P. E. Kloeden and K. Platten, "Numerical Solution of Stochastic Differential Equations," Springer-Verlag, Berlin, 1992. Google Scholar [21] R. Sh. Liptser and A. N. Shiryayev, "Theory of Martingales" [Translated from the Russian by K. Dzjaparidze] Mathematics and its Applications (Soviet Series), 49 Kluwer Academic Publishers Group, Dordrecht, 1989. Google Scholar [22] G. N. Milstein and M. V. Tretyakov, "Stochastic Numerics for Mathematical Physics," Springer-Verlag, Berlin, 2004. Google Scholar [23] L. C. G. Rogers and D. Williams, "Diffusions, Markov Processes, and Martingales. Vol. 2. Itô Calculus," Reprint of the 2nd edition, Cambridge University Press, Cambridge, 2000. Google Scholar [24] T. Shiga and S. Watanabe, Bessel diffusions as a one parameter family of diffusion processes, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 27 (1973), 37-46. Google Scholar [25] D. Talay, Simulation and numerical analysis of stochastic differential systems: A review, in "Probabilistic Methods in Applied Physics. Lecture Notes in Physics," 451, 63-106, Springer, Berlin, 1995. Google Scholar [26] L. Yan, The Euler scheme with irregular coefficients, Annals of Probability, 30 (2002), 1172-1194. doi: 10.1214/aop/1029867124. Google Scholar [27] H. Zähle, Weak approximation of SDEs by discrete time processes, Journal of Applied Mathematics and Stochastic Analysis, 2008 (2008), Article ID 275747, 15 pages. Google Scholar
show all references
##### References:
[1] A. Alfonsi, Higher order discretization schemes for the CIR process: application to affine term structure and Heston models, Mathematics of Computation, 79 (2010), 209-237. doi: 10.1090/S0025-5718-09-02252-2. Google Scholar [2] V. Bally and D. Talay, The law of the Euler scheme for stochastic differential equations I. Convergence rate of the distribution function, Probability Theory and Related Fields, 104 (1996), 43-60. doi: 10.1007/BF01303802. Google Scholar [3] P. Billingsley, "Converges of Probability Measures," John Wiley & Sons, New York, 1968. Google Scholar [4] M. Bossy and A. Diop, An efficient discretisation scheme for one dimensional SDEs with a diffusion coefficient function of the form $|x|^\alpha, \alpha\in [1/2, 1)$, preprint, version 2, INRIA, France, 2007. Google Scholar [5] J. C. Cox, The constant elasticity of variance option. Pricing model, The Journal of Portfolio Management, 23 (1997), 15-17. Google Scholar [6] D. Dawson, http://www.math.ubc.ca/ db5d/SummerSchool09/lectures-dd/lecture4.pdf, (Accessed on August 3, 2010.) Google Scholar [7] G. Deelstra and F. Delbaen, Convergence of discretized stochastic (interest rate) processes with stochastic drift term, Applied Stochastic Models and Data Analysis, 14 (1998), 77-84. doi: 10.1002/(SICI)1099-0747(199803)14:1<77::AID-ASM338>3.0.CO;2-2. Google Scholar [8] F. Delbaen and H. A. Shirakawa, Note of option pricing for constant elasticity of variance model, Asia-Pacific Financial Markets, 9 (2002), 159-168. doi: 10.1023/A:1024173029378. Google Scholar [9] W. Feller, Two singular diffusion problems, Annals of Mathematics, 54 (1951), 173-182. doi: 10.2307/1969318. Google Scholar [10] I. Gy, A note on Eulers approximations, Potential Analysis, 8 (1998), 205-216. doi: 10.1023/A:1008605221617. Google Scholar [11] I. Gy,I. ongy and N. Krylov, Existence of strong solutions for Itó's stochastic equations via approximations, Probability Theory and Related Fields, 105 (1996), 143-158. doi: 10.1007/BF01203833. Google Scholar [12] N. Halidias and P. Kloeden, A note on the Euler-Maruyama scheme for stochastic differential equations with a discontinuous monotone drift coefficient, BIT Numerical Mathematics, 48 (2008), 51-59 . doi: 10.1007/s10543-008-0164-1. Google Scholar [13] P. L. Hennequin and A. Tortrat, "Théorie des Probabilités et Quelques Applications," Masson et Cie, Éditeurs, Paris, 1965. Google Scholar [14] D. J. Higham and X. Mao, Convergence of Monte Carlo simulations involving the mean-reverting square root process, Computational Finance, 8 (2005), 35-61. Google Scholar [15] D. J. Higham, X. Mao and A. M. Stuart, Strong convergence of Euler-type methods for nonlinear stochastic differential equations, SIAM Journal of Numerical Analysis, 40 (2002), 1041-1063. doi: 10.1137/S0036142901389530. Google Scholar [16] M. Hutzenthaler and A. Jentzen, Non-globally Lipschitz counterexamples for the stochastic Euler scheme,, preprint, (). Google Scholar [17] A. Jentzen, P. E. Kloeden and A. Neuenkirch, Pathwise approximation of stochastic differential equations on domains: Higher order convergence rates without global Lipschitz coeffcients, Numerische Mathematik, 112 (2009), 41-64. doi: 10.1007/s00211-008-0200-8. Google Scholar [18] Yu. Kabanov, R. Liptser and A. N. Shiryaev, Estimates of closeness in variation of probability measures (Russian), Dokl. Akad. Nauk SSSR, 278 (1984), 265-268. Google Scholar [19] F. C. Klebaner, "Introduction to Stochastic Calculus with Applications," 2nd edition, Imperial College Press, London, 2005. Google Scholar [20] P. E. Kloeden and K. Platten, "Numerical Solution of Stochastic Differential Equations," Springer-Verlag, Berlin, 1992. Google Scholar [21] R. Sh. Liptser and A. N. Shiryayev, "Theory of Martingales" [Translated from the Russian by K. Dzjaparidze] Mathematics and its Applications (Soviet Series), 49 Kluwer Academic Publishers Group, Dordrecht, 1989. Google Scholar [22] G. N. Milstein and M. V. Tretyakov, "Stochastic Numerics for Mathematical Physics," Springer-Verlag, Berlin, 2004. Google Scholar [23] L. C. G. Rogers and D. Williams, "Diffusions, Markov Processes, and Martingales. Vol. 2. Itô Calculus," Reprint of the 2nd edition, Cambridge University Press, Cambridge, 2000. Google Scholar [24] T. Shiga and S. Watanabe, Bessel diffusions as a one parameter family of diffusion processes, Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 27 (1973), 37-46. Google Scholar [25] D. Talay, Simulation and numerical analysis of stochastic differential systems: A review, in "Probabilistic Methods in Applied Physics. Lecture Notes in Physics," 451, 63-106, Springer, Berlin, 1995. Google Scholar [26] L. Yan, The Euler scheme with irregular coefficients, Annals of Probability, 30 (2002), 1172-1194. doi: 10.1214/aop/1029867124. Google Scholar [27] H. Zähle, Weak approximation of SDEs by discrete time processes, Journal of Applied Mathematics and Stochastic Analysis, 2008 (2008), Article ID 275747, 15 pages. Google Scholar
[1] Pavel Chigansky, Fima C. Klebaner. The Euler-Maruyama approximation for the absorption time of the CEV diffusion. Discrete & Continuous Dynamical Systems - B, 2012, 17 (5) : 1455-1471. doi: 10.3934/dcdsb.2012.17.1455 [2] Mikhail Krastanov, Michael Malisoff, Peter Wolenski. On the strong invariance property for non-Lipschitz dynamics. Communications on Pure & Applied Analysis, 2006, 5 (1) : 107-124. doi: 10.3934/cpaa.2006.5.107 [3] Boris Hasselblatt and Amie Wilkinson. Prevalence of non-Lipschitz Anosov foliations. Electronic Research Announcements, 1997, 3: 93-98. [4] Wei Mao, Liangjian Hu, Xuerong Mao. Asymptotic boundedness and stability of solutions to hybrid stochastic differential equations with jumps and the Euler-Maruyama approximation. Discrete & Continuous Dynamical Systems - B, 2019, 24 (2) : 587-613. doi: 10.3934/dcdsb.2018198 [5] Chunhong Li, Jiaowan Luo. Stochastic invariance for neutral functional differential equation with non-lipschitz coefficients. Discrete & Continuous Dynamical Systems - B, 2019, 24 (7) : 3299-3318. doi: 10.3934/dcdsb.2018321 [6] Yavdat Il'yasov. On critical exponent for an elliptic equation with non-Lipschitz nonlinearity. Conference Publications, 2011, 2011 (Special) : 698-706. doi: 10.3934/proc.2011.2011.698 [7] Wei Mao, Liangjian Hu, Surong You, Xuerong Mao. The averaging method for multivalued SDEs with jumps and non-Lipschitz coefficients. Discrete & Continuous Dynamical Systems - B, 2019, 24 (9) : 4937-4954. doi: 10.3934/dcdsb.2019039 [8] Nurullah Yilmaz, Ahmet Sahiner. Generalization of hyperbolic smoothing approach for non-smooth and non-Lipschitz functions. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021170 [9] Mahmoud Abouagwa, Ji Li. G-neutral stochastic differential equations with variable delay and non-Lipschitz coefficients. Discrete & Continuous Dynamical Systems - B, 2020, 25 (4) : 1583-1606. doi: 10.3934/dcdsb.2019241 [10] Gang Cai, Yekini Shehu, Olaniyi S. Iyiola. Inertial Tseng's extragradient method for solving variational inequality problems of pseudo-monotone and non-Lipschitz operators. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021095 [11] Weiyin Fei, Liangjian Hu, Xuerong Mao, Dengfeng Xia. Advances in the truncated Euler–Maruyama method for stochastic differential delay equations. Communications on Pure & Applied Analysis, 2020, 19 (4) : 2081-2100. doi: 10.3934/cpaa.2020092 [12] Tomás Caraballo, David Cheban. On the structure of the global attractor for non-autonomous dynamical systems with weak convergence. Communications on Pure & Applied Analysis, 2012, 11 (2) : 809-828. doi: 10.3934/cpaa.2012.11.809 [13] Zheng-Hai Huang, Shang-Wen Xu. Convergence properties of a non-interior-point smoothing algorithm for the P*NCP. Journal of Industrial & Management Optimization, 2007, 3 (3) : 569-584. doi: 10.3934/jimo.2007.3.569 [14] Fuke Wu, Xuerong Mao, Peter E. Kloeden. Discrete Razumikhin-type technique and stability of the Euler--Maruyama method to stochastic functional differential equations. Discrete & Continuous Dynamical Systems, 2013, 33 (2) : 885-903. doi: 10.3934/dcds.2013.33.885 [15] Xinjie Dai, Aiguo Xiao, Weiping Bu. Stochastic fractional integro-differential equations with weakly singular kernels: Well-posedness and Euler–Maruyama approximation. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021225 [16] Tomás Caraballo, David Cheban. On the structure of the global attractor for infinite-dimensional non-autonomous dynamical systems with weak convergence. Communications on Pure & Applied Analysis, 2013, 12 (1) : 281-302. doi: 10.3934/cpaa.2013.12.281 [17] Qingguang Guan, Max Gunzburger. Stability and convergence of time-stepping methods for a nonlocal model for diffusion. Discrete & Continuous Dynamical Systems - B, 2015, 20 (5) : 1315-1335. doi: 10.3934/dcdsb.2015.20.1315 [18] Yeping Li, Jie Liao. Stability and $L^{p}$ convergence rates of planar diffusion waves for three-dimensional bipolar Euler-Poisson systems. Communications on Pure & Applied Analysis, 2019, 18 (3) : 1281-1302. doi: 10.3934/cpaa.2019062 [19] Wolf-Jüergen Beyn, Janosch Rieger. The implicit Euler scheme for one-sided Lipschitz differential inclusions. Discrete & Continuous Dynamical Systems - B, 2010, 14 (2) : 409-428. doi: 10.3934/dcdsb.2010.14.409 [20] Wen Tan, Chunyou Sun. Dynamics for a non-autonomous reaction diffusion model with the fractional diffusion. Discrete & Continuous Dynamical Systems, 2017, 37 (12) : 6035-6067. doi: 10.3934/dcds.2017260
2020 Impact Factor: 1.327
|
{}
|
Question
# Find the equation of line which cuts off equal intercepts on the co-ordinates axes and passes through $$(2,5)$$.
Solution
## Let the equation of the line be $$x+y=a$$.......(1)Where $$a$$ is the equal intercept made by the line on both the co-ordinate axes.According to the problem, line (1) passes through $$(2,5)$$.Then we get,$$2+5=a$$or, $$a=7$$.So the equation of the line is $$x+y=7$$.Mathematics
Suggest Corrections
0
Similar questions
View More
People also searched for
View More
|
{}
|
It is currently 30 May 2020, 18:56
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# A pottery store owner determines that the revenue for sales
Author Message
TAGS:
Founder
Joined: 18 Apr 2015
Posts: 11192
Followers: 239
Kudos [?]: 2808 [0], given: 10653
A pottery store owner determines that the revenue for sales [#permalink] 14 May 2019, 08:40
Expert's post
00:00
Question Stats:
0% (00:00) correct 0% (00:00) wrong based on 0 sessions
A pottery store owner determines that the revenue for sales of a particular item can be modeled by the function $$r(x)= 50 \sqrt{x} −40$$, where x is the number of the items sold. How many of the items must be sold to generate $110 in revenue? (A) 5 (B) 6 (C) 7 (D) 8 (E) 9 [Reveal] Spoiler: OA _________________ Need Practice? 20 Free GRE Quant Tests available for free with 20 Kudos GRE Prep Club Members of the Month: Each member of the month will get three months free access of GRE Prep Club tests. GRE Prep Club Tests Editor Affiliations: Partner at MyGuru LLC. Joined: 13 May 2019 Posts: 151 Location: United States GMAT 1: 770 Q51 V44 GRE 1: Q169 V168 WE: Education (Education) Followers: 6 Kudos [?]: 150 [1] , given: 2 Re: The graph above shows a parabola that is symmetric about the [#permalink] 14 May 2019, 08:53 1 This post received KUDOS Expert's post With all word problems start by skipping to the end to determine the sought value, in this case the number of items sold to generate$110 in revenue. Also, note that the choices are numeric values in numeric order, which indicates that both plugging in the choices and logical estimation are potential tactics. Now, we see the equation that revenue =50√x − 40.
First, recognize that in order to generate the integer value \$110, then formula must result in an integer. Therefore, √x must be an integer, so logically eliminate choices A through D. Were you short on time on the exam, select choice E and move on.
However, you can also plug in choice E 9 to prove that it is correct. 50 x √9 = 150 and 150 - 40 = 110, which matches the conditions of the problem, so of course E is indeed correct.
_________________
Stefan Maisnier
Re: The graph above shows a parabola that is symmetric about the [#permalink] 14 May 2019, 08:53
Display posts from previous: Sort by
|
{}
|
# Latex: Text cannot be placed below image
I'm having a problem with an image and some text. I have this code:
Some text...\\
\begin{figure}[ht]
\centering
\includegraphics[scale=0.75]{picture.jpg}
\caption{The caption}
\label{fig:picture}
\end{figure}
Some more text...
Basically, I want this:
Some text. (Above image in the code)
[end of page / new page]
image
Some more text. (Below the image in the code)
[start of new section]
But, what the above code gives me is this:
Some text. (Above image in the code)
Some more text. (Below the image in the code)
[end of page / new page]
image
[start of new section]
Latex insists on putting everything but a new section above the image even though its below the image in the code. Its probably because the image floats on top - but whats my alternative? There's not enough space on the first page to display the image there, to I cannot use [h] as the float-alignment.
I can "hack it", by creating an empty new section, like \section*{}, but this creates some white-space, which looks weird. Any suggestions?
• Do you really need to do that? I mean, LaTeX is great in managing float and cross-references, being one of its main features to achieve a beautiful typesetting and a nice page layout... – Alessandro Cuttin May 13 '10 at 15:39
• This is off topic. Should be trasferred to TEX Exchange ASAP – Trect Aug 12 '18 at 16:48
If you really need to have the figure in that place, use the float package:
In the preamble:
\usepackage{float}
then, in the text:
Some text...
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{picture.jpg}
\caption{The caption}
\label{fig:picture}
\end{figure}
Some more text...
Even though, is more preferable to let LaTeX place the floats.
Another way to do the same thing is by using the caption package.
In the preamble:
\usepackage{caption}
then, in the text:
Some text...
\begin{center}
\includegraphics[scale=0.75]{picture.jpg}\\
\caption{figure}[LOF entry]{The caption}
\label{fig:picture}
\end{center}
Some more text...
• Thanks, it works! How do I let latex place the floats? If I dont add the [ht] or whatever float I want, it just places the image at the end of my document. Am I placing the image in a weird place? Where would you place it? – Frederik Wordenskjold May 13 '10 at 15:40
• use preferably two placing options: [tb] for small figures, that stay well with the text on a single page; otherwise, for large figures, use [p]. More control on floats placement can be achieved with the package placeins – Alessandro Cuttin May 13 '10 at 15:49
• Cool, I didnt know about those. [tb] seems to place the image exactly where it logically should be placed, so that seems to be exactly what I wanted. – Frederik Wordenskjold May 13 '10 at 16:01
|
{}
|
# ${{\boldsymbol \Sigma}{(2070)}}$ WIDTH INSPIRE search
VALUE (MeV) DOCUMENT ID TECN COMMENT
$300$ $\pm30$
1980
DPWA ${{\overline{\mathit K}}}$ ${{\mathit N}}$ $\rightarrow$ ${{\overline{\mathit K}}}{{\mathit N}}$
$906$
1972
DPWA ${{\mathit K}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit \Sigma}}{{\mathit \pi}}$
$140$ $\pm20$
1970 B
DPWA ${{\mathit K}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit \Sigma}}{{\mathit \pi}}$
References:
GOPAL 1980
Toronto Conf. 159 S = -1 Baryons: an Experimental Review
KANE 1972
PR D5 1583 Partial Wave Analysis of ${{\mathit K}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit \pi}^{\pm}}{{\mathit \Sigma}^{\mp}}$ between 1.73 and 2.11 Gev
BERTHON 1970B
NP B24 417 The Reactions ${{\mathit K}^{-}}$ ${{\mathit p}}$ $\rightarrow$ ${{\mathit \Sigma}^{\pm}}{{\mathit \pi}^{\mp}}$ in the $\mathit E_{{\mathrm {cm}}}$ Range 1915 to 2168 MeV
|
{}
|
## anonymous 5 years ago I need help on rational expression equations : 1=(1/2b)-(b+1/b)
1. anonymous
To solve rational equations, all you have to do is get rid of the denominators (by multiplying both sides by them), solve as you would a normal equation, and then make sure your solutions wouldn't make the original denominators equal zero. In this case, then, your first step should be to multiply both sides by $$b$$, as the two fractions ($$\frac{1}{2b}$$ and $$\frac{b+1}{b}$$) have the denominator $$b$$.
2. anonymous
See if you can work it out from there.
3. anonymous
ok soo what we can get here is the following: 1 = (1/2b) - (b+1)/b 1 = (1/2b) - (2/2)(b+1)/b so we get 1 = (1/2b) - (2b +2)/2b 1 = (1 - 2b -2) / 2b so we get 1=-2b-1/2b 2b = -2b -1 2b+2b = -1 4b = -1 sp b = -1/4
4. anonymous
that one was a bit confusing. :/ i like don't the middle part D:
5. anonymous
well you are just trying to simplify by using common denominators because they are fractions. right? its like adding (1/2) + (1/4)
|
{}
|
# Creating a macro: FancyVerb Error
I wanted to create a consistent setting for Verbatim so I decided to create a macro out of this:
\newcommand*{\VerbatimCustom}{
\begin{Verbatim}[numbers=left,xleftmargin=5mm]
}
And use it like this:
\begin{figure}[htbp]
\centering
\VerbatimCustom
module GRAMMAR[] begin
phylum Grammar;
phylum Item;
phylum Items := SEQUENCE[Item];
phylum Production;
phylum Productions := SEQUENCE[Production];
constructor terminal(s: Symbol) : Item;
constructor nonterminal(s: Symbol) : Item;
constructor prod(nt: Symbol; children: Items) : Production;
constructor grammar(prods: Productions) : Grammar;
pragma root_phylum(type Grammar);
end;
\end{Verbatim}
\caption{Structure of a context-free grammar in APS}
\label{fig:cfg-structure}
\end{figure}
Error:
Extraneous input ' between \begin{Verbatim}[<key=value>] and line end . \FV@Error ... {FancyVerb Error: \space \space #1 } l.38 \VerbatimCustom This input will be discarded. Hit to continue.
I appreciate any help or hint.
• as @Ulrich shows, you can make this work but I'd strongly advise that you don't do that, the result is then working tex code but you have mis-matched \end in the document, This looks weird to any human reading the source and will most lilley confuse editors and syntax checkers and probably latex-to-anything convertors. There is no need to do this at all as you can use the fancyvrb definition forms to define a custom environment that includes your common options. Oct 18, 2021 at 6:47
This answer is intended to explain the coming-into-being of the error-message.
It is not intended to provide a fix/workaround obeying good-practice-methods.
Resolving the issue according to good-practice-methods via \RecustomVerbatimEnvironment is shown in egreg's answer.
Reasons for obeying good-practice-methods are given in David Carlisle's comment.
TeX's eyes do pre-process .tex-input line by line:
• All characters of the line are converted to TeX's internal character-representation scheme which either is ASCII or is Unicode.
• All space-characters at the right end of the line are removed.
• A character is appended at the end of the line whose code-point-number in TeX's internal character-representation scheme equals the value of the integer-parameter \endlinechar.
• Then characters of the line get tokenized "on demand", i.e., whenever TeX's gullet needs tokens some of the characters get tokenized.
With the definition
\newcommand*{\VerbatimCustom}{ %<- spurious space token
\begin{Verbatim}[numbers=left,xleftmargin=5mm] %<- spurious space token
}
spurious space-tokens come into being due to the \endlinechar-mechanism and \endlinechar at the time of tokeizing the definition having the value 13, denoting the carriage-return-character, and the carriage-return-character having catcode 5(return) which in turn implies the coming into being of
• no token at all if TeX's reading apparatus is in state "skipping blanks",
• an explicit space-token if TeX's reading apparatus is in state "middle of line",
• the control-word-token \par if TeX ´'s reading apparatus is in state "new line".
Afaik fancyvrb for its environments changes the catcode of \endlinechar from 5(return) to 12(other) so that in any case at the end of lines carriage-return-character-tokens of catcode 12(other) come into being.
Thus if you would do \begin{Verbatim}[numbers=left,xleftmargin=5mm] directly, right behind the last token of the last argument of the environment there would not be a space-token but there would be a carriage-return-character-token of catcode 12(other) denoting the end of the line.
But due to the sequence coming from a macro-definition which includes a space-token, there is a space-token between the last token of the last argument of the environment and the carriage-return-character-token of catcode 12(other) denoting the end of the line.
The Verbatim-environment is implemented to raise an error-message if something is found between the last token of the last argument of the environment and the carriage-return-character-token of catcode 12(other) denoting the end of the line holding that last token.
Preventing the coming-into-being of these space-tokens by adding comment-chars at line-endings yields:
\documentclass{article}
\usepackage{fancyvrb}
\newcommand*{\VerbatimCustom}{%%%%%%
\begin{Verbatim}[numbers=left,xleftmargin=5mm]%%%%%%
}
\begin{document}
\begin{figure}[htbp]
\centering
\VerbatimCustom
module GRAMMAR[] begin
phylum Grammar;
phylum Item;
phylum Items := SEQUENCE[Item];
phylum Production;
phylum Productions := SEQUENCE[Production];
constructor terminal(s: Symbol) : Item;
constructor nonterminal(s: Symbol) : Item;
constructor prod(nt: Symbol; children: Items) : Production;
constructor grammar(prods: Productions) : Grammar;
pragma root_phylum(type Grammar);
end;
\end{Verbatim}
\caption{Structure of a context-free grammar in APS}
\label{fig:cfg-structure}
\end{figure}
\end{document}
When compiling the example, I don't get any error-messages.
This answer is intended to explain the coming-into-being of the error-message.
It is not intended to provide a fix/workaround obeying good-practice-methods.
Resolving the issue according to good-practice-methods via \RecustomVerbatimEnvironment is shown in egreg's answer.
Reasons for obeying good-practice-methods are given in David Carlisle's comment.
If you want that all Verbatim environments follow the specification
numbers=left,xleftmargin=5mm
then the best way is to issue
\RecustomVerbatimEnvironment{Verbatim}{Verbatim}{numbers=left,xleftmargin=5mm}
Full example. Look inside it for more comments.
\documentclass{article}
\usepackage{fancyvrb}
\RecustomVerbatimEnvironment{Verbatim}{Verbatim}{numbers=left,xleftmargin=5mm}
\begin{document}
Some text before the verbatim environment.
Some text before the verbatim environment.
Some text before the verbatim environment.
Some text before the verbatim environment.
\begin{Verbatim}
This is {verbatim}
\another\line
\end{Verbatim}
You can also locally override a verbatim environment
\begin{Verbatim}[commandchars=\\\{\}]
This is almost verbatim in \LaTeX
\end{Verbatim}
\end{document}
`
|
{}
|
# Proof that any key exchange protocol is vulnerable to MitM attacks in the absence of shared information or trust
Today I realised that every key exchange protocol I know, without a priori any shared information or trust relations (i.e. any ability to sign anything), is utterly broken by an active man in the middle attack.
I asked a professor of mine today whether there a proof of this in a formal setting and he said "yes, it's something information theoretic but I can't quite remember what..."
I looked in (what I believe to be) the relevant chapters of a couple of textbooks I have to hand, and done some google-ing and turned up nothing. I was wondering if someone could point me in the direction of either a paper or a text book containing such a proof. Thanks!
• What exactly are you trying to prove? Informally and intuitively any key exchange protocol without any form of authentication is vulnerable to MitM attacks. So I interpret that you are assuming that no authentication can be done, and want to prove vulnerability to MitM. You can probably formalize this, but this result should not be surprising. – CurveEnthusiast Dec 2 '16 at 6:11
## 1 Answer
It has nothing to do with information theoretic. You just need to construct an adversary and argue that it works. In this case, the adversary is simple. Let $A$ and $B$ be parties with no secret information. An adversary $C$ playing man-in-the-middle interacts with $A$ pretending to be $B$, and interacts with $B$ pretending to be $A$. At the end, $C$ establishes a separate channel with $A$ and with $B$. Then, any message sent by is decrypted by $C$ (using the key generated with $A$) and then re-encrypted (using the key generated with $B$) and sent to $B$. Likewise, in the other direction.
Since there is no initial secret, $A$ and $B$ see exactly the same thing as they would see in a key exchange that is not under attack. However, $C$ learns everything communicated.
The difference between this and a full proof is minimal.
• This is what I was going for, but as a recent entry to cryptography from a very pure mathematical background I haven't quite developed a "feel" for what constitutes a proof in the area. Thanks for giving me a framework to work in for this example. – E. Postlethwaite Dec 2 '16 at 14:05
|
{}
|
What exactly is the negation of Goldbach's Conjecture? The glib answer is:
Not( $\forall{\quad}n\geq{2},{\quad}\exists{\quad}p{}rimes{\quad}p,q{\quad}s.t.{\quad}p+q=2n.)$
But what does this really mean? If you are tempted to write
$\forall{\quad}n\geq{2},{\quad}there{\quad}do{\quad}not{\quad}exist{\quad}p{}rimes{\quad}p,q{\quad}s.t.{\quad}p+q=2n$
|
{}
|
# Tag Info
2
Each of these can be used, but each has serious drawbacks. No. 1 is inaccurate unless you use $N>>10$ years of data. But decades of data may not be available or may no longer be relevant to today's economy. No. 2 is good except that the CAPM has been rejected by empirical tests. More advanced models from Asset Pricing Theory may be helpful (FF3, FF5, ...
1
Financial markets & Corporate Strategy - Grinblatt & Titman The book is very intuitive, but as a consequence less comprehensive than ex. Options, Futures, and other Derivatives by Hull (which is seen as the basic foundation of everything quant in some parts of the industry.) A great entry level book to finance, and is publically avaliable here: ...
1
If I had to give only one title this would be it: FT Guide to Understanding Finance by J. Estrada (Second Edition published 2011) It explains all of the above concepts (and more) in a very accessible, yet mathematically correct manner. A sample can be found: Here The only thing is that it is not really short (the first part, i.e. up to p. 150, is ...
Only top voted, non community-wiki answers of a minimum length are eligible
|
{}
|
Expected distance between two points with missing coordinates - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-19T22:09:38Z http://mathoverflow.net/feeds/question/23404 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/23404/expected-distance-between-two-points-with-missing-coordinates Expected distance between two points with missing coordinates Frank 2010-05-04T04:57:21Z 2010-05-06T14:19:50Z <p>What is the expected distance between two points when one of the points has some unknown (or missing) coordinate values?</p> <p>The two points are in the same finite dimensional real space. Assume that the probability density function that describes the missing coordinates varies uniformly between $[-1,1]$.</p> <p>Here is an <a href="http://www.datafilehost.com/download-5248e1e6.html" rel="nofollow">Adobe PDF file </a> showing the solution for a point that has either one or two unknown coordinate values. I would appreciate any information leading to a solution for the general case of $m$ missing coordinates, or useful lower and upper bounds for the expected distance between these two points.</p> http://mathoverflow.net/questions/23404/expected-distance-between-two-points-with-missing-coordinates/23715#23715 Answer by Willie Wong for Expected distance between two points with missing coordinates Willie Wong 2010-05-06T14:19:50Z 2010-05-06T14:19:50Z <p>You can rephrase your question as follows: first we subtract the known vector from both and then take care of the known coordinates. So assuming the coordinates of the two points are $(\alpha,\beta)$ and $(\gamma,X)$ where $\alpha,\gamma \in \mathbb{R}^m$ are known, and $\beta \in \mathbb{R}^n$ is known, but $X$ represent the unknown coordinates constrainted to lie inside the cube $[-1,1]^n$, the integral to evaluate becomes $$\frac{1}{2^n}\int_{[-1,1]^n} \sqrt{ |\alpha-\gamma|^2 + |\beta - X|^2 } dX$$ In the lower dimensional case this can be integrated. But an analytical expression in higher dimensions is elusive. For the case $\alpha = \gamma$ and $\beta = 0$, some bounds were obtained in an old paper of Anderssen et al. <a href="http://dx.doi.org/10.1137/0130003" rel="nofollow">http://dx.doi.org/10.1137/0130003</a> For more general probability distributions there is a recent paper with some bounds by Burgstaller and Pillichshammer. <a href="http://journals.cambridge.org/action/displayAbstract?aid=6622208" rel="nofollow">http://journals.cambridge.org/action/displayAbstract?aid=6622208</a></p> <p>Of course, one can get a fairly trivial bound by Cauchy-Schwartz $$\int_{[-1,1]^n} f(X) dX \leq 2^{n/2} \left( \int_{[-1,1]^n} f(X)^2 dX \right)^{1/2}$$ and that $$\int_{[-1,1]^n} R^2 + |\beta - X|^2 dX = 2^n (R^2 + \beta^2) + \int_{[-1,1]^n} X^2 dX$$ the last term is simply evaluated as $n 2^n / 3$, so putting it all together we have the upper bound for the expected value by $$\frac{1}{2^n}\int_{[-1,1]^n} \sqrt{ |\alpha-\gamma|^2 + |\beta - X|^2 } dX \leq \sqrt{ |\alpha -\gamma|^2 + \beta^2 + \frac{n}{3}}$$ which is slight improvement over the utterly trivial upper/lower bound of $\sqrt{|\alpha-\gamma|^2 + \beta^2 \pm n}$ if you just maximize/minimize each coordinates. </p>
|
{}
|
Theorem 38.23.3. In Situation 38.20.1 assume
1. $f$ is of finite presentation,
2. $\mathcal{F}$ is of finite presentation, flat over $S$, and pure relative to $S$, and
3. $u$ is surjective.
Then $F_{iso}$ is representable by a closed immersion $Z \to S$. Moreover $Z \to S$ is of finite presentation if $\mathcal{G}$ is of finite presentation.
Proof. We will use without further mention that $\mathcal{F}$ is universally pure over $S$, see Lemma 38.18.3. By Lemma 38.20.2 and Descent, Lemmas 35.37.2 and 35.39.1 the question is local for the étale topology on $S$. Hence it suffices to prove, given $s \in S$, that there exists an étale neighbourhood of $(S, s)$ so that the theorem holds.
Using Lemma 38.12.5 and after replacing $S$ by an elementary étale neighbourhood of $s$ we may assume there exists a commutative diagram
$\xymatrix{ X \ar[dr] & & X' \ar[ll]^ g \ar[ld] \\ & S & }$
of schemes of finite presentation over $S$, where $g$ is étale, $X_ s \subset g(X')$, the schemes $X'$ and $S$ are affine, $\Gamma (X', g^*\mathcal{F})$ a projective $\Gamma (S, \mathcal{O}_ S)$-module. Note that $g^*\mathcal{F}$ is universally pure over $S$, see Lemma 38.17.4. Hence by Lemma 38.18.2 we see that the open $g(X')$ contains the points of $\text{Ass}_{X/S}(\mathcal{F})$ lying over $\mathop{\mathrm{Spec}}(\mathcal{O}_{S, s})$. Set
$E = \{ t \in S \mid \text{Ass}_{X_ t}(\mathcal{F}_ t) \subset g(X') \} .$
By More on Morphisms, Lemma 37.25.5 $E$ is a constructible subset of $S$. We have seen that $\mathop{\mathrm{Spec}}(\mathcal{O}_{S, s}) \subset E$. By Morphisms, Lemma 29.22.4 we see that $E$ contains an open neighbourhood of $s$. Hence after replacing $S$ by a smaller affine neighbourhood of $s$ we may assume that $\text{Ass}_{X/S}(\mathcal{F}) \subset g(X')$.
Since we have assumed that $u$ is surjective we have $F_{iso} = F_{inj}$. From Lemma 38.23.1 it follows that $u : \mathcal{F} \to \mathcal{G}$ is injective if and only if $g^*u : g^*\mathcal{F} \to g^*\mathcal{G}$ is injective, and the same remains true after any base change. Hence we have reduced to the case where, in addition to the assumptions in the theorem, $X \to S$ is a morphism of affine schemes and $\Gamma (X, \mathcal{F})$ is a projective $\Gamma (S, \mathcal{O}_ S)$-module. This case follows immediately from Lemma 38.23.2.
To see that $Z$ is of finite presentation if $\mathcal{G}$ is of finite presentation, combine Lemma 38.20.2 part (4) with Limits, Remark 32.6.2. $\square$
There are also:
• 2 comment(s) on Section 38.23: Flattening a map
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
|
{}
|
# Determinant 3
• Mar 28th 2009, 02:01 AM
james_bond
Determinant 3
In a $2n$-dimensional matrix $a_{ij}=\begin{cases}1\text{ if }i\le n\text{ and }j\ge n+1\text{ or }i\ge n+1\text{ and }j\le n\\0\text{ otherwise}\end{cases}$
What's the determinant?
• Mar 28th 2009, 12:53 PM
NonCommAlg
Quote:
Originally Posted by james_bond
In a $2n$-dimensional matrix $a_{ij}=\begin{cases}1\text{ if }i\le n\text{ and }j\ge n+1\text{ or }i\ge n+1\text{ and }j\le n\\0\text{ otherwise}\end{cases}$
What's the determinant?
the determinant is -1 for n = 1 and 0 for n > 1 because for n > 1 the first and the second row of the matrix are equal.
|
{}
|
# Convert an expression to a Function
I need a function which can take an expression and return a pure function based on the symbols in the expression. The symbols might have values so must be protected from evaluation. It is probably easiest to give an example:
I would like to evaluate something like
x = y = 1;
extractPureFunction[Sin[Pi x^2] + y]
and obtain
Function[{x, y}, Sin[Pi x^2] + y]
or
(Sin[Pi #1^2] + #2) &
Any ideas?
-
Just curious: how would you ever going to use the Function if you don't know in which order the variables may end up? I mean, they are sorted alphabetically, so if I swapped x and y the structure of the original function would be the same, but the resulting Function would behave differently. – Sjoerd C. de Vries Aug 30 '12 at 21:56
@SjoerdC.deVries, good question! Essentially I want to be able to transform an expression to a form which matches the pattern func_[vars__]. I wouldn't be using func by itself (despite the wording of my question, the pure function alone is not the final goal) – Simon Woods Aug 31 '12 at 13:05
What about taking the symbols not in heads that haven't got the NumericFunction or Constant or Protected attribute (thanks @OleksandrR) and that are in the Global context? The condition can easily be tweaked, and also one can easily add options on attributes, contexts, or extra symbols to be included or always excluded of argmuents
SetAttributes[{extractPureFunction, condition}, HoldFirst];
condition[i_Symbol] :=
FreeQ[Attributes@i, NumericFunction | Constant | Protected] &&
Context@i == "Global";
extractPureFunction[expr_] :=
Union@Cases[Unevaluated@expr,
i_Symbol?condition :> Hold[i], {0, Infinity}]~Thread~Hold /.
Hold[vars_] :> Function[vars, expr]
extractPureFunction[Sin[x] x + Pi + y - Total@Through@{Tan, ArcTan}[E]]
Function[{x, y},Sin[x] x + [Pi] + y - Total[Through[{Tan, ArcTan}[E]]]]
Note, this doesn't respect inner scoping constructs so variables from inner Functions or Modules for example would get listed as arguments
-
I was puzzling over whether to post this myself, but I forgot to consider the Attributes--good catch. Also, your use of Thread is nice (I had used a {Flat, HoldAll} function). – Oleksandr R. Aug 30 '12 at 17:10
Thanks @OleksandrR. This kind of manipulations are never easy, at least for me, I don't do them often – Rojo Aug 30 '12 at 17:27
Oh, and also: setting Heads -> True for Cases might be a useful addition, depending on how you want things like Derivative[x][y] to behave. – Oleksandr R. Aug 30 '12 at 17:34
Thinking a bit more about the attributes: isn't it enough just to check for Protected? If something has that then it can't have been meant as a variable in the first place. Constant, on the other hand, just means its derivative is zero, which may or may not be relevant here. – Oleksandr R. Aug 30 '12 at 19:45
@OleksandrR. that's probably a good idea. However, there's a Cases[Names["System*"], i_ /; FreeQ[Attributes@i, Protected]] long list of unprotected System symbols to think about. Edited to add that check but I'm not sure if the others are redundant – Rojo Aug 30 '12 at 19:49
Well, since you defined x=y=1, evaluation semantics will make it very difficult to get at them inside your mathematical expression.
The general issue is one of extracting the variables. I show a way to go about that here. With getAllVariables as defined therein, one can then do as below.
extractPureFunction[expr_] := Module[{vars, func},
vars = Cases[getAllVariables[expr],_Symbol];
func[vars, expr] /. func -> Function]
Test:
In[38]:= extractPureFunction[Sin[Pi t^2] + w]
(* Out[38]= Function[{w, t}, w + Sin[Pi*t^2]] *)
-
extractPureFunction[Sin[Pi t^2] + w[[1]]] – belisarius Aug 30 '12 at 16:21
@Verde Fixed, more or less. That is to say, it will now only accept "variables" that are symbols. I'm sure it's still not bullet proof. – Daniel Lichtblau Aug 30 '12 at 16:35
Sounds fair enough :) – belisarius Aug 30 '12 at 16:50
I like your getAllVariables. Shame that we need something like InternalLocalizedBlock to actually use most of those expressions as variables, though... – Oleksandr R. Aug 30 '12 at 17:41
Thanks Daniel. Unfortunately I really need something that works when the variables have values. Your getAllVariables is definitely one for the toolbag though. – Simon Woods Aug 31 '12 at 13:06
Here's my approach, which is similar to Rojo's in some ways. I'm taking the simple approach that any symbol not in System is a user variable. (Adjust the condition as needed.)
SetAttributes[extractPureFunction, HoldAll]
SetAttributes[heldVariables, HoldAll]
heldVariables[e_] :=
Cases[HoldForm[e],
s_Symbol /; Context[s] =!= "System" :> Hold[s], Infinity]], Hold]
extractPureFunction[e_] :=
Here's the test case:
x = y = 1;
extractPureFunction[Sin[Pi x^2] + y]
(* ==> Function[{x, y}, Sin[\[Pi] x^2] + y] *)
-
Since I put something together to do this this past fall, I guess I should throw my hat in the ring, too. I think it is close to water tight, but I can't be sure.
First, we need to determine what the variables in the expression are
Clear[GetVariables]
SetAttributes[GetVariables, HoldFirst];
GetVariables[expr_, f_:Identity, excludedContexts:{__String}:{"System"}]:=
Cases[Unevaluated[expr],
a_Symbol /;
!( MemberQ[excludedContexts, Context[a]]
) :> f[a],
{0, Infinity}
]//DeleteDuplicates
Unlike the others, it provides flexibility in specifying which Contexts are to be excluded, and removes from consideration both Locked and ReadProtected symbols. As a flaw, it only looks at symbols, so it won't distinguish between Subscript[a,1] and Subscript[a,2]. The second parameter here is special, it allows us to put wrappers, such as Hold, around an accepted symbol to prevent its execution.
Second, we need to use it:
ClearAll[MakeFunction]
Options[MakeFunction]={VariableList->Automatic};
SetAttributes[MakeFunction, HoldFirst];
(* This first form allows pure functions to be used *)
MakeFunction[afcn_Function, opts:OptionsPattern[]]:= afcn
MakeFunction[fexpr_, opts:OptionsPattern[] ]:=
Module[{vars},
vars = If[OptionValue[VariableList]===Automatic,
(* GetVariables returns {Hold[x_] ..} we want Hold[{x_ ..}] *)
Distribute[Sort[GetVariables[fexpr, Hold]], Hold],
OptionValue[Automatic, Automatic, VariableList, Hold]
];
Function @@ Join[vars, Hold[fexpr]]
]
There are a couple things to notice here. First, it allows for pure functions to be passed to it. This is merely for convenience as it makes it more broadly applicable. Second, the option VariableList allows the user to specify what the variables actually are because if we know them already, we might as well use them. This has the added benefit of allowing the user to change the order of the parameters which defaults to lexical sorting.
Through @ (MakeFunction /@ {x^2, Sin[x y^2], x + I y})[3, 4]
(* {9, Sin[48], 3 + 4 I} *)
Through @ (MakeFunction[#, VariableList -> {y, x}] & /@ {x^2, Sin[x y^2], x + I y})[3, 4]
(* {16, Sin[36], 4 + 3 I} *)
-
I really wish I could accept two answers. This is excellent, thank you. – Simon Woods Aug 31 '12 at 13:07
@SimonWoods thanks. Admittedly, it isn't entirely my work, Leonid looked at it a while back and had some suggestions. – rcollyer Aug 31 '12 at 14:36
@SimonWoods it was needed to create a function which allowed me to plot along an arbitrary $\mathbb{R}^N \to \mathbb{R}$ function using Plot, and I desperately needed the ability to make an expression executable. – rcollyer Aug 31 '12 at 14:40
I'm sure this is far from watertight, but it seems to work for the expression I've tried. The hard bit was preventing Mathematica from evaluating the symbols prematurely.
toFunction[exp_] := Module[{exp1, syms},
exp1 = ToExpression[exp, InputForm, Hold];
syms = SymbolName /@ Pick[#, Not[NumericQ[Unevaluated[#]]] & /@ #] &@
ReleaseHold[{Unevaluated /@ Level[exp1, {-1}, Hold]}];
ToExpression["Function[ {" <> StringJoin[Riffle[syms, ","]] <> "}," <> exp <> "]"]]
Example:
a = 2;
ff = toFunction["Sin[abc a+b+Pi/5]^4-5"]
(* Function[{abc, a, b}, Sin[abc a + b + \[Pi]/5]^4 - 5] *)
Edit: I missed the fact that the argument was given as an expression and not as a string. In that case you could do something like
SetAttributes[toFunction, HoldAll]
toFunction[exp_] := Module[{syms},
syms = SymbolName /@ Pick[#, Not[NumericQ[Unevaluated[#]]] & /@ #] &@
ReleaseHold[{(Unevaluated /@ Level[Hold[exp], {-1}, Hold])}];
ToExpression[
"Function[ {" <> StringJoin[Riffle[syms, ","]] <> "}," <>
ToString[Unevaluated[exp], InputForm] <> "]"]]
-
Lateral thinking, I like it! The code needs a Union or DeleteDuplicates to catch multiple occurrences of the same symbol. – Simon Woods Aug 31 '12 at 13:08
Here's my approach. It works by picking out only the non-heads and then filtering out the built-in constants like π, E and numbers.
ClearAll[toPureFunction]
SetAttributes[toPureFunction, HoldAll]
toPureFunction[expr_] := With[{constantQ = MemberQ[Attributes[#], Constant] &},
Module[{vars, func},
vars = Quiet[
Cases[
HoldForm@expr // Level[#, {-1}, Unevaluated] &,
x_Symbol?(OwnValues[#] =!= {} || ! constantQ[#] &) :> Hold@x],
OwnValues::sym];
Quiet[Function[Evaluate@DeleteDuplicates@vars, expr] // ReleaseHold, Function::flpar]
]
]
Here's the output on some of the examples used in the question and other answers:
toPureFunction[Sin[π x^2] + y]
(* Function[{x, y}, Sin[π x^2] + y] *)
toPureFunction[Sin[x] x + Pi + y - Total@Through@{Tan, ArcTan}[E]]
(* Function[{x, y}, Sin[x] x + π + y - Total[Through[{Tan, ArcTan}[E]]]] *)
-
Nice. I must say I like the pragmatism of Quieting the warnings rather than doing code gymnastics to avoid them. A minor point is that your approach removes any Hold` in the original expression. – Simon Woods Aug 31 '12 at 13:07
|
{}
|
Q
# Using the property of determinants and without expanding prove that determinant b plus c q plus r y plus z c plua a r plus p z plus x a plus b p plus q x plus y equal to 2 determinant a p x b q y c r z
Q : 5 Using the property of determinants and without expanding, prove that
$\dpi{100} \begin{vmatrix}b+c &q+r &y+z \\ c+a & r+p &z+x \\ a+b &p+q & x+y \end{vmatrix}=2\begin{vmatrix} a &p &x \\ b &q &y \\ c &r & z \end{vmatrix}$
Views
Given determinant :
$\dpi{100} \triangle= \begin{vmatrix}b+c &q+r &y+z \\ c+a & r+p &z+x \\ a+b &p+q & x+y \end{vmatrix}$
Splitting the third row; we get,
$\dpi{100} = \begin{vmatrix}b+c &q+r &y+z \\ c+a & r+p &z+x \\ a &p & x \end{vmatrix} + \begin{vmatrix}b+c &q+r &y+z \\ c+a & r+p &z+x \\ b &q & y \end{vmatrix} = \triangle_{1} + \triangle_{2}\ (assume\ that)$.
Then we have,
$\dpi{100} \triangle_{1} = \begin{vmatrix} b+c & q+r & y+z \\ c+a & r+p & z+x \\ a &p & x \end{vmatrix}$
On Applying row transformation $\dpi{100} R_{2} \rightarrow R_{2} - R_{3}$ and then $\dpi{100} R_{1} \rightarrow R_{1} - R_{2}$;
we get, $\dpi{100} \triangle_{1} = \begin{vmatrix} b & q & y \\ c & r & z \\ a &p & x \end{vmatrix}$
Applying Rows exchange transformation $\dpi{100} R_{1} \leftrightarrow R_{2}$ and $\dpi{100} R_{2} \leftrightarrow R_{3}$, we have:
$\dpi{100} \triangle_{1} =(-1)^2 \begin{vmatrix} b & q & y \\ c & r & z \\ a &p & x \end{vmatrix}= \begin{vmatrix} a & p & x\\ b & q&y \\ c& r & z \end{vmatrix}$
also $\dpi{100} \triangle_{2} = \begin{vmatrix} b+c & q+r & y+z \\ c+a&r+p &z+x \\ b & q & y \end{vmatrix}$
On applying rows transformation, $\dpi{100} R_{1} \rightarrow R_{1} - R_{3}$ and then $\dpi{100} R_{2} \rightarrow R_{2} - R_{1}$
$\dpi{100} \triangle_{2} = \begin{vmatrix} c & r & z \\ c+a&r+p &z+x \\ b & q & y \end{vmatrix}$ and then $\dpi{100} \triangle_{2} = \begin{vmatrix} c & r & z \\ a&p &x \\ b & q & y \end{vmatrix}$
Then applying rows exchange transformation;
$\dpi{100} R_{1} \leftrightarrow R_{2}$ and then $\dpi{100} R_{2} \leftrightarrow R_{3}$. we have then;
$\dpi{100} \triangle_{2} =(-1)^2 \begin{vmatrix} a & p & x \\ b&q &y \\ c & r & z \end{vmatrix}$
So, we now calculate the sum = $\dpi{100} \triangle_{1} + \triangle _{2}$
$\dpi{100} \triangle_{1} + \triangle _{2} = 2 \begin{vmatrix} a &p &x \\ b& q& y\\ c & r& z \end{vmatrix}$
Hence proved.
Exams
Articles
Questions
|
{}
|
# Finite Union of Compact Sets Clarification
While studying some basic analysis/topology I have come across the proof regarding that the finite union of compact sets is compact using the definition of compactness.
For each compact set choose a finite subcover. The union of those subcovers will be finite and cover the union of the compact sets.
Alright, not a big deal.
However, I think I may have a misconception regarding my notion of a compact set.
When proving a compact set is indeed compact must I not show that EVERY open cover has a finite subcover?
The aforementioned proof, to me, seams as though it is only considering one possible option for an open cover.
I am not doubting the validity of the proof, instead I am looking for some clarification as to why we do not consider the possibility of other open covers.
• Every open cover of the union is an open cover of any component. – André Nicolas Dec 30 '13 at 1:16
• You start with an arbitrary cover of the union. The conclusion is that there is a finite subcover. (And since the cover was arbitrary, this gives you compactness.) To reach the conclusion, you use that there are finite subcovers for each of the compact sets in the union. – Andrés E. Caicedo Dec 30 '13 at 1:17
To clarify what the proof says: Suppose that you have compact sets $K_1, ..., K_N$ for some $N < \infty$. Choose any open cover $\mathcal{A}$ of $K_1 \cup ... \cup K_N$. Then by compactness, there is a finite subcover $\mathcal{A}_1$ of $K_1$; that is, $\mathcal{A}_1$ is a finite collection of open sets whose unions contains $K_1$. Choose a subcover $\mathcal{A}_2$ for $K_2$ in an identical manner, and continue to $\mathcal{A}_n$.
Then $\mathcal{A}_1 \cup ... \cup \mathcal{A}_N$ is a finite collection of open sets whose union covers $K_1 \cup ... \cup K_N$, so we've constructed a finite subcover.
Since $\mathcal{A}$ was arbitrary to begin with, we've covered all possibilites.
The proof should start as follows. Let $A_i$, $i=1$ to $n$ be compact. Let $\mathcal{C}$ be an open cover of $\bigcup A_i$. Then $\mathcal{C}$ is an open cover of any $A_i$. Let $\mathcal{C_i}$ be a finite subcover of $A_i$. Then $\dots$.
|
{}
|
# A passively pumped vacuum package sustaining cold atoms for more than 200 days
@inproceedings{Little2021APP,
title={A passively pumped vacuum package sustaining cold atoms for more than 200 days},
author={B J Little and Gregory W. Hoth and Justin E. Christensen and Chuck Walker and Dennis J. De Smet and Grant W. Biedermann and Jongmin Lee and Peter D. D. Schwindt},
year={2021}
}
• Published 4 January 2021
• Physics
Compact cold-atom sensors depend on vacuum technology. One of the major limitations to miniaturizing these sensors are the active pumps—typically ion pumps—required to sustain the low pressure needed for laser cooling. Although passively pumped chambers have been proposed as a solution to this problem, technical challenges have prevented successful operation at the levels needed for cold-atom experiments. We present the first demonstration of a vacuum package successfully independent of ion…
10 Citations
## Figures from this paper
Stand-alone vacuum cell for compact ultracold quantum technologies
• Physics
Applied Physics Letters
• 2021
Compact vacuum systems are key enabling components for cold atom technologies, facilitating extremely accurate sensing applications. There has been important progress towards a truly portable compact
Enabling the mass production of a chip-scale laser cooling platform
• Physics
Other Conferences
• 2021
A low-cost, mass-producible laser-cooling platform would have a transformative effect in the burgeoning field of quantum technologies and the wider research of atomic sensors. Recent advancements in
A Cold-Atom Interferometer with Microfabricated Gratings and a Single Seed Laser
• Physics
• 2021
The extreme miniaturization of a cold-atom interferometer accelerometer requires the development of novel technologies and architectures for the interferometer subsystems. We describe several
A Compact Cold-Atom Interferometer with a High Data-Rate Grating Magneto-Optical Trap and a Photonic-Integrated-Circuit-Compatible Laser System
• Physics
• 2021
The extreme miniaturization of a cold-atom interferometer accelerometer requires the development of novel technologies and architectures for the interferometer subsystems. We describe several
Demonstration of a Compact Magneto-Optical Trap on an Unstaffed Aerial Vehicle
• Physics
Atoms
• 2022
The extraordinary performance offered by cold atom-based clocks and sensors has the opportunity to profoundly affect a range of applications, for example in gravity surveys, enabling long term
Laser-written vapor cells for chip-scale atomic sensing and spectroscopy
• Physics
• 2022
We report the fabrication of alkali-metal vapor cells using femtosecond laser machining. This laser-written vapor-cell (LWVC) technology allows arbitrarily-shaped 3D interior volumes and has
Nanoscale Electric Field Imaging with an Ambient Scanning Quantum Sensor Microscope
• Physics
• 2022
Nitrogen-vacancy (NV) center in diamond is a promising quantum sensor with remarkably versatile sensing capabilities. While scanning NV magnetometry is well-established, NV electrometry has been so
A centilitre-scale vacuum chamber for compact ultracold quantum technologies
• Economics
• 2020
Oliver S. Burrow, Paul F. Osborn, Edward Boughton, Francesco Mirando, David P. Burt, Paul F. Griffin, Aidan S. Arnold, ∗ and Erling Riis Department of Physics, SUPA, University of Strathclyde,
A simple imaging solution for chip-scale laser cooling
• Materials Science
Applied Physics Letters
• 2021
A. Bregazzi,1 P. F. Griffin,1 A. S. Arnold,1 D. P. Burt,2 G. Martinez,3, 4 R. Boudot,5 J. Kitching,4 E. Riis,1 and J. P. McGilligan1, 2 1)SUPA and Department of Physics, University of Strathclyde, G4
## References
SHOWING 1-10 OF 26 REFERENCES
Contributed Review: The feasibility of a fully miniaturized magneto-optical trap for portable ultracold quantum technology.
• Physics
The Review of scientific instruments
• 2014
The feasibility of incorporating the vacuum system, atom source and optical geometry into a permanently sealed micro-litre system capable of maintaining 10(-10) mbar for more than 1000 days of operation with passive pumping alone is explored.
Enhanced observation time of magneto-optical traps using micro-machined non-evaporable getter pumps
• Physics
Scientific reports
• 2020
We show that micro-machined non-evaporable getter pumps (NEGs) can extend the time over which laser cooled atoms can be produced in a magneto-optical trap (MOT), in the absence of other vacuum
Laser cooling in a chip-scale platform
• Physics
• 2020
Chip-scale atomic devices built around micro-fabricated alkali vapor cells are at the forefront of compact metrology and atomic sensors. We demonstrate a micro-fabricated vapor cell that is
Low helium permeation cells for atomic microsystems technology.
• Physics
Optics letters
• 2016
It is demonstrated that micro fabricated cells with He permeation rates at least three orders of magnitude lower than that of cells made with borosilicate glass at room temperature are useful in compact vapor-cell atomic clocks and as a micro fabricated platform suitable for the generation of cold atom samples.
Low-power, miniature 171Yb ion clock using an ultra-small vacuum package
• Physics
• 2012
We report a demonstration of a very small microwave atomic clock using the 12.6 GHz hyperfine transition of the trapped 171Yb ions inside a miniature, completely sealed-off 3 cm3 ion-trap vacuum
An electrostatic ion pump with nanostructured Si field emission electron source and Ti particle collectors for supporting an ultra-high vacuum in miniaturized atom interferometry systems
• Physics
• 2016
We report a field emission-based, magnetic-less ion pump architecture for helping maintain a high vacuum within a small chamber that is compatible with miniaturized cold-atom interferometry systems.
Measurement of vacuum pressure with a magneto-optical trap: A pressure-rise method.
• Physics
The Review of scientific instruments
• 2015
This work estimates the pressure and obtains pressure rate-of-rise curves, which are commonly used in vacuum science to evaluate the performance of a system, and suggests that this is a sensitive method which will find useful applications in cold atom systems, in particular, where the inclusion of a standard vacuum gauge is impractical.
A Low-power Reversible Alkali Atom Source
• Physics
• 2017
An electrically-controllable, solid-state, reversible device for sourcing and sinking alkali vapor is presented. When placed inside an alkali vapor cell, both an increase and decrease of the rubidium
Vacuum-pressure measurement using a magneto-optical trap
• Physics
• 2012
The loading dynamics of an alkali-metal-atom magneto-optical trap can be used as a reliable measure of vacuum pressure, with loading time $\ensuremath{\tau}$ indicating a pressure less than or equal
Dual-axis, high data-rate atom interferometer via cold ensemble exchange
• Physics
• 2014
We demonstrate a dual-axis accelerometer and gyroscope atom interferometer, which forms the building blocks of a six-axis inertial measurement unit. By recapturing the atoms after the interferometer
|
{}
|
## common ion effect on solubility examples
• Postado em 19 de dezembro, 2020
The common ion effect is the decrease in solubility (ability to be dissolved) of a substance through the addition of another substance with a common ion; this effect is attributed to the shift in equilibrium.. Solubility will also depend on the excess or deficiency of a common ion in the solution, a phenomenon known as the common-ion effect. The very pure and finely divided precipitate of calcium carbonate that is generated is used in the manufacture of toothpaste. This reduction in solubility is another application of the common-ion effect. Common ion effect • “The process in which solubility of a weak electrolyte is reduced by the addition of a strong electrolyte which has common ion to that of weak electrolyte”. $$\mathrm{AlCl_3 \rightleftharpoons Al^{3+} + {\color{Green} 3 Cl^-}}$$ This decreases the reaction quotient, because the reaction is being pushed towards the left to reach equilibrium. Solubility of KHT and Common ion Effect v010714 You are encouraged to carefully read the following sections in Tro (2nd ed.) A combination of salts in an aqueous solution will all ionize according to the solubility products, which are equilibrium constants describing a mixture of two phases.If the salts share a common cation or anion, both contribute to the concentration of the ion and need to be included in concentration calculations. For example, if to a saturated solution of Ag 2 CrO 4 some AgNO 3 has added the solubility of Ag 2 CrO 4 decreases. & && && + &&\mathrm{\:0.20\: (due\: to\: CaCl_2)}\\ If we go back and compare, only 4.7 percent as much CaF2 will dissolve in 0.10 M CaCl2 as in pure water: $\frac{(9.9 \times 10^{-6})}{2.1 \times 10^{-4}}$ x 100 = 4.7%. Adding a common ion decreases the solubility of a solute. This is the common ion effect. CC BY-SA 3.0. http://en.wiktionary.org/wiki/limestone Due to the common ion effect that decreases the solubility of lead two chloride which means we are gonna get more of our solid because our goal is to isolate as much of our solid as possible. In areas where water sources are high in chalk or limestone, drinking water contains excess calcium carbonate CaCO 3. When equilibrium is shifted toward the reactants, the solute precipitates. In areas where water sources are high in chalk or limestone, drinking water contains excess calcium carbonate CaCO3. Return to Common Ion Effect tutorial. PbCl 2 (s) Pb 2+ (aq) + 2 Cl-(aq) If we add some NaCl (or any other soluble chloride) we cause a stress on the equilibrium ([Cl-] increases). It is weakly ionized in its aqueous solution. The common ion effect generally decreases solubility of a solute. The equilibrium constant remains the same because of the increased concentration of the chloride ion. The Common-Ion Effect . The F- is the common ion shifting it to the left is a common ion effect. For Example; Because the solubility of an ionic compound depends on the product of the concentrations of the ions, this solubility can be greatly affected if there are already some of those ions present in the solution. In this way, the concentration of the sulfide ion (S 2-) increases which the enough to exceed the solubility product for the precipitation of Sulphides, e.g. $$\mathrm{[Cl^-] = \dfrac{0.1\: M\times 10\: mL+0.2\: M\times 5.0\: mL}{100.0\: mL} = 0.020\: M}$$. So that's one use for the common ion effect in the laboratory separation. Solubility and the pH of the solution. EX11: What pH is required to just precipitate iron(III) hydroxide from a 0.10 M FeCl 3 This is because Le Chatelier’s principle states the reaction will shift toward the left (toward the reactants) to relieve the stress of the excess product. In the above example, the common ion is Ca 2+ . Have questions or comments? What are $$\ce{[Na+]}$$, $$\ce{[Cl- ]}$$, $$\ce{[Ca^2+]}$$, and $$\ce{[H+]}$$ in a solution containing 0.10 M each of $$\ce{NaCl}$$, $$\ce{CaCl2}$$, and $$\ce{HCl}$$? Adding a common ion to a system at equilibrium affects the equilibrium composition, but not the ionization constant. I am going to work several more of these example problems the molar solubility in a solution that contains a common ion. Due to the common ion effect, dissociation of soap is decreased and soap gets precipitated and then can be easily removed from the soap solution. H2S → 2H+ + S2-. Boundless vets and curates high-quality, openly licensed content from around the Internet. It also can have an effect on buffering solutions, as adding more conjugate ions may shift the pH of the solution. The LibreTexts libraries are Powered by MindTouch® and are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. Common Ion Effect on Solubility. So the common ion effect of molar solubility is always the same. $PbCl_2(s) \rightleftharpoons Pb^{2+}(aq) + 2Cl^-(aq)$. Problem #1: The solubility product of Mg(OH) 2 is 1.2 x 10¯ 11. For example, when $$\ce{AgCl}$$ is dissolved into a solution already containing $$\ce{NaCl}$$ (actually $$\ce{Na+}$$ and $$\ce{Cl-}$$ ions), the $$\ce{Cl-}$$ ions come from the ionization of both $$\ce{AgCl}$$ and $$\ce{NaCl}$$. By definition, a common ion is an ion that enters the solution from two different sources. The reaction then shifts right, causing the denominator to increase, decreasing the reaction quotient and pulling towards equilibrium and causing Q to decrease towards K. Adding a common ion decreases solubility, as the reaction shifts toward the left to relieve the stress of the excess product. What happens to the solubility of PbCl2(s) when 0.1 M NaCl is added? Scientists take advantage of this property when purifying water. $$\mathrm{KCl \rightleftharpoons K^+ + {\color{Green} Cl^-}}$$ & &&= && &&\mathrm{\:0.40\: M} Acetic acid being a weak acid, ionizes to a small extent as: CH3COOH CH3COO‾ + H+ Up Next. Wikipedia Solubility and complex ion formation. Common polyatomic ions. This therefore shift the reaction left towards equilibrium, causing precipitation and lowering the current solubility of the reaction. You will decrease the ionization of that acid and you will have in solution a fair amount of … $$\mathrm{NaCl \rightleftharpoons Na^+ + {\color{Green} Cl^-}}$$ Different common ions have different effects on the solubility of a solute based on the stoichiometry of the balanced equation. Solution If several salts are present in a system, they all ionize in the solution. If a common ion is added to a weak acid or weak base equilibrium, then the equilibrium will shift towards the reactants, in this case the weak acid or base. It means, addition of common ion in the case of complex formation increases the solubility of the sparingly soluble salt which is against the concept of common ion effect. $$\mathrm{[Na^+] = [Ca^{2+}] = [H^+] = 0.10\: \ce M}$$. Solubility of any solid matter having common ions with solvent is lower than solubility in pure solvents. &\ce{[Cl- ]} &&= && && \:\textrm{0.10 (due to NaCl)}\\ Addition of common ion to a weak acid/base system: HA <=> H + + A- Now add A-( as a salt ) and the reaction will be driven to left Fluoride is more effective than calcium as a common ion because it has a second-power effect on the solubility equilibrium. Or “The decrease in the solubility of the salt in a solution that already contains an ion common to that salt is called common ion effect”. Calcium hydroxide, C a (O H) 2 , has a lower solubility in water than some of the other Group II hydroxides (K s p = 4. since fluoride ions are in NaF as well as in CaF2. Harwood, William S., F. G. Herring, Jeffry D. Madura, and Ralph H. Petrucci. Of course, the concentration of lead(II) ions in the solution is so small that only a tiny proportion of the extra chloride ions can be converted into solid lead(II) chloride. common-ion effect, decrease in solubility of an ionic salt, i.e., one that dissociates in solution into its ions, caused by the presence in solution of another solute that contains one of the same ions as the salt. In a saturated solution of calcium hydroxide at this temperature, what is the concentration of calcium ions? The 2s term is << 0.10 moles per liter, and therefore: This approximation is also valid, since only 0.0019 percent as much CaF2 will dissolve in 0.10 M NaF as in pure water. The Common-Ion Effect . H+ + OH– → H2O. The common ion effect suppresses the ionization of a weak base by adding more of an ion that is a product of this equilibrium. Lithium hydroxide forms less-soluble lithium carbonate, which precipitates because of the common ion effect. A 0.10 M NaCl solution therefore contains 0.10 moles of the Cl-ion per liter of solution. For more information contact us at info@libretexts.org or check out our status page at https://status.libretexts.org. The following figure illustrates the effect of excess barium ion on the solubility of BaSO 4. Investigation of Poor Solubility of a Salt-Cocrystal Hydrate: A Case Study of the Common-Ion Effect in Betrixaban, an Anticoagulant Drug Ramesh Devarapalli Centre of Excellence Polymorphism, Research and Development, Integrated Product Development (IPD), Cipla Ltd., Virgonagar, Bangalore 560 049, Karnataka, India We've learned a few applications of the solubility product, so let's learn one more! Solutions to which both NaCl and AgCl have been added also contain a common ion; in this case, the Cl-ion. constant. Due to the conservation of ions, we have. Chemistry 12 Unit 3 - Solubility of Ionic Substances Tutorial 7 - The Common Ion Effect and Altering Solubility Page 3 Since this results in more solid CaCO3 in the beaker, we can say that: Adding Ca2+ ions to the solution decreases the solubility of CaCO3. Common Ion Effect. In a system containing $$\ce{NaCl}$$ and $$\ce{KCl}$$, the $$\mathrm{ {\color{Green} Cl^-}}$$ ions are common ions. Suppose you tried to dissolve some lead(II) chloride in some 0.100 mol dm-3 sodium chloride solution instead of in water. The common ion effect generally decreases solubility of a solute. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Typically, solving for the molarities requires the assumption that the solubility of PbCl2 is equivalent to the concentration of Pb2+ produced because they are in a 1:1 ratio. The common ion effect is the decrease in solubility (ability to be dissolved) of a substance through the addition of another substance with a common ion; this effect is attributed to the shift in equilibrium.. The common-ion effect can be understood by considering the following question: What happens to the solubility of AgCl when we dissolve this salt in a solution that is already 0.10 M NaCl? What is $$\ce{[Cl- ]}$$ in the final solution? NaNO 3 (aq) III. Let consider the equilibrium condition for a saturated solution of Pb(II) chromate: What is the solubility at 25°C of calcium fluoride (CaF2): (a) in pure water; (b) in 0.10 M calcium chloride (CaCl2); and (c) in 0.10 M sodium fluoride (NaF)? The chloride ion is common to both of them; this is the origin of the term "common ion effect". strong electrolyte having a common ion ”. Solving the equation for s gives s= 1.62×10-2 M. The coefficient on Cl- is 2, so it is assumed that twice as much Cl- is produced as Pb2+, hence the '2s.' Example: A mixture of CH 3 COOH and CH 3 COONa. Wiktionary \end{alignat}\). Note : We take advantage of the common ion effect to decrease the solubility of a precipitate in gravimetric analysis. Common Ion Effect can be described as“The lowering of the degree of discussion of weak electrolytes by adding a. The common ion effect, illustrated in the examples of the previous section, is the effect on solubility observed when an ion common to a slightly soluble salt is present in solution from some other source. As a rule, we can assume that salts dissociate into their ions when they dissolve. to prepare for this experiment: Sec 16.5, pp 743-48 (Solubility Equilibria and the Solubility Product Constant) Objectives: You will observe the common ion effect on the K sp and molar solubility of a slightly Example: NaCl ---> Na+ + Cl- 0000052179 00000 n Common Ion Effect On Solubility Worksheet Answers. What would the concentration of the lead(II) ions be this … NaCl (s) ⇆ Na + (aq) + Cl - (aq) The additional chlorine anion from this reaction decreases the solubility of the lead (II) chloride (the common-ion effect), shifting the lead chloride reaction equilibrium to counteract the addition of chlorine. If an attempt is made to dissolve some lead(II) chloride in some 0.100 M sodium chloride solution instead of in water, what is the equilibrium concentration of the lead(II) ions this time? CH 3 COOH (aq) ⇌ CH 3 COO – + H + (aq) (Weak electrolyte) CH 3 COONa → CH 3 COO – + Na + (aq) (Strong electrolyte) Common ion. For example, imagine we have a 0.1 molar solution of sodium chloride. Scientists take advantage of this property when purifying water. Now, hopefully you can see where the name “Common Ion Effect” fits in. Return to Common Ion Effect tutorial. Defining $$s$$ as the concentration of dissolved lead(II) chloride, then: These values can be substituted into the solubility product expression, which can be solved for $$s$$: $\begin{eqnarray} K_{sp} &=& [Pb^{2+}] [Cl^-]^2 \\ &=& s \times (2s)^2 \\ 1.7 \times 10^{-5} &=& 4s^3 \\ s^3 &=& \frac{1.7 \times 10^{-5}}{4} \\ &=& 4.25 \times 10^{-6} \\ s &=& \sqrt[3]{4.25 \times 10^{-6}} \\ &=& 1.62 \times 10^{-2}\ mol\ dm^{-3} \end{eqnarray}$The concentration of lead(II) ions in the solution is 1.62 x 10-2 M. Consider what happens if sodium chloride is added to this saturated solution. The Common Ion Effect is the shift in equilibrium that occurs because of the addition of an ion already involved in the equilibrium reaction.. AgCl(s) <=> Ag + (aq) + Cl-(aq) <-----Addition of NaCl Shifts this equilibrium to the left. Solubility may also strongly depend on the presence of other species dissolved in the solvent, for example, complex-forming anions in liquids. A 0.10 M NaCl solution therefore contains 0.10 moles of the Cl-ion per liter of solution. CoS, NiS, ZnS. Sodium chloride shares an ion with lead(II) chloride. This is the common ion effect. Mn2+ and Ni2+ ions, for example, both form insoluble sulfides. The common ion effect is a way to change the solubility of a compound by adding a soluble salt that has an ion in common with the compound you are trying to change the solubility of. The common ion effect causes the reduction of solubility when adding like ions. The degree of ionisation of acetic acid is suppressed by the addition of a … complex ion takes place, then ionization increases, i.e., equilibrium shifts towards right hand direction to maintain the value of K. sp. If our prediction is valid, we can simplify the solubility-product equation: s2 = $\frac{3.90 \times 10^{-11}}{0.40}$ = 9.75 x 10-11. The addition of the electrolyte decreases the solubility of the sparingly soluble salt. Well, if you are decreasing the solubility that is correct. The solubility and the dissolution rate of the sodium salt of an acidic drug (REV 3164; 7-chloro-5-propyl-1H,4H-[1,2,4]triazolo[4,3-alpha]quinoxaline-1,4-dione) decreased by the effect of common ion present in aqueous media. For example, solubility of AgNO 3 in pure water is larger than solubility of AgNO 3 in NaNO 3 since they have common ion NO 3-. Concentration of Na + ions (common ion) increases. As a rule, we can assume that salts dissociate into their ions when they dissolve. And this is, in a buffer always what happens when you add the salt that contains the conjugate base, for example. The common-ion effect can be understood by considering the following question: What happens to the solubility of AgCl when we dissolve this salt in a solution that is already 0.10 M NaCl? Common Ion Effect On Solubility - Displaying top 8 worksheets found for this concept.. Boundless Learning A simple calculation to show this. Consider the common ion effect of OH- on the ionization of ammonia. When AgNO 3 is added to a saturated solution of AgCl, it is often described as a source of a common ion, the Ag + ion. limestoneAn abundant rock of marine and fresh-water sediments; primarily composed of calcite (CaCO₃); it occurs in a variety of forms, both crystalline and amorphous. The solubility and the dissolution rate of the sodium salt of an acidic drug (REV 3164; 7‐chloro‐5‐propyl‐1H,4H‐[1,2,4]triazolo[4,3‐a]quinoxaline‐1,4‐dione) decreased by the effect of common ion present in aqueous media.The solubility of the sodium salt of REV 3164 in a buffered medium was much lower than that in an unbuffered medium. This simplifies the calculation. This is important in predicting how the solubility will change. $\ce{Ca3(PO4)2(s) <=> 3Ca^{2+}(aq) + 2PO^{3−}4(aq)} \label{Eq1}$ We have seen that the solubility of Ca 3 (PO 4) 2 in water at 25°C is 1.14 × 10 −7 M (K sp = 2.07 × 10 −33). Contributions from all salts must be included in the calculation of concentration of the common ion. General Chemistry Principles and Modern Applications. Public domain. Thus, $$\ce{[Cl- ]}$$ differs from $$\ce{[Ag+]}$$. Because Ksp for the reaction is 1.7×10-5, the overall reaction would be (s)(2s)2= 1.7×10-5. The common ion effect suppresses the ionization of a weak acid by adding more of an ion that is a product of this equilibrium. According to Le Châtelier, the position of equilibrium will shift to counter the change, in this case, by removing the chloride ions by making extra solid lead(II) chloride. 3 × 1 0 − 9) in 0. Some important factors that have an impact on the solubility product constant are: The common-ion effect (the presence of a common ion lowers the value of Ksp). The equilibrium constant, Kb=1.8*10-5, does not change. The solubility of CaF 2 (K s p = 5. Overall, the solubility of the reaction decreases with the added sodium chloride. Common Ion Effect. • Ionization of sodium chloride in water can be represented by equilibrium constant expression as: 15. Le Châtelier's Principle states that if an equilibrium becomes unbalanced, the reaction will shift to restore the balance. Whenever a solution of an ionic substance comes into contact with another ionic compound with a common ion, the solubility of the ionic substance decreases significantly. AgCl will be our example. Consider silver chloride, AgCl, which is only very slightly soluble in water (K sp = 1.77×10 −10 ). The common ion effect refers to adding to a solution at equilibrium, a salt which contains an ion in common with one of the products of that equilibrium. (a) (i) Common ion effect: The effect by which the ionization of one electrolyte is suppressed by the presence of a common ion. Application of common ion effect and solubility product - definition If the ionic product exceeds the solubility product of a sparingly soluble salt, the excess ions … Pure water. Wikimedia The common-ion effect is used to describe the effect on an equilibrium involving a substance that adds an ion that is a part of the equilibrium. An example of the common ion effect is when sodium chloride (NaCl) is added to a solution of HCl and water. The following examples show how the concentration of the common ion is calculated. II. There exists an equilibrium between un-ionized molecules and the ions in … Wikimedia With such a small solubility product for CaF2, you can predict its solubility << 0.10 moles per liter. Adopted a LibreTexts for your class? Example 5 Next lesson. This process of getting solid soap from soap solution, by adding salt like NaCI is called salting out of soap. Equilibria Involving Complex Ions Complex Ion: A charged species consisting of a metal ion surrounded by ligands (Lewis bases). Examples of the common-ion effect Dissociation of hydrogen sulphide in presence of hydrochloric acid. Something similar happens whenever you have a sparingly soluble substance. Return to Equilibrium Menu. Notice that the molarity of Pb2+ is lower when NaCl is added. Common Ion Effect on Solubility Adding a common ion decreases solubility, as the reaction shifts toward the left to relieve the stress of the excess product. Let's see if this S2- ion concentration is large enough to effectively remove Ni2+ ions from the mixture. Solubility and the common-ion effect. Chemistry 12 Unit 3 - Solubility of Ionic Substances Tutorial 7 - The Common Ion Effect and Altering Solubility Page 5 In other words, as soon as some carbonic acid (H2CO3) is formed, it decomposes into CO2(g) and water, and then the CO2(g) escapes into the air.Because the CO2 escapes, the reverse reaction does not have a chance to take place. Unbalanced, the overall reaction would be: ( assume no reaction of cation/anion ) ( Lewis ). Precipitateto come out of balance, or equilibrium side shifts the equilibrium to right world-class! William S., F. G. Herring, Jeffry D. Madura, and.... Is used in the solvent, for example, imagine we have a 0.1 molar of. ( AgNO 3 ) a product of Mg ( OH ) 2 is 1.2 x 10¯ 11 something similar whenever! Ion in the final solution if several salts are present in a solution which of the common ion effect on solubility examples in solution contains... Solubility Worksheet Answers, we have a 0.1 molar solution of sodium chloride ion effect of solubility... That if an equilibrium becomes unbalanced, the solute precipitates take advantage of this property purifying. Causing precipitation two different sources also depend on the ionization of sodium chloride equilibrium becomes,! This is the origin of the products in an aqueous equilibrium effectively remove Ni2+ ions from salts. A precipitate in gravimetric analysis unbuffered medium included in the solution has caused reaction., Jeffry D. Madura, and 1413739 the saturation point would be 0.1 M NaCl solution therefore contains 0.10 per! Two different sources charged species consisting of a solute buffered medium was much lower solubility!, Jeffry D. Madura, and 1413739 as it would without the common! Kb=1.8 * 10-5, does not change, drinking water contains excess calcium that!: Qsp > Ksp the addition of a common ion s principle Answers... Would be ( s ) \rightleftharpoons Pb^ { 2+ } ( aq ) 2Cl^-... A given temperature ) s is variable ( especially with a common ion is shifted toward the reactants causing. It can be decreased by the presence of a common ion ; in this case, Cl-ion... An effect on solubility Worksheet Answers is 1.7×10-5, the Cl-ion one of the added chloride. Chalk or limestone, drinking water contains excess calcium carbonate CaCO3, anywhere at info @ libretexts.org or check our... Ionize in the case of weak acid, is known as the common ion to a solution which the. Molarities of the balanced equation [ Ag+ ] } \ ) chloride solution instead in... In 0 5 ∘ C ) is greater than the equilibrium toward the reactant side of the Cl-ion because... Temperature, what is \ ( \ce { [ Cl- ] } \ ) differs from \ ( {! Always decrease the solubility equilibrium constant because of the Cl-ion, Kb=1.8 * 10-5 does! A solute salts, acids, and the concentration of the chloride is... Effect generally decreases solubility of the chloride ion is an ion that enters the solution from two sources! The balanced equation of PbCl2 ( s ) when 0.1 M NaCl is to! Solution into solid form chloride shares an ion that is a common is. Some 0.100 mol dm-3 sodium chloride the concentration of one ion increases, Cl-ion. The calcium carbonate CaCO 3 view solution which already contains one of common ion effect on solubility examples salt will high. Imagine we have effect, as in CaF2 per liter of solution equilibrium constant openly licensed content from around Internet! Page at https: //status.libretexts.org salts contain a common ion effect problems and precipitation 1.77×10 −10.! The most important of these example problems the molar solubility in a saturated solution of HCl and.! Constant can be assumed that the molarity of Pb2+ is lower when NaCl is added a! Is, in a 1:1 ration in the solution, a common.! ; in this case, the solubility product of Mg ( OH ) 2 is 1.2 x 11. Chloride in water can be used to reduce the concentration of the Cl-ion the ...: a mixture calculation of concentration of one ion increases, the.... And bases 9 ) in the case of weak acid, is as... Product side shifts the equilibrium constant remains the same because of the common ion shifting to... Will be less soluble, and bases solubility will also depend on the solubility of the solutes are uncommon the! ( aq ) + 2Cl^- ( aq ) + 2Cl^- ( aq \. Put out of soap salts are present in a saturated solution of calcium CaCO. Is that some of the salt getting solid soap from soap solution, adding... Ni2+ ions from the solution from two different sources 1:1 ration in the final solution NaCl has the. Cl-Ion per liter of solution whenever you have a sparingly soluble substance Complex ion takes place then! Or remove impurities from a mixture of CH 3 COONa Ksp will less. Add a common ion in the solubility equilibrium the left to reach equilibrium always what when! As in the solution a system, they all ionize in the calculation of concentration of Na + ions common. 3164 in a solution that contains a common ion effect of excess barium ion on the solubility further. The common ion effect common ion effect on solubility examples used in the laboratory separation lithium hydroxide less-soluble! Contains a common ion effect hand direction to maintain the value of Ksp will be compared... Conjugate ions may shift the pH of the salt that contains the conjugate base, for,! Balanced equation the F- is the common ion ( especially with a common ion 's if. Sodium chloride in water can be decreased by the presence of other ion decreases the reaction is,! The salts contain a common ion effect affects the equilibrium to shift out of balance or! The chloride ions are added affects the equilibrium constant expression as: 15 states that if an equilibrium unbalanced! By the presence of a common ion ) increases ( i.e., two... Rev 3164 in a system, they all ionize in the case of weak acid is... Or limestone, drinking water contains excess calcium carbonate that is a decrease in solubility! Be decreased by the concentration of the solution any process at equilibrium affects the equilibrium composition, not. Included in the laboratory separation this … Return to common ion ; in this saturated solution PbCl2! Aq ) + 2Cl^- ( aq ) + 2Cl^- ( aq ) \.! Cl- solubility and the common-ion effect H. Petrucci decreases with the added sodium (! Solubility product for CaF2, you can predict its solubility < < 0.10 moles of the lead ( II chloride... Always what happens to that equilibrium if extra chloride ions are in NaF as well in. It dissolves, it dissociates into silver ion and nitrate ion https: //status.libretexts.org free! Of one ion increases, i.e., between two different phases ) not change contribute to same. The balance is entirely Due to the saturated solution of sodium chloride in water can be represented equilibrium. 3 × 1 0 − 6 at 2 5 ∘ C ) NaCl ) is added to solution. ( AgNO 3 ) > Na+ + Cl- solubility and the common-ion effect dissociation of hydrogen in... Around the Internet out of soap contains excess calcium carbonate that is a weak base by adding more ions! We have in following solvents ; i all ionize in the calculation concentration. ] } \ ) this phenomenon is called salting out of soap the very and. Is \ ( \ce { [ Cl- ] } \ ) our status page at https: //status.libretexts.org decreased... To 0.10 M NaCl is added, the common ion reaction common ion effect on solubility examples with added... Of calcium sulfate causes additional CaSO 4 to precipitate from the product side the. The value of K. sp, for example, complex-forming anions in liquids take advantage of this equilibrium causes..., in a system, they all ionize in the laboratory separation may the! Effect on buffering solutions, as in CaF2 more concentrated solutions of sodium chloride shares common ion effect on solubility examples with... ( Lewis bases ) have different effects on the solubility of KHT and common ion the treatment! How to calculate the molar solubility is always the same conclusion common ion effect on solubility examples from the solution, by salt... Would the concentration of the ions in solution containing a common ion is Ca 2+ NaCl... Explains how to calculate the molar solubility of an ion that enters the solution from two different sources remove. As adding more of these example problems the molar solubility in a system, they all ionize in the example. A decrease in the laboratory separation also contain a common ion effect decreases. Would the concentration of lead ( II ) chloride becomes even less in... Towards equilibrium, causing precipitation and lowering the current solubility of a based. Weak electrolyte 2nd ed. and CH 3 COOH and CH 3 and. Balance or both leads to the solution this … Return to common ion in the ionic salt NaCl! Charged species consisting of a compound in solution containing a common ion effect.... Take advantage of this equilibrium: a charged species consisting of a common ion ).... The saturation point would be 0.1 M because Na+ and Cl- are in NaF as well as CaF2! Of our learning objective 11 is that some of the common ion, for example, the Cl-ion per.. Lewis bases ) 2 s ) is added 0.10 M was reasonable at this,. Ksp will be less compared to the solubility of PbCl2 ( s ) ( ). Because there are more dissociated ions CaCO 3 are used, the overall reaction would be: assume... As adding more conjugate ions may shift the pH of the solutes are uncommon the!
# Rio Negócios Newsletter
Cadastre-se e receba mensalmente as principais novidades em seu email
Quero receber o Newsletter
|
{}
|
# How do you factor the expression x^3 + 2x^2 - 3x - 6?
Dec 5, 2015
#### Answer:
You factor the X's from the ${x}^{3} \mathmr{and} 2 {x}^{2}$ first.
#### Explanation:
Then you can work with the $3 x \mathmr{and} 6$. What do you think you can factor from the $3 x \mathmr{and} 6$?
|
{}
|
# pwsh and openssh on windows
### powershell-core/pwsh
to install powershell either download the zip/msi from github page or you can install thru chocolatey. But I preferred going with msi file for now.
The claim is that you can run powershell core side-by-side, which is not a requirement for me- but wanted to move over to it for some time now.
Anyway, I installed this in C:\PowerShell\6.0.1 directory & added it to Path environment variable on the machine- so is accessible to all uesrs.
So far it is all working quite well. One of the things I like is the to bring my bag of scripts. One such is to list all the files in current directory as full path.
gci -r | where {!$_.PSIscontainer} | select-object FullName But it is much easier to remember if stored as an alias and/or a function. Thus added it to by $profile.
C:\Users\<user-name>\Documents\PowerShell\Microsoft.PowerShell_profile.ps1
vim C:\Users\<user-name>\Documents\PowerShell\Microsoft.PowerShell_profile.ps1
Here is my $profile: Import-Module 'C:\GitHub\posh-git\src\posh-git.psd1' Set-Alias np C:\Windows\notepad.exe Set-Alias ss c:\inpath\systemstats.exe function lastcommit {git log --show-signature -1} function listcommits {git log --pretty="format:%h %G? %ad %aN %s"} function listprocs {Get-Process | Sort WS -descending | select -First 20} function lst {Param($DirName) gci -r $DirName | where {!$_.PSIscontainer} | select-object FullName }
function setTitle {Param($TitleStr)$host.ui.RawUI.WindowTitle = \$TitleStr}
I did install posh-git as the only extension.
### openssh
Now I already had rsa keys generated thru putty, thus using the .ssh directory as it is. Similar to pwsh, downloaded the binaries from Win32-OpenSSH git repository. Installed it in directory C:\Program Files\OpenSSH-Win64 and added it to Path environment variable at machine level.
This wiki page has some good information about installing openssh. For me it was not setting up server, just ensuring that various services are started and setup correctly.
cd 'C:\Program Files\OpenSSH-Win64'
pwsh.exe -ExecutionPolicy Bypass -File install-sshd.ps1
# this will register the ssh-agent service to be started
# automatically on reboot, and will start it if not already running
Set-Service ssh-agent -StartupType Automatic
Now the most important part is setting up the access permissions on private keys, without it ssh-add will just reject the keys!
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: UNPROTECTED PRIVATE KEY FILE! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions for '~/.ssh/id_rsa' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
On *nix systems you can setup correct permissions with chmod, but windows is a different ball game. I tried chmod thru pwsh and ubuntu as wsl without success. It was much easier to change change the directory permissions.
The idea is that your files in ~\.ssh should not inherit permissions and MUST be accessible by just you. Plus they MUST not be modifiable- just readable.
Note: this MUST be done for all of the following:
1. private keys
2. public keys
3. config file
Once this was done I was able to get past this error and things worked quite well.
D:\blog\hugo\nullptr\public [master ≡ +1 ~37 -0 ~]> git commit -m "update"
[master 86d414c] update
38 files changed, 1189 insertions(+), 876 deletions(-)
create mode 100644 2018/03/09/pwsh-and-openssh-on-windows/index.html
D:\blog\hugo\nullptr\public [master ↑1]> git push origin master
Enter passphrase for key '/c/Users/sarangb/.ssh/id_rsa':
Counting objects: 80, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (46/46), done.
Writing objects: 100% (80/80), 21.37 KiB | 1.78 MiB/s, done.
Total 80 (delta 39), reused 0 (delta 0)
remote: Resolving deltas: 100% (39/39), completed with 38 local objects.
To sarangbaheti.github.com:sarangbaheti/sarangbaheti.github.io.git
e55bc02..86d414c master -> master
D:\blog\hugo\nullptr\public [master ≡]>
|
{}
|
## Developer ZoneAdvanced Software Development with MATLAB
### This is machine translation
Translated by
Mouseover text to see original. Click the button below to return to the English version of the page.
# Semi-Automated Testing2
Posted by Andy Campbell,
I've been doing a bit of spelunking around the File Exchange and GitHub lately, and I've seen a little pattern emerge in the tests of surprisingly many projects. It looks like this:
classdef testDisk < matlab.unittest.TestCase
properties
map
end
methods (TestClassSetup)
function createMap(testCase)
opt = sctool.scmapopt('trace',0,'tol',1e-12);
p = polygon([4 2i -2+4i -3 -3-1i 2-2i]);
testCase.map = diskmap(p,opt);
end
end
methods (Test)
function testPlot(testCase)
fig = figure;
plot(testCase.map,4,3) % <======= RIGHT HERE!
close(fig);
end
end
end
The plot command shows and exercises the graphical features of this toolbox. If we just run this outside of test form we can see it produces a cool result.
opt = sctool.scmapopt('trace',0,'tol',1e-12);
p = polygon([4 2i -2+4i -3 -3-1i 2-2i]);
map = diskmap(p,opt);
fig = figure;
plot(map,4,3)
By the way, in this case I am pulling from the Schwarz-Christoffel Toolbox, which by my eye looks to be quite a nice package! Check out the User's Guide.
The idea here is great, right? The developer of the project is looking to get coverage on one of the key capabilities of the package, the visualization. At a minimum, the test is indeed confirming that the plot code executes without error, which is a great step. However, I feel like this might speak to a common pain point. How do I verify things that are very hard to verify, like graphics? Before we throw our hands into the air and flip over any tables its worth noting that we may have a few options. We certainly can get access to the data in the plot and numerically confirm that it is plotted as expected. We can also check the properties of the graphics primitives and so on and so forth. This is all true, but I think it risks missing the point. Sometimes you just want to look at the dang plot!
You might know exactly when the plot is right and when it is wrong. You might see subtle visual problems right away looking at it that would take forever to try to encode in a test covering every single property of every single graphics primitive you are working with.
Just let me look at the plot.
This test does just that, but it flashes the figure up on the screen and you have to look very closely (and quickly) or use a debugging workflow to get real insight and confirm the visualization is working correctly. A worse alternative is just to leave figures open and never close them. This litters your MATLAB environment every time you run the tests and is really hard to determine how each figure was produced and for what test. It doesn't work in a CI system workflow. In short, it makes it hard to verify the plots are correct, which means that we won't verify the plots are correct.
Know what we can do though? We can log! We can testCase.log! We've already gone through the hard work of creating these figures and visualizations. Why don't we log them and see them later? We can do that pretty easily because we have a FigureDiagnostic class that takes a figure handle and saves it away as both a .fig file and a .png file. That way we can log it away and open it up after the test run. If we were verifying anything (like the plot data or graphics attributes) we could also just use these diagnostics as the diagnostics input on the verification or assertion methods we are using. For the test above, let's log it:
classdef testDisk < matlab.unittest.TestCase
properties
map
end
methods (TestClassSetup)
function createMap(testCase)
opt = sctool.scmapopt('trace',0,'tol',1e-12);
p = polygon([4 2i -2+4i -3 -3-1i 2-2i]);
testCase.map = diskmap(p,opt);
end
end
methods (Test)
function testPlot(testCase)
import matlab.unittest.diagnostics.Diagnostic;
import matlab.unittest.diagnostics.FigureDiagnostic;
fig = figure;
plot(testCase.map,4,3);
% Now we log it for fun and for profit.
testCase.log(3, ...
Diagnostic.join('Please confirm there are concentric convex sets in the lower left.', ...
FigureDiagnostic(fig)));
end
end
end
I've put a nice description on there so we know what we are looking for in the figure. I did this by joining a string description with our FigureDiagnostic using Diagnostic.join. Also, I've logged it at level 3, which corresponds to the Detailed level of the Verbosity enumeration. This means it won't show up if I just run the standard runtests call:
runtests('tests/testDisk.m')
Running testDisk
....
Done testDisk
__________
ans =
1×4 TestResult array with properties:
Name
Passed
Failed
Incomplete
Duration
Details
Totals:
4 Passed, 0 Failed, 0 Incomplete.
0.80408 seconds testing time.
...but it will if I run at a higher level of logging:
runtests('tests/testDisk.m','Verbosity','Detailed')
Running testDisk
Setting up testDisk
Done setting up testDisk in 0.01131 seconds
Running testDisk/testForwardMap
Done testDisk/testForwardMap in 0.0076177 seconds
Running testDisk/testInverseMap
Done testDisk/testInverseMap in 0.0071096 seconds
Running testDisk/testCenter
Done testDisk/testCenter in 0.0082754 seconds
Running testDisk/testPlot
[Detailed] Diagnostic logged (2018-07-30T15:42:18):
Please confirm there are concentric convex sets in the lower left.
Figure saved to:
--> /private/var/folders/bm/6qgg87js1bb7fpr2p475bcwh0002wp/T/094eb448-615a-4667-95e2-0a6b62b81eae/Figure_2d16d47d-a44a-4425-9507-84bb27afcf26.fig
--> /private/var/folders/bm/6qgg87js1bb7fpr2p475bcwh0002wp/T/094eb448-615a-4667-95e2-0a6b62b81eae/Figure_2d16d47d-a44a-4425-9507-84bb27afcf26.png
Done testDisk/testPlot in 1.3447 seconds
Tearing down testDisk
Done tearing down testDisk in 0 seconds
Done testDisk in 1.379 seconds
__________
ans =
1×4 TestResult array with properties:
Name
Passed
Failed
Incomplete
Duration
Details
Totals:
4 Passed, 0 Failed, 0 Incomplete.
1.379 seconds testing time.
Great! Now we can see links in the test log pointing to images of the plot as well as a figure file. This is nice, but I am just getting started. Let's see this workflow when we generate a test report:
import matlab.unittest.plugins.TestReportPlugin;
runner = matlab.unittest.TestRunner.withTextOutput;
runner.run(testsuite('tests'))
Running testAnnulus
Number of iterations: 32
Number of function evaluations: 91
Final norm(F(x)): 1.27486e-09
Number of restarts for secant methods: 1
...
Done testAnnulus
__________
Running testDisk
....
Done testDisk
__________
Running testExterior
...
Done testExterior
__________
Running testHalfplane
...
Done testHalfplane
__________
Running testRectangle
...
Done testRectangle
__________
Running testStrip
...
Done testStrip
__________
Generating report. Please wait.
Preparing content for the report.
Adding content to the report.
Writing report to file.
Report has been saved to: /private/var/folders/bm/6qgg87js1bb7fpr2p475bcwh0002wp/T/tp86d8e3a7_aedb_45fa_a82e_0ceb6430ee87/index.html
ans =
1×19 TestResult array with properties:
Name
Passed
Failed
Incomplete
Duration
Details
Totals:
19 Passed, 0 Failed, 0 Incomplete.
8.7504 seconds testing time.
This is where it really starts to get beautiful. Now we have a full report that we can view at our leisure and confirm that all the visualizations are correct
We've run the whole test suite and have captured the figures for all the tests not just this one. We are now in the realm of semi-automated testing. There are some things that really need a human to take a look at to confirm correctness. However, the entirety of the test run and test setup can still be automated! This can still be done via a CI system so you don't have to remember to run the tests and look over the plots every time you change the code. You simply let the automation do it. For things that need manual verification you can always log away the artifacts in a pdf or html report and confirm periodically, or prior to release. If there is a bug, you can mine the artifacts from all your CI builds to see where and when it was introduced.
You can even extend this approach to add an expected image to the report. So if you log a known good expected image and then use the test code to generate the image for each software change you can look at the actual image and the expected image right next to each other and confirm that they match. Beautiful. Full test automation is clearly the ideal to strive for, but in those cases that you really need to look at a picture, let the framework and your CI system do all the work for you in setting it up and you can just quickly and efficiently verify that it is correct.
Happy semi-automated testing!
P.S. Take a look at the full report generated in PDF form here
Get the MATLAB code
Published with MATLAB® R2018a
Michael Wutz replied on : 1 of 2
Hello, nice article :) Do you know if its possible to "Archive artifacts" of the html Report to have it visible in Jenkins ? I did something similiar for a own developped Java element (a searchfield with configureable list of elements). There I check automatically if the searchfield is green (if the current value is one of the list or if its not one of the list). I do this by making a screenshot of the element and calculating the mean(mean()) of the produced Image and then check if the corresponding colour is rather green/red. While this works great afterwards it is (as always) quite some initial invest to get such things. I usually only do something like this if such a element already caused me pain (by e.g. breaking my tool). Anyway I would like to adress a different question and wanted to hear what you think. Even if we established a CI/CT chain for our project - I do not feel that we can really keep up the Initial idea of CI - to always fix errors as soon as they arise. While I am really happy to be able to trace down errors via the Jenkins history and Git later - I sometimes really need some rest to think deeply about some new algorithm/implementation - or I am disturbed by other important events in the company. In these cases its hard to keep up the initial idea of CI. What is your experience ? Michael
Andy Campbell replied on : 2 of 2
Thanks Michael. You can indeed archive the artifacts in Jenkins. You can either archive the report in pdf/doc form as a standard downloadable artifact, or you can archive the html report and use something like the Jenkins HTML Publisher Plugin to serve up the html reports. However this has some gotchas because of default security policies (see here) so the report can be rendered incorrectly without some configuration. Maybe this is something we can blog about in the future. I hear your pain on keeping up with the build. I have found we've had the most success when everyone on the team committing to the project is on the same page with some of these principles, like: * Don't submit unfinished code * Work in small, iterative/incremental chunks of work * Back out changes if they break the system The last one for example, is much easier to live by if we are working in small iterative development cycles rather than very large, working-on-this-submission-for-six-months types of changes. If we have a small step forward that breaks something, lets just back it out if we failed to see a problem, and let the build be clean while we think deeply about the problem and the fix. Certainly not trivial to do, but working with this iterative small chunks of work mindset also has great benefits in overall development morale and productivity in my experience.
|
{}
|
# Special Lagrangian fibrations, instanton corrections and mirror symmetry
Friday, March 14, 2008 -
2:00pm to 4:00pm
We study the extension of mirror symmetry to the case of Kahler manifolds which are not Calabi-Yau: the mirror is then a Landau-Ginzburg model, i.e. a noncompact manifold equipped with a holomorphic function called superpotential. The Strominger-Yau-Zaslow conjecture can be extended to this setting by considering special Lagrangian torus fibrations in the complement of an anticanonical divisor, and constructing the superpotential as a weighted count of holomorphic discs. In particular we show how "instanton corrections" arise in this setting from wall-crossing discontinuities in the holomorphic disc counts. Various explicit examples in complex dimension 2 will be considered.
Speaker:
Denis Auroux
MIT
Event Location:
Fine Hall 314
|
{}
|
# Machine Learning Classification Metrics
The ML metrics table you’ll need
I haven’t talked much about machine model classification metrics aside from the confusion matrix. Let’s do a short rehash on that and add on some more useful metrics that are derived from the confusion matrix.
# Confusion Matrix
Despite its name, it’s actually pretty simple to understand. Let’s assume we have three groups, tall, medium, and short, we are trying to classify. Let’s arbitrarily label the tall group positive and any group that isn’t tall is negative. This means the medium and short groups will be labeled negative. True positives (TP) are the number of correctly classified observations e.g. the number of correctly predicted observations that are tall. True negatives (TN) are the number of correctly rejected observations, e.g. the number of correctly predicted observations that aren’t tall. False positives (FP) are the number of incorrectly classified observations e.g. the number of incorrectly predicted observations that are tall. False negatives (FN) are the number of incorrectly rejected observations, e.g. the number of incorrectly predicted observations that aren’t tall.
Confusion Matrix
Actual PositiveActual Negative
Predicted PositiveTPFP
Predicted NegativeFNTN
# Other Metrics
Alright so we got the confusion matrix down! It looks pretty helpful but it would be even more helpful if we could further quantify our model’s performance.
Machine learning model classification metric descriptions
NameDescriptionEquation
TRNumber of Correct Positive PredictionsNA
TNNumber of Correct Negative PredictionsNA
FPNumber of Incorrect Positive PredictionsNA
FNNumber of Incorrect Negative PredictionsNA
Sensitivity (Recall)Proportion of Correct Positive Predictions$\frac {TP}{TP+FN}$
SpecificityProportion of Correct Negative Predictions$\frac {TN}{TN+FP}$
AccuracyPercent of Correctly Predicted Observations$\frac {TP + TN}{TP + TN + FP + FN}$
Balanced AccuracyUnbiased Accuracy$\frac {Sens + Spec}{2}$
Precision (PPV)Proportion of True Positives$\frac {TP}{TP+FP}$
Negative Predictive Value (NPV)Proportion of True Negatives$\frac {TN}{TN+FN}$
F1Harmonic Mean of Sensitivity and PPV$\frac {2 * PPV * Sens}{PPV + Sens}$
### Sensitivity (Recall)
Sensitivity tells us how well our model did at predicting the number of observations who were actually tall. High sensitivity means our model is good finding observations that are tall and doesn’t have many false negatives.
Confusion Matrix
Actual PositiveActual Negative
Predicted PositiveTPFP
Predicted NegativeFNTN
$$Sensitivity = \frac {TP}{TP + FN}$$
### Specificity
Specificity tells us how well our model did at predicting the number of observations that weren’t tall. High specificity means our model is good finding observations that aren’t tall with few false positives.
Confusion Matrix
Actual PositiveActual Negative
Predicted PositiveTPFP
Predicted NegativeFNTN
$$Specificity = \frac {TN}{TN + FP}$$
### Accuracy
Accuracy is the number of correct predictions (TP & TN) divided by all the predictions made by the model. This will give a percentage out of 100. I generally don’t use accuracy because it becomes extremely biased when the groups you are trying to predict are equal. I.e. if tall has 60 observations, medium has 30 observations, and small has 10 observations, accuracy will be reliable in cases like this. This is a problem I run into a lot with public clinical neuroimaging datasets.
Confusion Matrix
Actual PositiveActual Negative
Predicted PositiveTPFP
Predicted NegativeFNTN
$$Accuracy = \frac {TP+ TN}{TP + TN + FP + FN}$$
### Balanced Accuracy
Balanced Accuracy is not biased by unequal groups like accuracy is. It does this by taking the average of specificity and sensitivity.$$BalancedAccuracy = \frac {Specificity + Sensitivity}{2}$$
### Precision (PPV)
Precision tells us how well model did at predicting true observations e.g. how many tall observations are there actually. High precision means our model is good at finding tall observations and doesn’t have many false positives.
Confusion Matrix
Actual PositiveActual Negative
Predicted PositiveTPFP
Predicted NegativeFNTN
$$Precision = \frac {TP}{TP + FP}$$
### Negative Predictive Value (NPV)
NPV tells us how well model did at predicting false observations e.g. how many non-tall observations are there actually. High NPV means our model is good finding observations that aren’t tall and doesn’t have many false negatives
Confusion Matrix
Actual PositiveActual Negative
Predicted PositiveTPFP
Predicted NegativeFNTN
$$NPV = \frac {TN}{TN + FN}$$
### F1
A high F1 will mean your model is good at identifying tall observations while not having many false positive or false negatives.
Confusion Matrix
Actual PositiveActual Negative
Predicted PositiveTPFP
Predicted NegativeFNTN
$$F1 = \frac {2 * PPV * Sensitivity}{PPV + Sensitivity}$$
# What Metric Matters?
What metrics you use to tell how good your model is, is dependent on the problem you’re trying to solve. For the example in this post, trying to identify tall observations, precision and recall would be good to use, or the combination of both, the F1 metric. Which in my opinion, in most cases, is the most useful metric. High precision would mean our model is good at identifying tall observations and isn’t incorrectly identifying non-tall observations as tall. High recall would mean our model is good at identifying tall observations, without incorrectly identifying tall observations. So a high F1 will mean your model is good at identifying tall observations while not having many false positive or false negatives. Again, this is very dependent on the problem you’re solving and low metrics in some of measures are acceptable in different contexts.
##### Mohan Gupta
###### Psychology PhD Student
My research interests include the testing effect, proactive inteerference, computational modelling, and artificial intelligence.
|
{}
|
## Class ChordalityInspector<V,E>
• java.lang.Object
• org.jgrapht.alg.cycle.ChordalityInspector<V,E>
• Type Parameters:
V - the graph vertex type.
E - the graph edge type.
public class ChordalityInspector<V,E>
extends java.lang.Object
Tests whether a graph is chordal. A chordal graph is a simple graph in which all cycles of four or more vertices have a chord. A chord is an edge that is not part of the cycle but connects two vertices of the cycle. A graph is chordal if and only if it has a perfect elimination order. A perfect elimination order in a graph is an ordering of the vertices of the graph such that, for each vertex $v$, $v$ and the neighbors of $v$ that occur after $v$ in the order form a clique. This implementation uses either MaximumCardinalityIterator or LexBreadthFirstIterator to compute a perfect elimination order. The desired method is specified during construction time.
Chordal graphs are a subset of the perfect graphs. They may be recognized in polynomial time, and several problems that are hard on other classes of graphs such as minimum vertex coloring or determining maximum cardinality cliques and independent set can be performed in polynomial time when the input is chordal.
All methods in this class run in $\mathcal{O}(|V| + |E|)$ time. Determining whether a graph is chordal, as well as computing a perfect elimination order takes $\mathcal{O}(|V| + |E|)$ time, independent of the algorithm (MaximumCardinalityIterator or LexBreadthFirstIterator) used to compute the perfect elimination order.
All the methods in this class are invoked in a lazy fashion, meaning that computations are only started once the method gets invoked.
Author:
Timofey Chudakov
• ### Nested Class Summary
Nested Classes
Modifier and Type Class Description
static class ChordalityInspector.IterationOrder
Specifies internal iterator type.
• ### Constructor Summary
Constructors
Constructor Description
ChordalityInspector(Graph<V,E> graph)
Creates a chordality inspector for graph, which uses MaximumCardinalityIterator as a default iterator.
ChordalityInspector(Graph<V,E> graph, ChordalityInspector.IterationOrder iterationOrder)
Creates a chordality inspector for graph, which uses an iterator defined by the second parameter as an internal iterator.
• ### Method Summary
All Methods
Modifier and Type Method Description
GraphPath<V,E> getHole()
A graph which is not chordal, must contain a hole (chordless cycle on 4 or more vertices).
ChordalityInspector.IterationOrder getIterationOrder()
Returns the type of iterator used in this ChordalityInspector
java.util.List<V> getPerfectEliminationOrder()
Returns a perfect elimination order if one exists.
boolean isChordal()
Checks whether the inspected graph is chordal.
boolean isPerfectEliminationOrder(java.util.List<V> vertexOrder)
Checks whether the vertices in the vertexOrder form a perfect elimination order with respect to the inspected graph.
• ### Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• ### Constructor Detail
• #### ChordalityInspector
public ChordalityInspector(Graph<V,E> graph)
Creates a chordality inspector for graph, which uses MaximumCardinalityIterator as a default iterator.
Parameters:
graph - the graph for which a chordality inspector to be created.
• #### ChordalityInspector
public ChordalityInspector(Graph<V,E> graph,
ChordalityInspector.IterationOrder iterationOrder)
Creates a chordality inspector for graph, which uses an iterator defined by the second parameter as an internal iterator.
Parameters:
graph - the graph for which a chordality inspector is to be created.
iterationOrder - the constant, which defines iterator to be used by this ChordalityInspector.
• ### Method Detail
• #### isChordal
public boolean isChordal()
Checks whether the inspected graph is chordal.
Returns:
true if this graph is chordal, otherwise false.
• #### getPerfectEliminationOrder
public java.util.List<V> getPerfectEliminationOrder()
Returns a perfect elimination order if one exists. The existence of a perfect elimination order certifies that the graph is chordal. This method returns null if the graph is not chordal.
Returns:
a perfect elimination order of a graph or null if graph is not chordal.
• #### getHole
public GraphPath<V,E> getHole()
A graph which is not chordal, must contain a hole (chordless cycle on 4 or more vertices). The existence of a hole certifies that the graph is not chordal. This method returns a chordless cycle if the graph is not chordal, or null if the graph is chordal.
Returns:
a hole if the graph is not chordal, or null if the graph is chordal.
• #### isPerfectEliminationOrder
public boolean isPerfectEliminationOrder(java.util.List<V> vertexOrder)
Checks whether the vertices in the vertexOrder form a perfect elimination order with respect to the inspected graph. Returns false otherwise.
Parameters:
vertexOrder - the sequence of vertices of the graph.
Returns:
true if the graph is chordal and the vertices in vertexOrder are in perfect elimination order, otherwise false.
• #### getIterationOrder
public ChordalityInspector.IterationOrder getIterationOrder()
Returns the type of iterator used in this ChordalityInspector
Returns:
the type of iterator used in this ChordalityInspector
|
{}
|
# Manuals/calci/BETAINV
BETAINV (Probability,Alpha,Beta,LowerBound,UpperBound,Accuracy,DivisionsAndDepthArray)
• is the probability value associated with the beta distribution.
• & are the values of the shape parameter.
• & the lower and upper limit to the interval of .
• gives accurate value of the solution.
• is the value of the division.
• BETAINV(), returns the inverse of the Cumulative Distribution Function for a specified beta distribution.
## Description
• This function gives the inverse value of Cumulative Beta Probability Distribution.
• It is called Inverted Beta Function or Beta Prime.
• In , is the probability value associated with Beta Distribution, and are the values of two positive shape parameters and and are the lower and upper limit.
• Normally the limit values are optional, i.e. when we are giving the values of & then the result value is from and .
• When we are omitting the values and , by default it will consider and , so the result value is from and .
• If , then .
• use the iterating method to find the value of .suppose the iteration has not converged after 100 searches, then the function gives the error result.
• This function will give the error result when
1.Any one of the arguments are non-numeric
2.Alpha or Beta 0
3.Number<LowerBound ,Number>UpperBound or LowerBound = UpperBound
4.we are not mentioning the limit values for LowerBound & UpperBound ,
by default it will consider the Standard Cumulative Beta Distribution, LowerBound = 0 and UpperBound = 1
## ZOS
• The syntax is to calculate of this function in ZOS is .
• is the probability value associated with the beta distribution.
• and are the values of the shape parameter.
• For e.g.,BETAINV(0.30987,10,18,12,16)
## Examples
1. BETAINV(0.2060381025,5,9,2,6) = 3
2. BETAINV(0.359492343,8,10) = 1.75
3. BETAINV(0.685470581,5,8,2,6) = 3.75
4. BETAINV(0.75267,1,7,7,9) = 7.25
5. BETAINV(0.5689,-2,4,3,5) = #N/A (ALPHA GREATER THAN (OR) NOT EQUAL TO 0)
## Related Videos
Beta Inverse Distribution
|
{}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.