$A.$ $\left(-\infty,-\frac{1}{2}\right) \cup(2 \sqrt{2},+\infty)$ $B.$ $\left(-\infty,-\frac{1}{2}\right) \cup(0,2 \sqrt{2})$ $C.$ $(-\infty, 0) \cup(0,2 \sqrt{2})$ $D.$ $(-\infty, 0) \cup(2 \sqrt{2},+\infty)$
D
|
{}
|
# Prime Gaps and a James Tanton problem
Early this week James Tanton, a seemingly bottomless pit of great math ideas for kids, posted the following problem on Twitter:
A nice little twist on an old problem that inspired our Family Math talk for today.
The first thing that we did was a quick review of prime numbers – what are primes, and can we list the first ten or so prime numbers? After that we talked about two types of “gaps” involving prime numbers. One interesting, and still unsolved, question about prime numbers involves the number of “twin primes”: are there infinitely many pairs of prime numbers, like 3 and 5 or 11 and 13, that differ by 2? An important step toward answering this question was made last year by Yitang Zhang of the University of New Hampshire, who discovered (incredibly) that there are infinitely many pairs of prime numbers that are less than 70,000,000 apart from each other. Not quite a difference of 2, but still amazing!
Almost the opposite question is the one posed by James Tanton – how large can the gap between consecutive prime numbers be? That is the question that we’ll focus on for the rest of this talk:
Before moving on to answer the main question, though, in the last video my younger son mentioned that there are infinitely many prime numbers. I thought it would be fun to show why that statement is true, so the next video walks through a simple proof that kids can understand. I think (but have not verified) that this proof goes back to Euclid. In the course of this proof I also mention one reason why mathematicians do not like to consider the number 1 to be a prime number.
Finally we get around to discussing Tanton’s question. We start by finding 1,000,000 consecutive non-prime integers and then move on to finding 1,000,000 consecutive odd integers that are all non-prime. It was nice to see that my younger son was able to understand how to make the leap from all integers to just odd integers.
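The classical construction behind the answer: for any $n$, the numbers $(n+1)! + 2, (n+1)! + 3, \ldots, (n+1)! + (n+1)$ are $n$ consecutive composites, since $(n+1)! + k$ is divisible by $k$ for each $k$ from 2 to $n+1$. A small sketch in Python (the function names are mine, not from the talk):

```python
from math import factorial

def consecutive_composites(n):
    """Return the start of a run of n consecutive composite integers.

    (n+1)! + k is divisible by k for every k in 2..n+1, so the n numbers
    (n+1)! + 2, ..., (n+1)! + (n+1) are all composite.
    """
    return factorial(n + 1) + 2

def is_prime(m):
    """Trial-division primality check, good enough for a demonstration."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

start = consecutive_composites(10)          # 11! + 2 = 39916802
run = [start + i for i in range(10)]
assert not any(is_prime(m) for m in run)    # ten composites in a row
```

The same idea scales to a run of 1,000,000 composites by taking 1,000,001! as the starting point; the numbers get astronomically large, but the argument doesn't change.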
I think that there are lots of neat examples from number theory that kids will really enjoy. The problem posed by James Tanton that we focused on today is a really fun one to work through with kids.
# Continuing with the Logistic Map
The last blog post talked about a fun project that I started with my older son this week which was inspired by some Steven Strogatz lecture videos. See here:
https://mikesmathpage.wordpress.com/2014/05/29/steven-strogatzs-video-lectures-and-dynamical-systems-for-kids/
We’ve continued studying the Logistic map this week and it has turned out to be as fun as I’d hoped. Today we moved from the whiteboard to the computer to study the map more carefully. Part of the fun hiding in this project for kids with a little bit of algebra background is the ability to talk about some basic transformations you need to build a simple graph on the screen.
We built our program on Khan Academy’s programming site since that’s the easiest way that I know how to share code. Here’s a short talk about the program we made:
and here’s the link to the actual program itself. We’ll play with this a little more next week (and hopefully improve the code a little). A little spaghetti code notwithstanding, this was a really fun morning:
I think that playing around with the Logistic Map is a really fun math project for kids!
# Steven Strogatz’s video lectures and Dynamical systems for kids
I’m so excited about the new project I’m working on with my son that I’m almost at a loss for words. Yesterday I saw the following note on Twitter from Steven Strogatz:
Around 20 minutes into the first lecture is a quote, or rather an “evangelical plea,” from Bob May stating something like –
“We should stop teaching only linear math to our college students and our graduate students and show them that once you allow systems to be non-linear, all bets are off and you can discover all kinds of things. It is time to stop lying to the students in our classrooms.”
This idea really struck me because it connected with a number of different things that I’ve heard over the last year – Conrad Wolfram’s talk about computers and math comes to mind, for example (see a link here: https://mikesmathpage.wordpress.com/2013/12/01/computer-math-and-the-chaos-game/ ). Anyway, we were about to end the year talking about sequences and series, but spending a few weeks playing around with the logistic map suddenly seemed like a much better idea, so off we went. I was also really excited to tackle this subject because I studied a little bit about the logistic map in high school in Mr. Waterman’s Enrichment Math class. It is always doubly exciting to be able to pass along stuff I learned from Mr. Waterman to my kids.
The first thing I did when I got home from work yesterday was sit down with my son and introduce the concept. Had I spent even two seconds thinking about what to do I probably would have started with an easier recurrence example – the Fibonacci numbers, say – but that idea didn’t occur to me until today.
In the first video we walked through the relation $x_{n + 1} = R\, x_n  (1 - x_n)$. We computed a few iterations in the case where $R = 2,$ and then finished off looking at a few points on the graph of $y = 2 x (1 - x).$ I admit that this might not be the most exciting start to the topic, but it does lay the foundation and also allowed me to double check that the math behind the quadratics and the iterations wasn’t too far over his head:
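The iteration we did on the whiteboard is easy to sketch in code, using the standard logistic form $x_{n+1} = R\, x_n (1 - x_n)$ (a sketch; the function and variable names are mine):

```python
def logistic_orbit(r, x0, steps):
    """Iterate the logistic map x -> r*x*(1-x) and return the whole orbit."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(r * orbit[-1] * (1 - orbit[-1]))
    return orbit

# For R = 2 the orbit settles onto the fixed point 1 - 1/R = 1/2.
orbit = logistic_orbit(2.0, 0.25, 50)
assert abs(orbit[-1] - 0.5) < 1e-9
```

Changing `r` to something like 3.9 and printing the orbit is a nice preview of the chaotic behavior in the later lectures: the values stay trapped in $[0, 1]$ but never settle down.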
After last night’s basic introduction we spent about 30 minutes this morning diving into the geometry of the logistic map. Seeing this connection between the geometry and the algebra in high school was absolutely amazing to me. Prior to our little five minute film we spent time studying two equations $x_{n + 1} = (1/2) * x_n * (1 - x_n)$ and $x_{n + 1} = 2 * x_n * (1 - x_n)$ and that allowed us to study one of the more complicated examples in the video:
Tomorrow we’ll look at some of the really baffling examples, though we got a little preview of that thanks to Alexander Bogomolny who saw some of my enthusiasm on twitter and alerted me to this section of his site:
For homework today my son read this page and played around with the applet – declaring it to be “awesome.”
Such a fun topic, and as I point out at the end of the second video, it really is cool to be able to introduce some relatively modern math to my son. Don’t quite know where this is going to go in the next week, but it looks like we are going to have a really fun time no matter what direction we end up going!
# Triangles in planes and spheres
Today our fun Family Math project was about geometry. We did a little playing around with triangles in the plane and triangles on the sphere. A more advanced version of this discussion would probably include some mention of Euclid’s 5th postulate.
Our first topic of discussion was parallel lines on a plane. What does it mean to be parallel? My youngest son sees parallel lines as lines that do not intersect and my oldest wants to define parallel in terms of the slope of the line.
After talking about parallel lines for a bit, we went on to talk about parallel lines and angles:
Next we go on to talk about triangles. The point of this discussion is to see that the angles in a triangle can be rearranged to make the same angle as a straight line. The main idea here is just the idea that we discussed in the last video:
Now we move on to some fun ideas about triangles. Just using some of the basic facts about angles that we talked about in the last movie plus the Pythagorean theorem, we find the area of an equilateral triangle, and also some simple properties of an isosceles right triangle:
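The computation in the video, written out: drop an altitude in an equilateral triangle with side $s$; the Pythagorean theorem gives the height, and the height gives the area.

$h = \sqrt{s^2 - \left(\tfrac{s}{2}\right)^2} = \frac{\sqrt{3}}{2}\,s, \qquad A = \frac{1}{2}\, s \cdot h = \frac{\sqrt{3}}{4}\, s^2.$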
Finally the punch line – what happens if we try to extend some of these geometric ideas beyond the plane? The easiest example to show is a sphere, and I illustrate a triangle with three right angles by drawing the picture on a softball. I love that my youngest son’s reaction was that this triangle was impossible. Ha, not impossible, you are looking at it right now!!
Feels like there are a lot of different directions to go introducing basic geometric ideas to young kids. One unexplored idea here is to show a surface where a triangle’s angles add up to less than 180 degrees. Maybe there’s a 3D printing / basic geometry project in the near future!
# Using snap cubes to talk about the 4th dimension
Had a friend from college visiting for Memorial Day and thought it would be fun to do a video explaining the 4th dimension to all of the kids in the house this weekend. This project didn’t go quite as well as I was hoping, but I think the idea here is fun. Will probably try it again in a few months.
In the first video we walk through the concept of a zero dimensional object sliding in time. Our model for a zero dimensional object is a snap cube. We talk through how a zero dimensional object sliding in time can create a one dimensional object. The concept may seem a little strange when you talk (or read) about it, but seeing the trail of the snap cube as it moves helps the idea make sense (I hope!).
One other thing that we’ll be keeping track of in each of the videos is the number of cubes we have at every stage of the sliding. With a single sliding snap cube, counting the cubes is easy – we just get 1,2,3,4,5, . . .
Next we try to make a two dimensional object by paying careful attention to the sliding zero dimensional object from the previous video. We build a two dimensional object – sort of a triangle – out of the pieces that the sliding snap cube created in the last video. In this section the number of cubes we need to build our object at each stage is 1, 3, 6, 10, 15, etc.:
Now we take the idea from the last video and apply it again to make a three dimensional object. This time we have to keep track of the shapes at every stage of the “sliding” in the last film and combine those shapes together as they slide in time. The object we create this time around is a 3-dimensional pyramid. The number of blocks at each stage is 1, 4, 10, 20, 35, etc.:
Now for the 4-D challenge. We want to apply the same idea as in the previous two videos, but there’s a little snag. We don’t have any dimensions left in our kitchen, so how are we going to put the 3D object together? Unfortunately the 4D shape we are creating here is pretty hard to visualize, but we can at least understand what the slices look like – they are exactly the shapes from the prior video! One neat thing is that even though it is difficult to understand the picture of the full shape, we actually can count the number of cubes at each stage – 1, 5, 15, 35, etc.
Finally, having built and sort of understood a 4 dimensional object, I wanted to show a neat connection this project has to Pascal’s triangle. In every video we found an interesting sequence of numbers by counting the number of blocks needed to build our object. Each of those sequences comes from a diagonal in Pascal’s Triangle! Pretty amazing that Pascal’s triangle tells us how to count blocks in 4 dimensional pyramids. The kids even speculated that other diagonals count blocks in higher dimensions. Pretty fun:
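The block counts we found really do line up with the diagonals of Pascal's triangle; here is a sketch that builds the triangle and reads the diagonals off (the function names are mine):

```python
def pascal(rows):
    """Build the first `rows` rows of Pascal's triangle."""
    tri = [[1]]
    for _ in range(rows - 1):
        prev = tri[-1]
        tri.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return tri

def diagonal(tri, d):
    """d-th diagonal: entry d of every row long enough to have one."""
    return [row[d] for row in tri if len(row) > d]

tri = pascal(10)
assert diagonal(tri, 1)[:5] == [1, 2, 3, 4, 5]       # the sliding point
assert diagonal(tri, 2)[:5] == [1, 3, 6, 10, 15]     # the triangle
assert diagonal(tri, 3)[:5] == [1, 4, 10, 20, 35]    # the 3-D pyramid
assert diagonal(tri, 4)[:4] == [1, 5, 15, 35]        # the 4-D pyramid
```

The kids' speculation checks out, too: diagonal 5 would count the blocks in a 5-dimensional pyramid, and so on.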
So, although this one didn’t go as well as I’d hoped, it was still really fun. At least it was nice to end on a really cool note with the connection to Pascal’s triangle. Will definitely try to improve on this one later.
# Ed Frenkel, the square root of 2 and i
A few weeks ago, some thoughts on twitter from Michael Person inspired this talk with my kids:
https://mikesmathpage.wordpress.com/2014/04/19/imaginary-numbers/
Last weekend I picked up the audio book version of Ed Frenkel’s “Love and Math” and Frenkel’s discussion of $\sqrt{2}$ and $i$ made me want to revisit this conversation about properties of numbers.
We started with $\sqrt{2}$. Their reaction to hearing that we were talking about $\sqrt{2}$ was to talk about why it was irrational, and since they nearly remembered the proof from last time, this proof made for an instructive start to the conversation today. It is always nice to review some of the ideas behind these simple proofs with them and watch their ability to make mathematical arguments develop.
Next we moved on to talking about $i$. They remembered a few basic properties about $i$, though my older son still thinks that it is something that math people just made up. I’m not terribly bothered by that for now, but the ideas in Frenkel’s book are giving me some new perspective on how to present some of these more advanced concepts to the boys. Hopefully this new perspective is going to lead to a much better approach to teaching them math. In any case, here’s what we said about $i$:
The next two videos are the main point of the talk today – in what ways are $\sqrt{2}$ and $i$ similar? This question is a specific example of the broad question of symmetries in math that Frenkel discusses in his book. I felt like the book walked up a couple of stairs and then hopped into an elevator to the top floor, though. The ideas were inspiring, but I was left (i) wanting more and (ii) wanting to fill in a few more details. One focus of these math conversations with my kids over the next few years will be spent on (ii). I’ll work on (i) by finishing the audio book on a drive to and from Boston this weekend!
For today, though, let’s just stick with some similarities between $\sqrt{2}$ and $i$ that Frenkel highlights:
So, without digging too deep into the details, it looks like the set of numbers that we get by adding $\sqrt{2}$ to the rational numbers has some nice, simple properties. If we add or multiply, we seem to never leave the system. Pretty neat. $i$ seems to have the same property. Frenkel makes the point that if we aren’t too bothered by $\sqrt{2}$, we shouldn’t be that bothered by $i$. This is a nice point, obviously, and a fun idea to share with kids. I really loved that my older son made the connection between $i$ and $x$ from algebra. Only one step away from polynomial rings . . . ha ha!
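One way to make the "never leave the system" observation concrete: represent $a + b\sqrt{2}$ as a pair of rationals $(a, b)$ and check that sums and products stay in that form. The pair bookkeeping below is my own sketch, mirroring the familiar $a + bi$ rule for $i$:

```python
from fractions import Fraction as F

def mul_sqrt2(p, q):
    """(a + b*sqrt(2))(c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2),
    using sqrt(2)*sqrt(2) = 2 -- still a pair of rationals."""
    (a, b), (c, d) = p, q
    return (a * c + 2 * b * d, a * d + b * c)

def mul_i(p, q):
    """(a + bi)(c + di) = (ac - bd) + (ad + bc)i, using i*i = -1."""
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

x = (F(1, 2), F(3))       # 1/2 + 3*sqrt(2)
y = (F(2), F(-1, 4))      # 2 - (1/4)*sqrt(2)
assert mul_sqrt2(x, y) == (F(-1, 2), F(47, 8))      # closed under multiplication
assert mul_i((F(0), F(1)), (F(0), F(1))) == (F(-1), F(0))   # i^2 = -1
```

The two multiplication rules are structurally identical: in each case the only "new" fact used is what the square of the adjoined symbol equals, which is exactly the symmetry Frenkel is pointing at.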
So, definitely on the theoretical side, but definitely a fun morning. Looking forward to plucking a few more ideas out of “Love and Math” to share with the boys.
# 21 people in and around women’s ultimate you should meet
Sorry, another one not about math with my kids – maybe I should start an ultimate blog. Oh well not today . . .
In response to Skyd’s article –
http://skydmagazine.com/2014/05/21-influential-people-ultimate-today/
and in an effort to elaborate a little on some thoughts I had in a FB conversation, here’s my list of 21 people in and around women’s ultimate that I think you should meet. I gave myself an hour to write this so that it wouldn’t be too long and rambling. Also just wanted to try to come up with some ideas off the top of my head. Oh, and since Gwen, Matty, and Michelle are in the Skyd article, I’ll leave them off this list on purpose. To the other 4000 people I leave off accidentally, sorry 🙂
I have not yet met all of these people, but I hope to.
(1) Robin Knowler – 10 years coaching one of the top programs in the country, so she’s got plenty to teach you. Go meet her and ask her how to be a better teammate / leader / coach / person / or whatever. I’d pick “coach” from that list and then just listen.
(2) Lou Burruss – I first met him in 1997 when he would fly back from Seattle to coach the Carleton women. Amazing dedication to the sport and hence one of the most successful coaches of all time. Ask him about moving to set up the next pass or how to play a 2 handler zone O. Also read “The Inner Game of Tennis” in advance of meeting him.
(3) Suzanne Fields – part of the first class inducted into the Ultimate Hall of Fame, and one of the speakers at this year’s induction. I’m always a little nervous around legends, but if I would have had the courage to talk to her at this year’s induction I probably would have asked something silly like if she could believe she was standing there watching Chris O’Cleary and Nancy Glass being inducted into the hall of fame.
With the passage of time I’d probably ask her if she, Kelly Waugh, Katherine Greenwald, and Katie Shields played Heather, Shannon, Mia, and Emily in a game of goaltimate, who would win?
(4) Chris O’Cleary – see above. One of this year’s inductees into the Hall of Fame and another legend in the game. Seemed like everyone who ever played for Ozone was there to cheer her on at the induction. An amazing leader and player. Ask her how to build a team.
(5) Nancy Glass. Also one of this year’s inductees. Another absolute legend and practically royalty in Chicago ultimate. Ask her about the tension between getting the sport to the “next level” like the Olympics or something and building the sport through grass roots growth.
(6) Jenny Fey. One of the best players of the last decade who just came off of a national championship with Scandal. Ask her how she sees the field and if she likes handling or cutting better. Also, do me a favor and figure out how to guard her because I’ve not been able to do that.
(7) Cara Crouch. Two time World Games team member, 2005 Callahan winner, and endless giver back to the game:
Ask her about the difference between the 2009 and 2013 World Game teams. Seems like the two teams had totally different vibes – what worked well and what would she have changed looking back?
(8) Dominique Fontenette – Stanford, Fury, Godiva, Brute Squad, World Games, Riot. As respected a player as there ever has been. Ask her about the influence that Molly Goodwin had on her. Sprout, too. Also, ask her to teach you to pull:
(9) Rohre Titcomb – One of the greatest minds in the game. I’ll never forget seeing her play for the first time – it left me speechless. Ask her to come to Atlanta and play a round of disc golf with Chris O’Cleary, ’cause that would be amazing.
(10) Alex Snyder – Multiple time national and world champion. One of the things I will always remember is how different the 2013 US World Games team played during the one game she missed. Ask her what she learned about the game coaching Wisconsin.
(11) Robyn Wiseman – A great young leader. Ask her what she learned taking over coaching Wisconsin from Alex.
(12) Enessa Janes – I was so happy to get the chance to meet her in person at the 2013 US Open. Played the single greatest half of ultimate that I have ever seen. Ask her about the 2008 finals.
(13) Katy Craley – National champion at Oregon and now a key player for Riot. Ask her about the transition from college to club. Ask her about giving back to the ultimate community in South America.
(14) Ren Caldwell – The trainer for everyone within 300 miles of Seattle, I assume. Ask her about the difference between training college athletes and club athletes.
(15) Claire Chastain – 2013 Callahan winner / U23 world champion and one of the best players I’ve ever seen coming out of college. Ask her how her mentors impacted her ultimate career.
(16) Peri Kurshan – leader on the field with Brute Squad and Godiva, and off the field with USA Ultimate. Current Nightlock coach. Ask her about the transition from playing club to coaching club, and about the similarities between what Brute Squad looked like originally and what Nightlock looks like now.
(17) Erika Swanson – amazing player on both coasts and on the US Beach worlds team. Ask her about how she balanced playing top level club ultimate with MIT and Caltech educations. Ask her about how to defend the top cutters.
(18) Samantha Salvia – I’ve never met her, but her story is incredible. Ask her about transitioning from other sports to ultimate, and ask her to write some more!
|
{}
|
# An object with a mass of 7 kg is pushed along a linear path with a kinetic friction coefficient of u_k(x)= 4+secx . How much work would it take to move the object over x in [(pi)/12, (pi)/6], where x is in meters?
Aug 31, 2017
The work is $= 91.4 J$
#### Explanation:
We need
$\int \sec x \mathrm{dx} = \ln \left(\tan x + \sec x\right) + C$
The work done is
$W = F \cdot d$
The frictional force is
${F}_{r} = {\mu}_{k} \cdot N$
The normal force is $N = m g$
The mass is $m = 7 k g$
${F}_{r} = {\mu}_{k} \cdot m g$
$= 7 \cdot \left(4 + \sec x\right) g$
The work done is
$W = 7 g {\int}_{\frac{1}{12} \pi}^{\frac{1}{6} \pi} \left(4 + \sec x\right) \mathrm{dx}$
$= 7 g \cdot {\left[4 x + \ln \left(\tan x + \sec x\right)\right]}_{\frac{1}{12} \pi}^{\frac{1}{6} \pi}$
$= 7 g \left(\frac{\pi}{3} + \ln \left(\tan \left(\frac{\pi}{6}\right) + \sec \left(\frac{\pi}{6}\right)\right) - \ln \left(\tan \left(\frac{\pi}{12}\right) + \sec \left(\frac{\pi}{12}\right)\right)\right)$
$= 7 g \left(1.332\right)$
$= 91.4 J$
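Note that $\int \left(4 + \sec x\right) \mathrm{dx} = 4x + \ln\left(\tan x + \sec x\right) + C$: the $4x$ term contributes $\frac{\pi}{3}$ to the definite integral and must not be dropped. A numerical sanity check (a sketch, taking $g = 9.8\ \text{m/s}^2$):

```python
from math import pi, cos

def integrand(x):
    # W = integral of F_r dx, with F_r = mu_k * m * g = 7 * 9.8 * (4 + sec x)
    return 7 * 9.8 * (4 + 1 / cos(x))

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

W = simpson(integrand, pi / 12, pi / 6)
assert abs(W - 91.4) < 0.1   # joules
```

The numeric value agrees with $7g\left(\pi/3 + 0.2845\right) \approx 91.4\ J$.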
|
{}
|
## 13 September 2014
### Part 2.1: Some R cool stuffs for motivation (R for beginners)
Image from: www.r2-d2builder.com
After finishing Part 2: Starting R, I thought we’d take a break and look at some cool stuff, to give you (somewhat) more motivation to learn R.
I’ve posted several articles earlier:
1. A bit about R Markdown: R Markdown is a markup language. It embeds all your code and plots in one readable document, without copy-pasting back and forth between R and a word processor. The Rmd format is text based, and originally it could only be converted to HTML.
But with the newest version of RStudio you can write a .docx file without having to run MS Word at all. Just use R.
Then copy the entire post and paste it verbatim into your R Markdown window. Then click Knit Word (on the upper toolbar) and you’ll see the magic.
2. Linear regression: Excel and R: In this post you’ll find the main difference between point-and-click software like MS Excel and command-line software like R. In this case I talked about how to make a linear regression using both.
3. How to make a QQ plot in R: unlike other posts about making a plot, in this post I showed the transformation from the basic plotting command to the tweaking needed to add some useful information. I’m not using any additional packages in this post, such as lattice or ggplot.
|
{}
|
# String structures and modular invariants
Spin structures and their higher analogues play important roles in index theory and mathematical physics. In particular, Witten genera for String manifolds have very nice geometric implications. As a generalization of the work of Chen-Han-Zhang (2011), we introduce general Stringc structures based on the algebraic topology of Spinc groups. It turns out that there are infinitely many distinct universal Stringc structures, indexed by the infinite cyclic group. Furthermore, we can also construct a family of so-called generalized Witten genera for Spinc manifolds, the geometric implications of which can be exploited in the presence of Stringc structures. As in the un-twisted case studied by Witten, Liu, etc., in our context there are also integrality, modularity, and vanishing theorems for effective non-abelian group actions. We will give some examples to illustrate our new structures and theorems. For instance, we will show that some homotopy complex projective spaces with prescribed stable almost complex structures do not admit non-abelian Lie group actions which preserve the almost complex structures.
This is joint work with Haibao Duan and Fei Han.
|
{}
|
## Precalculus (6th Edition) Blitzer
$(x+3)^2=-8(y-1)$, vertex $(-3,1)$, focus $(-3,-1)$, directrix $y=3$, see graph.
Step 1. Rewriting the equation as $x^2+6x+9=-8y-1+9$ or $(x+3)^2=-8(y-1)$, we have $4p=-8$ and $p=-2$ with the parabola opening downwards and vertex $(-3,1)$. Step 2. We can find the focus at $(-3,-1)$ and directrix as $y=3$ Step 3. We can graph the parabola as shown in the figure.
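A quick numeric check of the focus/directrix claims (a sketch): every point on $(x+3)^2=-8(y-1)$ should be equidistant from the focus $(-3,-1)$ and the directrix $y=3$.

```python
from math import hypot

focus = (-3.0, -1.0)
directrix_y = 3.0

for x in [-7.0, -3.0, 0.0, 2.5]:
    y = 1 - (x + 3) ** 2 / 8              # solve (x+3)^2 = -8(y-1) for y
    d_focus = hypot(x - focus[0], y - focus[1])
    d_directrix = abs(directrix_y - y)
    assert abs(d_focus - d_directrix) < 1e-9
```

For example, at $x=-7$ the point is $(-7,-1)$: distance 4 from the focus and 4 from the line $y=3$.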
|
{}
|
# How can I prove the Wolfram Alpha result $\int_0^{\infty} \frac{(\sin x)^2}{x^2-\pi ^2} dx=-\frac1{2\pi}$?
Having played around with Wolfram Alpha, I find the following improper integral:
$$\int_0^{\infty} \frac{(\sin x)^2}{x^2-\pi ^2} dx=-\frac1{2\pi}\tag{1}$$
But I don't know how to prove this result. Without the singularity at $$x=\pi$$, there have been several questions about evaluating $$\int_0^\infty\frac{\sin x}{x}\,dx$$, which seems not to be very related to the one here. See for instance Evaluating the integral $\int_0^\infty \frac{\sin x} x \,\mathrm dx = \frac \pi 2$?
How can I prove (1)?
• Cauchy and Jordan. – Jon Apr 4 '18 at 13:29
• I checked it with WA. Once I got 0. Another time I got $-\frac{1}{2\pi}$. Really weird. – trancelocation Apr 4 '18 at 18:13
Actually $$\mathcal{J}=\int_{0}^{+\infty}\frac{\sin^2 x}{x^2-\pi^2}\,dx = \color{red}{0}.$$ Indeed by parity $$\mathcal{J}=\frac{1}{4}\int_{-\infty}^{+\infty}\frac{1-\cos(2x)}{x^2-\pi^2}\,dx =\frac{1}{4}\text{Re}\int_{-\infty}^{+\infty}\frac{1-e^{2ix}}{x^2-\pi^2}\,dx$$ and the meromorphic function $\frac{1-e^{2ix}}{x^2-\pi^2}$ satisfies the bound needed for the ML lemma and is actually holomorphic: the zeros of $1-e^{2ix}$ at $x=\pm\pi$ cancel the poles of the denominator.
By considering a semicircle contour in the upper half-plane, centered at the origin, having radius $R\to +\infty$ and with two small bulges (with radius $\varepsilon\to 0^+$) around $x=-\pi$ and $x=\pi$, it turns out that the original integral equals one fourth of the real part of the residue of some function... at no point!
Non-believers may try the Mathematica $\text{NIntegrate}$ command, or the alternative, more real-analytic approach
$$\sum_{k\geq 0}\frac{1}{(x+k\pi)^2-\pi^2} = \frac{\pi-2x}{\pi x (\pi-x)}\quad\Longrightarrow\quad \mathcal{J}=\int_{0}^{\pi}\underbrace{\frac{\pi-2x}{2\pi x(\pi-x)}\sin^2(x)}_{\text{odd with respect to }x=\frac{\pi}{2}}\,dx = 0.$$
A curious bug of WA; the lesson, probably, is not to trust machines too much.
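For the non-believers without Mathematica, a rough numerical check in Python (a sketch): the partial integral over $[0,R]$ should be near $0$, nowhere near $-\frac{1}{2\pi}\approx-0.159$; the tail beyond $R$ is $O(1/R)$.

```python
from math import sin, pi

def f(x):
    # sin^2 x has a double zero at x = pi, so the singularity is removable;
    # guard the one grid point that may land exactly on pi.
    if abs(x - pi) < 1e-9:
        return 0.0
    return sin(x) ** 2 / (x * x - pi * pi)

def trapezoid(g, a, b, n):
    h = (b - a) / n
    return h * (g(a) / 2 + sum(g(a + i * h) for i in range(1, n)) + g(b) / 2)

R = 400 * pi
partial = trapezoid(f, 0.0, R, 250_000)
assert abs(partial) < 5e-3                      # consistent with J = 0
assert abs(partial - (-1 / (2 * pi))) > 0.1     # rules out WA's -1/(2*pi)
```

The partial integral comes out around $-1/(2R)\approx -4\cdot10^{-4}$, exactly the size of the neglected positive tail, so the total is consistent with $\mathcal{J}=0$.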
• For the sake of completeness, Maple yields the right answer. – Jon Apr 4 '18 at 13:52
• I have a question , would it be useful to use the taylor series expansion of $\sin(x)$ centered at $\pi$? – The Integrator Apr 4 '18 at 16:10
• @pranavB23: no. It is not termwise-integrable over $\mathbb{R}$ or $\mathbb{R}^+$, and the situation is the same if you plug in the factor $\frac{1}{\pi^2-x^2}$ (which has simple poles at $x=\pm\pi$, by the way). – Jack D'Aurizio Apr 4 '18 at 16:12
• @JackD'Aurizio oh ok , i get it now . Thank you :) – The Integrator Apr 4 '18 at 16:31
• @Jon $\texttt{Mathematica 10.0.0.0}$ (in a MacBook Pro) yields the right answer too. – Felix Marin Apr 4 '18 at 21:06
|
{}
|
# Right skewed asymmetric Gaussian-like distribution
I am trying to find a possible candidate as a fitting function for a distribution that looks like the following
I know that this isn't a straightforward question, but I would like a simple function to start working with, before starting to optimize the procedure.
• do you have grounds to believe that the apparent skewness is not just small-sample variability? Is "entries" = sample size? – Christoph Hanck Dec 23 '15 at 12:20
• @ChristophHanck : Thank you very much for your comment. Actually I expect this skewness. This histogram is the projection of a 2D histogram so I can see what I expect. And entries is actually the integral of the distribution. – Thanos Dec 23 '15 at 12:23
There is a family of distributions called the skew normal which includes an additional parameter for skewness. The normal distribution is a special case of the skew normal. Note that this distribution has limited flexibility on how much skewness there can be, with the skewness bounded between $-1$ and $1$ across the range of parameter values. For more info, read here.
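A sketch of the skew-normal density $f(x) = 2\,\varphi(x)\,\Phi(\alpha x)$, built from the standard normal pdf and cdf using only the standard library (the shape parameter $\alpha$ controls the skewness; $\alpha = 0$ recovers the normal):

```python
from math import exp, erf, sqrt, pi

def skew_normal_pdf(x, alpha):
    """Azzalini's skew normal: 2 * phi(x) * Phi(alpha * x)."""
    phi = exp(-x * x / 2) / sqrt(2 * pi)          # standard normal pdf
    Phi = 0.5 * (1 + erf(alpha * x / sqrt(2)))    # standard normal cdf
    return 2 * phi * Phi

# alpha = 0 recovers the standard normal density at 0.
assert abs(skew_normal_pdf(0.0, 0.0) - 1 / sqrt(2 * pi)) < 1e-12

# The density still integrates to 1 (crude Riemann sum on [-10, 10]).
h, total, x = 0.001, 0.0, -10.0
while x < 10.0:
    total += skew_normal_pdf(x, 4.0) * h
    x += h
assert abs(total - 1.0) < 1e-3
```

For a first fit you can add location and scale parameters by replacing `x` with `(x - mu) / sigma` and dividing the result by `sigma`, then tune `mu`, `sigma`, and `alpha` against the histogram.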
|
{}
|
you are viewing a single comment's thread.
Hey guys so I was wondering does anyone ever not use PdfLaTex these days
In some sense, people who are using "regular" LaTeX are using pdfLaTeX.
    ftpmaint@millstone:~$ whereis latex
    latex: /usr/bin/latex /usr/bin/X11/latex /usr/share/man/man1/latex.1.gz
    ftpmaint@millstone:~$ ls -l /usr/bin/latex
    lrwxrwxrwx 1 root root 6 Nov 24 2011 /usr/bin/latex -> pdftex
It is just a question of putting the program in some compatibility mode.
I believe that Barbara has said that the AMS still runs regular LaTeX (not direct pdfLaTeX) in many cases because they have an established workflow and disrupting it to convince authors, editors, referees, etc. to switch would be hard.
On the other end though, as some folks have already mentioned, lots of people work with xelatex, lualatex, not to mention non-latex's such as plain, ConTeXt, eplain, etc. I recently wrote my lab manual in xelatex and felt that what it does, it does well. I needed to output straight to PDF for that project.
Well I'll be. After following all the symlinks it turns out I'm still compiling with pdftex when I thought it was "regular" latex. Interesting.
I believe that Barbara has said that the AMS still runs regular LaTeX (not direct pdfLaTeX) in many cases because they have an established workflow and disrupting it to convince authors, editors, referees, etc. to switch would be hard.
That's exactly my answer, many journals don't use pdfLaTeX (you never know what a collaborator may be using, either). For that reason I always produce my figures in EPS and keep EPS/PDF versions alongside each other in my figures directory so LaTeX/pdfLaTeX can select the appropriate one.
On the other end though, as some folks have already mentioned, lots of people work with xelatex, lualatex, not to mention non-latex's such as plain, ConTeXt, eplain, etc.
So many...
I wonder if there is a epublatex that outputs e-books. Or if someone is making one
Or if someone is making one
As I understand it, a person who wanted to move past current e-pub efforts would have to add an html-outputting engine to TeX (in some ways analogous to what pdfTeX did with PDF) and would have to add hooks to the LaTeX macro code (probably hooking into LaTeX3, I would think).
Despite recently trying to find a person who might be interested in that job, I know of no one currently doing it.
|
{}
|
Which magnifying visor do you recommend?
Merlysys
Joined Nov 17, 2013
23
I searched the posts but not much info.
I need a magnifying visor for general electronics work like soldering on PCBs.
I am nearly 50 so eyesight not perfect.
The lighted type would be best. Ebay stuff OK but most of them are for jewellery work so I am not sure if their magnification is also OK for our type of work.
nigelwright7557
Joined May 10, 2008
532
I just use a pair of magnifying glasses.
I just tip them over my head when I want to see normally.
wayneh
Joined Sep 9, 2010
16,129
I use one of these. Very cheap and crappy, but still very helpful. I also use a large handheld magnifying glass with a bright work light when I'm trying to, for instance, examine a PCB or read a part number.
elec_mech
Joined Nov 12, 2008
1,500
Welcome to AAC.
http://www.amazon.com/Donegan-OSC-BL...ords=optisight
Got two sets of these. Keep one in the truck.
Light weight, cheaply built, but do what I want them to.
Fit over glasses. 3 sets of lenses.
So light on top of my head, often forget I have them on.
+1
I've used several types. Below are simply my experiences, others may have different luck.
• Over-the-head lighted - heavy once you add the batteries, the light is directional, so if it isn't perfectly directed, it's less than helpful, and they are made with a hood of sorts, so they block light from above.
• Magnifying glass - on helping hands or an adjustable fluorescent light - you have to adjust the lens so you are looking directly through the center head on - if you tilt your head a smidge to side or back they are also less than helpful.
• Non-lighted over-the-head - these, in my opinion are simply the best. You move your head, not a magnifying glass and not your work to adjust. The open design allows you to use light from the room, I've found working with fluorescent over head lights is best. I wear glasses to correct for 80/20 vision and these are simply excellent. You can also find them locally in the U.S. at certain craft stores.
• Only thing better is stereo microscope, but they aren't cheap and take up a lot of room.
MrChips
Joined Oct 2, 2009
19,780
I use one similar to this, cost $10.
Edit: Others have already recommended this. I keep one at every place I hang out, home and work, so I don't have to carry one around.
inwo
Joined Nov 7, 2013
2,419
I use one similar to this, cost $10.
Edit: Others have already recommended this. I keep one at every place I hang out, home and work, so I don't have to carry one around.
Hey, that's a picture of me!
MrChips
Joined Oct 2, 2009
19,780
Are you serious?
inwo
Joined Nov 7, 2013
2,419
THE_RB
Joined Feb 11, 2008
5,438
Get one with glass lenses, not plastic. They resist scratching much better and can be cleaned with alcohol, which may damage plastic ones.
A ray is a portion of a line: it has one endpoint (also called its initial point or starting point) and extends endlessly in one direction. Because it goes on forever, a ray has no measurable length, and in geometry it has zero width; if you draw a ray with a pencil, only the pencil mark has a measurable width. A ray with endpoint A that passes through B is written $\overrightarrow{\rm AB}$ and read "ray AB". The endpoint is always named first, so ray AB is not the same as ray BA; to name a ray, give its endpoint and then any other point it passes through.
Everyday pictures of a ray: rays of sunlight start at the sun and travel outward in one direction, and a torch beam or a one-way road can be thought of in the same way. (Strictly speaking, a physical light beam is only an idealization of a ray, since it has a finite beginning at its source.)
Opposite rays share a common endpoint and point in opposite directions: if X, Y, and Z are collinear points with Y between X and Z, then ray YX and ray YZ are opposite rays.
Two rays with the same endpoint form an angle, measured in degrees; the common endpoint is the vertex. An angle bisector is a ray that cuts an angle exactly in half, making two equal angles. When three or more lines, rays, or segments meet at a single point, they are said to be concurrent; in a triangle, for example, the three medians are concurrent, as are the three perpendicular bisectors.
In coordinate geometry, an inequality can be represented as a ray on the number line: x ≥ −1 corresponds to the ray whose endpoint is at −1 and which extends in the positive direction.
Two unrelated senses of the word: Visible Ray is a 2.0/2.1 South Korean Extreme Demon mega-collaboration in Geometry Dash, hosted by KrampuX and verified by Seturan on August 15, 2018; it is currently #18 on the Official Geometry Dash Demon List, above The Yandere (#19) and below Kowareta (#17). In biology, a ray is any of an order (Rajiformes) of usually marine cartilaginous fishes, such as stingrays and skates, with flattened bodies and enlarged pectoral fins fused with the head.
Related terms: a line is a connected set of points extending forever in both directions, and a line segment is the portion of a line between two points, with a definite beginning and a definite end; a point has no size, and its only function is to show position. Rays are usually introduced in 3rd or 4th grade, together with points, lines, segments, and angles.
## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$(f^{-1}\circ f)(x)=x$ and $(f\circ f^{-1})(x)=x$, so the given function is the inverse of f.
If a function $f$ is one-to-one, then $f^{-1}$ is the unique function for which $(f^{-1}\circ f)(x)=f^{-1}(f(x))=x$ and $(f\circ f^{-1})(x)=f(f^{-1}(x))=x.$ --- $(f^{-1}\circ f)(x)=f^{-1}(f(x))=(f(x))^{3}+4$ $=(\sqrt[3]{x-4})^{3}+4$ $=x-4+4$ $=x$ $(f\circ f^{-1})(x)=f(f^{-1}(x))=\sqrt[3]{f^{-1}(x)-4}$ $=\sqrt[3]{x^{3}+4-4}$ $=\sqrt[3]{x^{3}}$ $=x$ $(f^{-1}\circ f)(x)=x$ and $(f\circ f^{-1})(x)=x$, so the given function is the inverse of f.
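As an illustrative aside (not part of the textbook solution), the two compositions can also be spot-checked numerically in Python:

```python
# f(x) = cbrt(x - 4) and its claimed inverse g(x) = x**3 + 4
def f(x):
    # real cube root, handling x < 4 as well
    return (x - 4) ** (1 / 3) if x >= 4 else -((4 - x) ** (1 / 3))

def g(x):
    return x ** 3 + 4

for x in [-3.0, 4.0, 5.0, 12.0]:
    assert abs(g(f(x)) - x) < 1e-9  # (f^{-1} o f)(x) = x
    assert abs(f(g(x)) - x) < 1e-9  # (f o f^{-1})(x) = x
```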
# Relative Paths in Compilation Database
I am calling runClangTidy using a JSONCompilationDatabase that I generate from a string. Everything works perfectly until I have a relative path.
With the following database I use "D:\CMakeTest\bld\..\src\main.cpp" as the file to open:
[
{
"directory": "D:\CMakeTest\bld\",
"command" : "D:/llvm/build/Debug/bin/clang.exe -I\"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\" -I\"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\atlmfc\include\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\um\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\shared\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\winrt\" -I\"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\" -I\"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\atlmfc\include\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\um\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\shared\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\winrt\" -DWIN32 -D_WINDOWS -D_DEBUG -DCMAKE_INTDIR=\"Debug\" ..\src\main.cpp",
"file" : "..\src\main.cpp"
},
]
This gives me an output of "Error while processing D:\CMakeTest\bld\..\src\main.cpp."
When I use the following database with "D:\CMakeTest\src\main.cpp" as the file to open, everything works.
[
{
"directory": "D:\CMakeTest\src\",
"command" : "D:/llvm/build/Debug/bin/clang.exe -I\"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\" -I\"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\atlmfc\include\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\um\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\shared\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\winrt\" -I\"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\include\" -I\"C:\Program Files (x86)\Microsoft Visual Studio 12.0\VC\atlmfc\include\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\um\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\shared\" -I\"C:\Program Files (x86)\Windows Kits\8.1\Include\winrt\" -DWIN32 -D_WINDOWS -D_DEBUG -DCMAKE_INTDIR=\"Debug\" main.cpp",
"file" : "main.cpp"
},
]
Is there something that I am doing wrong? The only changes between the two compilation databases are the directory and file entries. Both should refer to the same absolute path.
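For what it's worth, the two databases should indeed name the same file once the relative component is resolved; a quick editorial check using Python's ntpath module (which applies Windows path rules on any platform):

```python
import ntpath  # Windows path semantics, importable on any OS

# Both spellings from the two databases normalize to the same file:
p1 = ntpath.normpath(r"D:\CMakeTest\bld\..\src\main.cpp")
p2 = ntpath.normpath(r"D:\CMakeTest\src\main.cpp")
assert p1 == p2 == r"D:\CMakeTest\src\main.cpp"
```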
I think this is a known problem. Patches very welcome
What appears to be the exact problem? Are relative paths just not properly implemented, or is the problem more specific than that?
I haven’t had time to look.
I finally had the opportunity to look into this.
CommandLineArgumentParser in JSONCompilationDatabase.cpp deals with escapes by discarding all backslash characters. I imagine that this works pretty well on Linux, not perfectly, but well enough.
Windows uses backslash as a separator in file paths, so this breaks most file paths on Windows.
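A simplified model of that behavior (hypothetical code, not the actual CommandLineArgumentParser, but it discards backslashes the same way):

```python
def unescape_posix_style(s: str) -> str:
    """Treat every backslash as an escape: drop it and keep
    the character that follows (simplified POSIX-style model)."""
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append(s[i + 1])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)

# Harmless on a POSIX-style path, destructive on a Windows one:
assert unescape_posix_style("../src/main.cpp") == "../src/main.cpp"
assert unescape_posix_style(r"..\src\main.cpp") == "..srcmain.cpp"
```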
How would one go about fixing this so it works on both Windows and Linux? Obviously the different escape sequences need to be implemented. What I really mean is, how do you make it so that it uses the correct set of escapes depending on platform?
In Windows , forward slashes in source files are usable :
include '/a/b/....' ( relative paths can be used such as ../../../ etc. )
( I am using this form , at least since ten years because I am compiling
the same program sources in Windows and Unix without any change , in
Fortran , Pascal , and C )
but , in Console , only back slashes are accepted :
dir \a\b\... /S
ren \a\b\d\g.c h.c
because , forward slashes are used for command line options .
I did not try forward slashes in quoted form : You may try it ( I do not
have any Windows at present ) :
dir "/a/b/..." /S
Thank you very much .
Mehmet Erol Sanliturk
You are correct. Forward slashes can be substituted for backslashes in many cases, but many compilation databases will be generated with backslashes. Either we mandate that only forward slashes can be used, or CommandLineArgumentParser must be fixed.
I am willing and able to make the fixes, but I don’t know how to test for the platform so that the correct set of escapes are used.
You are correct. Forward slashes can be substituted for backslashes in
many cases, but many compilation databases will be generated with
backslashes. Either we mandate that only forward slashes can be used, or
CommandLineArgumentParser must be fixed.
I am willing and able to make the fixes, but I don't know how to test for
the platform so that the correct set of escapes are used.
In Unix world , back slash is used for escape character .
In Windows , in file names , back and forward slashes are usable .
Their intersection is forward slash .
My opinion is that enforcing forward slashes in file names is more suitable
.
Otherwise , always will be necessary to use escape back slash for Windows
file names which is not a convenient form ( In Windows always will be
necessary to clear additional back slashes before submitting it to
operating system routines ) .
Fixing CommandLineArgumentParser also may be a very convenient action : For
file names , forward and backward slashes may be treated equivalent without
using escape back slash .
For back slash used file names , when they are used in Unix environments ,
they may be directly converted to forward slashes .
Lazarus and Free Pascal is using this form since , I think , their starting
time . When this form is used ( forward or back slashes ) , conversion of
them in Unix by the compiler is the most convenient way ( in Windows there
is no any need to such a conversion in sources ) .
I did not use Visual Studio , but , I am compiling sources in Unix (
FreeBSD , Linux ) compilable in Visual Studio without any trouble for
forward slashes ( means Visual Studio is able to accept these , but please
test this for exact decision ) .
Mehmet Erol Sanliturk
Unfortunately this approach can be a significant burden on Windows, since it isn’t always obvious from programmatic inspection what is and isn’t a file path.
Unfortunately this approach can be a significant burden on Windows, since
it isn't always obvious from programmatic inspection what is and isn't a
file path.
Therefore , enforcing forward slashes in file names in every environment is
more suitable .
Mehmet Erol Sanliturk
How do you do that? Generating the compilation database could be extremely difficult, since you often will have mixed forward and back slashes in the system used to generate it.
How do you do that? Generating the compilation database could be
extremely difficult, since you often will have mixed forward and back
slashes in the system used to generate it.
Theoretically , it may seem difficult , but in reality , it is not ,
because each program is compiled in an operating system , and in compilers
some variables are defined in compilers or may be defined by command line
parameters such as name of the operating system ( Unix , Linux , Windows ,
etc . ) ,
and bit size of operating system ( 32 , 64 , etc. ) .
If these are not defined which I am seeing in some situations , these are
the missing parts which only causes inconvenience they need to be fixed .
Assume that the above parameters are properly defined .
By using ifdef statements , it is possible to use properly suitable program
segments .
For the file names , whether a name is related to a file name or not is
apparent from its context .
As I said previously , in Unix always forward slashes are used , therefore
there is no problem in Unix .
In Windows , forward slashes can be used . Assume that backward slashes are
used . Again , this is not a problem , because a routine may check
characters of a file name and make conversions if necessary . For the file
names , it is not necessary to use escape characters for the back slashes .
The different situation is the Console mode applications in Windows .
When file names are used within " " marks , forward slashes CAN be used :
dir "../../*.c" /S
is a VALID console mode statement : If quotation marks are NOT used , this
statement can only be written as
dir ..\..\*.c /S
Since quotation marks can enclose file names in Windows ( this is
compulsory for file names containing blank characters ) , there is no any
problem to enforce forward slashes everywhere .
Actually , the problem is caused by the programming techniques used .
For example , I am seeing "configure" scripts in Linux : They are selecting
32 bits include directories
without checking bit size of the operating system .
It is very simple : In a shell script , only needed statement is
uname -s
is giving Linux ,
uname -m
is giving bit size : x86_64 ,
uname -a
is giving everything .
I could not understand what is difficulty . If you explain it more
extensively , perhaps it may be possible to suggest a more useful solution .
Mehmet Erol Sanliturk
Why not escape the backslashes when writing the compilation db?
So, escape the backslashes once for JSON and a second time for Clang?
If this is going to be necessary on Windows, it probably ought to be added to the documentation for the compilation database file format.
So, escape the backslashes once for JSON and a second time for Clang?
Well, a second time for “shell”.
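To make the two escaping layers concrete, a small editorial sketch (the "shell unescape" here is a naive stand-in, not Clang's actual parser):

```python
import json

# What would have to sit on disk if a Windows path were escaped
# once for JSON and once for shell-style parsing:
on_disk = r'"..\\\\src\\\\main.cpp"'            # 4 backslashes per separator
after_json = json.loads(on_disk)                 # 2 backslashes per separator
after_shell = after_json.replace("\\\\", "\\")   # naive shell unescape
assert after_json == r"..\\src\\main.cpp"
assert after_shell == r"..\src\main.cpp"
```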
A different idea that has come up was to just add (optional) 'arg' fields to the json, instead of the shell escaping, which seems much more pleasant to work with across platforms, like this:
{
directory: '/my/path'
command: 'just the exec'
argument: '-c'
argument: 'filename'
}
Patches for that would be welcome
I like that a lot better. I will look at what that is going to take.
If it goes that direction, will it be necessary to also support the older format? I know there are tools, like CMake, that can produce this file format.
Yes, I think it’s easy enough to make compatible
Also, I think we want to introduce a new keyword for the binary.
So both of these should work:
{
directory: "/my/path"
binary: "path to binary"
argument: "-c"
argument: "filename"
}
or the shell-escaped command inside the json escaped string:
{
directory: "/my/path"
command: "path\ to\ binary -c filename"
}
The thing to note is that both shell escaping and shell unescaping are sufficiently hard on non-unix platforms, and even on unix platforms it’s just not nice to go from one to the other.
Cheers,
/Manuel
hi there,
I'm also writing a tool (Bear) which generates the compilation database, and I had a few bugs in my code about escaping the command. If it's not too late I would also propose a format for the problem.
{
directory: "/my/path"
cmd: ["path to binary", "-c", "filename"],
}
which would reformat the command field as a JSON array. It would still give the readability of the command and keep the order of the arguments. (I'm not sure that the recommended multiple "argument" key would be valid JSON.)
how does it sound for you?
regards,
Laszlo
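For reference, repeated "argument" keys would indeed not be valid JSON, which argues for an array-valued field. A hedged sketch of what such an entry might look like (the field name "arguments" is illustrative, not something settled by this thread):

```json
[
  {
    "directory": "D:/CMakeTest/bld",
    "arguments": ["D:/llvm/build/Debug/bin/clang.exe", "-c", "../src/main.cpp"],
    "file": "../src/main.cpp"
  }
]
```

An array sidesteps both the shell-escaping and path-separator problems, since each argument is already its own JSON string.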
Hey all,
This sounds like a good development, and I'd like to throw in my 2 cents.
I looked at a related problem before, and I think it boils down to the
same challenge -- Manuel and I discussed it on the bug here:
http://llvm.org/bugs/show_bug.cgi?id=19687
I ran out of steam before I got around to any real implementation, but
I think the primary challenge is that CMake and Ninja (or other
compdb-producers) don't necessarily have the arguments in list form
either. They usually seem to just have a command-line.
So the complexity of splitting the command-line into arguments can
either go in all the world's generators, or in the single consumer in
LibTooling.
- Kim
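On POSIX the splitting step is essentially what Python's shlex does, and the same POSIX escaping rules are exactly what mangle Windows paths; a quick illustration (shlex is only an analogy for the LibTooling parser, not its implementation):

```python
import shlex

cmd = r'clang.exe -c ..\src\main.cpp'

# POSIX rules treat backslash as an escape and destroy the path:
assert shlex.split(cmd) == ['clang.exe', '-c', '..srcmain.cpp']

# Disabling POSIX mode keeps backslashes (though it is still not
# a full implementation of Windows command-line quoting):
assert shlex.split(cmd, posix=False) == ['clang.exe', '-c', r'..\src\main.cpp']
```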
# Problem: The following reaction can be written as the sum of two reactions, one of which relates to ionization energy and one of which relates to electron affinity: (1) What is the reaction that corresponds to the first ionization energy of potassium, K? (2) What is the reaction that corresponds to the electron affinity of bromine, Br?
###### FREE Expert Solution
We have to isolate the equations for the ionization energy of potassium and the electron affinity of bromine from this given equation.
K(g) + Br(g) → K+(g) + Br-(g)
Ionization energy is defined as the minimum energy required to remove a single valence electron from a gaseous atom.
Electron affinity is defined as the energy absorbed or released when a single electron is added to the valence shell of a gaseous atom.
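Applying these two definitions to the overall reaction splits it into its component reactions (the transferred electron cancels when the two are summed):

```latex
% First ionization energy of potassium: remove one electron from gaseous K
\mathrm{K(g) \longrightarrow K^{+}(g) + e^{-}}
% Electron affinity of bromine: add one electron to gaseous Br
\mathrm{Br(g) + e^{-} \longrightarrow Br^{-}(g)}
% Sum: the overall reaction given above
\mathrm{K(g) + Br(g) \longrightarrow K^{+}(g) + Br^{-}(g)}
```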
Welcome to the migration guide for PyTorch 0.4.0. In this release we introduced many exciting new features and critical bug fixes, with the goal of providing users a better and cleaner interface. In this guide, we will cover the most important changes in migrating existing code from previous versions:
• Tensors and Variables have merged
• Support for 0-dimensional (scalar) Tensors
• Deprecation of the volatile flag
• dtypes, devices, and Numpy-style Tensor creation functions
• Writing device-agnostic code
• New edge-case constraints on names of submodules, parameters, and buffers in nn.Module
## Merging Tensor and Variable classes
torch.Tensor and torch.autograd.Variable are now the same class. More precisely, torch.Tensor is capable of tracking history and behaves like the old Variable; Variable wrapping continues to work as before but returns an object of type torch.Tensor. This means that you don’t need the Variable wrapper everywhere in your code anymore.
### The type() of a Tensor has changed
Note also that the type() of a Tensor no longer reflects the data type. Use isinstance() or x.type() instead:
>>> x = torch.DoubleTensor([1, 1, 1])
>>> print(type(x)) # was torch.DoubleTensor
"<class 'torch.Tensor'>"
>>> print(x.type()) # OK: 'torch.DoubleTensor'
'torch.DoubleTensor'
>>> print(isinstance(x, torch.DoubleTensor)) # OK: True
True
### When does autograd start tracking history now?
requires_grad, the central flag for autograd, is now an attribute on Tensors. The same rules previously used for Variables apply to Tensors; autograd starts tracking history when any input Tensor of an operation has requires_grad=True. For example,
>>> x = torch.ones(1)  # create a tensor with requires_grad=False (default)
>>> x.requires_grad
False
>>> y = torch.ones(1)  # another tensor with requires_grad=False
>>> z = x + y
>>> # both inputs have requires_grad=False. so does the output
>>> z.requires_grad
False
>>> # then autograd won't track this computation. let's verify!
>>> z.backward()
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
>>>
>>> # now create a tensor with requires_grad=True
>>> w = torch.ones(1, requires_grad=True)
>>> w.requires_grad
True
>>> # add to the previous result that has requires_grad=False
>>> total = w + z
>>> # the total sum now requires grad!
>>> total.requires_grad
True
>>> # autograd can compute the gradients as well
>>> total.backward()
>>> w.grad
tensor([ 1.])
>>> # and no computation is wasted to compute gradients for x, y and z, which don't require grad
>>> z.grad == x.grad == y.grad == None
True
#### Manipulating requires_grad flag
Other than directly setting the attribute, you can change this flag in-place using my_tensor.requires_grad_(), or, as in the above example, at creation time by passing it in as an argument (default is False), e.g.,
>>> existing_tensor.requires_grad_()
>>> existing_tensor.requires_grad
True
>>> my_tensor = torch.zeros(3, 4, requires_grad=True)
>>> my_tensor.requires_grad
True
### What about .data?
.data was the primary way to get the underlying Tensor from a Variable. After this merge, calling y = x.data still has similar semantics. So y will be a Tensor that shares the same data with x, is unrelated to the computation history of x, and has requires_grad=False.
However, .data can be unsafe in some cases. Any changes on x.data wouldn’t be tracked by autograd, and the computed gradients would be incorrect if x is needed in a backward pass. A safer alternative is to use x.detach(), which also returns a Tensor that shares data with requires_grad=False, but will have its in-place changes reported by autograd if x is needed in backward.
Here is an example of the difference between .data and x.detach() (and why we recommend using detach in general).
If you use Tensor.detach(), the gradient computation is guaranteed to be correct.
>>> a = torch.tensor([1,2,3.], requires_grad = True)
>>> out = a.sigmoid()
>>> c = out.detach()
>>> c.zero_()
tensor([ 0., 0., 0.])
>>> out # modified by c.zero_() !!
tensor([ 0., 0., 0.])
>>> out.sum().backward() # Requires the original value of out, but that was overwritten by c.zero_()
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
However, using Tensor.data can be unsafe and can easily result in incorrect gradients when a tensor is required for gradient computation but modified in-place.
>>> a = torch.tensor([1,2,3.], requires_grad = True)
>>> out = a.sigmoid()
>>> c = out.data
>>> c.zero_()
tensor([ 0., 0., 0.])
>>> out # out was modified by c.zero_()
tensor([ 0., 0., 0.])
>>> out.sum().backward()
>>> a.grad # The result is very, very wrong because out changed!
tensor([ 0., 0., 0.])
## Support for 0-dimensional (scalar) Tensors
Previously, indexing into a Tensor vector (1-dimensional tensor) gave a Python number but indexing into a Variable vector gave (inconsistently!) a vector of size (1,)! Similar behavior existed with reduction functions, e.g. tensor.sum() would return a Python number, but variable.sum() would return a vector of size (1,).
Fortunately, this release introduces proper scalar (0-dimensional tensor) support in PyTorch! Scalars can be created using the new torch.tensor function (which will be explained in more detail later; for now just think of it as the PyTorch equivalent of numpy.array). Now you can do things like:
>>> torch.tensor(3.1416) # create a scalar directly
tensor(3.1416)
>>> torch.tensor(3.1416).size() # scalar is 0-dimensional
torch.Size([])
>>> torch.tensor([3]).size() # compare to a vector of size 1
torch.Size([1])
>>>
>>> vector = torch.arange(2, 6) # this is a vector
>>> vector
tensor([ 2., 3., 4., 5.])
>>> vector.size()
torch.Size([4])
>>> vector[3] # indexing into a vector gives a scalar
tensor(5.)
>>> vector[3].item() # .item() gives the value as a Python number
5.0
>>> mysum = torch.tensor([2, 3]).sum()
>>> mysum
tensor(5)
>>> mysum.size()
torch.Size([])
### Accumulating losses
Consider the widely used pattern total_loss += loss.data[0]. Before 0.4.0, loss was a Variable wrapping a tensor of size (1,), but in 0.4.0 loss is now a scalar and has 0 dimensions. Indexing into a scalar doesn't make sense (it gives a warning now, but will be a hard error in 0.5.0). Use loss.item() to get the Python number from a scalar.
Note that if you don’t convert to a Python number when accumulating losses, you may find increased memory usage in your program. This is because the right-hand-side of the above expression used to be a Python float, while it is now a zero-dim Tensor. The total loss is thus accumulating Tensors and their gradient history, which may keep around large autograd graphs for much longer than necessary.
## Deprecation of volatile flag
The volatile flag is now deprecated and has no effect. Previously, any computation that involves a Variable with volatile=True wouldn’t be tracked by autograd. This has now been replaced by a set of more flexible context managers including torch.no_grad(), torch.set_grad_enabled(grad_mode), and others.
>>> x = torch.zeros(1, requires_grad=True)
>>> with torch.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>>
>>> is_train = False
>>> with torch.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> torch.set_grad_enabled(True)  # this can also be used as a function
>>> y = x * 2
>>> y.requires_grad
True
>>> torch.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
## dtypes, devices and NumPy-style creation functions
In previous versions of PyTorch, we used to specify data type (e.g. float vs double), device type (cpu vs cuda) and layout (dense vs sparse) together as a “tensor type”. For example, torch.cuda.sparse.DoubleTensor was the Tensor type representing the double data type, living on CUDA devices, and with COO sparse tensor layout.
In this release, we introduce torch.dtype, torch.device and torch.layout classes to allow better management of these properties via NumPy-style creation functions.
### torch.dtype
Below is a complete list of available torch.dtypes (data types) and their corresponding tensor types.
| Data type | torch.dtype | Tensor types |
| --- | --- | --- |
| 32-bit floating point | torch.float32 or torch.float | torch.*.FloatTensor |
| 64-bit floating point | torch.float64 or torch.double | torch.*.DoubleTensor |
| 16-bit floating point | torch.float16 or torch.half | torch.*.HalfTensor |
| 8-bit integer (unsigned) | torch.uint8 | torch.*.ByteTensor |
| 8-bit integer (signed) | torch.int8 | torch.*.CharTensor |
| 16-bit integer (signed) | torch.int16 or torch.short | torch.*.ShortTensor |
| 32-bit integer (signed) | torch.int32 or torch.int | torch.*.IntTensor |
| 64-bit integer (signed) | torch.int64 or torch.long | torch.*.LongTensor |
The dtype of a tensor can be accessed via its dtype attribute.
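As a quick illustration of the table (a minimal sketch; assumes a CPU build of PyTorch):

```python
import torch

# dtype is now a first-class attribute, independent of the Python class.
x = torch.zeros(2, dtype=torch.int16)
print(x.dtype)    # torch.int16
print(x.type())   # 'torch.ShortTensor'

# The two-name aliases in the table refer to the same dtype object.
print(torch.float is torch.float32)  # True
```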
### torch.device
A torch.device contains a device type ('cpu' or 'cuda') and optional device ordinal (id) for the device type. It can be initialized with torch.device('{device_type}') or torch.device('{device_type}:{device_ordinal}').
If the device ordinal is not present, this represents the current device for the device type; e.g., torch.device('cuda') is equivalent to torch.device('cuda:X') where X is the result of torch.cuda.current_device().
The device of a tensor can be accessed via its device attribute.
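For example (a small sketch; constructing a torch.device object does not require a GPU to be present):

```python
import torch

gpu1 = torch.device("cuda:1")
print(gpu1.type)   # 'cuda'
print(gpu1.index)  # 1

cpu = torch.device("cpu")
print(cpu.index)   # None -- no ordinal means the current device for that type
```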
### torch.layout
torch.layout represents the data layout of a Tensor. Currently torch.strided (dense tensors, the default) and torch.sparse_coo (sparse tensors with COO format) are supported.
The layout of a tensor can be accessed via its layout attribute.
### Creating Tensors
Methods that create a Tensor now also take in dtype, device, layout, and requires_grad options to specify the desired attributes on the returned Tensor. For example,
>>> device = torch.device("cuda:1")
>>> x = torch.randn(3, 3, dtype=torch.float64, device=device)
tensor([[-0.6344, 0.8562, -1.2758],
[ 0.8414, 1.7962, 1.0589],
[-0.1369, -1.0462, -0.4373]], dtype=torch.float64, device='cuda:1')
>>> x.requires_grad # default is False
False
>>> x = torch.zeros(3, requires_grad=True)
>>> x.requires_grad
True
##### torch.tensor(data, ...)
torch.tensor is one of the newly added tensor creation methods. It takes in array-like data of all kinds and copies the contained values into a new Tensor. As mentioned earlier, torch.tensor is the PyTorch equivalent of NumPy's numpy.array constructor. Unlike the torch.*Tensor methods, you can also create zero-dimensional Tensors (aka scalars) this way (a single Python number is treated as a size in the torch.*Tensor methods). Moreover, if a dtype argument isn't given, it will infer the suitable dtype from the data. It is the recommended way to create a tensor from existing data like a Python list. For example,
>>> cuda = torch.device("cuda")
>>> torch.tensor([[1], [2], [3]], dtype=torch.half, device=cuda)
tensor([[ 1],
[ 2],
[ 3]], device='cuda:0')
>>> torch.tensor(1) # scalar
tensor(1)
>>> torch.tensor([1, 2.3]).dtype # type inference
torch.float32
>>> torch.tensor([1, 2]).dtype # type inference
torch.int64
We’ve also added more tensor creation methods. Some of them have torch.*_like and/or tensor.new_* variants.
• torch.*_like takes in an input Tensor instead of a shape. It returns a Tensor with same attributes as the input Tensor by default unless otherwise specified:
>>> x = torch.randn(3, dtype=torch.float64)
>>> torch.zeros_like(x)
tensor([ 0., 0., 0.], dtype=torch.float64)
>>> torch.zeros_like(x, dtype=torch.int)
tensor([ 0, 0, 0], dtype=torch.int32)
• tensor.new_* can also create Tensors with same attributes as tensor, but it always takes in a shape argument:
>>> x = torch.randn(3, dtype=torch.float64)
>>> x.new_ones(2)
tensor([ 1., 1.], dtype=torch.float64)
>>> x.new_ones(4, dtype=torch.int)
tensor([ 1, 1, 1, 1], dtype=torch.int32)
To specify the desired shape, you can either use a tuple (e.g., torch.zeros((2, 3))) or variable arguments (e.g., torch.zeros(2, 3)) in most cases.
| Name | Returned Tensor |
| --- | --- |
| torch.empty | uninitialized memory |
| torch.zeros | all zeros |
| torch.ones | all ones |
| torch.full | filled with a given value |
| torch.rand | i.i.d. continuous Uniform[0, 1) |
| torch.randn | i.i.d. Normal(0, 1) |
| torch.randint | i.i.d. discrete Uniform in given range |
| torch.randperm | random permutation of {0, 1, ..., n - 1} |
| torch.tensor | copied from existing data (list, NumPy ndarray, etc.) |
| torch.from_numpy* | from NumPy ndarray (sharing storage without copying) |
| torch.arange, torch.range, and torch.linspace | uniformly spaced values in a given range |
| torch.logspace | logarithmically spaced values in a given range |
| torch.eye | identity matrix |
*: torch.from_numpy only takes in a NumPy ndarray as its input argument.
## Writing device-agnostic code
Previous versions of PyTorch made it difficult to write code that was device agnostic (i.e. that could run on both CUDA-enabled and CPU-only machines without modification).
PyTorch 0.4.0 makes this easier in two ways:
• The device attribute of a Tensor gives the torch.device for all Tensors (get_device only works for CUDA tensors)
• The to method of Tensors and Modules can be used to easily move objects to different devices (instead of having to call cpu() or cuda() based on the context)
We recommend the following pattern:
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
## New edge-case constraints on names of submodules, parameters, and buffers in nn.Module
A name that is an empty string or that contains "." is no longer permitted in module.add_module(name, value), module.add_parameter(name, value) or module.add_buffer(name, value), because such names may cause data in the state_dict to be lost. If you are loading a checkpoint for modules containing such names, please update the module definition and patch the state_dict before loading it.
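A minimal sketch of the new constraint (the module names here are purely illustrative):

```python
import torch.nn as nn

parent = nn.Module()
try:
    # "." is the separator used in state_dict keys, so it is now rejected.
    parent.add_module("child.linear", nn.Linear(2, 2))
except KeyError as err:
    print(err)  # the dotted name is rejected
```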
## Code Samples (Putting it all together)
To get a flavor of the overall recommended changes in 0.4.0, let’s look at a quick example for a common code pattern in both 0.3.1 and 0.4.0:
• 0.3.1 (old):
model = MyRNN()
if use_cuda:
    model = model.cuda()

# train
total_loss = 0
for input, target in train_loader:
    input, target = Variable(input), Variable(target)
    hidden = Variable(torch.zeros(*h_shape))  # init hidden
    if use_cuda:
        input, target, hidden = input.cuda(), target.cuda(), hidden.cuda()
    ...  # get loss and optimize
    total_loss += loss.data[0]

# evaluate
for input, target in test_loader:
    input = Variable(input, volatile=True)
    if use_cuda:
        ...
    ...
• 0.4.0 (new):
# torch.device object used throughout this script
device = torch.device("cuda" if use_cuda else "cpu")

model = MyRNN().to(device)

# train
total_loss = 0
for input, target in train_loader:
    input, target = input.to(device), target.to(device)
    hidden = input.new_zeros(*h_shape)  # has the same device & dtype as input
    ...  # get loss and optimize
    total_loss += loss.item()  # get Python number from 1-element Tensor

# evaluate
with torch.no_grad():  # operations inside don't track history
    for input, target in test_loader:
        ...
Thank you for reading! Please refer to our documentation and release notes for more details.
Happy PyTorch-ing!
# Rate Estimate Standard Errors
Version 5
Description:
Rates are commonly estimated statistically with the ratio of additive aggregates, such as the ratio of sums, or averages. There are three principal reasons driving the use of ratios of estimates: first, they lend themselves to interpretation; second, they are very easy to compute; and third, despite being non-parametric estimators, they are also maximum likelihood estimators for any family of distributions that has a parameter that can be expressed as the ratio of expectations. To compare rate estimates from different samples we require a non-parametric estimate of the standard error of the ratio of estimators, which in turn requires a non-parametric estimate of the variance of the ratio. Fortunately the delta method, and a bit of multivariate vector calculus, provides the asymptotic variance of the ratio of estimators.
Without going into details that exceed the interests, or expertise, of the audience, we begin by considering n independent identical observations of pairs of real random variables Xi and Yi, each with finite non-trivial expectation, variance, and potentially non-trivial covariance. For example, in estimating the age-dependent mortality hazard, Yi would be the age at which patient i was last observed, and Xi would indicate whether the patient was deceased at the latest observation. We are then interested in estimating the variance of the ratio of the sums, or equivalently the averages, of these observations.
$R_n = \frac{\sum_{i=1}^n X_i}{\sum_{i=1}^n Y_i}$
In the example the ratio Rn would estimate the deaths per patient year. The crux of formulating the asymptotic variance of Rn is recognising that we need to calculate the gradient of the function r(x,y)=x/y, which yields [1/y , -x/y2]T. The asymptotic variance then readily simplifies to a tractable product, note the intentional absence of indices on the right hand side of the equation.
$\mathbb{V}\mathrm{ar}\left[R_n\right] \overset{d}{\rightarrow} \frac{1}{n} \cdot \left(\frac{\mathbb{E}\left[X\right]}{\mathbb{E}\left[Y\right]}\right)^2 \cdot \mathbb{E}\left[\left(\frac{X}{\mathbb{E}\left[X\right]} - \frac{Y}{\mathbb{E}\left[Y\right]}\right)^2\right]$
Taking the square root of the previous equation, substituting in the usual unbiased estimators for the mean, and using a one degree of freedom penalty in estimating the square yields the standard error.
$\mathbb{S}\mathrm{E}\left[R_n\right] \overset{d}{\rightarrow} \left|\frac{\sum_{i=1}^n X_i}{\sum_{i=1}^n Y_i}\right| \cdot \sqrt{\frac{n}{n-1}} \cdot \sqrt{\sum_{i=1}^n \left(\frac{X_i}{\sum_{j=1}^n X_j} - \frac{Y_i}{\sum_{j=1}^n Y_j}\right)^2}$
The product terms of this formulation of the standard error have an elegant interpretation. The first term is the scale factor of the standard error, in that it assures the standard error is of the same scale as the ratio of the estimators. The second term is the degrees of freedom penalty, which inflates the standard error because we are estimating both the mean and the mean of the squares. The final term is the normalized sampling discount, which takes into account both the similarity between the normalized random variables and the improvement in estimation with larger sample sizes; technically it is the $L^2$ distance between the $L^1$-normalized variables X and Y, over the probability measure p.
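Before turning to the Tableau implementation, the formula is easy to check in plain Python (a sketch; `rate_se` is a name chosen here, not part of any library):

```python
import math

def rate_se(xs, ys):
    """Asymptotic standard error of sum(xs)/sum(ys) via the delta method."""
    n, sx, sy = len(xs), sum(xs), sum(ys)
    scale = abs(sx / sy)                   # scale factor
    dof = n / (n - 1)                      # degrees of freedom penalty
    discount = sum((x / sx - y / sy) ** 2  # normalized sampling discount
                   for x, y in zip(xs, ys))
    return scale * math.sqrt(dof * discount)

# Example: three paired observations
print(rate_se([1, 2, 3], [2, 2, 2]))  # ~0.2887
```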
Remarkably, this asymptotic estimator of the standard error is nearly a prototypical example of the use of empty INCLUDE level of detail calculations, only surpassed by computing record level standard scores. In this case we need a calculated field that contains the ratio of the value at a single record (the i index) divided by the sum over records in the same dimension (the j index).
Example Calculation:
We begin by assuming we have measures [X] and [Y], and have a calculated measure estimating the rate [R] := ZN(SUM([X])/SUM([Y])), and further that we have exactly one record per observation. We estimate the standard error of [R] using the following aggregate, with two embedded empty INCLUDE level of detail calculations.
// The use of ZN is not the best way to handle having less than two observations,
// however in that case the rate estimator is technically zero while the standard
// error is infinite.
[Standard Error] := ZN
(
// The scale factor, ensures the standard error has the same scale as the ratio
ABS(SUM([X])/SUM([Y])) *
SQRT
(
// The normalized sampling discount, renormalizes each observation by the sum of observations.
// Note the use of the empty INCLUDE level of detail, to automatically calculate over the
// dimensions of the sheet.
SUM(([X] / { INCLUDE : SUM([X]) } - [Y] / { INCLUDE : SUM([Y]) })^2) *
// The degrees of freedom penalty, from estimating both the sum, and the sum of squares
SUM([Number of Records]) /
(SUM([Number of Records]) - 1)
)
)
Care must be taken with the use of the empty INCLUDE level of detail calculation, as it is a source of brittleness in the implementation of the asymptotic standard error of rate estimators. As well the sample size, n, is not necessarily naively the number of records. In the example presented, if the patients were observed through multiple censuses then the sample size is not the total observations, but rather the number of unique patients, because each patient can contribute only one observation of death.
Related Functions:
SUM, SQRT, ABS, ZN, and INCLUDE.
Research Papers: Internal Combustion Engines
# Comparison of Filter Smoke Number and Elemental Carbon Mass From Partially Premixed Low Temperature Combustion in a Direct-Injection Diesel Engine
[+] Author and Article Information
William F. Northrop, Stanislav V. Bohac, Dennis N. Assanis
Department of Mechanical Engineering, University of Michigan, Ann Arbor, MI 48109
Jo-Yu Chin
Department of Environmental Health Sciences, University of Michigan, Ann Arbor, MI 48109
J. Eng. Gas Turbines Power 133(10), 102804 (May 04, 2011) (6 pages) doi:10.1115/1.4002918 History: Received October 11, 2010; Revised October 13, 2010; Published May 04, 2011; Online May 04, 2011
## Abstract
Partially premixed low temperature combustion (LTC) is an established advanced engine strategy that enables the simultaneous reduction of soot and $\mathrm{NO}_x$ emissions in diesel engines. Measuring extremely low levels of soot emissions achievable with LTC modes using a filter smoke meter requires large sample volumes and repeated measurements to achieve the desired data precision and accuracy. Even taking such measures, doubt exists as to whether filter smoke number (FSN) accurately represents the actual smoke emissions emitted from such low soot conditions. The use of alternative fuels such as biodiesel also compounds efforts to accurately report soot emissions since the reflectivity of high levels of organic matter found on the particulate matter collected may result in erroneous readings from the optical detector. Using FSN, it is desired to report mass emissions of soot using empirical correlations derived for use with petroleum diesel fuels and conventional modes of combustion. The work presented in this paper compares the experimental results of well-known formulas for calculating the mass of soot using FSN and the elemental carbon mass using thermal optical analysis (TOA) over a range of operating conditions and fuels from a four-cylinder direct-injection passenger car diesel engine. The data show that the mass of soot emitted by the engine can be accurately predicted with the smoke meter method utilizing a 3000 ml sample volume over a range of FSN from 0.02 to 1.5. Soot mass exhaust concentration calculated from FSN using the best of the literature expressions and that from TOA taken over all conditions correlated linearly with a slope of 0.99 and $R^2$ value of 0.94. A primary implication of the work is that the level of confidence in reporting the soot mass based on FSN for low soot formation regimes such as LTC is improved for both petroleum diesel and biodiesel fuels.
## Figures
Figure 1
Diagram of filter smoke meter location and partial dilution tunnel sampling system where EC filter samples were loaded
Figure 2
Rate of heat release and injector current signal for two conditions tested in this study: conventional combustion at 1500 rpm, 600 kPa BMEP, diesel fuel, and 27% EGR and LTC at 1500 rpm, 400 kPa BMEP, diesel fuel, and 45% EGR
Figure 3
Filter smoke number versus emissions index of NOx for all conditions and fuels
Figure 4
Concentration of elemental carbon in the exhaust for biodiesel blends in conventional versus LTC operation
Figure 5
Smoke mass concentration in exhaust versus FSN for four literature correlations over entire practical range of FSN
Figure 6
Smoke mass concentration versus FSN, comparing four literature correlations with the data recorded in this study
Figure 7
Linear trend for EC concentration versus soot concentration based on the correlation of Christian (12) for all data taken in this study
# Tangent bundle of P^n and Euler exact sequence
I'm thinking about the Euler exact sequence for complex projective space, and I'm a bit confused. In the topological category, one has
$$T\mathbb{P}^n \oplus \mathbb{C} \cong (L^*)^{\oplus (n+1)}$$
where $L$ is the tautological line bundle, $L^*$ is its dual, and $\mathbb{C}$ is the trivial line bundle. This comes from a splitting of the vector bundle homomorphism $\mathbb{C}^{n+1} \rightarrow \mathbb{C}^{n+1}/L$ which you can get by picking a Hermitian metric.
In algebraic geometry, you get a (non-split) exact sequence, which is the dual of the Euler sequence (http://en.wikipedia.org/wiki/Euler_sequence).
Is the Euler sequence for the holomorphic tangent bundle of $\mathbb{P}^n$ split as above? It would seem so, since even though we are in the holomorphic setting we can run the first argument with the Fubini-Study metric, but I've read some things that seem to suggest otherwise.
No, any construction using the Hermitian metric is going to take you outside the holomorphic category! By the way, you can understand the Euler sequence very elegantly by mapping $\Bbb P^n\times \Bbb C^{n+1}\to T\Bbb P^n\otimes\mathscr L$ as follows: If $\pi\colon\Bbb C^{n+1}-\{0\}\to\Bbb P^n$ and $\pi(\tilde p) = p$, for $\xi\in T_{\tilde p}\Bbb C^{n+1}$, map $\xi$ to $\pi_{*\tilde p}\xi\otimes\tilde p$, and check this is well-defined.
You might also try to generalize the Euler sequence to a complex submanifold $M\subset\Bbb P^n$, letting $\tilde M = \pi^{-1}M$. Then you get the exact sequence $$0\to \mathscr L \to E \to TM\otimes\mathscr L\to 0\,,$$ where $E_p = T_{\tilde p}\tilde M$ for any $\tilde p\in \pi^{-1}(p)$. ($\tilde M$ is the affine cone corresponding to $M\subset\Bbb P^n$.)
No. If you consider the Euler exact sequence for $\mathbb{CP}^1$, the sheaf of 1-forms is the canonical sheaf of $\mathbb{CP}^1$, which is $O(-2)$. By the Euler exact sequence, $$0 \rightarrow O(-2) \rightarrow O(-1)^2 \rightarrow O \rightarrow 0.$$ However, $O(-2)\oplus O$ is not isomorphic to $O(-1)^2$ as an algebraic vector bundle (or locally free sheaf, if you like), since the former has a nonzero global holomorphic section but the latter doesn't.
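The section count can be made explicit; since the dimension of the space of global sections is an isomorphism invariant, the sequence cannot split:

```latex
h^0\left(\mathcal{O}(-2) \oplus \mathcal{O}\right)
  = h^0(\mathcal{O}(-2)) + h^0(\mathcal{O}) = 0 + 1 = 1,
\qquad
h^0\left(\mathcal{O}(-1)^{\oplus 2}\right) = 2\,h^0(\mathcal{O}(-1)) = 0.
```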
# 20.3 Resistance and resistivity (Page 2/6)
Page 2 / 6
Resistivities $\rho$ of various materials at $\text{20º}\text{C}$
Material Resistivity $\rho$ ( $\Omega \cdot \text{m}$ )
Conductors
Silver $1\text{.}\text{59}×{\text{10}}^{-8}$
Copper $1\text{.}\text{72}×{\text{10}}^{-8}$
Gold $2\text{.}\text{44}×{\text{10}}^{-8}$
Aluminum $2\text{.}\text{65}×{\text{10}}^{-8}$
Tungsten $5\text{.}6×{\text{10}}^{-8}$
Iron $9\text{.}\text{71}×{\text{10}}^{-8}$
Platinum $\text{10}\text{.}6×{\text{10}}^{-8}$
Steel $\text{20}×{\text{10}}^{-8}$
Lead $\text{22}×{\text{10}}^{-8}$
Manganin (Cu, Mn, Ni alloy) $\text{44}×{\text{10}}^{-8}$
Constantan (Cu, Ni alloy) $\text{49}×{\text{10}}^{-8}$
Mercury $\text{96}×{\text{10}}^{-8}$
Nichrome (Ni, Fe, Cr alloy) $\text{100}×{\text{10}}^{-8}$
Semiconductors Values depend strongly on amounts and types of impurities
Carbon (pure) $\text{3.5}×{\text{10}}^{-5}$
Carbon $\left(3.5-\text{60}\right)×{\text{10}}^{-5}$
Germanium (pure) $\text{600}×{\text{10}}^{-3}$
Germanium $\left(1-\text{600}\right)×{\text{10}}^{-3}$
Silicon (pure) $\text{2300}$
Silicon $\text{0.1–2300}$
Insulators
Amber $5×{\text{10}}^{\text{14}}$
Glass ${\text{10}}^{9}-{\text{10}}^{\text{14}}$
Lucite ${\text{>10}}^{\text{13}}$
Mica ${\text{10}}^{\text{11}}-{\text{10}}^{\text{15}}$
Quartz (fused) $\text{75}×{\text{10}}^{\text{16}}$
Rubber (hard) ${\text{10}}^{\text{13}}-{\text{10}}^{\text{16}}$
Sulfur ${\text{10}}^{\text{15}}$
Teflon ${\text{>10}}^{\text{13}}$
Wood ${10}^{8}-{10}^{14}$
## Calculating resistor diameter: a headlight filament
A car headlight filament is made of tungsten and has a cold resistance of $0\text{.}\text{350}\phantom{\rule{0.25em}{0ex}}\Omega$ . If the filament is a cylinder 4.00 cm long (it may be coiled to save space), what is its diameter?
Strategy
We can rearrange the equation $R=\frac{\mathrm{\rho L}}{A}$ to find the cross-sectional area $A$ of the filament from the given information. Then its diameter can be found by assuming it has a circular cross-section.
Solution
The cross-sectional area, found by rearranging the expression for the resistance of a cylinder given in $R=\frac{\mathrm{\rho L}}{A}$ , is
$A=\frac{\mathrm{\rho L}}{R}\text{.}$
Substituting the given values, and taking $\rho$ from [link] , yields
$\begin{array}{lll}A& =& \frac{\left(5.6×{\text{10}}^{–8}\phantom{\rule{0.25em}{0ex}}\Omega \cdot \text{m}\right)\left(4.00×{\text{10}}^{–2}\phantom{\rule{0.25em}{0ex}}\text{m}\right)}{\text{0.350}\phantom{\rule{0.25em}{0ex}}\Omega }\\ & =& \text{6.40}×{\text{10}}^{–9}\phantom{\rule{0.25em}{0ex}}{\text{m}}^{2}\text{.}\end{array}$
The area of a circle is related to its diameter $D$ by
$A=\frac{{\mathrm{\pi D}}^{2}}{4}\text{.}$
Solving for the diameter $D$ , and substituting the value found for $A$ , gives
$\begin{array}{lll}D& =& \text{2}{\left(\frac{A}{\pi }\right)}^{\frac{1}{2}}=\text{2}{\left(\frac{6.40×{\text{10}}^{–9}\phantom{\rule{0.25em}{0ex}}{\text{m}}^{2}}{3.14}\right)}^{\frac{1}{2}}\\ & =& 9.0×{\text{10}}^{–5}\phantom{\rule{0.25em}{0ex}}\text{m}\text{.}\end{array}$
Discussion
The diameter is just under a tenth of a millimeter. It is quoted to only two digits, because $\rho$ is known to only two digits.
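The arithmetic above can be double-checked with a few lines of Python:

```python
import math

rho = 5.6e-8   # resistivity of tungsten, ohm·m (from the table above)
L = 4.00e-2    # filament length, m
R = 0.350      # cold resistance, ohm

A = rho * L / R                  # cross-sectional area, m^2 (≈ 6.40e-9)
D = 2 * math.sqrt(A / math.pi)   # diameter of a circular cross-section, m
print(D)                         # ≈ 9.0e-5, just under a tenth of a millimeter
```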
## Temperature variation of resistance
The resistivity of all materials depends on temperature. Some even become superconductors (zero resistivity) at very low temperatures. (See [link] .) Conversely, the resistivity of conductors increases with increasing temperature. Since the atoms vibrate more rapidly and over larger distances at higher temperatures, the electrons moving through a metal make more collisions, effectively making the resistivity higher. Over relatively small temperature changes (about $\text{100º}\text{C}$ or less), resistivity $\rho$ varies with temperature change $\Delta T$ as expressed in the following equation
$\rho ={\rho }_{0}\left(\text{1}+\alpha \Delta T\right)\text{,}$
where ${\rho }_{0}$ is the original resistivity and $\alpha$ is the temperature coefficient of resistivity . (See the values of $\alpha$ in [link] below.) For larger temperature changes, $\alpha$ may vary or a nonlinear equation may be needed to find $\rho$ . Note that $\alpha$ is positive for metals, meaning their resistivity increases with temperature. Some alloys have been developed specifically to have a small temperature dependence. Manganin (which is made of copper, manganese and nickel), for example, has $\alpha$ close to zero (to three digits on the scale in [link] ), and so its resistivity varies only slightly with temperature. This is useful for making a temperature-independent resistance standard, for example.
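For instance, combining the copper resistivity from the table above with the standard tabulated temperature coefficient for copper, $\alpha = 3.9×{10}^{-3}/\text{ºC}$ (a value assumed here, since the coefficient table appears later in the text), a 100 ºC rise gives:

```python
rho0 = 1.72e-8   # copper resistivity at 20 ºC, ohm·m (from the table above)
alpha = 3.9e-3   # temperature coefficient of resistivity for copper, 1/ºC
dT = 100.0       # temperature change, ºC

rho = rho0 * (1 + alpha * dT)
print(rho)  # ≈ 2.39e-8 ohm·m at 120 ºC -- about a 39% increase
```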
#### Questions & Answers
what is angular velocity
Why does earth exert only a tiny downward pull?
hello
Islam
Why is light bright?
what is radioactive element
an 8.0 capacitor is connected by to the terminals of 60Hz whoes rms voltage is 150v. a.find the capacity reactance and rms to the circuit
thanks so much. i undersooth well
what is physics
is the study of matter in relation to energy
Kintu
a submersible pump is dropped a borehole and hits the level of water at the bottom of the borehole 5 seconds later.determine the level of water in the borehole
what is power?
Power P = work done per second, W/t. The more power, the stronger the machine.
Sphere
e.g. heart Uses 2 W per beat.
Rohit
A spherical, concave shaving mirror has a radius of curvature of 32 cm. What is the magnification of a person's face when it is 12 cm to the left of the vertex of the mirror?
did you solve?
Shii
1.75cm
Ridwan
my name is Abu m.konnek I am a student of a electrical engineer and I want you to help me
Abu
The magnification k = f/(f-d), with focus f = R/2 = 16 cm and d = 12 cm, so k = 16/4 = 4.
Sphere
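Sphere's calculation can be sketched in a couple of lines of Python, assuming the usual convention f = R/2 for a spherical mirror and the magnification formula m = f/(f - d) for an object inside the focal length:

```python
def mirror_magnification(radius_cm, object_dist_cm):
    f = radius_cm / 2.0              # focal length of the concave mirror
    return f / (f - object_dist_cm)  # upright, magnified virtual image

# R = 32 cm shaving mirror, face 12 cm from the vertex:
print(mirror_magnification(32.0, 12.0))  # 4.0
```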
what do we call velocity
Kings
A weather vane is some sort of directional arrow parallel to the ground that may rotate freely in a horizontal plane. A typical weather vane has a large cross-sectional area perpendicular to the direction the arrow is pointing, like a “One Way” street sign. The purpose of the weather vane is to indicate the direction of the wind. As wind blows pa
hi
Godfred
what about the wind vane
Godfred
If a prism is fully immersed in water, will the ray of light disperse normally, or is there any difference?
The same behavior through the prism, whether out of or in the water.
Ju
If this were experimented with a hollow (vacuum) prism in water, what would the result be?
Anurag
What was the previous far point of a patient who had laser correction that reduced the power of her eye by 7.00 D, producing a normal distant vision power of 50.0 D for her?
What is the far point of a person whose eyes have a relaxed power of 50.5 D?
Jaydie
What is the far point of a person whose eyes have a relaxed power of 50.5 D?
Jaydie
A young woman with normal distant vision has a 10.0% ability to accommodate (that is, increase) the power of her eyes. What is the closest object she can see clearly?
Jaydie
29/20 ? maybes
Ju
In what ways does physics affect the society both positively or negatively
how can I read physics...am finding it difficult to understand...pls help
Try to read several books on physics, don't just rely on one. Some authors explain better than others.
Ju
And don't forget to check out YouTube videos on the subject. Videos offer a different visual way to learn easier.
Ju
hope that helps
Ju
• Science
Psychoactive drugs are in a specific category. They significantly impact the body's central nervous system, altering a person's normal activities. A person using psychoactive drugs can experience...
• Science
The answer to this question is yes... and no. It is important to first understand what psychoactive drugs actually are, because normally, the use of the word "psychoactive" often conjures images of...
• Science
This is a multi-part question, so I will only be able to consider each of your questions in brief. Lipolysis simply means the breakdown of fats in the body, and triglycerides are a kind of simple...
• Science
The nitric acid will not interfere with your analysis to find the mass of silver in the alloy. Some of the nitric acid is used to oxidize the Ag and Cu to their +1 and +2 states respectively,...
• Science
Unless you have been given other information about the exact nature of this solution, I feel that this may be a trick question. Remember that molarity (M) is defined as the number of moles of a...
• Science
In a neuron, the movement of Na+ and K+ across the membrane occurs at specialized parts of the axon called Nodes of Ranvier—gaps between the protective glial cells that insulate the passage of...
• Science
Because there are many different tissue types that make up both the heart and lungs, we will focus on only one type for each. The heart is made up of a specialized type of tissue called cardiac...
• Science
To solve this question, we first need to write the balanced chemical equation for the reaction between oxygen (O2) and ethanol (C2H5OH). The same can be written as: C2H5OH + 3 O2 -> 2 CO2 + 3...
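The balanced equation quoted above can be verified by counting atoms on each side; a small Python check follows, assuming the truncated product term is 3 H2O, which is what the listed coefficients require:

```python
# Atom counts for C2H5OH + 3 O2 -> 2 CO2 + 3 H2O
reactants = {"C": 2, "H": 6, "O": 1 + 3 * 2}          # ethanol + 3 O2
products  = {"C": 2, "H": 3 * 2, "O": 2 * 2 + 3 * 1}  # 2 CO2 + 3 H2O
print(reactants == products)  # True, so the equation is balanced
```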
• Science
As the second part of the post suggests, consumption varies widely among different parts of the world. For this reason, it can be misleading to speak of a “global overconsumption” problem....
• Science
The question gives a great clue to the answer. Here we start with a system (Nerf gun and dart) that is at rest with velocity 0. Given the equation for basic momentum (p=mv ), the total momentum...
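The zero-total-momentum argument above can be sketched numerically; the gun and dart figures below are hypothetical placeholders, not values from the question:

```python
def recoil_velocity(m_gun, m_dart, v_dart):
    """Momentum conservation from rest: m_gun*v_gun + m_dart*v_dart = 0."""
    return -m_dart * v_dart / m_gun

# Hypothetical numbers: a 1.5 kg Nerf gun firing a 5 g dart at 12 m/s
print(recoil_velocity(1.5, 0.005, 12.0))  # -0.04 m/s (gun recoils backward)
```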
• Science
Your eye is able to perceive light and visual stimuli because of the way it directs light onto the retina. When light enters the eye through the transparent cornea, it passes through the pupil...
• Science
1.) In the first place, this molecule doesn't exist under normal circumstances. On the periodic table, Aluminum (Al) has a charge of +3, while the polyatomic anion hydroxide (OH) has a charge of...
• Science
The answer you got is not correct; the correct answer is really ΔV = 8.4 × 10^7 V. A Van de Graff generator is not the most satisfactory model to perform this analysis, because the electric field it...
• Science
The lamina propria of the duodenum possesses finger-like extensions known as villi, which further elongate and branch out into even finer structures called microvilli. The villi and microvilli...
• Science
This question is based on stoichiometry, and hence we need a balanced chemical equation for the reaction. The reaction can be written as the following: 2Al + 6HCl -> 2AlCl3 + 3H2 Here, 2 moles...
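The 2:3 Al-to-H2 mole ratio from the balanced equation above can be applied in a couple of lines of Python; the 0.50 mol input is a made-up example, since the question's actual quantities are not shown:

```python
def moles_h2_from_al(moles_al):
    """2 Al + 6 HCl -> 2 AlCl3 + 3 H2: every 2 mol Al yields 3 mol H2."""
    return moles_al * 3 / 2

print(moles_h2_from_al(0.50))  # 0.75 mol of hydrogen gas
```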
• Science
In the lungs specifically (not counting the nose, mouth, or trachea), various cells serve individual functions. Alveolar macrophages (dust cells), for example, play a critical role in clearing the...
• Science
Carbohydrates, lipids, and proteins all play a vital role in the nutrition of the human body. All three are taken in through foods and used specifically by the body for certain functions. To begin,...
• Science
In order for a muscle to contract, a neurotransmitter must cross a synapse between a motor neuron and a muscle fiber. One of the strongest and most common neurotransmitters that has this function...
• Science
Down Syndrome is also known as Trisomy 21, because a failure of chromosomes to properly separate during meiosis leads to the presence of an extra (a third) 21st chromosome in all of the somatic...
• Science
Histologically, the duodenum is similar to the other tubes that comprise the gastrointestinal tract. It therefore possesses similar tissues (groups of cells working together for a common purpose)...
• Science
Chemical B, which floods the muscle cells' cytoplasm with Ca2+, would not be a good choice for a muscle relaxer. In order to determine this, one must understand the function of Ca2+,and calcium, in...
• Science
Chemical A, a chemical that binds to and blocks acetylcholine receptors of muscle cells, could be considered a good choice as a muscle relaxer. We can determine this through understanding the role...
• Science
Chemical D, which prevents the binding of Ca2+ to troponin, could be a good choice for a muscle relaxer prior to surgery. The function of Ca2+ within the skeletal muscle cells is to activate muscle...
• Science
Chemical E, a chemical that mimics the function of acetylcholine, could not serve as a muscle relaxer. We can determine this through understanding the function of acetylcholine in the...
• Science
Penguins remain one of the most interesting and beloved creatures on Earth. This flightless seabird primarily resides in cold climates below the Equator. The majority of the penguin population is...
• Science
The digestive system is a one-way system. Food enters the mouth, gets digested, and wastes exit the body via the anus. The mouth is the start of the digestive system, and it does both mechanical...
• Science
The boiling point is the temperature at which a substance changes from the liquid phase to the gaseous phase. The most common day to day example of boiling is the boiling of water (especially while...
• Science
Population genetics is a branch of biological studies. Population geneticists study the genetic composition of a given population and study the changes in genetic composition in that population...
• Science
Our body requires a large number of chemicals for its operation and maintenance. In fact, almost 60 elements are present in our bodies. Each chemical has its own purpose in our body. The two...
• Science
Arteries carry blood, oxygen, and nutrients away from the heart. Coronary refers to the heart, so coronary arteries are arteries that supply the heart. Coronary artery disease develops when those...
• Science
My recommendation would be to create a percussion instrument. This focuses on instruments that create sound by being struck, shaken, or scraped. A homemade drum should be fairly easy to piece...
• Science
The endocrine and nervous system are controlled by the pituitary gland. The pituitary gland is located in the brain behind the bridge of the nose and is about the size of a pea. It is attached to...
• Science
The liver is the heaviest internal organ in the human body and is also the largest gland in our body. Although there may be variations in its weight, the average weight of the liver is about 1.5...
• Science
I will explain the structure of the human heart, including the tissues and cells, here. The human heart is shaped roughly like a human fist and consists of four chambers: two atria and two...
• Science
Chaucer's General Prologue is a robust celebration of human diversity, yet it also engages in social stereotypes. As an Estates Satire, its purpose is to show the various members of the medieval...
• Science
Students can often be confused about the differences between mitosis and meiosis. Perhaps it is because the names are so similar or because the processes share the same phase names as the process...
• Science
Because human infants are unable to fend for themselves, they are wholly dependent upon caretakers for much of their early lives. In general, newborns are not capable of (nor interested in)...
• Science
Osteoporosis is a condition where your bones get weaker over time. Bone tissue, the strongest of the tissue cells, naturally looks a bit like a spider's web: it is densely compacted but has "holes"...
• Science
Raising cattle and using their byproducts has a negative effect on climate in several ways. In many parts of the world, people consume a substantial amount of beef, relying on it as a major source...
• Science
Numerous studies have shown that cars are a major contributor of greenhouse gases in the atmosphere which are a cause of climate change. In the United States alone, exhaust from cars and trucks are...
• Science
George Herbert Mead’s theory of "the I and the me" characterizes the "I" as the active aspect of personality, analogous to the Freudian "ego," and the "me" as the socialized aspect, consisting of...
• Science
I suppose that chainsaws contribute to the destabilization of the atmosphere because they allow people to quickly and easily cut down trees. While all plants help to regulate the levels of carbon...
• Science
While I cannot provide you with a direct answer to the question—that is, a summary of the article—I will provide some facts and leads that will hopefully point you in the right direction and help...
• Science
There are three main ways in which individuals who work around radiation protect themselves from over-exposure. These three ways are typically referred to as “time, distance, and shielding.” All...
• Science
Iodine 131 is a radioisotope of the element iodine. It is a radioactive material and undergoes decay. This means that its concentration decreases over time. The half-life (the time taken by a...
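The half-life behaviour described above is easy to sketch in Python, using the commonly quoted iodine-131 half-life of about 8.02 days:

```python
def fraction_remaining(t_days, half_life_days=8.02):
    """Radioactive decay law: N/N0 = (1/2) ** (t / T_half)."""
    return 0.5 ** (t_days / half_life_days)

# After two half-lives, a quarter of the original I-131 remains:
print(fraction_remaining(2 * 8.02))  # 0.25
```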
• Science
In some ways, this question is asking about an overall process and then naming specific types of that process. Both osmosis and diffusion are types of passive transport. This means that both...
• Science
The first thing one would notice if Earth had no atmosphere would be an absence of life. The Earth would have no way to keep warm or cool enough to support life, so the Earth would be barren of all...
• Science
A region's high temperatures can affect many factors within an environment. Many species cannot survive in either hot or cold temperatures. Seeds will not germinate if the ground is too cold or...
Hello, let's solve this problem! So we have a 1000 kg car travelling at 28 m/s and we wish to make it stop after braking for a total distance of 105 m. We want to know what the magnitude...
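The set-up above reduces to constant-deceleration kinematics; a minimal sketch, using v² = v₀² − 2ad with final speed zero and then F = ma:

```python
def braking_force(mass_kg, speed_m_s, distance_m):
    """From v^2 = v0^2 - 2*a*d with v = 0: a = v0^2 / (2*d), then F = m*a."""
    accel = speed_m_s ** 2 / (2 * distance_m)
    return mass_kg * accel

print(braking_force(1000.0, 28.0, 105.0))  # roughly 3733 N
```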
### Author Topic: Compiling a PM sample with GCC (Read 3886 times)
#### Martin Iturbide
• OS2World NewsMaster
• Global Moderator
• Hero Member
• Posts: 3595
• Karma: +34/-0
##### Re: Compiling a PM sample with GCC
« Reply #45 on: April 12, 2022, 04:37:44 pm »
Hi Martin, the PROTMODE statement is for whether the program runs in real or protected mode, which doesn't make sense for 32 bit OS/2 but is silently ignored by the IBM linkers, best is to simply remove it.
The program is weird, maybe written for OS/2 1.1? I got it to show up in the window list with this patch. At least now it is easy to kill
Code: [Select]
H:\tmp\boxes>diff -u BOXES3.C.orig BOXES3.C
--- BOXES3.C.orig 2022-04-11 21:08:42.000000000 -0700
+++ BOXES3.C 2022-04-11 21:17:24.000000000 -0700
@@ -25,8 +25,7 @@
 QMSG qmsg ;
 ULONG flFrameFlags = FCF_TITLEBAR | FCF_SYSMENU | FCF_SIZEBORDER |
-                     FCF_MINMAX | FCF_TASKLIST | FCF_ICON | FCF_ACCELTABLE |
-                     FCF_MENU;
+                     FCF_MINMAX | FCF_TASKLIST;
 const unsigned char szTitle [] = "Boxes";
 hab = WinInitialize (0) ;
@@ -41,18 +40,13 @@
 hwndFrame = WinCreateStdWindow (
              HWND_DESKTOP,      /* Parent window handle */
-             WS_VISIBLE         /* Style of frame window */
-             | FS_SIZEBORDER
-             //| WC_TITLEBAR
-             | FID_SYSMENU
-             | FID_MINMAX
-             | VK_MENU,
+             WS_VISIBLE,        /* Style of frame window */
              &flFrameFlags,     /* Client window class name */
              szClientClass,     /* Title bar text */
-             szTitle,
+             NULL,
+             0L,
              0,                 /* Style of client window */
              0,                 /* Module handle for resources */
-             ID_MAINMENU,       /* ID of resources */
              &hwndClient) ;     /* Pointer to client window handle */
 while (WinGetMsg (hab, &qmsg, 0, 0, 0))
Looking at the msg queue, instead of the usual CASE WM_CREATE, it starts with CASE WM_COMMAND. Probably the whole queue thing needs to be rewritten, which I'm not knowledgeable enough to do. Need to find my OS/2 programming for Dummies book
Perhaps someone with more knowledge can chime in.
Thanks Dave. I made the suggested changes and now it shows on the window list, but yes, it does not display anything. I will let it rest here until maybe someone can give us a hint.
Regards
Martin Iturbide
OS2World NewsMaster
... just share the dream.
#### Martin Iturbide
• OS2World NewsMaster
• Global Moderator
• Hero Member
• Posts: 3595
• Karma: +34/-0
##### Re: Compiling a PM sample with GCC
« Reply #46 on: April 12, 2022, 05:17:43 pm »
Hi
I also compiled these PMHELLO samples. I cleaned up the warnings, it compiles, it works and displays the PM sample.
I only have this warning.
Code: [Select]
gcc -Wall -Zomf -c -O2 pmhello.c -o pmhello.obj
gcc -Wall -Zomf -c -O2 pmhello1.c -o pmhello1.obj
gcc -Zomf pmhello.obj pmhello1.obj pmhello.def -o pmhello1.exe
E:\projects\SamplePack\dev-samples-pm-pmhello\PMHELLO.DEF(3) : warning LNK4072: changing application type from WINDOWCOMPAT to WINDOWAPI
gcc -Wall -Zomf -c -O2 pmhello2.c -o pmhello2.obj
gcc -Zomf pmhello.obj pmhello2.obj pmhello.def -o pmhello2.exe
E:\projects\SamplePack\dev-samples-pm-pmhello\PMHELLO.DEF(3) : warning LNK4072: changing application type from WINDOWCOMPAT to WINDOWAPI
gcc -Wall -Zomf -c -O2 pmhello3.c -o pmhello3.obj
gcc -Zomf pmhello.obj pmhello3.obj pmhello.def -o pmhello3.exe
E:\projects\SamplePack\dev-samples-pm-pmhello\PMHELLO.DEF(3) : warning LNK4072: changing application type from WINDOWCOMPAT to WINDOWAPI

I guess it shows since I removed PROTMODE from the .DEF file. Any tips to remove that warning?
Regards
Martin Iturbide
OS2World NewsMaster
... just share the dream.
#### Martin Iturbide
• OS2World NewsMaster
• Global Moderator
• Hero Member
• Posts: 3595
• Karma: +34/-0
##### Re: Compiling a PM sample with GCC
« Reply #47 on: April 12, 2022, 05:30:18 pm »
Hi
Just as a note to myself and any comments are welcome.
I created a new development VM with ArcaOS 5.0.7 just with the goal of compiling my samples, since my other VM had a lot of other dev software that I was not using, with some of its stuff in the config.sys. I wanted to start clean.
To recover my status of being able to compile these samples I did:
1) yum install gcc libc-devel binutils kbuild-make
2) SET INCLUDE=C:\usr\include; on Config.sys
3) Got "ilink50.zip" and put the exe on C:\sys\bin and DLLs on c:\sys\dll
(I guess there is no ilink on the rpm)
With that I have the basic stuff to compile the samples again.
Regards
never do 2)
Yes, right, I don't know what happened earlier that I got an error that can not find OS2.H. But now I had removed the SET INCLUDE on the config.sys and it keeps working.
Regards
Martin Iturbide
OS2World NewsMaster
... just share the dream.
#### Martin Iturbide
• OS2World NewsMaster
• Global Moderator
• Hero Member
• Posts: 3595
• Karma: +34/-0
##### Re: Compiling a PM sample with GCC
« Reply #48 on: April 12, 2022, 05:39:45 pm »
Hi
I would prefer to use wlink since (I guess) it is open source. But I have no idea how to call it, since in the makefile I'm not making a direct call to "ilink"; I don't know if gcc is the one calling it.
Any suggestion on what to change in the makefile to call wlink and give it a try with these samples?
Regards
Martin Iturbide
OS2World NewsMaster
... just share the dream.
#### Dave Yeo
• Hero Member
• Posts: 3631
• Karma: +77/-0
##### Re: Compiling a PM sample with GCC
« Reply #49 on: April 12, 2022, 05:53:10 pm »
Hi
I would prefer to use wlink since (I guess) it is open source. But I have no idea how to call it, since in the makefile I'm not making a direct call to "ilink"; I don't know if gcc is the one calling it.
Any suggestion on what to change in the makefile to call wlink and give it a try with these samples?
Regards
Run emxomfld with no arguments to see how to call the various linkers and rc's.
#### Silvan Scherrer
• Full Member
• Posts: 194
• Karma: +1/-0
##### Re: Compiling a PM sample with GCC
« Reply #50 on: April 12, 2022, 06:23:38 pm »
Hi
I would prefer to use wlink since (I guess) it is open source. But I have no idea how to call it, since in the makefile I'm not making a direct call to "ilink"; I don't know if gcc is the one calling it.
Any suggestion on what to change in the makefile to call wlink and give it a try with these samples?
Regards
if you install it via rpm all gets set up for you. gcc then finds the right linker.
kind regards
Silvan
CTO bww bitwise works GmbH
Please help us with donations, so we can further work on OS/2 based projects. Our Shop is at https://www.bitwiseworks.com/shop/index.php
#### Martin Iturbide
• OS2World NewsMaster
• Global Moderator
• Hero Member
• Posts: 3595
• Karma: +34/-0
##### Re: Compiling a PM sample with GCC
« Reply #51 on: April 12, 2022, 07:15:03 pm »
Hi
I installed the "watcom-wrc" package and I don't know what happened to the environment.
I get:
Quote
Creating binary resource file walker.RES
RC: RCPP -E -D RC_INVOKED -W4 -f walker.rc -ef C:\OS2\RCPP.ERR
fatal error C1015: cannot open include file 'os2.h'
make: *** [walker.res] Error 3
and with wrc the same error:
Code: [Select]
gcc -Wall -Zomf -c -O2 walker.c -o walker.obj
wrc -r walker.rc
Open Watcom Windows and OS/2 Resource Compiler Version 2.0beta1 LA
Portions Copyright (c) 1993-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See http://www.openwatcom.org/ for details.
walker.rc(2): Error! E062: Unable to open 'os2.h'
make: *** [walker.res] Error 9
Regards
Martin Iturbide
OS2World NewsMaster
... just share the dream.
#### Dave Yeo
• Hero Member
• Posts: 3631
• Karma: +77/-0
##### Re: Compiling a PM sample with GCC
« Reply #52 on: April 13, 2022, 02:02:15 am »
That's weird as the wrc package only has wrc.exe and some type of documentation.
Try reinstalling GCC and libc-devel?
#### Martin Iturbide
• OS2World NewsMaster
• Global Moderator
• Hero Member
• Posts: 3595
• Karma: +34/-0
##### Re: Compiling a PM sample with GCC
« Reply #53 on: April 13, 2022, 04:27:22 am »
Hi
I tried again to refresh the VM.
I had only installed "yum install gcc libc-devel binutils kbuild-make" and ilink5.0.zip, no "watcom-wrc" yet.
Now I get this new error from RC.
Code: [Select]
gcc -Wall -Zomf -c -O2 walker.c -o walker.obj
gcc -Zomf walker.obj walker.def -o walker.exe
E:\projects\SamplePack\PMwalker_r4\walker.def(3) : warning LNK4072: changing application type from WINDOWCOMPAT to WINDOWAPI
rc walker.res
Operating System/2 Resource Compiler
Version 4.00.011 Oct 10 2000
(C) Copyright IBM Corporation 1988-2000
(C) Copyright Microsoft Corp. 1985-2000
All rights reserved.
Reading binary resource file walker.res
(0) RC: error - Only integer TYPE allowed ().
RC: 1 error detected
make: *** [walker.exe] Error 1
It compiles and creates the exe, but the icon is not assigned, and if you run the exe it does not show anything.
My second question is: where is wlink.exe? I can not find it in the RPM.
Regards
Martin Iturbide
OS2World NewsMaster
... just share the dream.
#### Dave Yeo
• Hero Member
• Posts: 3631
• Karma: +77/-0
##### Re: Compiling a PM sample with GCC
« Reply #54 on: April 13, 2022, 05:40:18 am »
That seems like a weird error. Wonder if it is due to the wrong ilink? There are 2 or 3 of them, and they each like slightly different syntax. The default is the ilink from VAC365, with ilink 5 only licensed for building Mozilla, but IIRC it is more compatible with the VAC365 one.
The warning about WINDOWCOMPAT seems weird too, at least my version of walker.def has WINDOWAPI.
The RPM package name is watcom-wlink-hll, see if that fixes things.
BTW, I usually stick to rc.exe rather than wrc.exe, as it seems I've had problems with them not quite being compatible.
#### Dave Yeo
• Hero Member
• Posts: 3631
• Karma: +77/-0
##### Re: Compiling a PM sample with GCC
« Reply #55 on: April 13, 2022, 05:46:52 am »
Tested wrc by editing the makefile, compiles fine.
Code: [Select]
gcc -Zomf -c -O2 walker.c -o walker.obj
gcc -Zomf walker.obj walker.def -o walker.exe
wrc walker.res
Open Watcom Windows and OS/2 Resource Compiler Version 2.0beta1 LA
Portions Copyright (c) 1993-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See http://www.openwatcom.org/ for details.
#### Martin Iturbide
• OS2World NewsMaster
• Global Moderator
• Hero Member
• Posts: 3595
• Karma: +34/-0
##### Re: Compiling a PM sample with GCC
« Reply #56 on: April 17, 2022, 02:10:46 am »
Hi
I still get the OS2.h issue, no idea what to do:
Code: [Select]
[E:\PROJECTS\SAMPLEPACK\PMWALKER]make 2>&1 | tee make.out
gcc -Wall -Zomf -c -O2 walker.c -o walker.obj
wrc -r walker.rc
Open Watcom Windows and OS/2 Resource Compiler Version 2.0beta1 LA
Portions Copyright (c) 1993-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See http://www.openwatcom.org/ for details.
walker.rc(2): Error! E062: Unable to open 'os2.h'
make: *** [walker.res] Error 9
But I have some other questions:
- Where is wlink? I can not find it on the rpm, is it there? I need the package name if it is there.
- Did you change something special on the makefile to use wrc?
I just changed the make file to:
Code: [Select]
all : walker.exe

walker.exe : walker.obj walker.res walker.def
	gcc -Zomf walker.obj walker.def -o walker.exe
	wrc walker.res

walker.obj : walker.c walker.h
	gcc -Wall -Zomf -c -O2 walker.c -o walker.obj

walker.res : walker.rc step1.ico step2.ico step3.ico
	wrc -r walker.rc

clean :
	rm -rf *exe *RES *obj
But I still need to know what is the issue with the OS2.H. I only have this one:
Quote
Directory of C:\usr\include
4-16-22 6:16p 721 124 a--- os2.h
Regards
Martin Iturbide
OS2World NewsMaster
... just share the dream.
#### Dave Yeo
• Hero Member
• Posts: 3631
• Karma: +77/-0
##### Re: Compiling a PM sample with GCC
« Reply #57 on: April 17, 2022, 06:54:35 am »
Hi
I still get the OS2.h issue, no idea what to do:
Code: [Select]
[E:\PROJECTS\SAMPLEPACK\PMWALKER]make 2>&1 | tee make.out
gcc -Wall -Zomf -c -O2 walker.c -o walker.obj
wrc -r walker.rc
Open Watcom Windows and OS/2 Resource Compiler Version 2.0beta1 LA
Portions Copyright (c) 1993-2002 Sybase, Inc. All Rights Reserved.
Source code is available under the Sybase Open Watcom Public License.
See http://www.openwatcom.org/ for details.
walker.rc(2): Error! E062: Unable to open 'os2.h'
make: *** [walker.res] Error 9
But I have some other questions:
- Where is wlink? I can not find it on the rpm, is it there? I need the package name if it is there.
As I said, watcom-wlink-hll is the package name
Quote
- Did you change something special on the makefile to use wrc?
I just changed the make file to:
Code: [Select]
all : walker.exe

walker.exe : walker.obj walker.res walker.def
	gcc -Zomf walker.obj walker.def -o walker.exe
	wrc walker.res

walker.obj : walker.c walker.h
	gcc -Wall -Zomf -c -O2 walker.c -o walker.obj

walker.res : walker.rc step1.ico step2.ico step3.ico
	wrc -r walker.rc

clean :
	rm -rf *exe *RES *obj
Yes that is what I did
Quote
But I still need to know what is the issue with the OS2.H. I only have this one:
Quote
Directory of C:\usr\include
4-16-22 6:16p 721 124 a--- os2.h
Regards
All mine seem to be 723 bytes in size, try reinstalling
Code: [Select]
[W:\usr\include]dir os2.h
The volume label in drive W is ARCAOS.
The Volume Serial Number is 2D68:FC15.
Directory of W:\usr\include

 8-26-21  7:26a       723    124 a---  os2.h
        1 file(s)        723 bytes used
                 293,700,608 bytes free
« Last Edit: April 17, 2022, 06:56:25 am by Dave Yeo »
#### Dave Yeo
• Hero Member
• Posts: 3631
• Karma: +77/-0
##### Re: Compiling a PM sample with GCC
« Reply #58 on: April 17, 2022, 06:58:14 am »
Here's my os2.h,
Code: [Select]
/* os2.h,v 1.3 2004/09/14 22:27:35 bird Exp */
/** @file
 * EMX
 */
#ifndef NO_INCL_SAFE_HIMEM_WRAPPERS
# include <os2safe.h>
#endif

#ifndef _OS2_H
#define _OS2_H

#if defined (__cplusplus)
extern "C" {
#endif

#ifndef _Cdecl
#define _Cdecl
#endif
#ifndef _Far16
#define _Far16
#endif
#ifndef _Optlink
#define _Optlink
#endif
#ifndef _Pascal
#define _Pascal
#endif
#ifndef _Seg16
#define _Seg16
#endif
#ifndef _System
#define _System
#endif

#if defined (USE_OS2_TOOLKIT_HEADERS)
#include <os2tk.h>
#else
#include <os2emx.h> /* <-- change this line to use Toolkit headers */
#endif

#include <os2thunk.h>

#if defined (__cplusplus)
}
#endif

#endif /* not _OS2_H */
#### Martin Iturbide
• OS2World NewsMaster
• Global Moderator
• Hero Member
• Posts: 3595
• Karma: +34/-0
##### Re: Compiling a PM sample with GCC
« Reply #59 on: June 29, 2022, 01:30:15 am »
Hi
I got some time today and I wanted to keep compiling some little PM samples. I had restarted my test with "PMHELLO".
My dev environment is back in business and I was able to compile the sample with these warnings.
Code: [Select]
gcc -Wall -Zomf -c -O2 pmhello.c -o pmhello.obj
gcc -Wall -Zomf -c -O2 pmhello1.c -o pmhello1.obj
gcc -Zomf pmhello.obj pmhello1.obj pmhello.def -o pmhello1.exe
E:\projects\SamplePack\TEST\PM-PMHELLOb1\PMHELLO.DEF(3) : warning LNK4072: changing application type from WINDOWCOMPAT to WINDOWAPI
gcc -Wall -Zomf -c -O2 pmhello2.c -o pmhello2.obj
gcc -Zomf pmhello.obj pmhello2.obj pmhello.def -o pmhello2.exe
E:\projects\SamplePack\TEST\PM-PMHELLOb1\PMHELLO.DEF(3) : warning LNK4072: changing application type from WINDOWCOMPAT to WINDOWAPI
gcc -Wall -Zomf -c -O2 pmhello3.c -o pmhello3.obj
gcc -Zomf pmhello.obj pmhello3.obj pmhello.def -o pmhello3.exe
E:\projects\SamplePack\TEST\PM-PMHELLOb1\PMHELLO.DEF(3) : warning LNK4072: changing application type from WINDOWCOMPAT to WINDOWAPI
Just as a reminder I'm using:
Quote
- ArcaOS - Version 5.0.7
- RC - Version 4.00.011 Oct 10 2000
- gcc - gcc (GCC) 9.2.0 20190812 (OS/2 RPM build 9.2.0-5.oc00)
- make - Version 3.81 k2 (2017-11-10)
Volume 398 - The European Physical Society Conference on High Energy Physics (EPS-HEP2021) - T10: Searches for New Physics
The CLIC potential for new physics
J. Klamka* and On behalf of the CLICdp collaboration
Full text: pdf
Pre-published on: February 24, 2022
Published on: May 12, 2022
Abstract
The Compact Linear Collider (CLIC) is a mature option for a future electron-positron collider operating at centre-of-mass energies of up to 3 TeV. It incorporates a novel two-beam acceleration technique offering an accelerating gradient of up to 100 MeV/m. CLIC would be built and operated in a staged approach, with three centre-of-mass energy stages currently assumed to be 380 GeV, 1.5 TeV, and 3 TeV.
The first CLIC stage will be focused on precision Higgs and top quark measurements. The so-called "Higgs-strahlung" process (e$^+$e$^-$ $\to$ ZH) is key to a model-independent measurement of Higgs boson decays and the extraction of its couplings. Precision top quark measurements will include the pair-production threshold scan, which is expected to be the most precise method for determining the top-quark mass.
The two subsequent energy stages will allow for extended Standard Model studies, including the direct measurement of the Higgs self-coupling and the top Yukawa coupling, but their main goals will be to search for signatures of Beyond the Standard Model phenomena.
Presented in this contribution is a selection of recent results showing the sensitivity of the CLIC experiment to diverse BSM physics scenarios. Compared with hadron colliders, the low background conditions at CLIC provide extended discovery potential, in particular for production through electroweak and/or Higgs boson interactions. This includes scenarios with extended scalar sectors, also motivated by dark matter, which can be searched for using associated production processes or cascade decays involving electroweak gauge bosons. In a wide range of models, new particles can be discovered almost up to the kinematic limit, while the indirect search sensitivity extends up to ${\cal{O}}(100)$ TeV scales.
DOI: https://doi.org/10.22323/1.398.0714
Question
# Without actually finding the cubes, find the following:
11th - 12th Class
Maths
Solution
108
### Topic: benzoin condensation (Read 12483 times)
#### pgk
• Chemist
• Full Member
• Posts: 892
• Mole Snacks: +97/-24
##### Re: benzoin condensation
« Reply #15 on: July 04, 2015, 03:47:42 PM »
KOH at 0.1 M is quite a low concentration when compared to ROH at 16 M (1/160). Therefore, autoionization of both ROH and water must also be taken into account.
ROH + KOH → ROK + H2O
KOH → K(+) + OH(-)
ROK → RO(-) + K(+)
ROH → RO(-) + H(+)
H2O → HO(-) + H(+)
In addition, positive and negative charges must be equilibrated.
[K(+)] + [H(+)] = [OH(-)] + [RO(-)]
So, start re-calculating by taking all the above equilibrium constants into account, under the assumption that the total volume does not change when mixing ROH with water (if needed), and additionally that no thermal expansion occurs during the exothermic solvolysis.
Good luck.
« Last Edit: July 04, 2015, 04:48:33 PM by pgk »
#### Borek
• Mr. pH
• Deity Member
• Posts: 25889
• Mole Snacks: +1693/-401
• Gender:
• I am known to be occasionally wrong.
##### Re: benzoin condensation
« Reply #16 on: July 04, 2015, 05:56:43 PM »
KOH at 0.1 M is quite a low concentration when compared to ROH at 16 M (1/160). Therefore, autoionization of both ROH and water must also be taken into account.
ROH + KOH → ROK + H2O
KOH → K(+) + OH(-)
ROK → RO(-) + K(+)
ROH → RO(-) + H(+)
H2O → HO(-) + H(+)
In addition, positive and negative charges must be equilibrated.
[K(+)] + [H(+)] = [OH(-)] + [RO(-)]
So, start re-calculating by taking all the above equilibrium constants into account, under the assumption that the total volume does not change when mixing ROH with water (if needed), and additionally that no thermal expansion occurs during the exothermic solvolysis.
Good luck.
The burden of proving you are right lies with you. We know the general ideas about how to deal with the equilibrium calculations, so your post adds nothing new to the discussion. Either show the results of your calculations, or stop posting altogether, as you are just trolling now.
ChemBuddy chemical calculators - stoichiometry, pH, concentration, buffer preparation, titrations.info, pH-meter.info
#### pgk
• Chemist
• Full Member
• Posts: 892
• Mole Snacks: +97/-24
##### Re: benzoin condensation
« Reply #17 on: July 05, 2015, 04:42:01 AM »
Dear Sir,
1). The burden of proving I am right lies with me, but according to the forum policy rules I have to show only the insights, not the full answer.
Are there different policy rules for the forum staff than for the forum members?
2). These are not general ideas but the basic principles of chemical equilibria.
3). Considering basic principles to be nothing new on an educational forum constitutes a violation of the first paragraph of the forum's registration agreement.
Are there different registration terms for the forum staff than for the forum members?
4). The results of calculations are already shown above:
K = Ka/Kw → K = 10exp(-16)/10exp(-14) → K= 10exp(-2) → [RO(-)]/[OH(-)] = 1/100
5). Please, feel free to restrict my posts that you do not want to judge.
Sincerely yours
pgk
« Last Edit: July 05, 2015, 06:17:23 AM by pgk »
#### Dan
• Retired Staff
• Sr. Member
• Posts: 4716
• Mole Snacks: +467/-72
• Gender:
• Organic Chemist
##### Re: benzoin condensation
« Reply #18 on: July 05, 2015, 06:18:54 AM »
K = Ka/Kw → K = 10exp(-16)/10exp(-14) → K= 10exp(-2) → [RO(-)]/[OH(-)] = 1/100
K = Ka(ROH)/Ka(water)
Ka(water) ≠ Kw
Ok, if 0.1 M is too dilute, let's try 1 M.
The same calculation as my previous gives x = 0.9, i.e. 90:10 alkoxide:hydroxide
--
I have presented several calculations now that demonstrate that the alkoxide is the major anion present in alcoholic hydroxide solutions, despite the (generally) weaker acidity of alcohols compared to water (i.e. despite the fact that K < 1)
The reason for this is the vast excess of alcohol present. It's a quantitative demonstration of Le Chatelier's principle.
You appear to dismiss these calculations as invalid on the basis of the assumptions made (i.e. because autodissociation was not considered). While the assumptions I have made in my calculations will affect the accuracy of the result, I think it is safe to say that they have qualitative value in predicting the major anions present. If the result of my calculations ([RO-] >> [HO-] in dilute alcoholic hydroxide solution) is reversed by a more sophisticated mathematical model, then you have to demonstrate it. That's not a forum rule, it's just that unless you can demonstrate it, I'm not going to take your word for it.
My research: Google Scholar and Researchgate
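Dan's 1 M figure can be reproduced with a short script. This is only a sketch under the thread's stated assumptions (pKa(EtOH) ≈ 16 and pKa(H2O) ≈ 15.7, so K ≈ 0.5; anhydrous ethanol at ≈16 M treated as constant; starting anhydrous, every alkoxide ion formed produces one water molecule), not measured data:

```python
import math

# Equilibrium: EtOH + OH- <=> EtO- + H2O, with K = Ka(EtOH) / Ka(H2O)
K = 10 ** (-16.0) / 10 ** (-15.7)  # ~0.5, from the pKa values in the thread
C_ROH = 16.0                        # ~16 M ethanol, treated as constant

# Start with 1 M hydroxide. Let x = [RO-] at equilibrium; then
# [OH-] = 1 - x and (anhydrous start) [H2O] = x, so
#   K = x*x / ((1 - x) * C_ROH)  =>  x^2 + (K*C_ROH)*x - K*C_ROH = 0
b = K * C_ROH
x = (-b + math.sqrt(b * b + 4 * b)) / 2

print(f"[RO-] = {x:.3f} M, [OH-] = {1 - x:.3f} M")  # roughly 0.9 / 0.1
```

The 90:10 alkoxide:hydroxide split matches the figure quoted in the post; the exact number shifts with the assumed pKa values.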
#### Borek
• Mr. pH
• Deity Member
• Posts: 25889
• Mole Snacks: +1693/-401
• Gender:
• I am known to be occasionally wrong.
##### Re: benzoin condensation
« Reply #19 on: July 05, 2015, 06:31:42 AM »
Edit: due to circumstances it took me a long time to write this post. Dan was faster, and his post addresses basically the same problems.
1). Burden of proof me are right lies on me but according the forum policy rules, I have to show only the insights but not the full answer.
Are there different policy rules for the forum stuff than the forum members?
These rules are designed for helping those solving homework questions, we are long past this point.
Quote
2). These are not general ideas but the basic principles of chemical equilibria.
Which is why they don't add anything new. At the moment it is not a discussion on the HS level, so we can safely assume we all know basic principles. No need to list them as if they were eye-openers.
Quote
4). The results of calculations are already shown above:
K = Ka/Kw → K = 10exp(-16)/10exp(-14) → K= 10exp(-2) → [RO(-)]/[OH(-)] = 1/100
And they are wrong. K is not [RO-]/[OH-]. You have ignored concentrations of ROH and H2O, and their ratio can substantially change the result:
$$\frac{[RO^-]}{[OH^-]} = K \frac {[ROH]}{[H_2O]}$$
If you start with anhydrous ethanol, the ratio $\frac {[ROH]}{[H_2O]}$ is quite large. Dan's calculations took it into account, and he has shown you how to find the result he got. If you think he is wrong, show precisely where Dan is wrong, or show your full calculations. So far your arguments are handwavy and don't look reasonable:
KOH at 0.1 M is quite a low concentration compared to ROH at 16 M (a ratio of 1/160). Therefore, autoionization of both ROH and water must also be taken into account.
Autoionization of water and alcohol yields concentrations of products several orders of magnitude lower than the concentrations of RO- and OH- from the equilibria taken into account so far. Experience tells me they will not change anything in the full picture. Feel free to solve the full system and prove me wrong.
ChemBuddy chemical calculators - stoichiometry, pH, concentration, buffer preparation, titrations.info, pH-meter.info
#### pgk
• Chemist
• Full Member
• Posts: 892
• Mole Snacks: +97/-24
##### Re: benzoin condensation
« Reply #20 on: July 05, 2015, 07:09:10 AM »
Indeed, Kw is the ionization expression per number of moles, in contrast to Ka, which is the ionization expression per molarity (or better, normality) of water; therefore pKw = 14 and pKa = 15.7.
In other words, Kw measures the degree of water ionization, whereas pKa measures the acidity of water and allows comparison with other compounds. But in mixtures and solutions, the ratio of ionized/non-ionized molecules of water remains 10exp(-14), and therefore Kw is the predominant and useful constant for further mathematical applications.
So, let’s do it again:
ROH + OH(-) → RO(-) + H2O
Kreaction = [RO(-)][H2O]/ [HO(-)][ROH]
If you multiply and divide that function by [H+], you finally get:
K = Ka(ROH diss.)/Kw
ATTENTION: The ratio of concentrations [RO(-)]/[HO(-)] per volume unit does not change, regardless of the changes in the concentrations of ROH and water that occurred during the alkali exchange reaction.
« Last Edit: July 05, 2015, 08:16:16 AM by pgk »
#### pgk
• Chemist
• Full Member
• Posts: 892
• Mole Snacks: +97/-24
##### Re: benzoin condensation
« Reply #21 on: July 05, 2015, 07:23:01 AM »
The experimental verification of all the above seems quite easy, because alkoxides have different α- and β-chemical shifts in 1H-NMR and a different ipso-C shift in 13C-NMR than the corresponding alcohols.
I will really appreciate any literature and experimental data, regarding the issue.
#### Borek
• Mr. pH
• Deity Member
• Posts: 25889
• Mole Snacks: +1693/-401
• Gender:
• I am known to be occasionally wrong.
##### Re: benzoin condensation
« Reply #22 on: July 05, 2015, 08:18:02 AM »
So, let’s do it again:
ROH + OH(-) → RO(-) + H2O
Kreaction = [RO(-)][H2O]/ [HO(-)][ROH]
If you multiply and divide that function by [H+], you finally get:
K = Ka(ROH diss.)/Kw
No matter how many times you repeat it, K will not become a ratio of the concentrations of RO- and OH-. Some things do cancel; [ROH] and [H2O] do not, they are left in the equation. Yes, you have shown how to calculate K for the reaction we are interested in from known Ka and Kw; no, it doesn't show what you claim it shows.
Quote
ATTENTION: the ratio of concentrations [RO(-)]/[HO(-)] per volume unit does not change, regardless of the changes in the concentrations of ROH and water
This is exactly the problem with your reasoning. Why do you think the above is true? The correct expression describing the ratio of concentrations, derived from the ROH/OH- equilibrium, is
$$\frac{[RO^-]}{[OH^-]} = K \frac {[ROH]}{[H_2O]}$$
and the ratio of concentrations of water and alcohol plays an important role. You can't ignore it, unless you have a good reason to do so. So far you have failed to show what the reason is.
ChemBuddy chemical calculators - stoichiometry, pH, concentration, buffer preparation, titrations.info, pH-meter.info
#### pgk
• Chemist
• Full Member
• Posts: 892
• Mole Snacks: +97/-24
##### Re: benzoin condensation
« Reply #23 on: July 05, 2015, 08:23:21 AM »
Because the total volume does not change (or practically, it does not significantly change).
#### Dan
• Retired Staff
• Sr. Member
• Posts: 4716
• Mole Snacks: +467/-72
• Gender:
• Organic Chemist
##### Re: benzoin condensation
« Reply #24 on: July 05, 2015, 08:53:06 AM »
So, let’s do it again:
ROH + OH(-) → RO(-) + H2O
Kreaction = [RO(-)][H2O]/ [HO(-)][ROH]
If you multiply and divide that function by [H+], you finally get:
K = Ka(ROH diss.)/Kw
I don't think you do.
Kreaction = [RO-][H2O]/[HO-][ROH]
= ([RO-][H+]/[ROH])*([H2O]/[HO-][H+])
= Ka(ROH)/Ka(H2O)
The highlighted term, Ka(H2O) = [HO-][H+]/[H2O], is not Kw. Kw = [HO-][H+] in dilute aqueous solution - I don't think it's applicable here. You are incorrectly ignoring [H2O].
My research: Google Scholar and Researchgate
#### pgk
• Chemist
• Full Member
• Posts: 892
• Mole Snacks: +97/-24
##### Re: benzoin condensation
« Reply #25 on: July 05, 2015, 10:43:04 AM »
In other words, the function Kw = [HO-][H+] = 10exp(-14) is supposedly not valid or applicable for concentrated aqueous solutions, nor for pure water. Of course, it is!
Please, note that the dissolution medium is not water, hereby and thus, [H2O] is not the basis for calculations of [H+] and [OH-].
Besides and for similar reasons:
KaKb = Kw
and:
KaKb ≠ Ka(water)
Once again, the terms Ka(water) and pKa(water) refer to pure water (free of electrolytes and solutes) and they are the expression of water acidity, in comparison with Arrhenius acids.
« Last Edit: July 05, 2015, 10:56:24 AM by pgk »
#### Dan
• Retired Staff
• Sr. Member
• Posts: 4716
• Mole Snacks: +467/-72
• Gender:
• Organic Chemist
##### Re: benzoin condensation
« Reply #26 on: July 05, 2015, 12:10:10 PM »
In other words, the function Kw = [HO-][H+] = 10exp(-14) is supposedly not valid or applicable for concentrated aqueous solutions, nor for pure water. Of course, it is!
Kw is based on the assumption that [H2O] is constant. That is not necessarily true for concentrated solutions.
Kw = [H2O]*Ka(H2O) = 10^-14 if [H2O] = 55.5 M (i.e. pure water)
But that is really beside the point. We are not discussing aqueous solutions, we are discussing alcoholic hydroxide solutions. Kw has no place in these calculations. [H2O] is certainly not constant in this scenario, and it is certainly not 55.5 M.
I think you have misused Kw to incorrectly calculate the equilibrium constant. I calculate K = 0.5, but you get K = 0.01 because you have erroneously divided by the concentration of pure water.
If you can show a valid calculation that supports your assertion that alkoxide formation in alcoholic KOH is practically negligible, please do so. I don't mind being proved wrong, it wouldn't be the first time, but otherwise I think it's time to stop.
My research: Google Scholar and Researchgate
#### Borek
• Mr. pH
• Deity Member
• Posts: 25889
• Mole Snacks: +1693/-401
• Gender:
• I am known to be occasionally wrong.
##### Re: benzoin condensation
« Reply #27 on: July 05, 2015, 12:31:54 PM »
Actually Kw is sometimes expressed not as 10^-14, but as 1.8×10^-16 (10^-14/55.5) (and in this case that would be the more correct value).
ChemBuddy chemical calculators - stoichiometry, pH, concentration, buffer preparation, titrations.info, pH-meter.info
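The arithmetic behind that alternative value is just division by the molar concentration of pure water:

```python
# Fold [H2O] = 55.5 M (pure water) into the conventional Kw
Kw = 1e-14                # [H+][OH-] in dilute aqueous solution
Ka_water = Kw / 55.5      # the 1.8x10^-16 value quoted above
print(f"{Ka_water:.2e}")  # -> 1.80e-16
```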
#### Borek
• Mr. pH
• Deity Member
• Posts: 25889
• Mole Snacks: +1693/-401
• Gender:
• I am known to be occasionally wrong.
##### Re: benzoin condensation
« Reply #28 on: July 05, 2015, 12:42:12 PM »
Because the total volume does not change (or practically, it does not significantly change).
How is this related to the problem in question? Are you saying that because the volume doesn't change significantly, concentrations don't change either? If you start with an anhydrous alcohol containing several ppm of water, and you add 1 mL of water to 1 L of alcohol, the concentration ratio changes a hundredfold, while the volume change is for most practical purposes insignificant.
ChemBuddy chemical calculators - stoichiometry, pH, concentration, buffer preparation, titrations.info, pH-meter.info
#### pgk
• Chemist
• Full Member
• Posts: 892
• Mole Snacks: +97/-24
##### Re: benzoin condensation
« Reply #29 on: July 05, 2015, 02:10:13 PM »
Assuming that the total volume does not significantly change, when mixing ROH with water:
Mixing aqueous KOH 1N and alcoholic KOH 1N, we get an initial mixture of KOH 1N per total volume unit that will immediately be transformed (ionic reaction) to (say) KOH 0.995 N and ROK 0.005 N per total volume and (say) KOH 1.98 N and ROK 0.02 N per ROH volume or a ratio 0.99/0.01
The relative concentrations of KOH and ROK may change by further dilution with either water or ROH or their mixture, but the concentration ratio 0.99/0.01 will remain constant when calculated per ROH partition.
As a reminder, the latter discussion refers to anhydrous ethanolic KOH.
« Last Edit: July 05, 2015, 02:24:51 PM by pgk »
|
{}
|
Prime number theorem for knots?
Is there an analog to the prime-number theorem describing the distribution of the prime numbers among the integers: A theorem that describes the distribution of the prime knots, perhaps with respect to the knots with $k$-crossings? Or a distribution with respect to some other relevant knot parameter? I.e., is there any analog to $$\pi(x) \sim \frac{x}{\log(x)} \; \mathrm{?}$$
• OEIS has some info. – Arthur May 27 '18 at 23:36
• @Arthur: Thanks! Almost too many references, and no clear distribution formula I can see. But much to penetrate to be certain. – Joseph O'Rourke May 28 '18 at 0:06
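For what it's worth, the counts of prime knots by minimal crossing number (OEIS A002863; the values below are transcribed, so verify against the entry) grow roughly exponentially, which already suggests any knot analogue would look very different from $\pi(x) \sim x/\log x$:

```python
# Number of prime knots with n crossings, chiral pairs counted once (OEIS A002863)
prime_knots = {3: 1, 4: 1, 5: 2, 6: 3, 7: 7, 8: 21, 9: 49, 10: 165}

# Successive ratios: noisy at first, but trending toward a factor of ~3,
# i.e. roughly exponential growth in the crossing number
ratios = [prime_knots[n + 1] / prime_knots[n] for n in range(3, 10)]
print([round(r, 2) for r in ratios])  # [1.0, 2.0, 1.5, 2.33, 3.0, 2.33, 3.37]
```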
|
{}
|
# Does Zhang's theorem generalize to $3$ or more primes in an interval of fixed length?
Let $p_n$ be the $n$-th prime number, as usual: $p_1 = 2$, $p_2 = 3$, $p_3 = 5$, $p_4 = 7$, etc.
For $k=1,2,3,\ldots$, define $$g_k = \liminf_{n \rightarrow \infty} (p_{n+k} - p_n).$$ Thus the twin prime conjecture asserts $g_1 = 2$.
Zhang's theorem (= weak twin prime conjecture) asserts $g_1 < \infty$.
The prime $m$-tuple conjecture asserts $g_2 = 6$ (infinitely many prime triplets), $g_3 = 8$ (infinitely many prime quadruplets), "etcetera" (with $m=k+1$).
Can Zhang's method be adapted or extended to prove $g_k < \infty$ for any (all) $k>1$?
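As a purely empirical aside (this proves nothing about the liminf values — small primes attain smaller one-off differences), a sieve below $10^5$ shows how often the conjectured minimal patterns already occur:

```python
def primes_below(limit):
    """Sieve of Eratosthenes returning all primes < limit."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, limit, i)))
    return [i for i in range(limit) if sieve[i]]

p = primes_below(100_000)
for k, conjectured in [(1, 2), (2, 6), (3, 8)]:
    gaps = [p[n + k] - p[n] for n in range(len(p) - k)]
    hits = sum(1 for g in gaps if g == conjectured)
    print(f"k={k}: min gap {min(gaps)}, gap == {conjectured} occurs {hits} times")
```

The one-off smaller minima come from the very first primes (e.g. $p_3 - p_1 = 5 - 2 = 3$); the conjectures concern the differences attained infinitely often.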
Added a day later: Thanks for all the informative comments and answers! To summarize and update (I hope I'm attributing correctly):
0) [Eric Naslund] The question was already raised in the Goldston-Pintz-Yıldırım paper. See Question 3 on page 3:
Assuming the Elliott-Halberstam conjecture, can it be proved that there are three or more primes in admissible $k$-tuples with large enough $k$? Even under the strongest assumptions, our method fails to prove anything about more than two primes in a given tuple.
1) [several respondents] As things stand now, it does not seem that Zhang's technique or any other known method can prove finiteness of $g_k$ for $k > 1$. The main novelty of Zhang's proof is a cleverly weakened estimate a la Elliott-Halberstam, which is well short of "the strongest assumptions" mentioned by G-P-Y.
2) [GH] For $k>1$, the state of the art remains for now as it was pre-Zhang, giving nontrivial bounds not on $g_k$ but on $$\Delta_k := \liminf_{n \rightarrow \infty} \frac{p_{n+k} - p_n}{\log n}.$$ The Prime Number Theorem (or even Čebyšev's technique) trivially yields $\Delta_k \leq k$ for all $k$; anything less than that is nontrivial. Bombieri and Davenport obtained $\Delta_k \leq k - \frac12$; the current record is $\Delta_k \leq e^{-\gamma} (k^{1/2}-1)^2$. This is positive for $k>1$ (though quite small for $k=2$ and $k=3$, at about $0.1$ and $0.3$), and for $k \rightarrow \infty$ is asymptotic to $e^{-\gamma} k$ with $e^{-\gamma} \approx 0.56146$.
3) [Nick Gill, David Roberts] Some other relevant links:
Terry Tao's June 3 exposition of Zhang's result and the work leading up to it;
The "Secret Blogging Seminar" entry and thread that has already brought the bound on $g_1$ from Zhang's original $7 \cdot 10^7$ down to below $5 \cdot 10^6$;
A PolyMath page that's keeping track of these improvements with links to the original arguments, supporting computer code, etc.;
A Polymath proposal that includes the sub-project of achieving further such improvements.
4) [Johan Andersson] A warning: phrases such as "large prime tuples in a given [length] interval" (from the Polymath proposal) refer not to configurations that we can prove arise in the primes but to admissible configurations, i.e. patterns of integers that could all be prime (and should all be prime infinitely often, according to the generalized prime $m$-tuple [a.k.a. weak Hardy-Littlewood] conjecture, which we don't seem to be close to proving yet). Despite appearances, such phrasings do not bear on a proof of $g_k < \infty$ for $k>1$, at least not yet.
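A quick numerical check of the constants quoted in point 2:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant
for k in (2, 3):
    bound = math.exp(-EULER_GAMMA) * (math.sqrt(k) - 1) ** 2
    print(f"Delta_{k} <= {bound:.4f}")
# Delta_2 <= 0.0963 and Delta_3 <= 0.3009 -- the "about 0.1 and 0.3" above
```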
-
Prior to Zhang's paper, even on the full Elliott-Halberstam conjecture no one was able to get bounded intervals with three primes. Note that Goldstone-Pintz-Yildirm did investigate this in arxiv.org/abs/math/0508185. Zhang's contribution is largely a specialized case of the EH conjecture, so I doubt it directly sheds any new light on this problem. On the other hand, perhaps the connection with Elliot-Halberstam was previously under investigated considering that any result would have been conditional. – Mark Lewko Jun 4 '13 at 19:17
"On the other hand, perhaps the connection with Elliot-Halberstam was previously under investigated considering that any result would have been conditional." I seriously dispute with this comment, as I think the whole field was pretty much discerned. GPY in particular made extensive effort to apply to $p_{n+\nu}-p_n$. GPY were also quite extensive in their results from partial EH, and indeed the intersection. (See Theorem 3 in Annals paper, one gets $(\sqrt\nu-\sqrt{2\theta})^2$ where $\theta$ is the distro level, and an extra factor of $e^{-\gamma}$ from Maier matrix later). – v08ltu Jun 4 '13 at 20:16
@v08ltu, As I pointed out in my comment GPY did investigate this problem in their original prime tuple paper (and, subsequently returned to the topic in follow-up papers). Certainly their work was a major breakthrough on a problem that was stuck for a long time. By "possibly under investigated" I really meant something like "I don't see any reason to believe the current best results in this direction are fundamentally the limitation of the ideas involved, or that I would be shocked to see further improvements in this direction the way I would be to, say, see twin primes proved by the method" – Mark Lewko Jun 4 '13 at 20:56
Fresh and relevant: arxiv.org/abs/1306.0948 – GH from MO Jun 6 '13 at 6:24
Now this seems to have been decisively answered! See arxiv.org/pdf/1311.4600.pdf – Lucia Nov 20 '13 at 2:06
Edit (20/11/2013) : Yesterday James Maynard posted the paper Small gaps between primes on the arxiv in which he shows that for any $m$ there exists a constant $C_m$ such that $$p_{n+m}-p_n\leq C_m$$ infinitely often. More about this result can be found on Terence Tao's blog, or in this expository article by Andrew Granville.
In Goldston, Pintz, and Yildirim paper Primes in tuples I, they show that under the assumption of the Elliott Halberstam Conjecture,
$$\liminf_{n\rightarrow\infty}p_{n+1}-p_n \leq 16$$
and they leave the following question on page 3:
Question 3. Assuming the Elliott-Halberstam conjecture, can it be proved that there are three or more primes in admissible k-tuples with large enough k? Even under the strongest assumptions, our method fails to prove anything about more than two primes in a given tuple.
From what I understand, the issue is increasing a coefficient from $1$ to $2$.
Let $\mathcal{H}=\left\{ 1,\dots,h_{k}\right\}$ be our admissible set, and suppose that $\max_{i}h_{i}\leq x.$ The approach is to look at the sum
$$\sum_{x<n\leq 2x}\left(\sum_{i=1}^{k}\vartheta\left(n+h_{i}\right)-\log(3x)\right)W(n),$$
where $\vartheta(n)=1_{\mathcal{P}}(n)\log n$, $1_{\mathcal{P}}(n)$ is the indicator function for the primes, and $W(n)$ is a positive weight function. If this sum is positive, then one of the terms must be positive, so for some $x<n\leq2x$ we have
$$\sum_{i=1}^{k}\vartheta\left(n+h_{i}\right)>\log(3x),$$
and since $\log(n+h_{i})\leq\log(3x)$ for all $n$ in our range, it follows that there are at least two indices $i\neq j$ such that
$$\vartheta(n+h_{i}),\ \vartheta(n+h_{j})\neq0.$$
Selberg advocated that in general for ease of calculation one should take a positive weight function to be a square, $W(n)=\lambda(n)^{2},$ so the goal is to prove the inequality $$\sum_{x<n\leq2x}\sum_{i=1}^{k}\vartheta\left(n+h_{i}\right)\lambda(n)^{2}>\log(3x)\sum_{x<n\leq2x}\lambda(n)^{2}$$
for some choice of $\lambda(n).$ In Goldston, Pintz, and Yildirim's paper, they choose
$$\lambda(n)=\frac{1}{\left(k+l\right)!}\sum_{\begin{array}{c} d|P(n)\\ d\leq R \end{array}}\mu(d)\log\left(\frac{R}{d}\right)^{k+l}$$
where $P(n)=\prod_{j=1}^{k}\left(n+h_{j}\right)$, and $R$ depends on $x$. To use the same approach for $3$ terms, we would need to examine the sum
$$\sum_{x<n\leq2x}\left(\sum_{i=1}^{k}\vartheta\left(n+h_{i}\right)-2\log(3x)\right)\lambda(n)^{2},$$
and show that
$$\sum_{x<n\leq2x}\sum_{i=1}^{k}\vartheta\left(n+h_{i}\right)\lambda(n)^{2}>2\log(3x)\sum_{x<n\leq2x}\lambda(n)^{2},$$
for a suitable choice of $\lambda(n)$. Increasing the coefficient to a $2$ seems to be a fundamental issue, and hopefully an expert can explain why this is the case.
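To make the weight above concrete, here is a deliberately naive evaluation of $\lambda(n)$ for a toy admissible set $\mathcal{H}=\{0,2,6\}$ (so $k=3$), $l=1$, and a small $R$ — illustrative only; nothing here reflects the paper's actual parameter choices or the efficient ways these sums are really computed:

```python
import math

def mobius(n):
    """Moebius function by trial division (fine for small n)."""
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor => mu = 0
            result = -result
        p += 1
    if n > 1:
        result = -result          # one remaining prime factor
    return result

def gpy_weight(n, H=(0, 2, 6), R=10, l=1):
    """lambda(n) = (1/(k+l)!) * sum_{d | P(n), d <= R} mu(d) * log(R/d)^(k+l)."""
    k = len(H)
    P = math.prod(n + h for h in H)
    total = sum(mobius(d) * math.log(R / d) ** (k + l)
                for d in range(1, R + 1) if P % d == 0)
    return total / math.factorial(k + l)

print(round(gpy_weight(1), 3))  # P(1) = 1*3*7 = 21; divisors <= 10 are 1, 3, 7
```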
-
And nothing more can be expected (in fact something less such as a larger constant than 16) to be gained from Zhang's approach than Goldston-Yildirim-Pintz + Elliot-Halberstam! The only reason to possibly expect such progress is that Zhang's paper has resulted in more mathematicians learning the field and trying to improve stuff. But it should not be something simple. Rather another breakthrough is needed. – Johan Andersson Jun 4 '13 at 16:13
This is slightly too long for a comment....
Terry Tao has just written a long post on Zhang's result and the generalization to tuples. He also links, at the bottom, to a polymath proposal for improving Zhang's bounds, and for extending the result to tuples, as per the OP.
It seems to me that those two sites are the best places to go for a description of the state-of-the-art on this question.
-
Thank you for these links. I already saw Tao's post, but didn't see that it claimed results on prime tuples, only on admissible tuples (which are a key tool in Zhang's proof but aren't themselves known to correspond to prime tuples). The polymath proposal does explicitly target "Finding narrow prime tuples of a given cardinality (or, dually, finding large prime tuples in a given interval)" as part of Part 1 of the project, but doesn't seem to say whether it's known that Zhang's techniques can give such a result. – Noam D. Elkies Jun 4 '13 at 14:56
@Noam, Tao's formulation of the polymath proposal suggests to me that (as far as he can see) Zhang's techniques may well yield results for prime tuples, but that some work needs to be done to establish this for certain... – Nick Gill Jun 4 '13 at 15:10
Nick, It sounds like that but I think it might be a misunderstanding. Finding the narrow prime tuples that Tao talks about I think seems related to given a small k0 finding an optimal admissible set H. (that means just finding one such set, not infinitely many) to get as small gap as possible. On the link he gives, such matters are discussed (finding smallest k0 and optimal admissible set). – Johan Andersson Jun 4 '13 at 15:27
As for the original question, I do not think that anyone has any ideas how to treat more than two primes (I might not be correct of course) – Johan Andersson Jun 4 '13 at 15:28
This has now been achieved by James Maynard in "Small gaps between primes."
-
It is probably worth noting that Maynard's method is significantly different from Zhang's (and indeed, is more elementary). Of course this is all explained in the well-written paper. – Sam Hopkins Nov 20 '13 at 15:24
Good question! Perhaps it is worthwhile to note that Goldston-Pintz-Yildirim asked a similar question in their original paper (Primes in tuples I, Question 3, Page 822). As of now it is not even known if $$\liminf_{n \rightarrow \infty} \frac{p_{n+2} - p_n}{\log n}=0.$$
Added. To answer Noam's question addressed in his comment below, let us denote $$\Delta_\nu:= \liminf_{n \rightarrow \infty} \frac{p_{n+\nu} - p_n}{\log n},$$ then Bombieri-Davenport (1965) proved $\Delta_\nu\leq\nu-\frac{1}{2}$, which was improved by Huxley, Maier, Goldston-Yildirim in several papers. The current best result, as far as I know, appears in Goldston-Pintz-Yildirim: Primes in tuples III, Funct. Approx. Comment. Math. 35 (2006), 79–89.), namely $$\Delta_\nu\leq e^{-\gamma}(\sqrt{\nu}-1)^2.$$
-
Thanks for this chapter-and-verse reference (yes, it must be "worthwhile"). So is it known that $\liminf_{n \rightarrow \infty} \frac{p_{n+2} - p_n}{\log n} < 2$? Even this does not follow logically from the G-P-Y or Zhang results. – Noam D. Elkies Jun 4 '13 at 19:13
@Noam: I added some information to my response. – GH from MO Jun 4 '13 at 19:32
Thanks. This last formula $\Delta_\nu \leq e^{-\gamma} (\nu-1)^2$ can't be right, though: it's too strong for $\nu=1$, and too weak for $\nu \geq 4$ (since it then exceeds $\nu$ itself). Did you mean something else? – Noam D. Elkies Jun 4 '13 at 19:38
@Noam: Sorry, I had a typo, $\nu-1$ should be $\sqrt{\nu}-1$. For $\nu=1$ the result is the original breakthrough of Goldston-Pintz-Yildirim (very small gaps between primes). – GH from MO Jun 4 '13 at 19:43
I see, that makes sense now, thanks. – Noam D. Elkies Jun 4 '13 at 19:57
Pintz paper from arxiv last week
"Polignac Numbers, Conjectures of Erdös on Gaps between Primes, Arithmetic Progressions in Primes, and the Bounded Gap Conjecture" http://arxiv.org/abs/1305.6289,
proves some results in this direction. His result is in a sense much stronger than Zhang's, as he proves in his Main Theorem that if $\mathcal H= \{ h_1,\ldots,h_{k} \}$ is an admissible set with $k>k_0=3.5 \cdot 10^6$ (this constant $k_0$ seems to have been improved recently; see Terence Tao's website), we can find infinitely many (and he also gives a lower bound for how many less than $x$) $n$'s such that the $k$-tuples $n+\mathcal H$ contain two primes while all other elements are almost primes (with a bounded number $c$ of prime factors, where the bound $c=c(k)$ depends on $k$). Thus he proves infinitely many "two primes + any fixed number of almost primes" in a bounded range.
Of course he uses Zhang's method of proof (as well as some of his previous results/methods).
-
$k_0 = 341640$ at present, see michaelnielsen.org/polymath1/… for progress and references. – David Roberts Jun 4 '13 at 22:46
$k_0=34429$ at the moment – GH from MO Jun 6 '13 at 22:42
This is just another comment: in this paper http://arxiv.org/abs/1306.0948 James Maynard generalizes the previous result of Pintz and gives an explicit bound on the number of divisors of an almost prime. More precisely, there are infinitely many $n$ such that the interval $[n, n+10^8]$ contains two primes and an almost prime with at most $31$ prime divisors. Both bounds $31$ and $10^8$ can be improved by using recent improvements of the Polymath project.
Also, in the recent talk, he mentioned that by modifying the weights $\lambda (n)$ in GPY and Zhang and assuming Elliott-Halberstam one can bring the bound $p_{n+1}-p_n\le 16$ to $p_{n+1}-p_n\le 12.$
-
|
{}
|
# Electron electric dipole moment as a sensitive probe of PeV scale physics
@article{Ibrahim2014ElectronED,
title={Electron electric dipole moment as a sensitive probe of PeV scale physics},
author={Tarek Ibrahim and Ahmad M. Itani and P. Nāth},
journal={Physical Review D},
year={2014},
volume={90},
pages={055006}
}
• Published 2014
• Physics
• Physical Review D
• We give a quantitative analysis of the electric dipole moments as a probe of high scale physics. We focus on the electric dipole moment of the electron since the limit on it is the most stringent. Further, theoretical computations of it are free of QCD uncertainties. The analysis presented here first explores the probe of high scales via electron EDM within MSSM where the contributions to the electric dipole moment (EDM) arise from the chargino and the neutralino exchanges in loops. Here it is…
29 Citations
|
{}
|
Chapter 23
The endogenous corticosteroids are produced by the adrenal cortex and are essential for life. As with the gonadal steroid hormones discussed in Chapter 22, corticosteroids are synthesized from cholesterol. They comprise two major physiologic and pharmacologic groups, glucocorticoids and mineralocorticoids (Figure 23–1). Glucocorticoids have important effects on intermediary metabolism, catabolism, immune responses, and inflammation. Mineralocorticoids regulate sodium and potassium transport in the collecting tubules of the kidney. A third group, the adrenal androgens (dehydroepiandrosterone [DHEA] and androstenedione) constitute the major endogenous precursors of estrogen in females in whom ovarian function is deficient or absent (e.g., postmenopausal) and in preadolescent males. Drugs that modulate the physiologic effects of endogenous corticosteroids either mimic the corticosteroids or inhibit corticosteroid synthesis or receptor interactions.
###### Figure 23–1.
Classification of drugs that mimic or block the effects of endogenous corticosteroids.
Like other hormones under the control of the hypothalamic and pituitary endocrine system, glucocorticoids provide feedback inhibition of their own production by acting in the hypothalamus and pituitary. Glucocorticoids inhibit the production of corticotropin-releasing factor (CRF) within the hypothalamus and adrenocorticotropic hormone (ACTH) within the pituitary. CRF controls the release of ACTH, which in turn regulates corticosteroid production within the adrenal cortex. A key action of exogenous glucocorticoids is activation of this feedback inhibition system with subsequent suppression of endogenous adrenal steroid production. After chronic treatment with exogenous glucocorticoids, recovery of the endogenous system takes weeks to months.
A large number of synthetic glucocorticoids are available. They can be delivered by a variety of routes, including oral, intravenous, intra-articular, and topical.
### Mechanism of Action
Steroid hormones enter the cell and bind to cytosolic receptors. The complex of the receptor and its bound steroid translocates to the nucleus where it alters gene expression by binding to glucocorticoid response elements (GREs) (Figure 23–2). Tissue-specific responses to steroids are made possible by the presence of different protein regulators in each tissue that control the interaction between the hormone-receptor complex, other transcription factors, and particular response elements.
###### Figure 23–2.
Mechanism of glucocorticoid action. This figure models the interaction of a steroid (S), with its receptor (R) and the subsequent events in a target cell. The steroid is present in the blood—bound to the corticosteroid-binding globulin (CBG)—but it enters the cell as the free form. The intracellular receptor (R) is bound to stabilizing proteins, including heat shock protein 90 (Hsp90) and several others. When the complex binds a molecule of steroid, the Hsp90 and associated molecules are released. The steroid-receptor complex enters the nucleus as a dimer, binds to the glucocorticoid response element (GRE), and thereby regulates gene transcription by RNA polymerase II and associated transcription factors. The resulting mRNA is edited and exported to ...
|
{}
|
Diffraction with a large array of slits
1. May 27, 2014
Flucky
Hi all, exams soon and I'm stressing out over this small question. If anyone could guide me through, explaining why you're doing what you're doing that'd be beyond great.
1. The problem statement, all variables and given/known data
Light of wavelength λ is incident normally on a screen with a large array of slits having
equal widths b, and periodically displaced by a distance a.
(i) Find the maximum diffraction order which can be observed using this system of slits.
(ii) Find the minimum period a for which diffraction can be observed for light with
wavelength λ = 10µm.
2. Relevant equations
AFAIK the only equation relevant is asinθ = mλ
One that has cropped up is $\sin\theta_{\pm 1} = \pm\frac{\lambda}{b}$, although there is no explanation next to this one so I'm not sure what it means.
3. The attempt at a solution
Initial thoughts are to set θ = 90° as it's asking for a maximum. Past this I don't know where to go
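A quick numerical sketch of that idea (editorial aside, not from the thread; the numbers below are made up for illustration): setting $\sin\theta = 1$ in the grating equation $a\sin\theta = m\lambda$ gives the maximum order $m_{\max} = \lfloor a/\lambda \rfloor$.

```python
import math

def max_order(a, wavelength):
    # Largest integer m with m * wavelength <= a, i.e. sin(theta) <= 1
    return math.floor(a / wavelength)

# Illustrative numbers only: period a = 25 um, wavelength 10 um
print(max_order(25e-6, 10e-6))  # -> 2

# Part (ii): the first order (m = 1) is only observable when a >= lambda,
# so the minimum period for lambda = 10 um is a = 10 um.
print(max_order(10e-6, 10e-6))  # -> 1
```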
2. May 28, 2014
haruspex
It's not hard to find articles on the subject on the net. Did you try to follow any of those?
3. May 28, 2014
Flucky
I've had a look all round but can't seem to find anything that helps, unless I'm missing something obvious. I've played around with a few applets as well. The things I found focused on the first equation, and the questions would often be "find the #th maxima" with slit width etc given.
The 90 degrees being a maximum I understand and max diffraction occurs when the slit width equals the wavelength, but for the first part of the question that still leaves me with 3 unknown variables.
Also I created another thread over in the other HW section (https://www.physicsforums.com/showthread.php?p=4758792) thinking I could delete this one. Don't know if the mods would like to merge/delete one.
|
{}
|
# Abstract Algebra | Cosets
1. Isomorphisms
We see that cyclic group of order $d$ looks very similar to the group $\left(\mathbb{Z}_{d},+\right) .$ In some sense this is the same group, and to make this claim precise we introduce a new notion.
2. Definition 1: Isomorphism
A group isomorphism between groups $\left(G_{1}, *_{1}\right)$ and $\left(G_{2}, *_{2}\right)$ is a bijection
$$\varphi: G_{1} \rightarrow G_{2}$$
which respects the operations $*_{1}$ and $*_{2}$, i.e., for any $x, y \in G_{1}$ we have
$$\varphi\left(x *_{1} y\right)=\varphi(x) *_{2} \varphi(y) .$$
3. Proposition 1
Let $\varphi: G_{1} \rightarrow G_{2}$ be an isomorphism. Then $\varphi^{-1}: G_{2} \rightarrow G_{1}$ is also an isomorphism.
Proof. Since $\varphi$ is a bijection, $\varphi^{-1}$ is also a bijection.
It remains to check that $\varphi^{-1}$ respects multiplication. Let $a, b, \in G_{2}$ be any two elements. Since $\varphi$ is a bijection, we can uniquely write
$$a=\varphi(x) \quad b=\varphi(y)$$
for the corresponding $x$ and $y$ in $G_{1}$. Then
$$\varphi^{-1}(a) *_{1} \varphi^{-1}(b)=\varphi^{-1}(\varphi(x)) *_{1} \varphi^{-1}(\varphi(y))=x *_{1} y$$
Applying bijection $\varphi$ to the right hand side we would get
$$\varphi\left(x *_{1} y\right)=a *_{2} b$$
since $\varphi$ respects multiplication. Thus $x *_{1} y=\varphi^{-1}\left(a *_{2} b\right)$, so we conclude
$$\varphi^{-1}(a) *_{1} \varphi^{-1}(b)=\varphi^{-1}\left(a *_{2} b\right)$$
proving that $\varphi^{-1}$ also respects multiplication.
4. Example 1
1. The exponential function defines an isomorphism $(\mathbb{R},+) \rightarrow\left(\mathbb{R}_{>0}, \cdot\right)$.
2. We have encountered at least two groups of order two, namely $\left(\mathbb{Z}_{2},+\right)$ and $(\{1,-1\}, \cdot)$. The map
$$\mathbb{Z}_{2} \rightarrow\{1,-1\}$$
sending 0 to 1 and 1 to $-1$ gives an isomorphism between the two.
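This isomorphism is small enough to verify exhaustively. A quick sketch, assuming nothing beyond the definitions above:

```python
phi = {0: 1, 1: -1}  # the map Z_2 -> {1, -1} from Example 1

# phi is a bijection between {0, 1} and {1, -1} ...
assert sorted(phi.values()) == [-1, 1]

# ... and respects the operations: addition mod 2 on one side,
# multiplication on the other.
for x in (0, 1):
    for y in (0, 1):
        assert phi[(x + y) % 2] == phi[x] * phi[y]

print("phi is an isomorphism")
```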
5. Definition 2
Two groups $G$ and $G^{\prime}$ are said to be isomorphic if there exists an isomorphism $\varphi: G \rightarrow G^{\prime} .$
Isomorphic groups have exactly the same properties (same order, isomorphic cyclic subgroups etc.), so we often can identify them with each other.
Claim. Being isomorphic is an equivalence relation on the set of all groups.
The main problem of the group theory is classification of groups up to the isomorphism equivalence relation.
6. Proposition 2
A cyclic group of infinite order is isomorphic to $\mathbb{Z}$.
7. Proposition 3
Let $n \geqslant 2$ be an integer. Any cyclic group of order $n$ is isomorphic to $\mathbb{Z}_{n}$.
8. Example 2: Direct product $\mathbb{Z}_{2} \times \mathbb{Z}_{2}$
The simplest example of this construction is the group $\mathbb{Z}_{2} \times \mathbb{Z}_{2}$ with respect to addition in both factors. It has elements
$$\{(0,0),(0,1),(1,0),(1,1)\}$$
and the operation on the above pairs is coordinate-wise addition modulo 2.
9. Example 3
Let us now give some examples of how to prove that two groups are not isomorphic.
1. The groups $\mathbb{Z}_{4}$ and $\mathbb{Z}_{2} \times \mathbb{Z}_{2}$ are not isomorphic, the first one being cyclic of order 4, whereas the second one has only elements of order at most 2.
2. The groups $\mathbb{Q}$ and $\mathbb{Z}$ are not isomorphic. Indeed, assume we have an isomorphism $\varphi: \mathbb{Z} \rightarrow \mathbb{Q}$ and denote $\varphi(1)=a$. By surjectivity of $\varphi$, there exists an integer $n$ such that $\varphi(n)=\frac{a}{2}$. But since $\varphi$ is a homomorphism, we must have $\varphi(2 n)=2 \varphi(n)=a$, so that by injectivity of $\varphi, 2 n=1$, which is a contradiction. Note that here the argument relied on the fact that in the group $\mathbb{Q}$ one can divide by 2 indefinitely, whereas this is not possible in $\mathbb{Z}$.
Another way of seeing this is by remarking that for all $n \in \mathbb{Z}$, we have $\varphi(n)=n \varphi(1)$. This means that the denominator of the rational number $n \varphi(1)$ is at most the denominator of $\varphi(1) .$ Since the denominators of elements of $\mathbb{Q}$ can be arbitrarily large, this means that $\varphi$ cannot be surjective.
3. The additive group $(\mathbb{Q},+)$ is not isomorphic to the multiplicative group $\left(\mathbb{Q}^{\times}, \cdot\right)$. Indeed, let $\varphi:\left(\mathbb{Q}^{\times}, \cdot\right) \rightarrow(\mathbb{Q},+)$ be an isomorphism. Put $\varphi(2)=a$. By surjectivity of $\varphi$, there is a rational number $x$ such that $\varphi(x)=\frac{a}{2}$. Then $\varphi(x \cdot x)=\varphi(x)+\varphi(x)=a$, so by injectivity, $x^{2}=2$. This is impossible since there is no rational number $x$ satisfying this. This argument is similar to the one in the previous example: here we used that dividing by 2 in the additive setting corresponds to taking square roots in the multiplicative setting, which is not always possible in the rationals.
10. Cosets and Lagrange’s theorem
11. Left and right cosets
12. Definition 3
Let $G$ be a group and $H$ a subgroup of $G .$ A left coset of $H$ is a subset of $G$ of the form
$$g H=\{g h, h \in H\} \text {. }$$
In the same way, we can define right cosets to be $H g=\{h g, h \in H\}$ for $g \in G$.
13. Remark 1
The group $G$ itself is both a left coset and a right coset of itself, taking $g=e$ the identity element of $G$: $G=e G=G e$. More generally, for all $g \in G$, we have $G=g G=G g$.
14. Remark 2
Left and right cosets of $H$ are the same if the group $G$ is abelian, but in general they may be different. For an abelian group, we will often use additive notation and write both types of cosets in the form $g+H .$
15. Example 4
The cosets of $H=\{0,3\}$ in $\mathbb{Z}_{6}$ are
\begin{aligned} &0+H=3+H=\{0,3\} \\ &1+H=4+H=\{1,4\} \\ &2+H=5+H=\{2,5\} \end{aligned}
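Example 4 can be reproduced by brute force. The sketch below enumerates the left cosets of $H=\{0,3\}$ in $\mathbb{Z}_{6}$:

```python
def coset(g, H, n):
    # The coset g + H in the additive group Z_n
    return frozenset((g + h) % n for h in H)

H = {0, 3}
cosets = {coset(g, H, 6) for g in range(6)}
print(sorted(sorted(c) for c in cosets))  # -> [[0, 3], [1, 4], [2, 5]]
```

Note that all six choices of $g$ produce only three distinct cosets, which partition $\mathbb{Z}_{6}$, as Lagrange's theorem predicts.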
|
{}
|
# Mathematical proof
A mathematical proof is a way to show that a math theorem is true. One must show that the theorem is true in all cases.
There are different ways of proving a mathematical theorem.
## Proof by Induction
One type of proof is called proof by induction. This is usually used to prove a theorem that is true for all numbers. There are 4 steps in a proof by induction.
1. State that the proof will be by induction, and state which variable will be used in the induction step.
2. Prove that the statement is true for some beginning case.
3. Assume that the statement is true for some value n = n0 and has all of the properties listed in the statement. This assumption is called the induction hypothesis.
4. Show that the statement is then true for the next value, n0+1. This is called the induction step.
Once that is shown, then it means that for any value of n that is picked, the next one is true. Since it's true for some beginning case (usually n=1), then it's true for the next one (n=2). And since it's true for 2, it must be true for 3. And since it's true for 3, it must be true for 4, etc. Induction shows that it is always true, precisely because it's true for whatever comes after any given number.
An example of proof by induction:
Prove that for all natural numbers n, 2(1+2+3+....+n-1+n)=n(n+1)
Proof: First, the statement can be written "for all natural numbers n, 2${\displaystyle \sum _{k=1}^{n}k}$=n(n+1)".
By induction on n,
First, for n=1, 2${\displaystyle \sum _{k=1}^{1}k}$=2(1)=1(1+1), so this is true.
Next, assume that for some n=n0 the statement is true. That is, 2${\displaystyle \sum _{k=1}^{n_{0}}k}$ = n0(n0+1)
Then for n=n0+1, 2${\displaystyle \sum _{k=1}^{{n_{0}}+1}k}$ can be rewritten 2(n0+1) + 2${\displaystyle \sum _{k=1}^{n_{0}}k}$
Since 2${\displaystyle \sum _{k=1}^{n_{0}}k}$ = n0(n0+1), 2(n0+1) + 2${\displaystyle \sum _{k=1}^{n_{0}}k}$ = 2(n0+1) + n0(n0+1)
So 2${\displaystyle \sum _{k=1}^{{n_{0}}+1}k}$ = 2(n0+1) + n0(n0+1) = (n0+1)(n0+2) = (n0+1)((n0+1)+1), which is the statement for n = n0+1, completing the induction.
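Induction proves the identity for every natural number at once; a computer can only spot-check finitely many cases, but such a check is still a useful sanity test:

```python
# Spot-check 2(1 + 2 + ... + n) = n(n + 1) for the first thousand n
for n in range(1, 1001):
    assert 2 * sum(range(1, n + 1)) == n * (n + 1)
print("identity holds for n = 1..1000")
```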
## Proof by Contradiction
Proof by contradiction is a way of proving a mathematical theorem by showing that if the statement is false, there is a problem with the logic of the proof. That is, if one of the results of the theorem is assumed to be false, then the proof does not work.
When proving a theorem by way of contradiction, it is important to note that in the beginning of the proof. This is usually abbreviated BWOC. When the contradiction appears in the proof, there is usually an X made with 4 lines instead of 2 placed next to that line.
## Other Examples of Proofs
Prove that ${\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac\ }}}{2a}}}$ follows from ax²+bx+c=0 (with a non-zero). The proof uses the completing-the-square identity
${\displaystyle x^{2}+2xy+y^{2}=(x+y)^{2}\!}$.
Dividing the quadratic equation
${\displaystyle ax^{2}+bx+c=0\!}$
by a (which is allowed because a is non-zero), gives:
${\displaystyle x^{2}+{\frac {b}{a}}x+{\frac {c}{a}}=0\!}$
or
${\displaystyle x^{2}+{\frac {b}{a}}x=-{\frac {c}{a}}\qquad (1)\!}$
The quadratic equation is now in a form in which completing the square can be done. To "complete the square" is to find some number k so that
${\displaystyle x^{2}+{\frac {b}{a}}x+k=x^{2}+2xy+y^{2}\!}$
for another number y. In order for these equations to be true,
${\displaystyle y={\frac {b}{2a}}\!}$
and
${\displaystyle k=y^{2}\!}$
so
${\displaystyle k={\frac {b^{2}}{4a^{2}}}\!}$.
Adding this number to equation (1) makes
${\displaystyle x^{2}+{\frac {b}{a}}x+{\frac {b^{2}}{4a^{2}}}=-{\frac {c}{a}}+{\frac {b^{2}}{4a^{2}}}\!}$.
The left side is now a perfect square because
${\displaystyle x^{2}+{\frac {b}{a}}x+{\frac {b^{2}}{4a^{2}}}=\left(x+{\frac {b}{2a}}\right)^{2}\!}$
The right side can be written as a single fraction, with common denominator 4a2. This gives
${\displaystyle \left(x+{\frac {b}{2a}}\right)^{2}={\frac {b^{2}-4ac}{4a^{2}}}\!}$.
Taking the square root of both sides gives
${\displaystyle \left|x+{\frac {b}{2a}}\right|={\frac {\sqrt {b^{2}-4ac\ }}{|2a|}}\Rightarrow x+{\frac {b}{2a}}=\pm {\frac {\sqrt {b^{2}-4ac\ }}{2a}}\!}$.
Getting x by itself gives
${\displaystyle x=-{\frac {b}{2a}}\pm {\frac {\sqrt {b^{2}-4ac\ }}{2a}}={\frac {-b\pm {\sqrt {b^{2}-4ac\ }}}{2a}}\!}$.
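The derivation above translates directly into code. This sketch handles only the real-discriminant case and assumes a ≠ 0:

```python
import math

def solve_quadratic(a, b, c):
    # Roots of ax^2 + bx + c = 0 via the formula derived above
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_quadratic(1, -3, 2))  # x^2 - 3x + 2 = 0  ->  (2.0, 1.0)
```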
|
{}
|
# How do you evaluate 5(3P2)?
Jan 29, 2016
5(3P2) on a calculator would give you 30.
#### Explanation:
3P2 means how many permutations of 2 can be achieved from a choice of three.
Say you have three choice A,B and C
You have 3 possible COMBINATIONS
AB
AC
BC
But with PERMUTATIONS the order is significant
AB but also BA
AC CA
BC CB
6 possibilities. You can shortcut this by doing 3P2 on a calculator. The formula would be (3!)/((3-2)!)
You are then multiplying this by 5, this would be 5x6 = 30
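The counting argument above can be checked with Python's standard library (`itertools.permutations`, and `math.perm`, available from Python 3.8):

```python
from itertools import permutations
from math import perm

# The six ordered pairs from {A, B, C}: AB, BA, AC, CA, BC, CB
assert len(list(permutations("ABC", 2))) == 6
assert perm(3, 2) == 6  # 3!/(3-2)!

print(5 * perm(3, 2))  # -> 30
```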
|
{}
|
Post Module Problem
$\Delta p \Delta x\geq \frac{h}{4\pi }$
Michael Du 1E
Posts: 117
Joined: Sun Sep 22, 2019 12:16 am
Post Module Problem
What would the uncertainty of position be for this problem? "The hydrogen atom has a radius of approximately 0.05 nm. Assume that we know the position of an electron to an accuracy of 1 % of the hydrogen radius, calculate the uncertainty in the speed of the electron using the Heisenberg uncertainty principle.
Comment on your value obtained." Thank you!
Jessica Tejero 3L
Posts: 54
Joined: Wed Sep 25, 2019 12:16 am
Re: Post Module Problem
The size of the atom represents the uncertainty in position. The radius of the atom represents 1/2 of the uncertainty of the position of the electron, the diameter equaling the entire uncertainty. Use this and the mass of an electron and the constant h/4pi to calculate delta-v, or the uncertainty in the electron's speed.
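Plugging the numbers in, taking Δx = 1% of the 0.05 nm radius as the question states (constants rounded; this is an illustration of the arithmetic, not the only reading of the problem):

```python
import math

h = 6.626e-34    # Planck's constant, J s
m_e = 9.109e-31  # electron mass, kg

dx = 0.01 * 0.05e-9  # 1% of the hydrogen radius, in metres
dv = h / (4 * math.pi * m_e * dx)
print(f"{dv:.2e} m/s")  # roughly 1.2e8 m/s -- a sizeable fraction of the speed of light
```

The huge result is the "comment on your value" part: confining an electron that tightly makes its speed uncertainty comparable to c.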
|
{}
|
# Rearrangement Reaction
1. Homework Statement
   ph ph
   |  |
ph-C--C-CH3  +  H+  ->
   |  |
   OH Br
2. Homework Equations
3. The Attempt at a Solution
Here, the OH is protonated and leaves as water. Then the phenyl group on the right carbon migrates to the left one, leaving a positive charge on the right carbon. What happens after that?
If there were another OH group instead of the bromine, the hydrogen on that OH would leave, forming a C=O. What happens here?
I'm sorry if the presentation is unclear, but I don't know how to use LaTeX for chemistry.
Related Biology and Chemistry Homework Help News on Phys.org
Gokul43201
Staff Emeritus
Gold Member
Have you drawn the compound correctly? It isn't supposed to be...
Code:
ph ph
| |
C-C-C-CH3
| |
HO Br
is it?
No. That didn't come out correctly. The compound is 1,1,2-triphenyl-1,2-propanediol.
GCT
Homework Helper
Use Chemdraw, it's free. Be sure to state the problem again, this time in a clearer form, mentioning all of the necessary information including those that were in the original post.
chemisttree
Homework Helper
Gold Member
I'm assuming that you meant to write 1,1,2-triphenyl-2-bromo-1-propanol.
You are correct that water will leave and generate a very stable benzylic cation which is further stabilized by the adjacent bromine. This will form a three membered ring, bromonium ion. The positive charge will reside on both the 1 and 2 carbons. This will probably not revert back to the vinyl compound through loss of Br+ (Br+ is a poor leaving group) even if Br- is present. What may happen is that the water might re-add to the bromonium ion system and regenerate the original bromohydrin. This will undoubtedly occur by the E1 mechanism and the fleeting, isolated charge (and thus the hydroxyl) will reside on the most stable carbonium ion... on the diphenyl-substituted carbon (the "1" position), regenerating starting material.
You might consider that the H+ adds to the bromine (in the neutral compound) and then loses HBr leaving behind a stabilized cation alpha to a hydroxyl group. If this happens, the oxygen of the adjacent alcohol could migrate over to form an epoxide and regenerate a proton. What do we know about epoxides and strong acids? Strong acids add to epoxides in a Markovnikov fashion and the product would be 1,1,2-triphenyl-1-bromo-2-propanol, a new product. This is unlikely to happen, however, since the hydroxyl is much more basic (a likely target for H+) than bromine. Even if it did happen, the first reaction (adding H+ to the OH group) would definitely occur and regenerate 1,1,2-triphenyl-2-bromo-1-propanol.
Therefore, if I had to provide the answer, I would submit the bromonium ion intermediate as the only reasonable answer.
|
{}
|
# How do you factor and solve (a+1)(a+2) = 0?
Jun 16, 2016
$a = - 1$ or $a = - 2$.
#### Explanation:
$\left(a + 1\right) \left(a + 2\right) = 0$ is already factorized into $\left(a + 1\right)$ and $\left(a + 2\right)$.
As product of $\left(a + 1\right)$ and $\left(a + 2\right)$ is zero, it means
either $a + 1 = 0$ or $a + 2 = 0$
i.e either $a = - 1$ or $a = - 2$.
|
{}
|
# The 12th International Workshop on the Physics of Excited Nucleons
Jun 10 – 14, 2019
Bonn, Campus Poppelsdorf
Europe/Zurich timezone
## EtaMAID-2019 for η and η' photoproduction on nucleons
Jun 11, 2019, 3:30 PM
30m
HS 7
### HS 7
Partial wave analyses and baryon resonance parameter extraction
### Speaker
Victor Kashevarov (Institut für Kernphysik, Mainz)
### Description
Results of the phenomenological analysis of $\eta$ and $\eta'$ photoproduction on protons and neutrons with the updated version of the EtaMAID model are presented. The model includes 23 nucleon resonances parameterized with Breit-Wigner shapes, $t$-channel exchange of vector and axial-vector mesons with Regge cuts, and Born terms in the $s$ and $u$ channels.
A new approach is discussed to avoid double counting in the overlap region of Regge and resonance contributions. Parameters of the resonances were obtained from a fit to available experimental data. The model describes well both differential cross sections and polarization observables for photoproduction of $\eta$ and $\eta'$ on the nucleons at photon beam energies from threshold up to 9 GeV.
A comparison is made among four newly updated partial wave analyses for observables and partial waves. The nature of the most interesting features in the data is discussed.
### Primary author
Victor Kashevarov (Institut für Kernphysik, Mainz)
|
{}
|
Woolz Image Processing Version 1.7.5
WlzCrossCorValue
Name
WlzCrossCorValue - computes the cross correlation value of two objects.
Synopsis
WlzCrossCorValue [-h] [-n] [-o<out obj>] [-u] [<in obj 0>] [<in obj 1>]
Options
-h Help, prints usage message.
-n Normalise the cross-correlation value by dividing it by the area/volume over which it is computed.
-o Output file for the cross-correlation value.
-u Compute the cross-correlation value within the union of the objects' domains. The default is to compute the cross-correlation value within the intersection of the objects' domains.
Description
Computes the cross correlation value (with zero shift) of two 2D spatial domain objects with grey values. The input objects are read from stdin and values are written to stdout unless the filenames are given.
Examples
WlzCrossCorValue -o out.num in0.wlz in1.wlz
The cross correlation value computed from the objects read from in0.wlz and in1.wlz is written to the file out.num.
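The arithmetic the tool performs can be sketched as follows (this is an illustration of the man page's description, not the Woolz implementation; `None` stands in for pixels outside an object's domain):

```python
def cross_cor_value(obj0, obj1, normalise=False):
    # Zero-shift cross-correlation over the intersection of the two domains:
    # sum of products of grey values where both objects are defined.
    total, area = 0, 0
    for row0, row1 in zip(obj0, obj1):
        for v0, v1 in zip(row0, row1):
            if v0 is not None and v1 is not None:
                total += v0 * v1
                area += 1
    return total / area if normalise else total

img0 = [[1, 2], [3, None]]
img1 = [[4, 5], [6, 7]]
print(cross_cor_value(img0, img1))        # 1*4 + 2*5 + 3*6 = 32
print(cross_cor_value(img0, img1, True))  # 32 / 3, the -n behaviour
```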
File
WlzCrossCorValue.c
|
{}
|
Scavenger Security
# Imaginary CTF 2021 - String Editor 2 [Pwn]
String Editor 2 is a pwn challenge from ImaginaryCTF 2021. We are given a compiled executable and the target server’s libc. The program is a very simple string editor that allows us to edit a 15 ch...
# WPICTF 2021 - Strong ARM [Pwn]
In this challenge we have an ELF binary that has been compiled for the aarch64 architecture. We do not need to reverse engineer the file because the source is already provided. First of all, we ex...
# RITSEC 2021 - Baby Graph [Pwn]
In this challenge we are given an ELF64 binary. The challenge consists of getting remote code execution and reading the flag. We need to determine whether a given graph is Eulerian to get a prize (...
# UMassCTF 2021 - Chains [Reversing]
Chains is a reversing challenge that got 20 solves. We are given a stripped aarch64 (ARM 64 bit) ELF binary called chains: \$ file chains chains: ELF 64-bit LSB pie executable, ARM aarch64, versio...
# Securinets CTF Quals 2021 - RUN! [Reversing]
RUN! is a reversing challenge that got 15 solves. We are a provided a Windows PE binary called wamup.exe and the following description: keygenme now! nc bin.q21.ctfsecurinets.com 2324 If we...
|
{}
|
Introduction
A significant amount of focus in statistics is on making inference about the averages or means of phenomena. For example, we might be interested in the average number of goals scored per game by a football team, or the average global temperature or the average cost of a house in a particular area.
The two types of averages that we usually focus on are the sample mean from a set of data and the expectation that comes from a probability distribution. For example if three men weigh 70kg, 80kg, and 90kg respectively then the sample mean of their weight is $\bar x = \frac{70+80+90}{3} = 80$. Alternatively, we might say that the arrival times of trains are exponentially distributed with parameter $\lambda = 3$ we can use the properties of the exponential distribution to find the mean (or expectation). In this case the mean is $\mu = \frac{1}{\lambda} = \frac{1}{3}$.
It is this second kind of mean (which we will call the expectation from now on), along with the generalisation of taking the expectation of functions of random variables that we will focus on.
Expected values
Alright, it is now a good time to define what the expectation operator does.
Say we have a discrete random variable, X, that follows a distribution with a probability mass function p. Then the expectation of X is:
$\mathbb{E}[X] = \displaystyle \sum_x x p(x)$
For a continuous random variable, we have that the expectation operator is now an integral (limit of a sum)* and the probability mass function is replaced by a probability density function, f:
$\mathbb{E}[X] = \displaystyle \int_x x f(x) dx$.
From these definitions, it is clear to see that we are taking the probability-weighted average of the random variable. That is, each value of X contributes an amount to the expectation that is determined by both the size of the random variable itself and the probability of that value being observed.
For the sake of interpreting these values, the ‘expected value’ is somewhat of a misleading name. Let’s say for example you are planning to go out to an event: entrance is R60 before 9pm and R100 after 9pm. You are meeting your friends beforehand and will travel to the event together. You believe that there is a 30% chance of arriving before 9pm and a 70% chance of arriving after 9pm. Calculating the expected value of X, the amount that you pay for entrance is done as follows:
$\mathbb{E}[X] = 60(0.3) + 100(0.7) = 88$
That’s fine and correct, but its pretty odd that you in no way expect to pay R88 for the ticket. If you were asked ‘what do you expect to pay’- you might answer ‘well, we might get there early but most likely I will end up paying R100, so I expect to pay that’. It is important to keep this distinction in check.
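The ticket calculation as code, to emphasise that an expectation is just a probability-weighted sum:

```python
prices = [60, 100]  # R60 before 9pm, R100 after
probs = [0.3, 0.7]  # chances of arriving before / after 9pm

expected = sum(x * p for x, p in zip(prices, probs))
print(expected)  # about 88 -- even though you never actually pay R88
```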
*this is a great way to remember the linearity of expectations, $\mathbb{E}[X+Y] = \mathbb{E}[X] + \mathbb{E}[Y]$
Expectations of other functions
Now that we can find the mean of functions we might want to extend this to more general functions. Now our expectation of some function, g(X) is:
$\mathbb{E}[g(X)] = \sum_x g(x) p(x)$ or $\mathbb{E}[g(X)] = \int_x g(x) f(x) dx$.
This generalisation can be really useful if we know (or can estimate) the probability distribution of the random variable. The most common example would be $g(X) = (\mathbb{E}[X]-X)^2$ which would give us the variance.
Really, we could put in any function that keeps the integral convergent into the expectation operator.
Say, for example, that we had a business where we sold interesting and wonderful things. Say we also knew the distribution, f, of the number of things that we expected to sell, and that for a given number of things sold, x, we would make a profit of $g(x) = x^2 - 7x - 3$. Then we could find our expected profit simply by taking the expectation of g. Again, this looks just like taking a probability-weighted average of the function. We will expect to make a large profit if large values of g occur with high probability, i.e. if the values of x that give us a large g also give us a large f.
Some examples
Now we may visualise some examples of expectations of functions, which will hopefully give a little more clarity on the topic.
For each of the plots below, the density/mass function for a random variable is shown in black. The red function is the function over which we will integrate in order to find the appropriate expectation- it is the contribution to the probability-weighted average for each value of X.
The value of the expectation is therefore the area under the red curve.
The first plot we consider is the mean of a standard normal distribution. As we can see the red line shows an odd function, where the integral over a range symmetric about 0 is 0. This makes sense- the distribution is symmetric about 0 so the negative values of x contribute a negative amount to the expectation and the positive values contribute positively to the expectation. As the values of x get bigger, the value of the density tends to 0 faster than x, so very large values of x contribute nothing to the expectation (see the areas where the red line is at 0).
$g(x) = x$
Our next two plots contrast the difference in variance between 2 different normal distributions, both centred on 0. The first is the standard normal distribution and the second has a standard deviation of 2. Since the mean is zero we can find the variance simply by using $g(x) = x^2$. As we can see the area under the red curve in the second plot is much greater (in fact, 4 times greater). This is due to the tails of distribution getting fatter- higher values of $x^2$ are associated with higher values of the density, resulting in there being a larger area under the red function. The probability weighted values of $x^2$ have increased- in this case because the probability density at those values of x have increased.
$g(x) = x^2$
$g(x) = x^2$
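The "4 times greater" claim can be verified numerically. The sketch below approximates $\mathbb{E}[X^2]$ for the two normal densities with a midpoint Riemann sum (a crude but adequate stand-in for the integral):

```python
import math

def normal_pdf(x, sd):
    return math.exp(-x * x / (2 * sd * sd)) / (sd * math.sqrt(2 * math.pi))

def expect(g, pdf, lo=-40.0, hi=40.0, n=100_000):
    # Midpoint Riemann-sum approximation of E[g(X)] = integral of g(x) f(x) dx
    dx = (hi - lo) / n
    return sum(g(x) * pdf(x) for x in (lo + (i + 0.5) * dx for i in range(n))) * dx

var1 = expect(lambda x: x * x, lambda x: normal_pdf(x, 1))  # sd = 1
var2 = expect(lambda x: x * x, lambda x: normal_pdf(x, 2))  # sd = 2
print(round(var1, 3), round(var2, 3))  # -> 1.0 4.0
```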
Finally we look at the Exponential and Poisson distributions (with parameters $\lambda = 3 \text{ and } \lambda = 4$ respectively) and their expectations. For the exponential distribution we see that as the value of x increases, the density decreases exponentially (unsurprising, given the name). Since the value of x increases only linearly, we find that the value of the red function is highest when x is small. So even though the support of the distribution runs from 0 to infinity, most of what we expect to see comes from a very small proportion of this.
$g(x) = x$
For a Poisson distribution, which is discrete, the expectation is now found by summing the distances from the red points to the horizontal axis. For a Poisson distribution with $\lambda = 4$, the expected value of X is equal to 4. We can see where this 4 comes from by looking at the plot below. The mass is highest for values of X near 4 so these values contribute the most to the expectation i.e. 4*p(X=4) is high as is 5*p(X=5). Looking back at the definition of the expectation, we know that 15*p(X=15) also contributes to it- but looking at the plot we see this value is very small- why? Because the probability of X=15 is so small that the contribution to the expectation is negligible.
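The same point numerically: summing k·p(k) for a Poisson(4) mass function recovers the mean of 4, and the k = 15 term is indeed negligible:

```python
import math

lam = 4
def pmf(k):
    return math.exp(-lam) * lam**k / math.factorial(k)

mean = sum(k * pmf(k) for k in range(100))
print(round(mean, 6))          # -> 4.0
print(round(4 * pmf(4), 3))    # the k=4 term contributes about 0.781
print(round(15 * pmf(15), 6))  # the k=15 term contributes almost nothing
```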
$g(x) = x$
Conclusion
The expectation operator has a number of uses and is a very important concept in statistics- a good way to think about the expectation of a function is as a probability weighted average of that function, summed (integrated) over the values that the function can take on. This has been demonstrated by creating the ‘red function’ which shows the probability weighted values. From this we can interpret the expectation as the area under the red curve (for continuous random variables), which can help us to understand what is going on when we take an expectation.
How clear is this post?
|
{}
|
# Is the Product of Measurable Spaces the Categorical Product?
This post requires some knowledge of measure theory.
Today I’m going to show that the product of two measurable spaces ${(X, \mathcal{B}_X)}$ and ${(Y, \mathcal{B}_Y)}$, is actually the product in the category of measurable spaces. See Product (category theory).
The category of measurable spaces, ${\mathbf{Measble}}$, is the collection of objects ${(X,\mathcal{B}_X)}$, where ${X}$ is a set and ${\mathcal{B}_X}$ is a ${\sigma}$-algebra on ${X}$, and the collection of morphisms ${\phi : (X, \mathcal{B}_X) \rightarrow (Y, \mathcal{B}_Y)}$ such that
1. ${\phi : X \rightarrow Y}$;
2. ${\phi^{-1}(E) \in \mathcal{B}_X}$ for all ${E \in \mathcal{B}_Y}$.
Such a function ${\phi}$ is called a measurable morphism, and ${(X, \mathcal{B}_X)}$ is called a measurable space.
Given two measurable spaces ${(X, \mathcal{B}_X)}$ and ${(Y, \mathcal{B}_Y)}$ we can define their product ${(X\times Y, \mathcal{B}_X \times \mathcal{B}_Y)}$, where ${X\times Y}$ is the cartesian product of ${X}$ and ${Y}$, and ${\mathcal{B}_X \times \mathcal{B}_Y}$ is the ${\sigma}$-algebra generated by the sets of the form ${E \times Y}$ and ${X \times F}$ with ${E \in \mathcal{B}_X}$ and ${F \in \mathcal{B}_Y}$. We will need the following definition: A ${\sigma}$-algebra ${\mathcal{B}}$ on a set ${Z}$ is said to be coarser than a ${\sigma}$-algebra ${\mathcal{B}'}$ on ${Z}$ if ${\mathcal{B} \subseteq \mathcal{B}'}$. In exercise 18 of Terry Tao’s notes on product measures, it is shown that ${\mathcal{B}_X \times \mathcal{B}_Y}$ is the coarsest ${\sigma}$-algebra on ${X\times Y}$ such that the projection maps ${\pi_X}$ and ${\pi_Y}$ are both measurable morphisms.
Now, we are finally ready to show that ${(X\times Y, \mathcal{B}_X\times \mathcal{B}_Y)}$ is the categorical product of the measurable spaces ${(X, \mathcal{B}_X)}$ and ${(Y, \mathcal{B}_Y)}$. If ${\phi_X : (Z, \mathcal{B}_Z) \rightarrow (X, \mathcal{B}_X)}$ and ${\phi_Y : (Z, \mathcal{B}_Z) \rightarrow (Y, \mathcal{B}_Y)}$ are measurable morphisms, then we need to show that there exists a unique measurable morphism ${\phi_{X\times Y} : (Z, \mathcal{B}_Z) \rightarrow (X\times Y, \mathcal{B}_X \times \mathcal{B}_Y)}$ such that ${\phi_X = \pi_X \circ \phi_{X\times Y}}$ and ${\phi_Y = \pi_Y \circ \phi_{X\times Y}}$. Because the cartesian product is the product in the category of sets, we only have one choice for such a map. Indeed, ${\phi_{X \times Y} = (\phi_X, \phi_Y)}$.
We claim that ${\phi_{X \times Y}}$ is measurable. Indeed, because the pullback ${\phi_{X \times Y}^{-1} : 2^{X\times Y} \rightarrow 2^Z}$ respects arbitrary unions and complements, and ${\phi_{X\times Y}^{-1}(\emptyset) = \emptyset}$, we only need to show that ${\phi_{X\times Y}^{-1}(E \times Y) \in \mathcal{B}_Z}$ and ${\phi_{X\times Y}^{-1}(X \times F) \in \mathcal{B}_Z}$ for all ${E \in \mathcal{B}_X}$ and ${F \in \mathcal{B}_Y}$ (see remark 4 of Terry Tao’s notes on abstract measure spaces). This is easy to show:
$\displaystyle \begin{array}{rcl} \phi^{-1}_{X\times Y} (E\times Y) &=& (\pi_X \circ \phi_{X\times Y})^{-1}(E) \\ &=& \phi_X^{-1}(E) \\ \phi^{-1}_{X\times Y} (X \times F) &=& (\pi_Y \circ \phi_{X\times Y})^{-1}(F) \\ &=& \phi_Y^{-1}(F) \end{array}$
are both ${\mathcal{B}_Z}$ measurable because ${E \in \mathcal{B}_X}$ and ${F \in \mathcal{B}_Y}$.
Thus, we have shown that ${(X \times Y, \mathcal{B}_X\times \mathcal{B}_Y)}$ is actually the product of the measurable spaces ${(X, \mathcal{B}_X)}$ and ${(Y, \mathcal{B}_Y)}$ in the category of measurable spaces. This is reassuring, otherwise the term “product space” would be misleading.
I would like to know if there is a category of measure spaces with objects ${(X, \mathcal{B}_X, \mu_X)}$, where ${X}$ is a set, ${\mathcal{B}_X}$ is a ${\sigma}$-algebra on ${X}$, and ${\mu_X : \mathcal{B}_X \rightarrow [0, +\infty]}$ is a measure. There would need to be some extra condition on morphisms in this category, otherwise we couldn’t distinguish between triples ${(X, \mathcal{B}_X, \mu_X)}$ and ${(X, \mathcal{B}_X, \mu_X')}$ where ${\mu_X \neq \mu_X'}$. If this was a category, I don’t believe products could exist. Indeed, if the measure spaces ${(X, \mathcal{B}_X, \mu_X)}$ and ${(Y, \mathcal{B}_Y, \mu_Y)}$ are not ${\sigma}$-finite, then there can be multiple measures on ${(X \times Y, \mathcal{B}_X \times \mathcal{B}_Y)}$: see Terry Tao’s notes on product measures.
Another question is whether equalizers, coproducts, coequalizers, etc. exist in the category of measurable spaces. If enough of these constructions exist, then we can take categorical (co)limits of measurable spaces. These might already be of some interest, but I have not looked into it.
# Math API
## Functions
Return the absolute value of num (i.e. the number's distance from zero). This means that negative numbers will be converted to a positive number (e.g. -5 into 5), and positive numbers remain unaffected. The return value will always be a positive number.
Syntax math.abs(num)
Returns number
Part of Lua (source)
API math
ExamplePrint the absolute value of user input
Read a line from the user, then print the absolute value of the number they entered.
Code
print(math.abs(tonumber(read())))
Output Depends on what the user wrote. For instance, if they were to enter -5, the number printed would be 5.
Return num rounded up to the nearest integer.
Syntax math.ceil(num)
Returns number
Part of Lua (source)
API math
ExampleRound the value of user input
Read a line from the user, then print the ceiling of the number they entered.
Code
print(math.ceil(tonumber(read())))
Output Depends on what the user wrote. For instance, if they were to enter 0.2, the number printed would be 1.
Return the cosine of num. Note: The argument must be given in radians. If it is in degrees, use the math.rad function to convert it.
Syntax math.cos(num)
Returns number
Part of Lua (source)
API math
ExamplePrint the cosine of user input
Read a line from the user, convert it to radians, then print its cosine.
Code
print(math.cos(math.rad(tonumber(read()))))
Output Depends on what the user wrote. For instance, if they were to enter 180, the number printed would be -1.
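The radians-first pattern is the same in most languages; for instance, a quick sketch with Python's standard math module (Python shown here, not the Lua API documented above):

```python
import math

# Convert degrees to radians before calling the trig function,
# mirroring the math.cos(math.rad(x)) idiom above.
def cos_degrees(degrees):
    return math.cos(math.radians(degrees))

print(cos_degrees(180))  # -1.0
```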
Convert the number num from radians to degrees.
Syntax math.deg(num)
Returns number
Part of Lua (source)
API math
ExamplePrint the inverse sine of user input in degrees
Read a line from the user, compute its arcsin, then convert it to degrees.
Code
print(math.deg(math.asin(tonumber(read()))))
Output Depends on what the user wrote. For instance, if they were to enter -1, the number printed would be -90.
Returns e raised to the power of num.
Syntax math.exp(num)
Returns number
Part of Lua (source)
API math
ExampleCompute e
Prints the approximate value of Euler's number, ${\displaystyle e}$
Code
print(math.exp(1))
Output
2.718281828459
Return num rounded down to the nearest integer.
Syntax math.floor(num)
Returns number
Part of Lua (source)
API math
ExampleRound the value of user input
Read a line from the user, then print the floor of the number they entered.
Code
print(math.floor(tonumber(read())))
Output Depends on what the user wrote. For instance, if they were to enter 0.2, the number printed would be 0.
Return the remainder of dividing x by y. This has the same behaviour as the % operator.
Syntax math.mod(x, y)
Returns number
Part of Lua (source)
API math
ExampleTest if user input is even
Read a line from the user, then print if it is evenly divisible by 2.
Code
print(math.mod(tonumber(read()), 2) == 0)
Output Depends on what the user wrote. For instance, if they were to enter 5, the value printed would be false.
Return the mantissa and exponent of num: two numbers ${\displaystyle m}$ and ${\displaystyle e}$ such that ${\displaystyle {\text{num}}=m\cdot 2^{e}}$, with ${\displaystyle 0.5\leq |m|<1}$ (or ${\displaystyle m=0}$)
Syntax math.frexp(num)
Returns number mantissa, number exponent
Part of Lua (source)
API math
ExampleDecompose user input
Read a line from the user, then print the mantissa and exponent of the number they entered.
Code
print(math.frexp(tonumber(read())))
Output Depends on what the user wrote. For instance, if they were to enter 5, the numbers printed would be 0.625 3.
Compute a floating point number from the given mantissa and exponent. This is the inverse operation of math.frexp, and computes ${\displaystyle {\text{mantissa}}\cdot 2^{\text{exponent}}}$.
Syntax math.ldexp(mantissa, exponent)
Returns number
Part of Lua (source)
API math
ExampleTurn user input into a number
Read two lines from the user, and compute the floating point number represented by the given mantissa and exponent.
Code
local m = tonumber(read())
local e = tonumber(read())
print(math.ldexp(m, e))
Output Depends on what the user wrote. For instance, if they were to enter 0.625 and 3, the number printed would be 5.
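Python's standard math module provides the same frexp/ldexp pair, which makes the round trip easy to sketch (Python shown here, not the Lua API documented above):

```python
import math

# frexp decomposes x into (m, e) with x == m * 2**e and 0.5 <= |m| < 1;
# ldexp is its inverse.
m, e = math.frexp(5.0)
print(m, e)              # 0.625 3
print(math.ldexp(m, e))  # 5.0
```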
Return the greatest of all the numbers given.
Syntax math.max(num, ...)
Returns number
Part of Lua (source)
API math
ExamplePrint the maximum value of user input
Read as many lines as the user inputs, and return the greatest value that they wrote.
Code
local max = -(1/0) -- negative infinity
while true do
local l = read()
if tonumber(l) then
max = math.max(max, tonumber(l))
else
break
end
end
print(max)
Output The line the user wrote with the greatest value, or -inf if they wrote no numbers.
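The running-maximum idiom above, seeded with negative infinity so that the first numeric input always replaces it, can be sketched in Python (the helper name and input list are illustrative only):

```python
def running_max(lines):
    # Start at negative infinity so any numeric input is larger.
    best = float("-inf")
    for line in lines:
        try:
            value = float(line)
        except ValueError:
            break  # stop at the first non-numeric line
        best = max(best, value)
    return best

print(running_max(["3", "7", "2", "done"]))  # 7.0
```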
Return the smallest of all the numbers given.
Syntax math.min(num, ...)
Returns number
Part of Lua (source)
API math
ExamplePrint the minimum value of user input
Read as many lines as the user inputs, and return the smallest value that they wrote.
Code
local min = math.huge -- positive infinity
while true do
local l = read()
if tonumber(l) then
min = math.min(min, tonumber(l))
else
break
end
end
print(min)
Output The line the user wrote with the smallest value, or inf if they wrote no lines.
Return the integer and fractional parts of number.
Syntax math.modf(num)
Returns number integral, number fractional
Part of Lua (source)
API math
ExamplePrint the integral and fractional parts of user input
Read a single number from the user, then print its integral and fractional parts.
Code
print(math.modf(tonumber(read())))
Output Depends. If the user wrote 5.2, the numbers printed will be 5 0.2
Return x raised to the yth power. This is equivalent to the operator ^.
Syntax math.pow(x, y)
Returns number
Part of Lua (source)
API math
ExamplePrint the n-th power of two
Read a line from the user, then print two raised to that number.
Code
print(math.pow(2, tonumber(read())))
Output Depends on what the user wrote. For instance, if they were to enter 5, the value printed would be 32.
Convert the number num from degrees to radians.
Syntax math.rad(num)
Returns number
Part of Lua (source)
API math
ExamplePrint the sine of user input
Read a line from the user, convert it to radians, then print its sine.
Code
print(math.sin(math.rad(tonumber(read()))))
Output Depends on what the user wrote. For instance, if they were to enter 90, the number printed would be 1.
Sets the seed for the random number generator used by math.random. The same seed will always produce the same sequence of random numbers.
Syntax math.randomseed(seed)
Returns nil
Part of Lua (source)
API math
ExampleGenerate two sequences with the same seed
Print two sequences of randomly generated numbers using the same seed
Code
math.randomseed(1234)
print("First: " .. math.random())
math.randomseed(1234)
print("Second: " .. math.random())
Output First: 0.12414929654836
Second: 0.12414929654836
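The reseed-and-replay behavior is common to most pseudorandom generators; a sketch with Python's random module (Python shown here, not the Lua API documented above):

```python
import random

# Seeding the generator with the same value replays the same sequence.
random.seed(1234)
first = random.random()
random.seed(1234)
second = random.random()
print(first == second)  # True
```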
Returns the sine of num. Note: The argument must be given in radians. If it is in degrees, use the math.rad function to convert it.
Syntax math.sin(num)
Returns number
Part of Lua (source)
API math
ExamplePrint the sine of user input
Code
print(math.sin(math.rad(tonumber(read()))))
Output Depends on what the user wrote. For instance, if they were to enter 90, the number printed would be 1.
Returns the square root of num.
Syntax math.sqrt(num)
Returns number
Part of Lua (source)
API math
ExamplePrint the square root of 2.
Prints an approximation of ${\displaystyle {\sqrt {2}}}$.
Code
print(math.sqrt(2))
Output 1.4142135623731
Returns the tangent of num. Note: The argument must be given in radians. If it is in degrees, use the math.rad function to convert it.
Syntax math.tan(num)
Returns number
Part of Lua (source)
API math
ExamplePrint the tangent of user input
Code
print(math.tan(math.rad(tonumber(read()))))
Output Depends. If the user wrote 45, the output would be 1.
## Constants
Floating point positive infinity, a number greater than any other.
Type number
Value
inf
Part of CC:Tweaked
An approximation of the mathematical constant π.
Type number
Value
3.1415926535898
Part of CC:Tweaked
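Both constants have direct counterparts in other languages; for instance, Python's math module exposes the same values (Python shown here, not the Lua API documented above):

```python
import math

# math.inf compares greater than any finite float, and math.pi is the
# same double-precision approximation of pi printed above.
print(math.inf > 1.7e308)  # True
print(round(math.pi, 13))  # 3.1415926535898
```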
# Math Help - Harmonic function question
1. ## Harmonic function question
Show that
$4\cos(\alpha t) + 3\sin(\alpha t) = 5\cos(\alpha t + \phi)$
where $\phi = \arctan\left(\frac{-3}{4}\right).$
I believe this is a harmonic function/sinusoid? Any help?
2. Well, I would probably use the addition of angles formula to expand out the RHS.
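Sketching that approach: with $\phi = \arctan\left(\frac{-3}{4}\right)$ one may take $\cos\phi = \frac{4}{5}$ and $\sin\phi = -\frac{3}{5}$, so expanding the right-hand side with the angle-addition formula gives

$5\cos(\alpha t + \phi) = 5\cos\phi\cos(\alpha t) - 5\sin\phi\sin(\alpha t) = 4\cos(\alpha t) + 3\sin(\alpha t).$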
# Arrangements of congruent rectangles
I have stumbled on an interesting problem. How many congruent rectangles on a plane can be arranged in such a way that they all touch each other but never overlap? I pondered this problem for a few days, but so far I'm not even sure what's the best way to formalize it. Has this problem been solved? Is there a useful hint? Can you recommend me something to read on the subject?
UPD: my best idea so far is to represent a rectangle in $\mathbb{E}^2$ by a tuple $r = (p, x, y)$, where $p$ is a point, and $x$ and $y$ are two orthogonal vectors representing two sides coming out of it. Then we define symmetries as (a) euclidean motions acting as usual, and (b) swapping of $x$ and $y$. Then we observe that the solutions of the problem, that is, $m$ touching rectangles, are mapped to some other solutions under euclidean motions and dilatations of the underlying space, as well as under $S_m$. Now we have to algebrize the problem, but I'm unsure how.
-
Three is easy, and $K_5$ is known not to be a planar graph, so it comes down to: can you do four? Size seems to be a killer. Three touching rectangles form a kind of a closed path that must either go around the fourth or have the fourth wrap around somehow. Doesn't seem to work, but I may have missed something. – Jyrki Lahtonen Jul 9 '11 at 6:04
Four is easy (three in a stack, and another one on the side). – Alexei Averchenko Jul 9 '11 at 6:10
Are top and bottom of the stack touching? – Jyrki Lahtonen Jul 9 '11 at 6:15
@Jyrki: Oh..... – Alexei Averchenko Jul 9 '11 at 6:18
Removed the (hint) tag. See here. I think it is sufficient to just ask for a hint, like you did in the question. No need to make a tag out of it. – Willie Wong Jul 10 '11 at 11:47
## Abstract
Using a population-based sampling strategy, the National Institutes of Health (NIH) Magnetic Resonance Imaging Study of Normal Brain Development compiled a longitudinal normative reference database of neuroimaging and correlated clinical/behavioral data from a demographically representative sample of healthy children and adolescents aged newborn through early adulthood. The present paper reports brain volume data for 325 children, ages 4.5–18 years, from the first cross-sectional time point. Measures included volumes of whole-brain gray matter (GM) and white matter (WM), left and right lateral ventricles, frontal, temporal, parietal and occipital lobe GM and WM, subcortical GM (thalamus, caudate, putamen, and globus pallidus), cerebellum, and brainstem. Associations with cross-sectional age, sex, family income, parental education, and body mass index (BMI) were evaluated. Key observations are: 1) age-related decreases in lobar GM most prominent in parietal and occipital cortex; 2) age-related increases in lobar WM, greatest in occipital, followed by the temporal lobe; 3) age-related trajectories predominantly curvilinear in females, but linear in males; and 4) small systematic associations of brain tissue volumes with BMI but not with IQ, family income, or parental education. These findings constitute a normative reference on regional brain volumes in children and adolescents.
## Introduction
The National Institutes of Health (NIH) Magnetic Resonance Imaging (MRI) Study of Normal Brain Development was initiated to provide a resource for the scientific community with which to address questions related to healthy pediatric brain development and further develop image-processing tools. This longitudinal multisite project, conducted by the Brain Development Cooperative Group (BDCG), is establishing a comprehensive, multimodal database of pediatric structural MRI, diffusion tensor imaging, and proton MR spectroscopy data together with concurrent clinical/behavioral data. The study employed a population-based strategy to recruit a sample that mirrored the demographic distribution of the US population. Imaging assessments were obtained in conjunction with comprehensive documentation of demographic characteristics and assessments of clinical/behavioral characteristics (BDCG 2006; Waber et al. 2007).
The project comprises 2 coordinated protocols: “Objective 1,” the subject of this report, enrolled children and adolescents from 4 years and 6 months to 18 years and 3 months of age; “Objective 2,” performed at a subset of the Objective 1 study sites, enrolled newborns, toddlers, and preschoolers up to the age of 4 years, 5 months, thereby providing continuity with Objective 1 (Almli et al. 2007). Both protocols included an accelerated longitudinal design (Harezlak et al. 2005) with multiple imaging and clinical/behavioral assessments and scanning and clinical/behavioral protocols repeated at intervals ranging from months to years, depending on the age of the child. Although similar domains of cognitive/behavioral functioning were evaluated in the Objective 1 (older) and Objective 2 (younger) cohorts, specific age-appropriate clinical/behavioral measures differed between them. More importantly, due to brain tissue contrast differences between the 2 cohorts, the imaging protocols were necessarily different. The contrast differences, particularly for the very young ages (<3 years of age) require specialized segmentation algorithms for this population, which are under development. Given the differences in clinical/behavioral measures, MRI protocols, and image processing methods, results from Objective 2 data are not published here.
This report addresses cross-sectional age- and sex-related differences in whole- and regional brain volumes and relationships to key socioeconomic and physical growth indicators for a demographically representative sample of 325 children, ages 4 years and 9 months through 18 years and 4 months based on data from the first cross-sectional time point for Objective 1. As such, it constitutes a normative reference for studies of healthy brain development and brain-based disorders derived from a larger data set and resource that is freely available to qualified researchers (www.NIH-PediatricMRI.org); see Data Access.
## Materials and Methods
### Participants
Healthy children and adolescents (N = 433) were enrolled in Objective 1 at 6 Pediatric Study Centers (PSCs) across the United States: Children’s Hospital, Boston; Children’s Hospital Medical Center of Cincinnati; Children’s Hospital of Philadelphia; the University of California at Los Angeles; the University of Texas Health Science Center at Houston; and Washington University Saint Louis. A Data Coordinating Center (DCC) at the Montreal Neurological Institute coordinated the imaging aspects of the study and consolidated the data into a centralized database. A Clinical Coordinating Center at Washington University Saint Louis coordinated the recruitment for the project and managed the clinical/behavioral arm of the study. Institutional Review Boards at all participating institutions approved all study protocols. Informed consent was obtained from parents or guardians and participants of adult age.
A population-based sampling plan was implemented to minimize biases that can be present in samples of convenience and thus maximize the generalizability of the findings. Data from the 2000 US Census (United States Census Bureau 2000) were used in conjunction with site-specific zip code–based demographic data (geocoding) to develop regional targets for recruitment (BDCG 2006). In order to achieve a sample that approximated the demographics of the US population, zip codes within a 30- to 60-mile distance of each PSC were used to compile a demographic profile for each region by income, ethnicity, and race. These profiles were used in conjunction with national demographic data to develop a target sampling plan that defined the number of families to be recruited from low-, medium-, and high-income families. Ranges for low-, medium-, and high-income zip codes were chosen based on the Census. The sampling plan combined cross-sectional and accelerated longitudinal study design principles (Harezlak et al. 2005; BDCG 2006) and yielded a demographically representative, population-based healthy sample referenced to the Census. Volunteers were recruited into predetermined cells, whose distribution, based on family income levels (categorized as low, medium, and high) and race/ethnicity proportions within each level, approximated that of the Census. Low-income families (∼25% of the sample) generally fell below the qualifying income for federal assistance (∼1.5 times the poverty level), high-income families (∼35%) were approximately 3 times the poverty level or higher, and medium-income families (∼41%) were those in between (United States Department of Housing and Urban Development 2003). Equal numbers of males and females were targeted for each cell. As families were screened for recruitment, adjustments were made to account for regional differences in the cost of living and for family size using methods established by the Department of Housing and Urban Development (HUD).
The recruitment process is described in greater detail elsewhere (BDCG 2006; Waber et al. 2007; www.NIH-PediatricMRI.org).
Following the mailing of an introductory letter, rates of successful contact were similar across income groups. However, high-income families had elevated rates of initial and total refusals relative to medium- and low-income families across various stages of the recruitment process. In contrast, exclusion rates were higher in low-income zip codes relative to middle- or high-income zip codes during the early stages of screening involving interviews for health-related factors and the completion of the Child Behavior Checklist (CBCL) to screen for exclusionary behavioral difficulties (Waber et al. 2007).
Strict and comprehensive inclusion/exclusion criteria were specified, representing factors that are established or suspected to adversely impact healthy brain development or that could prohibit completion of the full study protocol, for example, contraindications for MR scanning. Health and behavioral exclusions involved prenatal, birth, and perinatal history (including maternal substance use during pregnancy), medical and psychiatric disorders (e.g., attention deficit hyperactivity disorder (ADHD), internalizing/externalizing disorders, diabetes), poor academic functioning (e.g., special education placement), IQ < 70 (as measured with the Wechsler Abbreviated Scale of Intelligence), and a history of specific family medical and psychiatric disorders in first-degree relatives. Children with heights, weights, or head circumferences below the third percentile were excluded, but no upper limits were imposed. Detailed descriptions of inclusion/exclusion criteria have been previously reported (BDCG 2006; Waber et al. 2007). As noted above, exclusion rates were highest for low-income children.
Income and parental education (highest level of education achieved by either parent) were evaluated as predictors of brain volumes. Following enrollment, a continuous adjusted family income (AFI) variable that took into account both regional differences in the cost of living and the child’s family size was derived for use in these analyses using data available from the US Department of HUD. Families with more than 4 members had their income adjusted downward by a percentage per family member, whereas those with fewer than 4 family members had their income adjusted upward, under the rationale that these numbers reflect income as related to need. This continuous AFI variable was computed as follows:
where Variable A is the midpoint of each family’s self-reported income bracket, Variable B is the HUD adjustment factor for family size, Variable C is the local median family income for the Metropolitan Statistical Area in which the family resided at the time of the interview, and Variable D is the US median family income.
Body mass index (BMI), calculated as (Weight in kg)/(Height in m)2, served as an indicator of adiposity and was used in the analyses. Age- and sex-specific norms for BMI from 2000 available from the Centers for Disease Control (CDC) (http://www.cdc.gov/healthyweight/assessing/bmi/childrens_bmi/about_childrens_bmi.html) were used to classify individuals as underweight (less than the fifth percentile), normal weight (5th to 84th percentile), overweight (85th to 94th percentile), or obese (>94th percentile) for descriptive purposes. Of the 173 females included in the analyses, 119 (69%) had a BMI within the normal range for sex and age, 4 (2%) were underweight, 30 (17%) were overweight, and 20 (12%) were classified as obese. Of the 152 males included in the analyses, 110 (72%) had a BMI within the normal range for sex and age, 7 (5%) were underweight, 17 (11%) were overweight, and 18 (12%) were obese.
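The BMI definition used above is straightforward to compute; a minimal sketch (the weight and height values are hypothetical, and the CDC percentile classification is not reproduced here):

```python
def bmi(weight_kg, height_m):
    # Body mass index: weight in kilograms divided by height in meters squared.
    return weight_kg / height_m ** 2

print(round(bmi(70.0, 1.75), 2))  # 22.86
```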
Following the application of image quality control and an automated processing pipeline, complete multispectral MRI data sets (consisting of acceptable T1, T2, and proton density (PD) scans; see below) from 325 participants were analyzed for this report. Data set exclusions due to incomplete scanning sequences or inadequate quality were found to occur at random with respect to age (P = 0.32) and sex (P = 0.50). Table 1 summarizes sample distributions for the 325 participants by age, sex, family income, and race/ethnicity.
Table 1
Demographic characteristics of the study sample.
Panel A. Sample characteristics (mean (SD) where applicable)

| | Male | Female |
|---|---|---|
| Sample size (N = 325) | 152 | 173 |
| Age (years)a | 11.00 (3.80) | 10.87 (3.71) |
| AFIb ($) | 69 656 (29 305) | 75 261 (34 084) |
| Full-scale IQ | 111.27 (12.8) | 111.18 (11.7) |
| BMIc | 19.07 (4.31) | 19.41 (4.29) |
| Right handed (%) | 85.43 (0.03) | 90.17 (0.023) |

Parental educationd:

| | Male | Female |
|---|---|---|
| Less than high school | 1 | 0 |
| High school | 18 | 11 |
| Some college | 33 | 38 |
| College degree | 46 | 54 |
| Some graduate school | 9 | 13 |
| Graduate school | 45 | 55 |
| Not reported | 0 | 2 |

Panel B. Sample by age and AFIb

| Age (years) | <$50 000 M | <$50 000 F | $50 000–100 000 M | $50 000–100 000 F | >$100 000 M | >$100 000 F | Total M | Total F | Total |
|---|---|---|---|---|---|---|---|---|---|
| 4.8–6 | 2 | 6 | 11 | 3 | 0 | 4 | 13 | 13 | 26 |
| 6–8 | 6 | 10 | 16 | 20 | 8 | 5 | 30 | 35 | 65 |
| 8–10 | 5 | 8 | 14 | 12 | 4 | 7 | 23 | 27 | 50 |
| 10–12 | 4 | 10 | 17 | 14 | 0 | 11 | 21 | 35 | 56 |
| 12–14 | 12 | 1 | 10 | 10 | 4 | 10 | 26 | 21 | 47 |
| 14–16 | 9 | 6 | 9 | 10 | 4 | 5 | 22 | 21 | 43 |
| 16–18.3 | 6 | 7 | 6 | 9 | 5 | 5 | 17 | 21 | 38 |
| Total | 44 | 48 | 83 | 78 | 25 | 47 | 152 | 173 | 325 |

Panel C. Race/ethnicity of child

| Race/ethnicity | Male (n = 152) | Female (n = 173) | Total (n = 325) |
|---|---|---|---|
| American Indian or Alaskan Native | 3 | 2 | 5 |
| Asian | 2 | 4 | 6 |
| Native Hawaiian or other Pacific Islander | 1 | 1 | 2 |
| Black or African American | 11 | 19 | 30 |
| White | 120 | 120 | 240 |
| Mixed/unknown or not reported | 15 | 27 | 42 |
| Hispanic (of any race) | 21 | 22 | 43 |
a
Mean (SD).
b
As originally reported by telephone interview and adjusted for family size and geographical region.
c
BMI, defined as mass (kg)/height (m)2.
d
Highest level attained by either parent.
### MRI Protocols
MR brain images at 1.5 T were acquired at the 6 PSC sites without sedation with a 30- to 45-minute protocol (Table 2 and BDCG 2006). A whole-brain 3D T1-weighted spoiled gradient recalled echo sequence was applied to obtain sagittal slices with 1 mm in-plane resolution. Slice thickness was 1 mm on Siemens scanners and 1.4–1.8 mm on GE scanners, due to the width of the subject’s head and the 124 slice number limit of the GE scanners. To provide additional data for use in automated multispectral tissue classification and segmentation, a dual contrast, proton density– and T2-weighted acquisition with an optimized 2D multislice (2 mm) dual echo fast spin echo sequence was obtained in the axial plane parallel to the anterior commissure-posterior commissure line.
Table 2
MRI protocols
| | 3D T1-weighted | 2D PD/T2-weighted | Fall-back: T1-weighted | Fall-back: 2D PD/T2-weighted |
|---|---|---|---|---|
| Sequence | 3D RF-spoiled gradient echo | Fast/turbo spin echo (ETL/turbo factor 8) | Spin echo | Fast/turbo spin echo (ETL/turbo factor 8) |
| Time repetition (ms) | 22–25 | 3500 | 500 | 3500 |
| Time echo (TE) (ms) | 10–11 | — | 12 | — |
| Excitation pulse (°) | 30 | 90 | 90 | 90 |
| Signal averages | 1 | 1 | 1 | 1 |
| TE1 (effective) (ms) | — | 15–17 | — | 15–17 |
| TE2 (effective) (ms) | — | 5–119 | — | 5–119 |
| Refocusing pulse (°) | 180 | 180 | 180 | 180 |
| Orientation | Sagittal | Oblique axial (AC–PC) | Oblique axial (AC–PC) | Oblique axial (AC–PC) |
| Thickness, gap (mm) | 1, 0 | 2, 0 | 3, 0 | 3, 0 |
| Number of slices | Ear to ear | Apex to below cerebellum | Apex to below cerebellum | Apex to below cerebellum |
| Field of view (mm) | AP: 256, LR: 160–180 (whole head) | AP: 256, LR: 224 | AP: 256, LR: 192 | AP: 256, LR: 192 |
| Matrix (mm) | AP: 256, LR: for 1 mm isotropic | AP: 256, LR: 224 | AP: 256, LR: 192 | AP: 256, LR: 192 |
| Scan time (min) | 15–18 (varies with head size) | 7–11 | 3–5 | 4–7 |
Note: AC--PC, anterior commissure-posterior commissure.
Where feasible, children who were unable to complete the scanning protocol successfully were scanned with a “fallback” protocol consisting of shorter 2D acquisitions (Table 2). Following a careful statistical check to assure that the inclusion or exclusion of the fallback scans did not significantly modify associations between structural volumes and the demographic characteristics that guided sample selection, these scans were included in the analyses reported here. Of the total sample, 9.9% (n = 34) of participants contributed either a T1W or a T2W fallback scan, and 9.3% (n = 32) contributed both T1W and T2W fallback scans.
### MRI Analysis
Following visual inspection of the data at the scanner, the scans were transferred to the DCC, where further quality control assessments using 3D display software were implemented to ensure completeness of data transfer and protocol compliance, and to check for motion, signal-to-noise ratio, magnetic susceptibility, and other artifacts; this resulted in a sample size of 401. The multimodal data collected from each subject were submitted to a series of image preprocessing steps to minimize these artifacts. First, any multislice data that were distorted by interpacket misregistration were re-registered, realigned, and resampled onto a 1-mm isotropic grid (Gedamu et al. 2008). Each modality (T1, T2, and PD) was corrected in native space for 3D intensity nonuniformity using the N3 method (Sled et al. 1998). The intensity ranges of all volumes were trimmed and normalized to range 0–100 by a linear mapping of the 99.8th percentile to 100 and the 0.02th percentile to zero.
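The percentile-based intensity normalization described above can be illustrated as follows; this is a minimal sketch with synthetic intensities and a simple linear-interpolation percentile, not the study's actual pipeline code:

```python
def percentile(values, p):
    # Linear-interpolation percentile of values for 0 <= p <= 100.
    s = sorted(values)
    k = (len(s) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def normalize_intensities(values):
    # Map the 0.02th percentile to 0 and the 99.8th percentile to 100,
    # clipping anything that falls outside that range.
    lo = percentile(values, 0.02)
    hi = percentile(values, 99.8)
    scale = 100.0 / (hi - lo)
    return [min(max((v - lo) * scale, 0.0), 100.0) for v in values]

intensities = list(range(0, 1001))  # synthetic voxel intensities
normed = normalize_intensities(intensities)
print(normed[0], normed[-1])  # 0.0 100.0
```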
For each subject, a mutual information–based registration procedure was used to compute the rigid body transformation mapping the multiecho T2/PD volume onto the T1 volume. All data were normalized into the Talairach-like MNI stereotaxic space (Collins et al. 1994) in order to account for differences in position, orientation, and size of each subject’s brain in the native scans. Although making brain sizes more comparable, the stereotaxic transformation did not, however, equate them across subjects. The T1 volume for each subject visit was mapped linearly to the stereotaxic space defined by the ICBM152 nonlinear average template using the “mritotal” program from the “mni_autoreg” software package (packages.bic.mni.mcgill.ca) with a 9-parameter transformation (3 translations, 3 rotations, and 3 scales; Collins et al. 1994). The “mritotal” program determined the 9 transformation parameter values that maximized the cross-correlation intensity between the subject’s T1 volume and the ICBM152 template. The native T1 volume was resampled using a trilinear resampling kernel onto a standard 1-mm3 isotropic 181 × 217 × 181 grid defined in MNI stereotaxic space. The T2-to-T1 transformation was composed with the T1 stereotaxic transformation and mapped the T2 and PD volumes into the MNI stereotaxic space, in voxel-by-voxel alignment with the transformed T1 volume.
Once in MNI stereotaxic space, a brain extraction tool was used to create a mask of the average of the T1, T2, and PD volumes for each subject visit to remove extracerebral tissue (Smith 2002). The combination of the 3 volumes resulted in more robust segmentations than when using the T1 volume alone. The mask covered the entire cerebrum, cerebellum, and brainstem, ending at the foramen magnum. A tissue label, either white matter (WM), gray matter (GM), or cerebrospinal fluid (CSF), was assigned to each voxel within the brain mask using the Intensity Normalized Stereotaxic Environment for the Classification of Tissue program (Zijdenbos et al. 2002; Cocosco et al. 2003), using predefined locations in stereotaxic space to identify likely samples of each tissue type. The approach to tissue classification was thus driven primarily by the expected locations of GM and WM tissues and CSF and was more robust than methods that use a standard intensity range. Although the standard GM, WM, and CSF sampling coordinates were derived from adult subjects, they were transformed to the younger subjects’ MRI scans, yielding less-biased results. After a pruning process (Cocosco et al. 2003), the samples were processed by a neural net classifier to identify GM, WM, and CSF within the intracranial cavity. Total brain volume (TBV) was defined as the sum of whole-brain GM and whole-brain WM volume, including the cerebrum, cerebellum, and brainstem (ending at the foramen magnum).
Although the preceding well-established methodology has been widely used in the analysis of structural brain data from pediatric subjects, changes in MR signal intensity associated with brain maturation may have affected the likelihood of a structure being classified as GM or WM. It is plausible that immature WM may have a signal intensity that increases the likelihood of its classification as GM such that a reported decrease in GM volume may reflect a change in the size of an immature WM compartment that has been misclassified as GM. The use of the terms “GM” and “WM” volumes in the remainder of this paper refers to the above operational definitions, while acknowledging that immaturities of WM development may have affected these tissue classifications.
Brain structure segmentation was achieved with Automatic Nonlinear Image Matching and Anatomical Labeling (Collins et al. 1995; Collins and Evans 1997). The registration-based stereotaxic strategy aligned a subject’s transformed T1 volume nonlinearly to a prelabeled template using a multiscale approach. The anatomical labels were then mapped through the inverse of the recovered transformation from the template onto the subject’s data in the MNI stereotaxic space. The intersection of these regions with the GM, WM, and CSF tissue classes was then used to identify individual structures. These structures included the left and right frontal, temporal, parietal, and occipital lobes; the lateral ventricles; the thalamus (dorsal thalamus), caudate (head and body), putamen, and globus pallidus; the cerebellum; and the brainstem. The caudate volume did not include the tail, owing to the limited reliability of small-structure segmentation of 1-mm3 1.5-T data. Four whole-brain, regionally segmented images collected from 2 males and 2 females scanned at 5 and 14 years of age are provided as Supplementary Data. The linear scaling factors estimated in the initial linear stereotaxic transformation were used to recover the native volumes for each structure.
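The last step, recovering a native-space volume from its stereotaxic-space measurement, amounts to dividing by the volume scaling of the linear transform. A minimal sketch, with invented scale factors rather than values from the study:

```python
def native_volume(stereotaxic_volume_cm3, sx, sy, sz):
    """Recover a native-space volume from a stereotaxic-space volume.
    The linear stereotaxic transform scales the head by (sx, sy, sz),
    so volumes scale by the product sx * sy * sz; dividing undoes it.
    The scale factors here are illustrative only."""
    return stereotaxic_volume_cm3 / (sx * sy * sz)

# e.g. a structure measuring 12.0 cm3 in stereotaxic space for a child
# whose head was scaled by 1.10 x 1.05 x 1.08 to reach MNI space
vol = native_volume(12.0, 1.10, 1.05, 1.08)  # ~9.62 cm3 in native space
```

Because young children have smaller heads, their stereotaxic scale factors are typically greater than 1, so native volumes come out smaller than their stereotaxic counterparts.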
Each step of the image processing pipeline was evaluated qualitatively. For example, ICC masking failed if part of the bone marrow, skull, or optic nerve was included. Stereotaxic registration failed if the orientation, position or size was inappropriate. T2/PD to T1 registration failed if the CSF visible in the T2 image did not line up perfectly with the sulci on the T1 data. Nonlinear registration (and the resulting segmentation) failed if the nonlinearly resampled data did not align well with the template or if the segmented lobe borders did not fall in line with the appropriate sulci or the segmented basal ganglia did not align with the borders of the corresponding structures on the subject’s MRI. Of the 401 complete (T1/T2/PD) individual data sets submitted to the multistep image processing pipeline, 76 failed to meet image quality control standards for one or more steps in the pipeline, resulting in a sample size of N = 325 for the present report.
### Biostatistical Analysis
Initial descriptive statistics by age and sex were generated based on native, unadjusted brain volume measurements. Preliminary analyses revealed large differences in the variances of regional volumes (P < 0.0001, Bartlett’s T), which were proportional to regional means for all structures (Pearson’s r > 0.90). The coefficient of variation (CV), defined as the standard deviation (SD) expressed as a percentage of the mean (CV = SD/mean × 100), was therefore used as the measure of variability to permit comparisons of variability across brain structures on the same scale (Lange et al. 1997; Van Belle and Fisher 2004).
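The CV is straightforward to compute; a small sketch with toy (non-study) numbers:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = SD / mean * 100: expresses the sample SD as a
    percentage of the mean, so structures with very different
    absolute sizes can be compared on one variability scale."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Toy example (not study data): mean 10, sample SD 1 -> CV = 10%
cv = coefficient_of_variation([9.0, 10.0, 11.0])
```

Because the CV is dimensionless, a 43% CV for the small lateral ventricles and a 9% CV for total brain volume can be compared directly, which a raw SD in cm³ would not allow.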
Mixed-effects models (Laird and Ware 1982; Lange and Laird 1989; Venables and Ripley 2002) were applied to evaluate the simultaneous effects of age, sex, AFI, parental education, and BMI on brain volumes. The simultaneous inclusion of variables that could potentially influence brain volumes provides greater precision in the estimates of the effects of each of these predictors. Race and ethnicity were not included in these models because many cell occupancies were too small to yield reliable estimates. TBV served as a covariate for the analysis of regional brain volumes, and hemisphere/side (left vs. right) as a within-subject repeated measure under a compound symmetric variance–covariance structure for the volumes of the 4 lobes, lateral ventricles, subcortical GM and cerebellum. Linear models without random effects were employed for TBV, whole-brain GM, whole-brain WM, and brainstem volumes. The best-fitting yet simplest model for each structure was obtained by uniform application of the Akaike Information Criterion (AIC; Akaike 1974) across all structures. A testwise false-positive error rate was set at 0.05, thus controlling for potential experimentwise errors. All data analysis was performed in R version 2.10.1 (12/14/09 build; http://www.rproject.org/foundation/main.html), whose results are equivalent to those produced by SAS.
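The AIC-based choice between a linear and a quadratic (inverted-U) age trajectory can be illustrated on synthetic data. The AIC used here is the standard Gaussian form up to an additive constant, and the trajectory and noise are invented, not taken from the study:

```python
import numpy as np

def aic(y, yhat, k):
    """Gaussian AIC up to a constant: n * ln(RSS / n) + 2k,
    where k counts the fitted parameters."""
    n = len(y)
    rss = float(np.sum((y - yhat) ** 2))
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(0)
age = np.linspace(4, 18, 80)
# Synthetic inverted-U volume trajectory (illustrative, not study data)
vol = 1300 + 8 * age - 0.6 * age**2 + rng.normal(0, 3, age.size)

fits = {}
for degree in (1, 2):
    coefs = np.polyfit(age, vol, degree)
    yhat = np.polyval(coefs, age)
    fits[degree] = aic(vol, yhat, k=degree + 1)

best = min(fits, key=fits.get)  # lowest AIC wins; here the quadratic
```

The AIC penalizes each extra parameter by 2, so a quadratic term is retained only when the reduction in residual error outweighs that penalty, which is the sense in which the "best-fitting yet simplest" model is selected.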
Based on reported effect sizes (Lange et al. 1997), we estimate that our sample size (N = 325) provided statistical power of at least 80% to detect regional age- and sex-related variation in all measured structures at all ages at a false-positive error rate of 5%.
## Results
### Descriptive Statistics
Table 3 provides average native volumes (not adjusted for TBV) and coefficients of variation for key structures. Considerable variability in the volumes was seen across measures with the lateral ventricles showing the highest volumetric variability. Figure 1 displays individual data points and best-fitting age curves separately for males and females for all regional volumes without consideration of any other covariates. Table 4 shows volumes for each year of age for males and females, represented as a percentage of the averaged 17- to 18-year value. Because of the sparse numbers of individuals in some cells, these values are more uneven across the age range than the fitted curves shown in Figure 1. Nonetheless, general patterns can be discerned consistent with the fitted curves.
Table 3
Means and coefficients of variation (CVs) for brain volumes
A. Mean volume (cm³): Total (N = 325), Male (n = 152), Female (n = 173)

| Volume | Total T | Total RH | Total LH | Male T | Male RH | Male LH | Female T | Female RH | Female LH |
|---|---|---|---|---|---|---|---|---|---|
| Total brain^a | 1262.43 | — | — | 1327.96 | — | — | 1204.86 | — | — |
| Whole-brain GM | 787.13 | — | — | 824.48 | — | — | 754.31 | — | — |
| Whole-brain WM | 475.30 | — | — | 503.48 | — | — | 450.55 | — | — |
| Lobar GM | 635.34 | 317.7 | 317.64 | 665.97 | 332.86 | 333.11 | 608.43 | 304.38 | 304.04 |
| Frontal | 265.81 | 133.12 | 132.69 | 278.54 | 139.49 | 139.05 | 254.62 | 127.52 | 127.10 |
| Parietal | 136.40 | 67.78 | 68.62 | 143.10 | 70.94 | 72.16 | 130.51 | 65.01 | 65.50 |
| Temporal | 173.22 | 87.60 | 85.62 | 180.98 | 91.51 | 89.48 | 166.40 | 84.17 | 82.23 |
| Occipital | 59.92 | 29.20 | 30.71 | 63.35 | 30.93 | 32.42 | 56.90 | 27.69 | 29.21 |
| Lobar WM | 394.18 | 196.87 | 197.31 | 418.49 | 209.16 | 209.34 | 372.82 | 186.07 | 186.75 |
| Frontal | 170.04 | 84.75 | 85.29 | 180.54 | 90.00 | 90.53 | 160.82 | 80.13 | 80.69 |
| Parietal | 93.07 | 46.40 | 46.67 | 98.70 | 49.16 | 49.54 | 88.12 | 43.96 | 44.16 |
| Temporal | 85.93 | 43.28 | 42.65 | 90.88 | 45.84 | 45.04 | 81.58 | 41.04 | 40.55 |
| Occipital | 45.14 | 22.44 | 22.70 | 48.38 | 24.15 | 24.23 | 42.30 | 20.94 | 21.36 |
| Subcortical GM | 38.83 | 19.38 | 19.46 | 40.33 | 20.15 | 20.18 | 37.52 | 18.70 | 18.82 |
| Thalamus | 14.45 | 7.20 | 7.25 | 14.91 | 7.43 | 7.47 | 14.04 | 6.99 | 7.05 |
| Caudate nucleus | 11.23 | 5.58 | 5.64 | 11.60 | 5.78 | 5.83 | 10.89 | 5.41 | 5.48 |
| Putamen | 10.67 | 5.38 | 5.29 | 11.25 | 5.69 | 5.56 | 10.16 | 5.11 | 5.05 |
| Globus pallidus | 2.49 | 1.21 | 1.28 | 2.57 | 1.25 | 1.32 | 2.42 | 1.18 | 1.24 |
| Lateral ventricles | 11.69 | 5.67 | 6.01 | 12.40 | 6.00 | 6.40 | 11.06 | 5.38 | 5.68 |
| Cerebellum | 132.82 | 66.12 | 66.71 | 138.85 | 69.10 | 69.75 | 127.53 | 63.50 | 64.03 |
| Brainstem | 29.32 | — | — | 30.69 | — | — | 28.11 | — | — |

B. CV (%)^b

| Volume | Total T | Total RH | Total LH | Male T | Male RH | Male LH | Female T | Female RH | Female LH |
|---|---|---|---|---|---|---|---|---|---|
| Total brain volume^a | 9.08 | — | — | 7.92 | — | — | 8.51 | — | — |
| Whole-brain GM | 10.30 | — | — | 8.79 | — | — | 9.79 | — | — |
| Whole-brain WM | 13.86 | — | — | 12.99 | — | — | 12.35 | — | — |
| Lobar GM | 11.31 | 11.30 | 11.35 | 9.94 | 9.90 | 10.01 | 10.81 | 10.85 | 10.79 |
| Frontal | 11.20 | 11.23 | 11.25 | 9.68 | 9.66 | 9.79 | 10.84 | 10.93 | 10.83 |
| Parietal | 13.40 | 13.41 | 13.74 | 12.09 | 12.30 | 12.28 | 13.09 | 13.06 | 13.46 |
| Temporal | 10.97 | 11.22 | 10.96 | 9.94 | 10.30 | 9.85 | 10.33 | 10.54 | 10.39 |
| Occipital | 17.35 | 18.22 | 17.69 | 16.70 | 17.56 | 16.96 | 16.27 | 17.13 | 16.84 |
| Lobar WM | 14.62 | 14.62 | 14.67 | 13.48 | 13.44 | 13.58 | 13.36 | 13.34 | 13.42 |
| Frontal | 14.31 | 14.33 | 14.38 | 13.02 | 13.01 | 13.16 | 13.14 | 13.19 | 13.18 |
| Parietal | 15.48 | 15.57 | 15.74 | 14.03 | 14.38 | 14.01 | 14.79 | 14.68 | 15.31 |
| Temporal | 15.98 | 16.34 | 16.02 | 15.15 | 15.40 | 15.35 | 14.91 | 15.34 | 14.89 |
| Occipital | 18.77 | 19.48 | 19.07 | 18.32 | 18.56 | 19.08 | 16.55 | 17.55 | 16.64 |
| Subcortical GM | 8.24 | 8.29 | 8.33 | 7.64 | 7.60 | 7.81 | 7.18 | 7.20 | 7.32 |
| Thalamus | 9.10 | 9.09 | 9.39 | 9.04 | 8.95 | 9.48 | 8.15 | 8.17 | 8.38 |
| Caudate nucleus | 9.76 | 9.92 | 9.89 | 8.62 | 8.69 | 8.86 | 9.83 | 10.00 | 9.96 |
| Putamen | 11.67 | 11.88 | 11.90 | 11.05 | 10.97 | 11.49 | 9.87 | 10.17 | 10.18 |
| Globus pallidus | 13.26 | 15.46 | 13.45 | 13.46 | 16.37 | 13.01 | 12.41 | 14.06 | 13.15 |
| Lateral ventricles | 43.38 | 46.97 | 45.35 | 40.89 | 45.01 | 44.17 | 45.22 | 48.39 | 45.79 |
| Cerebellum | 10.06 | 10.04 | 10.22 | 9.29 | 9.25 | 9.47 | 8.95 | 8.97 | 9.08 |
| Brainstem | 10.93 | — | — | 10.45 | — | — | 9.56 | — | — |
T = RH + LH; RH = right hemisphere; LH = left hemisphere.
^a Total brain volume is defined as the sum of whole-brain GM and WM volumes.
^b CV (%) = SD/mean × 100.
Table 4
Means of regional brain structure volumes by age and sex expressed as a percent of 17- to 18-year-old volumes (cm3)
Age in years (number of males/females per cell): 4 (2/2), 5 (11/11), 6 (20/21), 7 (10/14), 8 (14/12), 9 (9/15), 10 (12/19), 11 (9/16), 12 (14/10), 13 (13/11), 14 (11/11), 15 (11/10), 16 (3/8), 17–18 (14/29). Values for ages 4–16 are percentages of the 17- to 18-year-old volume; the 17–18 column gives that reference volume in cm³.

| Structure | Sex | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17–18 (cm³) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Total brain^a | Male | 105.8 | 103.3 | 100.9 | 101.4 | 104.6 | 103.3 | 101.8 | 101.7 | 103.3 | 102.9 | 102.7 | 105.6 | 99.4 | 1308.22 |
| | Female | 83.7 | 89.5 | 96.5 | 99.1 | 98.3 | 98.2 | 94.6 | 99.6 | 96.8 | 96.9 | 93.0 | 93.3 | 92.8 | 1267.67 |
| Whole-brain GM | Male | 116.6 | 116.0 | 113.1 | 112.4 | 114.6 | 115.6 | 109.4 | 109.9 | 110.9 | 107.0 | 106.2 | 106.7 | 99.5 | 749.10 |
| | Female | 101.9 | 105.6 | 113.0 | 115.7 | 114.7 | 113.2 | 107.7 | 110.0 | 109.0 | 106.6 | 102.0 | 99.2 | 98.2 | 696.55 |
| Whole-brain WM | Male | 90.8 | 86.6 | 85.0 | 87.0 | 91.4 | 86.4 | 92.1 | 90.9 | 93.4 | 98.2 | 98.0 | 104.8 | 99.4 | 543.41 |
| | Female | 70.6 | 80.0 | 87.0 | 90.1 | 89.8 | 90.8 | 90.0 | 99.3 | 93.9 | 97.4 | 93.9 | 98.8 | 98.7 | 487.70 |
| Subcortical GM | Male | 100.5 | 101.8 | 102.6 | 107.4 | 104.6 | 104.6 | 104.0 | 105.1 | 103.5 | 105.5 | 104.4 | 106.7 | 100.8 | 38.77 |
| | Female | 91.8 | 96.8 | 102.8 | 102.9 | 102.4 | 103.3 | 101.0 | 102.6 | 101.4 | 101.2 | 99.4 | 102.4 | 99.7 | 37.01 |
| Lateral ventricles | Male | 119.2 | 82.4 | 77.4 | 80.4 | 87.1 | 110.5 | 76.1 | 92.2 | 89.5 | 75.4 | 106.1 | 85.6 | 96.9 | 14.01 |
| | Female | 55.5 | 81.0 | 103.9 | 92.1 | 81.3 | 111.3 | 85.6 | 109.7 | 88.5 | 92.5 | 94.2 | 82.4 | 97.2 | 11.70 |
| Cerebellum | Male | 94.3 | 95.5 | 94.6 | 97.8 | 99.1 | 103.9 | 99.3 | 99.4 | 101.3 | 104.0 | 102.9 | 107.1 | 100.0 | 138.95 |
| | Female | 85.6 | 93.4 | 95.7 | 100.6 | 102.8 | 101.3 | 99.9 | 99.7 | 100.2 | 102.1 | 98.9 | 98.8 | 99.1 | 128.37 |
| Brainstem | Male | 85.5 | 84.6 | 85.0 | 91.8 | 93.3 | 95.3 | 93.7 | 95.1 | 96.7 | 101.8 | 99.0 | 102.7 | 99.4 | 65.01 |
| | Female | 79.1 | 85.7 | 91.2 | 93.7 | 93.9 | 94.4 | 94.1 | 100.0 | 98.3 | 100.0 | 101.5 | 100.3 | 98.9 | 58.60 |
^a Total brain volume is defined as the sum of whole-brain GM and whole-brain WM.
Figure 1.
Individual data points and best-fitting cross-sectional age curves for males and females separately for all regional volumes without consideration of any other covariates.
For males, the best-fitting models of age-related changes in brain volumes were predominantly linear. Linear volumetric relationships were seen for TBV, whole-brain WM, frontal lobe WM, parietal lobe GM and WM, temporal lobe GM and WM, occipital lobe GM and WM, the caudate, putamen, and lateral ventricles. The best-fitting models were curvilinear (quadratic, in the shape of an inverted-U curve) for whole-brain GM, subcortical GM, frontal lobe GM, thalamus, globus pallidus, cerebellum, and brainstem. No additional or higher-order associations were found.
For females, age-related changes were more predominantly quadratic, also in the shape of an inverted-U curve. Linear volumetric relationships were seen for occipital GM, parietal WM, putamen, globus pallidus, and the lateral ventricles. Quadratic volumetric relationships were seen for TBV, whole-brain GM and WM, frontal WM, temporal GM and WM, parietal GM, occipital WM, subcortical GM, thalamus, caudate, cerebellum, and brainstem. No additional or higher-order associations were found.
### Multiple Regression Analyses
The multiple regression models provided the best estimates of age and other associations because they adjusted simultaneously for the additional measured sources of variance available. Table 5 summarizes the results of the mixed models that considered age, sex, and hemisphere simultaneously in a group analysis. To simplify viewing, only estimates (regression coefficients) that were statistically significant are listed in the table. The initial set of models also included parental education, AFI, and BMI. Preliminary analyses indicated that AFI and parental education had no significant relations with any regional brain volume (P values ranged from 0.27 to 0.98); these were therefore dropped from the final models. Recruitment and scan site (PSC) was also entered in the models to evaluate potential variation associated with scanning devices, but this variable was not statistically significant and was dropped from the final models. Although fallback scans were more prevalent in younger children, this factor, when added to the multivariable model, did not significantly change the coefficients in Table 5 and was dropped as well.
Table 5
Results of mixed-effects regression models, showing covariates that are statistically significant predictors of regional brain volumes
Columns: Brain volume; Reference volume (female right hemisphere); then percent change (or percent change per cross-sectional year) from the reference volume for TBV^a (mean: 1262.43 cm³), Hemisphere (left minus right), Age (mean: 10.9 years), Sex (male minus female), Age², Body mass index (mean: 19.2), Sex by TBV (male), and Sex by Hemisphere (left and male). Only statistically significant coefficients are listed; within each row they appear in the same left-to-right order as the columns above.

- Total brain^a: 1204.86^b; 2.03*; 10.28***; −0.09*
- Whole-brain GM: 787.13; 0.08***; −0.87***; −0.19**
- Whole-brain WM: 475.30; 0.09***; 1.44***; 0.33**; 0.02*
- Lobar GM: 318.96; 0.08***; −1.11***; −0.19**
- Frontal GM: 134.34; 0.08***; −0.32**; −0.97***
- Parietal GM: 68.43; 0.09***; 0.73*; −1.57***; −0.25*; −0.02*; 1.06*
- Temporal GM: 87.95; 0.07***; −2.25***; −0.70***; −0.39*
- Occipital GM: 28.73; 0.06***; 5.26***; −1.92***; 3.19*; −0.51*
- Lobar WM: 196.93; 0.09***; 0.22*; 0.32**
- Frontal WM: 84.16; 0.09***; 0.65***; 1.37***; 0.33**
- Parietal WM: 46.02; 0.09***; 0.60*; 0.07*; 0.26**
- Temporal WM: 43.53; 0.09***; −1.46**; 1.64***; 0.13*
- Occipital WM: 21.93; 0.08***; 1.17**; 2.14***; 0.04**
- Subcortical GM: 19.25; 0.05***; 0.79***; 1.37*; −0.78**
- Thalamus: 7.30; 0.06***; 0.68***; 1.64**; −0.05*
- Caudate nucleus: 5.63; 0.06***; 1.13***; −0.28**
- Putamen: 5.22; 0.04***; −1.24***; 5.33***; −1.26*
- Globus pallidus: 1.20; 0.03***; 5.30***; −0.38*
- Lateral ventricles: 5.86; 0.16***
- Cerebellum: 66.10; 0.05***; 0.89***; 2.12**; 2.03*; −0.07*
- Brainstem: 29.38; 0.06***; 1.46***
^a Total brain volume is defined as the sum of whole-brain GM and whole-brain WM.
^b Reference volume includes left and right hemispheres.
*P < 0.05, **P < 0.01, ***P < 0.001.
The models were constructed such that female volumes were designated as the baseline reference volume with subsequent coefficients indicating the percent increase or decrease relative to that baseline. For structures for which separate hemispheric volumes were available, female right hemisphere volume was designated as an arbitrary baseline reference, relative to which the coefficients indicate percent increase or decrease. Thus, for example, for parietal GM, the Hemisphere estimate (0.73) indicates that the left hemisphere volume was increased relative to the right by 0.73%. In addition, there is also a significant Sex by TBV interaction and a significant Sex by Hemisphere interaction, indicating that the male left parietal GM is larger still by another estimated 1.06%, less 0.02% after adjusting for TBV. Thus, the effects of multiple factors are considered simultaneously.
Higher-order terms were also tested in the models to evaluate potential nonlinear relationships between age and specific volumes. Thus, where a quadratic age term (Age2) appears in Table 5, the relationship to age is not a simple linear function, but a curvilinear one (inverted-U shaped), similar to the shapes seen in Figure 1. Interactions between sex and the quadratic term were tested, but were not found to be statistically significant, even though different trajectories were fitted for the 2 sexes for all structures. The model selection criterion (the AIC) found that these interactions were less important than those included in the table based on best-fitting simplest models that included and excluded these coefficients. Higher-order polynomial terms were also tested, but none were found to be statistically significant.
Table 5 can be used to estimate normative volumes for specific regional structures when age, sex, TBV, and BMI are known. In the example shown below, we estimate the left temporal lobe GM volume of a healthy male 18.0 years of age with TBV of 1400 cm3 and a BMI of 22.2. Note that no term is included for sex because it was not a significant predictor of temporal lobe GM following adjustment for TBV.
Upon substituting the appropriate values from Table 5, the estimated left temporal lobe GM volume is 87.95 × [1 + 0.0007 × (1400 − 1262.43) − 0.0225 − 0.0070 × (18.0 − 10.9) − 0.0039 × (22.2 − 19.2)] ≈ 89.0 cm3.
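This worked example can be sketched in a few lines of Python. The coefficient-to-column reading for temporal GM (TBV 0.07% per cm³, left hemisphere −2.25%, age −0.70% per year, BMI −0.39% per unit) and the additive-percent model form are our interpretation of Table 5, not code from the study:

```python
def normative_volume(reference_cm3, pct_terms):
    """Combine Table-5-style percent adjustments additively and apply
    them to the reference volume. The model form is our reading of
    the table, not a formula quoted from the paper."""
    return reference_cm3 * (1.0 + sum(pct_terms) / 100.0)

# Left temporal lobe GM: male, age 18.0 years, TBV 1400 cm3, BMI 22.2
terms = [
    0.07 * (1400 - 1262.43),   # TBV, % per cm3 from the sample mean
    -2.25,                     # left hemisphere (left minus right)
    -0.70 * (18.0 - 10.9),     # age, % per year from the mean age
    -0.39 * (22.2 - 19.2),     # BMI, % per unit from the mean BMI
]                              # no sex term: not significant here
estimate = normative_volume(87.95, terms)  # ~89.0 cm3
```

The terms are centered at the sample means given in the Table 5 header, so a subject at the mean age, TBV, and BMI would simply receive the reference volume.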
The findings displayed in Table 5 are summarized below. Only findings that are statistically significant are addressed. Probability levels for statistical significance are indicated in the table by asterisks.
### Total Brain Volume
As shown in Table 5, the mean TBV for males exceeded that of females by 10.28% overall. TBV increased by 2.03% per year in cross-sectional age (P < 0.05), offset by the significant decrease in TBV by quadratic age of −0.09% (P < 0.05), reflecting a curvilinear, age-related increase and then decrease (inverted U) in TBV across the age range. Whole-brain GM volume declined at approximately 6.56 cm3 per year of age, whereas whole-brain WM increased by 6.49 cm3 per year of age throughout the age range.
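The cm³-per-year figures above can be recovered from the percent coefficients, assuming (as our reading of Tables 3 and 5 suggests) that the −0.87% and +1.44% per-year age terms apply to the female reference volumes of 754.31 cm³ (whole-brain GM) and 450.55 cm³ (whole-brain WM):

```python
def annual_change_cm3(volume_cm3, pct_per_year):
    """Convert a percent-per-year coefficient into an absolute
    annual change for a given reference volume."""
    return volume_cm3 * pct_per_year / 100.0

# Female reference volumes (Table 3) with the Table 5 age terms:
gm_change = annual_change_cm3(754.31, -0.87)   # ~ -6.56 cm3 per year
wm_change = annual_change_cm3(450.55, 1.44)    # ~ +6.49 cm3 per year
```

The near-equal magnitudes of the GM decline and WM gain are consistent with the roughly flat TBV trajectory across the age range.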
### Regional Brain Volumes
TBV, hemisphere, age, BMI, and to a lesser extent sex were all associated with variation in regional brain volumes when considered simultaneously in the context of the multivariable model. All regional volumes were positively correlated with TBV. The largest such association emerged for the lateral ventricles (0.16%). Coefficients were larger for lobar GM (0.08%) and WM (0.09%) than for subcortical GM structures (0.05%).
### Age
As expected, after adjusting for TBV, lobar GM declined across age by an estimated 1.11% per year of age, whereas lobar WM increased by approximately 1.54% per year of age. There was more variability across relative regional lobar volumes for GM than for WM. For GM, the age-related declines were far more prominent in the parietal and occipital cortex than in the frontal and temporal cortex. In contrast, for lobar WM, the rate of volumetric increase with age was relatively more consistent across structures, ranging from 1.37% (frontal) to 2.14% (occipital) per year of age.
Age-related change was not detected, however, for total subcortical GM or for the lateral ventricles. The subcortical summary measure nonetheless masks an age-related increase for the thalamus and decreases for the caudate and globus pallidus. Although the age estimate for the ventricles was relatively large (0.98%), it failed to reach statistical significance, presumably because of the high degree of interindividual variation indicated by the very large CV shown in Table 3.
### Sex
As noted previously, the most prominent finding with respect to sex was that TBV was approximately 10% larger in males than in females. The absence of a sex by age interaction suggests that this difference was relatively constant across the age range. All sex effects on regional volumes are reported after adjusting for TBV. Occipital GM volumes were larger by an estimated 3.2% in males relative to females. A sex difference in the putamen was especially striking, with the male volume larger by >5%. The cerebellum was approximately 2% greater in volume in males. There were also several small interactions of sex with TBV (Table 5).
### Hemispheric Asymmetry
Significant hemispheric asymmetries were seen in both lobar and subcortical volumes. The frontal and temporal lobes showed a rightward GM asymmetry, whereas the parietal and occipital lobes showed a leftward GM asymmetry. The leftward asymmetry of occipital GM was especially pronounced, by approximately 5%. For lobar WM, there was a rightward asymmetry in the temporal lobe, in contrast to the leftward asymmetry seen for other lobar volumes.
Asymmetries were also seen in subcortical GM structures, most of which showed leftward asymmetry, the exception being the putamen, which was larger on the right. The asymmetry was most pronounced for the globus pallidus, by approximately 5%. Finally, the left lateral ventricle was larger than the right, by nearly 6%.
There were a few interactions between sex and hemisphere, but no significant interactions were found between age and hemisphere, suggesting that these asymmetries are relatively stable across the age range. A leftward parietal GM asymmetry was seen in both males and females but was more pronounced in males, as indicated by the significant sex × hemisphere interaction. The putamen showed a significant rightward asymmetry, which was more pronounced in males.
### Body Mass Index
There were small but statistically significant and consistent associations of BMI with tissue-specific lobar brain volumes. A higher BMI was associated with smaller GM volumes and larger WM volumes, across age and sex, without any net impact on TBV. The largest such association was for occipital GM (−0.51%), but the association for occipital WM failed to reach statistical significance. BMI was not associated with variations in any other volumes.
## Discussion
This study is, to our knowledge, the first pediatric MRI study to implement a population-based sampling strategy to generate unbiased estimates of global and regional brain volumes in a large sample of healthy children and adolescents, ages 4–18 years. It is also the first to examine comprehensively the effects of key socioeconomic indicators on healthy brain volumetric development and the first to report the effects of BMI on brain volumes in healthy children. We describe and quantify associations with age and sex for global brain volumes, regional brain volumes, and hemispheric asymmetries, as measured by multispectral MRI in conjunction with an automated processing pipeline, strict quality control, and a well-established biostatistical model.
There were minimal cross-sectional age-related differences in TBV within the 4- to 18-year-old age range, larger TBVs in males, age-related decreases in cerebral GM, concomitant increases in WM, and more prominent age-related differences in cortical structures and the cerebellum relative to subcortical structures. Salient leftward asymmetries (>1% of TBV) were seen in occipital GM, occipital WM, temporal WM, caudate, and globus pallidus, whereas a prominent rightward asymmetry was seen in temporal lobe GM. Neither of the 2 socioeconomic indicators examined here, AFI or parental education, was significantly related to the volumetric brain measures examined in this healthy pediatric sample. BMI index was inversely related to cerebral GM volumes and positively related to cerebral WM volumes, with no net effect on overall brain volume. We now discuss these findings in more detail.
TBV showed a very small, curvilinear pattern of association across the age range, first increasing and then decreasing very slightly, consistent with some previous reports (e.g., Giedd et al. 1996; Sporn et al. 2003; Lenroot et al. 2007). Thus, most of the increase in global brain volume has already occurred by approximately age 5, or early school age. TBV was approximately 10% greater in males than in females across the age range, also consistent with both the pediatric (Caviness et al. 1996; Giedd et al. 1996, 1999; Reiss et al. 1996; Lange et al. 1997; Kennedy et al. 1998) and adult (Cosgrove et al. 2007) literature. There is evidence that males, at birth, have approximately 9% larger intracranial volumes (including 10% more cortical GM and 6% more cortical WM) than do females (Gilmore et al. 2007), thus extending this sexual dimorphism to younger age ranges.
All regional volumes were significantly correlated with TBV, necessitating adjustment for overall brain size when making regional, and particularly male–female, comparisons. After such adjustment, considerable variability among regional volumes remained. Associations with TBV were generally larger for lobar GM and WM than for subcortical GM structures. Thus, linear stereotaxic normalization may obscure true, though subtle, relationships between regional, particularly cortical, measures and brain size, particularly during development.
Following adjustments for TBV, there were few sex differences, consistent with some, but not all (Sowell et al. 2002), earlier reports in children (Caviness et al. 1996; Giedd et al. 1996; Lange et al. 1997; Kennedy et al. 1998) and with data in adults (Luders et al. 2002). Thus, many reported sex differences may actually reflect differences in brain size. Only the relative volumes of occipital GM, putamen, and cerebellum differed significantly by sex, all larger in males. Some of these effects are consistent with prior reports (e.g., Giedd et al. 1996 reported larger putamen in males), but other previously reported sex differences were not detected. Among these nonreplicated findings are reports of proportionally larger GM volumes in females involving the frontal lobe (Lenroot et al. 2007) and the caudate (Giedd et al. 1996; Wilke et al. 2007). Inconsistencies likely arise from the high interindividual and interstudy variability seen across similar efforts, differences in the specific regional measures examined, sampling design, biostatistical models applied, and other methodological issues. The determinants and implications of any sex-related differences in brain volumes are unclear. Few studies have examined hormonal, genetic, or experiential influences on global brain size. Neither have the functional correlates of such differences been elucidated, and indeed our cognitive data revealed few sex differences (Waber et al. 2007).
Controlling for brain size may be critical when assessing variations in regional brain volumes and other cerebral measures, particularly when examining sex differences and/or disorders associated with deviations in brain size, such as autism spectrum disorders, which are associated with early brain overgrowth (Lainhart et al. 1997; Schumann et al. 2010), and ADHD, which is associated with smaller brain volumes (Valera et al. 2007). Few studies have controlled specifically and stringently for whole-brain volumes when examining regional structural variations. Sowell et al. (2007) is a notable exception, which matched a subset of subjects for brain size to confirm that sex differences identified in cortical thickness were independent of brain volume. Further allometric study of brain development in both health and neurodevelopmental disorders is warranted.
In contrast to the relative stability of TBVs, cortical GM and cerebral WM volumes showed dynamic relations with age. Global and regional GM volumes decreased and WM increased across the age range for nearly all regions reported here, in keeping with prior findings (Caviness et al. 1996; Reiss et al. 1996; Lange et al. 1997; Giedd et al. 1999). Only the putamen and lateral ventricles failed to show a significant relationship to age. The absence of a relation of age with the lateral ventricular volumes is most likely attributable to high between-subject variability, noted elsewhere in an independent sample (Lange et al. 1997).
When both sexes are analyzed together, most age-related associations (consisting of decreases in GM and increases in WM in most regions) were best described by linear functions, with curvilinear functions observed for TBV, parietal WM, thalamus, and cerebellum only. In the context of the more complex multiple regression model employed here, nonlinear relationships were less prominent than in earlier reports (Giedd et al. 1996, 1999). Within our sample, fewer subjects were available at the youngest and oldest ages, potentially limiting our ability to detect nonlinear relationships, as the identification of more complex functions requires more data points and parameters than those needed to fit simple linear functions (Van Belle and Fisher 2004). The observed trajectories did vary to some extent by sex; females tended to show curvilinear functions more often than did males. These sex differences merit further investigation using forthcoming longitudinal data from this sample. If confirmed, they should be explored in relation to genetic, pubertal, and hormonal variables.
After adjusting for TBV, lobar GM declined across cross-sectional age by an estimated 1.11% per year of age, whereas lobar WM increased linearly by 1.54% per year. Age-related associations with GM were more variable across regions than those for WM. Declines in GM volumes were more prominent in parietal and occipital cortex than in frontal and temporal cortex, in keeping with a posterior-to-anterior sequence of maturation. In contrast, lobar WM showed a relatively consistent increase across lobar regions, except for the parietal lobe, ranging from 1.37% (frontal) to 2.14% (occipital) increase per year. These associations are generally consistent with previous reports (Giedd et al. 1996, 1999; Reiss et al. 1996; Sowell et al. 2002; Wilke et al. 2007).
The increases in WM and concurrent decreases in GM are consistent with progressive myelination and thinning of the cortical mantle reported from postmortem studies (Huttenlocher and Dabholkar 1997). The concomitant decreases in GM may reflect synaptic pruning. Although subcortical GM taken as a whole was not significantly associated with age, a substantial age-related increase was seen for the thalamus, along with smaller but statistically significant decreases for the caudate and globus pallidus. Overall, the relationship of age to volumes of the subcortical GM structures was more attenuated relative to the lobar volumes, reflecting a more protracted developmental course of cortical regions, despite the involvement of the basal ganglia in higher-order cognitive functions (Middleton and Strick 2000). In contrast with a previous report (Giedd et al. 1996), the cerebellum showed large volumetric increases through approximately age 11 years. Whether these various age-related differences in volumes also reflect varying capacities for experience-driven plasticity is unknown.
Hemispheric asymmetries were present in nearly every regional volume. Most reflected a larger volume on the left. Particularly salient were the leftward asymmetries of the occipital lobe, especially its GM, and temporal lobe WM, consistent with torque (the opposing tendency of the left posterior/right anterior brain to protrude further than its contralateral counterpart; LeMay 1976; Lancaster et al. 2003). Additional prominent leftward asymmetries were seen in the caudate and globus pallidus. Some leftward hemispheric asymmetries reported here contrast with an earlier report (Giedd et al. 1996) of rightward hemispheric asymmetries for the 4- to 18-year-old age range. Few studies have related global asymmetries to more local asymmetries or investigated their functional significance. However, a recent study reported a positive relationship of torque to asymmetries of the planum temporale (Barrick et al. 2005), shown to be related to handedness and language lateralization (Preis et al. 1999).
In contrast to these leftward asymmetries, temporal lobe GM showed a prominent rightward asymmetry, consistent with findings in adults (Jack et al. 1989) and the tendency of the Sylvian fissure to course upward posteriorly at a steeper angle on the right than on the left, although this asymmetry may be less pronounced in children than in adults (Sowell et al. 2001). Although such studies have primarily involved adults, alterations in cerebral asymmetries have been reported in neurodevelopmental disorders such as dyslexia (Zadina et al. 2006), schizophrenia (Sharma et al. 1999), and autism (Lange et al. 2010a) and thus may hold clinical significance.
There were no significant interactions between asymmetry and age, suggesting that these volumetric asymmetries are relatively stable across the age range. The only interactions of these asymmetries with sex involved parietal GM, which showed a more pronounced leftward asymmetry in males than in females, and the putamen, which showed a more pronounced rightward asymmetry. Although speculative, the enhanced parietal asymmetry in males may be related to sex differences in certain visuospatial skills (Wolbers and Hegarty 2010).
Socioeconomic indicators (family income and parental education) were not associated with variations in brain volumes in our analysis, consistent with a prior analysis indicating that total and regional brain volumes do not mediate the association between parental education and IQ in this sample (Lange et al. 2010b). Although there are scattered reports in which associations between socioeconomic variables and structural brain development have been examined, these have generally been in the context of studies designed to address specific circumscribed brain structures in small samples of limited age range (Eckert et al. 2001; Raizada et al. 2008). To our knowledge, the present study is the first to conduct a comprehensive examination of regional and whole-brain volumes in relation to socioeconomic indicators in a large normative pediatric sample.
Socioeconomic status is a complex construct that has been associated with variation in life stress, social status, and neighborhood quality, as well as in child health and development (Evans and Kantrowitz 2002; Hackman and Farah 2009). Such associations likely reflect the combined and interactive influences of numerous environmental and biological influences, including genetic and epigenetic factors, whose effects may vary across the course of development.
Our stringent inclusion and exclusion criteria may also have contributed to the absence of association between socioeconomic indicators and volumetric MRI measures because the rate of health-based exclusions was significantly higher for lower-income participants in our study (Waber et al. 2007). This pattern is consistent with the higher rates of morbidity, including psychiatric morbidity, typically observed in lower-income populations (Kessler et al. 1994; Muntaner et al. 1998; Mackenbach et al. 2008). Indeed, a large proportion of low-income children who did not meet criteria for the study were excluded on the basis of elevated levels of behavioral symptoms, attention problems (as measured by the CBCL) being frequent. Certain family factors that were exclusionary criteria for our study, such as smoking and psychopathology in first-degree relatives, are also more prevalent in less well-educated individuals (Giovino 2002).
A novel finding was a relatively small but consistent association between BMI and tissue-specific lobar brain volumes, adjusted for all other covariates, including family income and parental education. Twelve percent of the sample was classified as obese by recent childhood norms, but none had diabetes or other diseases associated with obesity. An additional 14% were classified as overweight. Both percentages are lower than those observed by Singh et al. (2010), who reported in a large survey a 16% obesity and 32% overweight rate for US children from birth through 17 years in 2007. Higher BMI was associated with decreased whole-brain and lobar GM and increased whole-brain and lobar WM volumes, with no net effect on TBV and no effect on subcortical GM structures, cerebellum, or brainstem. These BMI findings are similar to those reported in healthy adults, including elderly adults, which document negative associations between BMI and GM volumes in various cortical regions (Pannacciulli et al. 2006; Gunstad et al. 2008; Taki et al. 2008). Obesity has also been associated with increases in WM volume in some studies (Walther et al. 2010), an effect reported to be at least partially reversible with dieting (Haltia et al. 2007), suggesting that some such effects are malleable. The present findings, derived from a well-documented population-based sample, indicate that associations between BMI and brain structure are already present in childhood and adolescence, a provocative observation whose significance, particularly its functional significance, warrants further investigation.
### Limitations
The relationships of brain volumes to age and sex described here are based on cross-sectional data and await refinement in future analyses of the complete, longitudinal data set. With respect to socioeconomic status, our recruitment relied on family income alone as a proxy and recruited from largely urban areas surrounding major medical centers, thereby excluding rural participants. Despite our rigorous geocoding recruitment strategy, the resulting parental educational levels were higher than expected, perhaps limiting extrapolation to less well-educated low-income populations.
## Conclusions
Data from this normative longitudinal pediatric database provide a reference for studies of both healthy brain development and a wide range of brain disorders affecting children and adolescents. This report provides an analysis of volumetric data from the first cross-sectional time point for the NIH MRI Study of Normal Brain Development, which will serve as a reference for future users. Volumetric data from this time point replicate and further quantify several prior findings in healthy pediatric samples of decreasing GM, increasing WM, relatively stable TBVs, and greater age-related variance in lobar structures relative to subcortical structures across the 4- to 18-year-old age range. Male brains were approximately 10% larger than female brains across the age range, and this size difference largely accounted for sex differences in regional brain volumes. Other specifics, however, such as the shape (linear vs. curvilinear) of several age-related functions, some sex and age relationships, and identified asymmetries differed from prior reports. Neither family income nor parental education was significantly associated with variations in the brain volumes examined here. However, BMI was inversely related to lobar (but not subcortical) GM volumes and positively associated with WM volumes. Further longitudinal and multimodal analyses will refine and expand on these findings and relate them to measures of cognitive and behavioral functioning.
## Funding
This project was supported with Federal funds from the National Institute of Child Health and Human Development, the National Institute on Drug Abuse, the National Institute of Mental Health, and the National Institute of Neurological Disorders and Stroke (Contract #s N01-HD02-3343; N01-MH9-0002; N01-NS-9-2314, -2315, -2316, -2317, -2319, and -2320; and NS34783).
## Data Access
Access to the NIH Pediatric MRI Data Repository is freely available to qualified researchers whose institutions are covered by a Federalwide Assurance, who are studying normal brain development, disorders, or diseases and/or developing image processing tools, and who agree to the terms of the Data Use Certification. Please see www.NIH-PediatricMRI.org for specific information on how to apply for access.
## Supplementary Material
Supplementary material can be found at: http://www.cercor.oxfordjournals.org/.
Special thanks to the NIH contracting officers for their support. We thank Janet E. Lainhart, MD, for her comments on an early manuscript. We also acknowledge the important contribution and remarkable spirit of John Haselgrove, PhD (deceased).
Disclaimer: The views herein do not necessarily represent the official views of the National Institute of Child Health and Human Development, the National Institute on Drug Abuse, the National Institute of Mental Health, the National Institute of Neurological Disorders and Stroke, the NIH, the US Department of Health and Human Services, or any other agency of the US Government. Conflict of Interest : None declared.
### Appendix: BDCG Authorship List
The MRI Study of Normal Brain Development is a cooperative study performed by a Data Coordinating Center (DCC), a Clinical Coordinating Center, a Diffusion Tensor Processing Center, and staff of the National Institute of Child Health and Human Development (NICHD), the National Institute of Mental Health (NIMH), the National Institute on Drug Abuse (NIDA), and the National Institute of Neurological Disorders and Stroke (NINDS), Rockville, MD.
Key personnel from the 6 pediatric study centers (PSCs) are as follows: Children’s Hospital Medical Center of Cincinnati, Principal Investigator William S. Ball, MD, Investigators Anna Weber Byars, PhD, Mark Schapiro, MD, Wendy Bommer, RN, April Carr, BS, April German, BA, Scott Dunn, RT; Children’s Hospital Boston, Principal Investigator Michael J. Rivkin, MD, Investigators Deborah Waber, PhD, Robert Mulkern, PhD, Sridhar Vajapeyam, PhD, Abigail Chiverton, BA, Peter Davis, BS, Julie Koo, BS, Jacki Marmor, MA, Christine Mrakotsky, PhD, MA, Richard Robertson, MD, Gloria McAnulty, PhD; University of Texas Health Science Center at Houston, Principal Investigators Michael E. Brandt, PhD, Jack M. Fletcher, PhD, Larry A. Kramer, MD, Investigators Grace Yang, MEd, Cara McCormack, BS, Kathleen M. Hebert, MA, Hilda Volero, MD; Washington University in St. Louis, Principal Investigators Kelly Botteron, MD, Robert C. McKinstry, MD, PhD, Investigators William Warren, Tomoyuki Nishino, MS, C. Robert Almli, PhD, Richard Todd, PhD, MD, John Constantino, MD; University of California Los Angeles, Principal Investigator James T. McCracken, MD, Investigators Jennifer Levitt, MD, Jeffrey Alger, PhD, Joseph O'Neill, PhD, Arthur Toga, PhD, Robert Asarnow, PhD, David Fadale, BA, Laura Heinichen, BA, Cedric Ireland BA; Children’s Hospital of Philadelphia, Principal Investigators Dah-Jyuu Wang, PhD and Edward Moss, PhD, Investigators Robert A. Zimmerman, MD, and Research Staff Brooke Bintliff, BS, Ruth Bradford, Janice Newman, MBA. The Principal Investigator of the data coordinating center at McGill University is Alan C. Evans, PhD, Investigators Rozalia Arnaoutelis, BS, G. Bruce Pike, PhD, D. Louis Collins, PhD, Gabriel Leonard, PhD, Tomas Paus, MD, Alex Zijdenbos, PhD, and Research Staff Samir Das, BS, Vladimir Fonov, PhD, Luke Fu, BS, Jonathan Harlap, Ilana Leppert, BE, Denise Milovan, MA, Dario Vins, BC, and at Georgetown University, Thomas Zeffiro, MD, PhD, and John Van Meter, PhD. 
Corresponding author Nicholas Lange, ScD, Harvard University/McLean Hospital, is the biostatistical study design and data analysis Investigator to the DCC, assisted by Michael P. Froimowitz, MS. The Principal Investigator of the Clinical Coordinating Center at Washington University is Kelly Botteron, MD, Investigators C. Robert Almli, PhD, Cheryl Rainey, BS, Stan Henderson, MS, Tomoyuki Nishino, MS, William Warren, Jennifer L. Edwards, MSW, Diane Dubois, RN, Karla Smith, Tish Singer and Aaron A. Wilber, MS. The Principal Investigator of the Diffusion Tensor Processing Center at the National Institutes of Health (NIH) is Carlo Pierpaoli, MD, PhD, Investigators Peter J. Basser, PhD, Lin-Ching Chang, ScD, and Lindsay Walker, MS. The Principal Collaborators at the NIH are Lisa Freund, PhD (NICHD), Judith Rumsey, PhD (NIMH), Lauren Baskir, PhD (NIMH), Laurence Stanford, PhD (NIDA), Karen Sirocco, PhD (NIDA) and from NINDS, Katrina Gwinn-Hardy, MD, and Giovanna Spinella, MD. The Principal Investigator of the Spectroscopy Processing Center at the University of California Los Angeles is James T. McCracken, MD, Investigators Jeffry R. Alger, PhD, Jennifer Levitt, MD, Joseph O’Neill, PhD.
## References
Akaike H. 1974. A new look at the statistical model identification. IEEE Trans Autom Contr. 19:716–723.

Almli CR, Rivkin MJ, McKinstry RC, Brain Development Cooperative Group. 2007. The NIH MRI study of normal brain development (Objective-2): newborns, infants, toddlers, and preschoolers. Neuroimage. 35:308–325.

Barrick TR, Mackay CE, Prima S, Maes F, Vandermeulen D, Crow TJ, Roberts N. 2005. Automatic analysis of cerebral asymmetry: an exploratory study of the relationship between brain torque and planum temporale asymmetry. Neuroimage. 24:678–691.

Brain Development Cooperative Group (corresponding author Evans AC). 2006. The NIH MRI study of normal brain development. Neuroimage. 30:184–202.

Caviness VS Jr., Kennedy DN, Richelme C, J, Filipek PA. 1996. The human brain age 7–11 years: a volumetric analysis based on magnetic resonance images. Cereb Cortex. 6:726–736.

Cocosco CA, Zijdenbos AP, Evans AC. 2003. A fully automatic and robust brain MRI tissue classification method. Med Image Anal. 7:513–527.

Collins DL, Evans AC. 1997. ANIMAL: validation and applications of nonlinear registration-based segmentation. Int J Pattern Recog Artif Intell. 11:1271.

Collins DL, Holmes CJ, Peters TM, Evans AC. 1995. Automatic 3D model-based neuroanatomical segmentation. Hum Brain Mapp. 3:190–208.

Collins DL, Neelin P, Peters TM, Evans AC. 1994. Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. J Comput Assist Tomogr. 18:192–205.

Cosgrove KP, Mazure CM, Staley JK. 2007. Evolving knowledge of sex differences in brain structure, function, and chemistry. Biol Psychiatry. 62:847–855.

Eckert MA, Lombardino LJ, Leonard CM. 2001. Planar asymmetry tips the phonological playground and environment raises the bar. Child Dev. 72:988–1002.

Evans GW, Kantrowitz E. 2002. Socioeconomic status and health: the potential role of environmental risk exposure. Annu Rev Public Health. 23:303–331.

Gedamu EL, Collins DL, Arnold DL. 2008. Automated quality control of brain MR images. J Magn Reson Imaging. 28:308–319.

Giedd JN, Blumenthal J, Jeffries NO, Castellanos FX, Liu H, Zijdenbos A, Paus T, Evans AC, Rapoport JL. 1999. Brain development during childhood and adolescence: a longitudinal MRI study. Nat Neurosci. 2:861–863.

Giedd JN, Snell JW, Lange N, Rajapakse JC, Casey BJ, Kozuch PL, Vaituzis AC, Vauss YC, Hamburger SD, Kaysen D, et al. 1996. Quantitative magnetic resonance imaging of human brain development: ages 4–18. Cereb Cortex. 6:551–560.

Gilmore JH, Lin W, Prastawa MW, Looney CB, Vetsa YS, Knickmeyer RC, Evans DD, Smith JK, Hamer RM, Lieberman JA, et al. 2007. Regional gray matter growth, sexual dimorphism, and cerebral asymmetry in the neonatal brain. J Neurosci. 27:1255–1260.

Giovino GA. 2002. Epidemiology of tobacco use in the United States. Oncogene. 21:7326–7340.

Gunstad J, Paul RH, Cohen RA, Tate DF, Spitznagel MB, Grieve S, Gordon E. 2008. Relationship between body mass index and brain volume in healthy adults. Int J Neurosci. 118:1582–1593.

Hackman DA, Farah MJ. 2009. Socioeconomic status and the developing brain. Trends Cogn Sci. 13:65–73.

Haltia LT, Viljanen A, Parkkola R, Kemppainen N, Rinne JO, Nuutila P, Kaasinen V. 2007. Brain white matter expansion in human obesity and the recovering effect of dieting. J Clin Endocrinol Metab. 92:3278–3284.

Harezlak J, Ryan LM, Giedd JN, Lange N. 2005. Individual and population penalized regression splines for accelerated longitudinal designs. Biometrics. 61:1037–1048.

Huttenlocher PR, Dabholkar AS. 1997. Regional differences in synaptogenesis in human cerebral cortex. J Comp Neurol. 387:167–178.

Jack CR Jr., Twomey CK, Zinsmeister AR, Sharbrough FW, Petersen RC, Cascino GD. 1989. Anterior temporal lobes and hippocampal formations: normative volumetric measurements from MR images in young adults. 172:549–554.

Kennedy DN, Lange N, Makris N, Bates J, Meyer J, Caviness VS Jr. 1998. Gyri of the human neocortex: an MRI-based analysis of volume and variance. Cereb Cortex. 8:372–384.

Kessler RC, McGonagle KA, Zhao S, Nelson CB, Hughes M, Eshleman S, Wittchen HU, Kendler KS. 1994. Lifetime and 12-month prevalence of DSM-III-R psychiatric disorders in the United States. Results from the National Comorbidity Survey. Arch Gen Psychiatry. 51:8–19.

Lainhart JE, Piven J, Wzorek M, Landa R, Santangelo S, Coon H, Folstein S. 1997. Macrocephaly in children and adults with autism. 36:282–290.

Laird NM, Ware JH. 1982. Random-effects models for longitudinal data. Biometrics. 38:963–974.

Lancaster JL, Kochunov PV, Thompson PM, Toga AW, Fox PT. 2003. Asymmetry of the brain surface from deformation field analysis. Hum Brain Mapp. 19:79–89.

Lange N, DuBray MB, Lee JE, Froimowitz MP, Froehlich A, N, Wright B, Ravichandran C, Fletcher PT, Bigler ED, et al. 2010a. Atypical diffusion tensor hemispheric asymmetry in autism. Autism Res. 3:350–358.

Lange N, Froimowitz MP, Bigler ED, Lainhart JE, Brain Development Cooperative Group. 2010b. Associations between IQ, total and regional brain volumes, and demography in a large normative sample of healthy children and adolescents. Dev Neuropsychol. 35:296–317.

Lange N, Giedd JN, Castellanos FX, Vaituzis AC, Rapoport JL. 1997. Variability of human brain structure size: ages 4–20 years. Psychiatry Res. 74:1–12.

Lange N, Laird NM. 1989. The effect of covariance structure on variance estimation in balanced growth curve models with random parameters. J Am Stat Assoc. 84:241–247.

LeMay M. 1976. Morphological cerebral asymmetries of modern man, fossil man, and nonhuman primate. 280:349–366.

Lenroot RK, Gogtay N, Greenstein DK, Wells EM, Wallace GL, Clasen LS, Blumenthal JD, Lerch J, Zijdenbos AP, Evans AC, et al. 2007. Sexual dimorphism of brain developmental trajectories during childhood and adolescence. Neuroimage. 36:1065–1073.

Luders E, Steinmetz H, Jancke L. 2002. Brain size and grey matter volume in the healthy human brain. Neuroreport. 13:2371–2374.

Mackenbach JP, Stirbu I, Roskam AJ, Schaap MM, Menvielle G, Leinsalu M, Kunst AE. 2008. Socioeconomic inequalities in health in 22 European countries. N Engl J Med. 358:2468–2481.

Middleton FA, Strick PL. 2000. Basal ganglia output and cognition: evidence from anatomical, behavioral, and clinical studies. Brain Cogn. 42:183–200.

Muntaner C, Eaton WW, Diala C, Kessler RC, Sorlie PD. 1998. Social class, assets, organizational control and the prevalence of common groups of psychiatric disorders. Soc Sci Med. 47:2043–2053.

Pannacciulli N, Del Parigi A, Chen K, Le DS, Reiman EM, Tataranni PA. 2006. Brain abnormalities in human obesity: a voxel-based morphometric study. Neuroimage. 31:1419–1425.

Preis S, Jancke L, Schmitz-Hillebrecht J, Steinmetz H. 1999. Child age and planum temporale asymmetry. Brain Cogn. 40:441–452.

Raizada RD, Richards TL, Meltzoff A, Kuhl PK. 2008. Socioeconomic status predicts hemispheric specialisation of the left inferior frontal gyrus in young children. Neuroimage. 40:1392–1401.

Reiss AL, Abrams MT, Singer HS, Ross JL, Denckla MB. 1996. Brain development, gender and IQ in children. A volumetric imaging study. Brain. 119(Pt 5):1763–1774.

Schumann CM, Bloss CS, Barnes CC, Wideman GM, Carper RA, Akshoomoff N, Pierce K, Hagler D, Schork N, Lord C, et al. 2010. Longitudinal magnetic resonance imaging study of cortical development through early childhood in autism. J Neurosci. 30:4419–4427.

Sharma T, Lancaster E, Sigmundsson T, Lewis S, Takei N, Gurling H, Barta P, Pearlson G, Murray R. 1999. Lack of normal pattern of cerebral asymmetry in familial schizophrenic patients and their relatives—the Maudsley Family Study. Schizophr Res. 40:111–120.

Singh GK, Kogan MC, van Dyck C. 2010. Changes in state-specific childhood obesity and overweight prevalence in the United States from 2003 to 2007. 164:598–607.

Sled J, Zijdenbos A, Evans AC. 1998. A non-parametric method for automatic correction of intensity non-uniformity in MRI data. IEEE Trans Med Imaging. 17:87–97.

Smith SM. 2002. Fast robust automated brain extraction. Hum Brain Mapp. 17:143–155.

Sowell ER, Peterson BS, Kan E, Woods RP, Yoshii J, Bansal R, Xu D, Zhu H, Thompson PM, Toga AW. 2007. Sex differences in cortical thickness mapped in 176 healthy individuals between 7 and 87 years of age. Cereb Cortex. 17:1550–1560.

Sowell ER, Thompson PM, Mattson SN, Tessner KD, Jernigan TL, Riley EP, Toga AW. 2001. Voxel-based morphometric analyses of the brain in children and adolescents prenatally exposed to alcohol. Neuroreport. 12:515–523.

Sowell ER, Trauner DA, Gamst A, Jernigan TL. 2002. Development of cortical and subcortical brain structures in childhood and adolescence: a structural MRI study. Dev Med Child Neurol. 44:4–16.

Sporn AL, Greenstein DK, Gogtay N, Jeffries NO, Lenane M, Gochman P, Clasen LS, Blumenthal J, Giedd JN, Rapoport JL. 2003. Progressive brain volume loss during adolescence in childhood-onset schizophrenia. Am J Psychiatry. 160:2181–2189.

Taki Y, Kinomura S, Sato K, Inoue K, Goto R, K, Uchida S, Kawashima R, Fukuda H. 2008. Relationship between body mass index and gray matter volume in 1,428 healthy individuals. Obesity. 16:119–124.

United States Census Bureau. 2000.

United States Department of Housing and Urban Development's Office of Policy Development and Research. 2003. FY 2003 Income Limits.

Valera EM, Faraone SV, Murray KE, Seidman LJ. 2007. Meta-analysis of structural imaging findings in attention-deficit/hyperactivity disorder. Biol Psychiatry. 61:1361–1369.

Van Belle G, Fisher LD. 2004. Biostatistics: a methodology for the health sciences. Hoboken (NJ): Wiley-Interscience.

Venables WN, Ripley BD. 2002. Modern applied statistics with S. 4th ed. New York: Springer-Verlag.

Waber DP, De Moor C, Forbes PW, Almli CR, Botteron KN, Leonard G, Milovan D, Paus T, Rumsey J, Brain Development Cooperative Group. 2007. The NIH MRI study of normal brain development: performance of a population based sample of healthy children aged 6 to 18 years on a neuropsychological battery. J Int Neuropsychol Soc. 13:729–746.

Walther K, Birdsill AC, Glisky EL, Ryan L. 2010. Structural brain differences and cognitive functioning related to body mass index in older females. Hum Brain Mapp. 31:1052–1064.

Wilke M, Krageloh-Mann I, Holland SK. 2007. Global and local development of gray and white matter volume in normal children and adolescents. Exp Brain Res. 178:296–307.

Wolbers T, Hegarty M. 2010. Trends Cogn Sci. 14:138–146.

Zadina JN, Corey DM, Casbergue RM, Lemen LC, Rouse JC, Knaus TA, Foundas AL. 2006. Lobar asymmetries in subtypes of dyslexic and control subjects. J Child Neurol. 21:922–931.

Zijdenbos AP, Forghani R, Evans AC. 2002. Automatic "pipeline" analysis of 3-D MRI data for clinical trials: application to multiple sclerosis. IEEE Trans Med Imaging. 21:1280–1291.
LinearAlgebra[Modular]
Inverse - compute the inverse of a square mod m Matrix
Adjoint - compute the adjoint of a square mod m Matrix
Calling Sequence Inverse(m, A, det, B, meth) Adjoint(m, A, det, B, meth)
Parameters
m - modulus
A - mod m Matrix
det - (optional) name to use for output determinant
B - (optional) Matrix to use for output Inverse or Adjoint
meth - (optional) method to use for computing Inverse or Adjoint
Description
• The Inverse and Adjoint functions compute the inverse and adjoint, respectively, of a square mod m Matrix.
• If det is specified, it is assigned the value of the determinant on successful completion.
• If B is specified, it must have dimensions and datatype identical to A, and will contain the inverse or adjoint on successful completion. In this case the command will return NULL.
• The default method for the inverse is LU, while the default method for the adjoint is RET, and these can be changed by specification of meth.
Allowable options are:
LU - Obtain inverse or adjoint via LU decomposition.
inplaceLU - Obtain inverse or adjoint via LU decomposition, destroying the data in A in the process.
RREF - Obtain inverse or adjoint through application of row reduction to an identity augmented mod m Matrix.
RET - Obtain inverse or adjoint through application of a row echelon transform to A.
inplaceRET - Obtain inverse or adjoint through application of a row echelon transform to A, replacing A with the inverse or adjoint.
The LU and inplaceLU methods are the most efficient for small to moderate sized problems. The RET and inplaceRET methods are the most efficient for very large problems. The RREF method is the most flexible for nonsingular matrices.
• For the inplaceRET method, B should never be specified, as the output replaces A. For this method, the commands always return NULL.
• With the LU-based and RET-based methods, it is generally required that m be a prime, as mod m inverses are needed, but in some cases it is possible to obtain an LU decomposition or a Row-Echelon Transform for m composite.
For the cases where LU Decomposition or Row-Echelon Transform cannot be obtained for m composite, the function returns an error indicating that the algorithm failed because m is composite.
Note: There are cases with composite m for which the inverse and adjoint exist, but no LU decomposition or Row-Echelon Transform is possible.
• If it exists, the RREF method always finds the mod m inverse. The RREF method also finds the adjoint if the Matrix is nonsingular.
• The RET method is the only method capable of computing the adjoint if the matrix is singular. The inplaceRET method cannot be used to compute the adjoint of a singular matrix, as this operation cannot be performed in-place.
• These commands are part of the LinearAlgebra[Modular] package, so they can be used in the form Inverse(..) and Adjoint(..) only after executing the command with(LinearAlgebra[Modular]). However, they can always be used in the form LinearAlgebra[Modular][Inverse](..) and LinearAlgebra[Modular][Adjoint](..).
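The row-reduction idea behind the RREF method can be sketched outside Maple. The following Python analogue is an illustrative sketch, not Maple's implementation: it row-reduces the augmented matrix [A | I] mod m, assuming m is prime so that every nonzero pivot has a modular inverse (via Python's three-argument `pow`).

```python
def mod_inverse(A, m):
    """Invert the square matrix A mod m by Gauss-Jordan elimination on [A | I].

    Illustrative sketch only (not Maple's code); assumes m is prime so that
    any nonzero pivot is invertible mod m.
    """
    n = len(A)
    # Build the augmented matrix [A | I], with all entries reduced mod m.
    aug = [[A[i][j] % m for j in range(n)] + [int(i == k) for k in range(n)]
           for i in range(n)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if aug[r][col] != 0), None)
        if pivot is None:
            raise ValueError(f"matrix is singular mod {m}")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        inv_p = pow(aug[col][col], -1, m)      # pivot inverse mod m (m prime)
        aug[col] = [x * inv_p % m for x in aug[col]]
        for r in range(n):                     # clear the rest of the column
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(aug[r][j] - f * aug[col][j]) % m
                          for j in range(2 * n)]
    # The right half of the reduced augmented matrix is A^(-1) mod m.
    return [row[n:] for row in aug]
```

Applied to the 3x3 example below with p = 97, this sketch reproduces the inverse shown in (3); multiplying back gives the identity matrices of (4).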
Examples
Basic 3x3 Matrix.
> $\mathrm{with}\left(\mathrm{LinearAlgebra}\left[\mathrm{Modular}\right]\right):$
> $p≔97$
${p}{≔}{97}$ (1)
> $M≔\mathrm{Mod}\left(p,\mathrm{Matrix}\left(3,3,\left(i,j\right)↦\mathrm{rand}\left(\right)\right),\mathrm{integer}\left[\right]\right)$
${M}{≔}\left[\begin{array}{ccc}{77}& {96}& {10}\\ {86}& {58}& {36}\\ {80}& {22}& {44}\end{array}\right]$ (2)
> $\mathrm{Mi}≔\mathrm{Inverse}\left(p,M\right)$
${\mathrm{Mi}}{≔}\left[\begin{array}{ccc}{16}& {80}& {72}\\ {20}& {20}& {32}\\ {5}& {65}& {89}\end{array}\right]$ (3)
> $\mathrm{Multiply}\left(p,M,\mathrm{Mi}\right),\mathrm{Multiply}\left(p,\mathrm{Mi},M\right)$
$\left[\begin{array}{ccc}{1}& {0}& {0}\\ {0}& {1}& {0}\\ {0}& {0}& {1}\end{array}\right]{,}\left[\begin{array}{ccc}{1}& {0}& {0}\\ {0}& {1}& {0}\\ {0}& {0}& {1}\end{array}\right]$ (4)
An example that fails with the LU and RET methods, but succeeds with RREF.
> $m≔6$
${m}{≔}{6}$ (5)
> $M≔\mathrm{Mod}\left(m,\left[\left[3,2\right],\left[2,1\right]\right],\mathrm{float}\left[8\right]\right)$
${M}{≔}\left[\begin{array}{cc}{3.}& {2.}\\ {2.}& {1.}\end{array}\right]$ (6)
> $\mathrm{Mi}≔\mathrm{Inverse}\left(m,M\right)$
> $\mathrm{Mi}≔\mathrm{Inverse}\left(m,M,'\mathrm{RET}'\right)$
> $\mathrm{Mi}≔\mathrm{Inverse}\left(m,M,'\mathrm{RREF}'\right)$
${\mathrm{Mi}}{≔}\left[\begin{array}{cc}{5.}& {2.}\\ {2.}& {3.}\end{array}\right]$ (7)
> $\mathrm{Multiply}\left(m,M,\mathrm{Mi}\right),\mathrm{Multiply}\left(m,\mathrm{Mi},M\right)$
$\left[\begin{array}{cc}{1.}& {0.}\\ {0.}& {1.}\end{array}\right]{,}\left[\begin{array}{cc}{1.}& {0.}\\ {0.}& {1.}\end{array}\right]$ (8)
An example where no inverse exists, but the adjoint does exist.
> $m≔6$
${m}{≔}{6}$ (9)
> $M≔\mathrm{Mod}\left(m,\left[\left[2,4\right],\left[4,4\right]\right],\mathrm{float}\left[8\right]\right)$
${M}{≔}\left[\begin{array}{cc}{2.}& {4.}\\ {4.}& {4.}\end{array}\right]$ (10)
> $\mathrm{Mi}≔\mathrm{Inverse}\left(m,M,'\mathrm{RREF}'\right)$
> $\mathrm{Ma}≔\mathrm{Adjoint}\left(m,M,'\mathrm{det}','\mathrm{RREF}'\right)$
${\mathrm{Ma}}{≔}\left[\begin{array}{cc}{4.}& {2.}\\ {2.}& {2.}\end{array}\right]$ (11)
> $\mathrm{det},\mathrm{Multiply}\left(m,M,\mathrm{Ma}\right),\mathrm{Multiply}\left(m,\mathrm{Ma},M\right)$
${4}{,}\left[\begin{array}{cc}{4.}& {0.}\\ {0.}& {4.}\end{array}\right]{,}\left[\begin{array}{cc}{4.}& {0.}\\ {0.}& {4.}\end{array}\right]$ (12)
An example where only the RET method succeeds at computing the adjoint.
> $m≔7$
${m}{≔}{7}$ (13)
> $M≔\mathrm{Mod}\left(m,\left[\left[1,1\right],\left[1,1\right]\right],\mathrm{integer}\right)$
${M}{≔}\left[\begin{array}{cc}{1}& {1}\\ {1}& {1}\end{array}\right]$ (14)
> $\mathrm{Ma}≔\mathrm{Adjoint}\left(m,M,'\mathrm{RREF}'\right)$
> $\mathrm{Ma}≔\mathrm{Adjoint}\left(m,M,'\mathrm{LU}'\right)$
> $\mathrm{Ma}≔\mathrm{Adjoint}\left(m,M,'\mathrm{det}','\mathrm{RET}'\right)$
${\mathrm{Ma}}{≔}\left[\begin{array}{cc}{1}& {6}\\ {6}& {1}\end{array}\right]$ (15)
> $\mathrm{det},\mathrm{Multiply}\left(m,M,\mathrm{Ma}\right),\mathrm{Multiply}\left(m,\mathrm{Ma},M\right)$
${0}{,}\left[\begin{array}{cc}{0}& {0}\\ {0}& {0}\end{array}\right]{,}\left[\begin{array}{cc}{0}& {0}\\ {0}& {0}\end{array}\right]$ (16)
|
{}
|
### Theory:
A polynomial of the form $$ax^2 + bx + c$$, $$a \neq 0$$ is a quadratic polynomial.
The graph of a quadratic polynomial is a parabolic curve that opens upwards or downwards depending on whether $$a > 0$$ or $$a < 0$$.
Consider the graph of $$y = 2x^2 + 3x + 1$$.
| $$x$$ | $$-2$$ | $$-1$$ | $$0$$ | $$1$$ | $$2$$ |
|---|---|---|---|---|---|
| $$2x^2$$ | $$2(-2)^2 = 8$$ | $$2(-1)^2 = 2$$ | $$2(0)^2 = 0$$ | $$2(1)^2 = 2$$ | $$2(2)^2 = 8$$ |
| $$3x$$ | $$3 \times -2 = -6$$ | $$3 \times -1 = -3$$ | $$3 \times 0 = 0$$ | $$3 \times 1 = 3$$ | $$3 \times 2 = 6$$ |
| $$2x^2 + 3x + 1$$ | $$8 - 6 + 1 = 3$$ | $$2 - 3 + 1 = 0$$ | $$0 + 0 + 1 = 1$$ | $$2 + 3 + 1 = 6$$ | $$8 + 6 + 1 = 15$$ |
| $$y = 2x^2 + 3x + 1$$ | $$3$$ | $$0$$ | $$1$$ | $$6$$ | $$15$$ |
Join the coordinates $$(-2, 3)$$, $$(-1, 0)$$, $$(0, 1)$$, $$(1, 6)$$ and $$(2, 15)$$ by a smooth curve so as to obtain the graph of $$y = 2x^2 + 3x + 1$$.
It is observed that the graph of the polynomial $$y = 2x^2 + 3x + 1$$ intersects the $$x$$-axis at the points $$(-1, 0)$$ and $$(-0.5, 0)$$.
Thus, we can say that the zeroes of a quadratic polynomial are the $$x$$-coordinates of the points where the graph of the polynomial intersects the $$x$$-axis.
Let us discuss a few cases of graphs of a quadratic polynomial.
Case 1: The graph of the quadratic polynomial cuts the $$x$$ $$-$$ axis at two points, as shown below.
In this case, the number of zeroes of the polynomial is $$2$$.
Case 2: The graph of the quadratic polynomial cuts the $$x$$ $$-$$ axis at one point, as shown below.
In this case, the number of zeroes of the polynomial is $$1$$.
Case 3: The graph of the quadratic polynomial does not intersect the $$x$$ $$-$$ axis at any point, as shown below.
In this case, the number of zeroes of the polynomial is $$0$$.
The quadratic polynomial $$ax^2 + bx + c$$, $$a \neq 0$$, has at most two zeroes, which are the $$x$$-coordinates of the points where the graph of the polynomial intersects the $$x$$-axis.
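The tabulation and the three cases above can be checked numerically; a small Python sketch (our own illustration, using $$y = 2x^2 + 3x + 1$$ as in the text):

```python
def quadratic_zeros(a, b, c):
    # real zeros of ax^2 + bx + c via the discriminant
    d = b * b - 4 * a * c
    if d < 0:
        return []              # Case 3: graph misses the x-axis, no zeros
    if d == 0:
        return [-b / (2 * a)]  # Case 2: graph touches the x-axis once
    r = d ** 0.5
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])  # Case 1

# table of y = 2x^2 + 3x + 1 at x = -2, ..., 2, as in the text
ys = [2 * x * x + 3 * x + 1 for x in (-2, -1, 0, 1, 2)]
```

For the example polynomial this returns the zeros -1 and -0.5, matching the graph's intersections with the x-axis.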
|
{}
|
# 10-seconds challenge-6
Calculus Level 3
Let $f(x) = \dfrac{\cos x}{x^4+ 1}$, find $f^{(3)} (0)$.
Clarification: We are looking for the third derivative of $f(x)$ with respect to $x$.
This is a part of 10-seconds challenge.
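A finite-difference sanity check (our own sketch, not part of the challenge): since $f$ is an even function, every odd-order derivative at $0$ vanishes, and a central-difference stencil confirms this.

```python
from math import cos

def f(x):
    return cos(x) / (x**4 + 1)

def third_derivative(g, x, h=1e-2):
    # O(h^2) central-difference estimate of g'''(x)
    return (g(x + 2*h) - 2*g(x + h) + 2*g(x - h) - g(x - 2*h)) / (2 * h**3)
```

The stencil is exact for cubics, which gives a quick way to validate it before applying it to $f$.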
|
{}
|
# Math Help - cut-point
1. ## cut-point
How would I show that a circle and a circle with a line bisecting it(with the end-points of the line touching the circumference) aren't homeomorphic?
I know I would have to use the 'cut-point principle' which states that homeomorphic sets have the same number of n-points for each n, but I don't know how to find the n-points....
2. For any circle (a simple closed curve) removing any two points leaves exactly two connected subsets.
We know that a simple closed curve is the union of two arcs the intersection of which is the set of their endpoints.
Is that true of the second set in this problem? What about the endpoints of the chord?
3. In a circle, any single point will disconnect the set right... Why specify 2?
I don't understand how to find the n-points?
For example, I know that in $S^1$, the unit circle, every point is a 1-point, and that $[0,1)$ is not homeomorphic to $S^1$, since $[0,1)$ has some 2-points whereas $S^1$ doesn't... can anyone explain why?
4. Originally Posted by bigdoggy
In a circle, any single point will disconnect the set right... Why specify 2?
How in the world could a single point's removal disconnect a circle?
How much do you know about simple closed curves and about arcs?
5. How in the world could a single point's removal disconnect a circle?
I was told that every point of a circle is a 1-point, which I believed implied that removing a single point would disconnect it...?
I'm trying to understand how to find the n-points in general so I can then decide if two sets are homeomorphic....
6. Either you are being given totally false and incompetent instruction or you do not understand.
The whole area of ‘cut-points’ as applied to arcs and simple closed curves is such a rich area of study.
It is a pity that you seem to be so confused by this topic.
7. Originally Posted by Plato
Either you are being given totally false and incompetent instruction or you do not understand.
The whole area of ‘cut-points’ as applied to arcs and simple closed curves is such a rich area of study.
It is a pity that you seem to be so confused by this topic.
I don't see the value in that comment Plato.
Surely it is apparent that I'm failing to 'connect the dots' here, hence I was hoping for some pointers!!
8. I think from your definition of cut points a "1-point" means you are left with 1 connected path component. So of course, on the unit circle all points are 1-points.
To show 2 sets X and Y aren't homeomorphic with cut points, either show that X (or Y) contains an n-point that isn't in Y (or X), or show that X contains more n-points than Y (for a specified n)
9. Originally Posted by bigdoggy
I was told that every point of a circle is a 1-point which I believed implied by removing a single point would disconnect it...?
I'm trying to understand how to find the n-points in general so I can then decide if two sets are homeomorphic....
removing one point from a circle allows you to "open it up" to be a line segment but that is still connected. Removing another point from that line segment makes it unconnected.
What, exactly, is the definition of "n-points"?
10. Originally Posted by HallsofIvy
removing one point from a circle allows you to "open it up" to be a line segment but that is still connected. Removing another point from that line segment makes it unconnected.
What, exactly, is the definition of "n-points"?
Say the space X is path-connected. A point $x \in X$ is called an n-point if $X-\{x\}$ has n path-components...
11. Originally Posted by bigdoggy
Say the space X is path-connected. A point $x \in X$ is called an n-point if $X-\{x\}$ has n path-components...
That is a completely understandable definition.
No one could argue with its clarity.
BUT, under that definition a 1-point is not a cut-point.
Authors are not free to co-opt definitions.
That is the basis of the arguments in this thread.
The idea of a ‘cut-point’ goes back as far as a 1909 paper by R.L. Moore, the founder of point set topology.
Again a “1-point is not a cut-point.”
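Plato's two-point criterion can be illustrated on finite graph models of the two spaces (our own sketch; the vertex names are arbitrary): removing a single vertex never disconnects either model, removing any pair of vertices from the circle leaves at most two components, but removing the two junction vertices of the circle-with-chord leaves three.

```python
from itertools import combinations

def components(adj, removed):
    # number of connected components of a graph after deleting `removed` vertices
    left = set(adj) - set(removed)
    seen, count = set(), 0
    for start in left:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(w for w in adj[v] if w in left)
    return count

# finite model of a circle: the 6-cycle 0-1-2-3-4-5-0
circle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}

# finite model of a circle with a bisecting chord ("theta" space):
# junction vertices a and b joined by three internally disjoint arcs
theta = {
    'a': ['p1', 'q1', 'r1'], 'b': ['p2', 'q2', 'r2'],
    'p1': ['a', 'p2'], 'p2': ['p1', 'b'],
    'q1': ['a', 'q2'], 'q2': ['q1', 'b'],
    'r1': ['a', 'r2'], 'r2': ['r1', 'b'],
}
```

Since a homeomorphism would have to preserve these component counts, the two spaces cannot be homeomorphic.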
|
{}
|
Search by Topic
Resources tagged with Working systematically similar to Happy Octopus:
There are 122 results
Broad Topics > Using, Applying and Reasoning about Mathematics > Working systematically
Factors and Multiple Challenges
Stage: 3 Challenge Level:
This package contains a collection of problems from the NRICH website that could be suitable for students who have a good understanding of Factors and Multiples and who feel ready to take on some. . . .
Football Sum
Stage: 3 Challenge Level:
Find the values of the nine letters in the sum: FOOT + BALL = GAME
Oranges and Lemons, Say the Bells of St Clement's
Stage: 3 Challenge Level:
Bellringers have a special way to write down the patterns they ring. Learn about these patterns and draw some of your own.
Stage: 3 Challenge Level:
If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products? Why?
How Old Are the Children?
Stage: 3 Challenge Level:
A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?"
Cayley
Stage: 3 Challenge Level:
The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"?
Twin Line-swapping Sudoku
Stage: 4 Challenge Level:
A pair of Sudoku puzzles that together lead to a complete solution.
Integrated Product Sudoku
Stage: 3 and 4 Challenge Level:
This Sudoku puzzle can be solved with the help of small clue-numbers on the border lines between pairs of neighbouring squares of the grid.
Cinema Problem
Stage: 3 and 4 Challenge Level:
A cinema has 100 seats. Show how it is possible to sell exactly 100 tickets and take exactly £100 if the prices are £10 for adults, 50p for pensioners and 10p for children.
Ones Only
Stage: 3 Challenge Level:
Find the smallest whole number which, when multiplied by 7, gives a product consisting entirely of ones.
Stage: 3 and 4 Challenge Level:
Four small numbers give the clue to the contents of the four surrounding cells.
A Long Time at the Till
Stage: 4 and 5 Challenge Level:
Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem?
Olympic Logic
Stage: 3 and 4 Challenge Level:
Can you use your powers of logic and deduction to work out the missing information in these sporty situations?
Difference Dynamics
Stage: 4 and 5 Challenge Level:
Take three whole numbers. The differences between them give you three new numbers. Find the differences between the new numbers and keep repeating this. What happens?
Product Doubles Sudoku
Stage: 3 and 4 Challenge Level:
Each clue number in this sudoku is the product of the two numbers in adjacent cells.
Pole Star Sudoku 2
Stage: 3 and 4 Challenge Level:
This Sudoku is based on differences. Using the one clue number, can you find the solution?
Difference Sudoku
Stage: 3 and 4 Challenge Level:
Use the differences to find the solution to this Sudoku.
A First Product Sudoku
Stage: 3 Challenge Level:
Given the products of adjacent cells, can you complete this Sudoku?
Pair Sums
Stage: 3 Challenge Level:
Five numbers added together in pairs produce: 0, 2, 4, 4, 6, 8, 9, 11, 13, 15 What are the five numbers?
The Naked Pair in Sudoku
Stage: 2, 3 and 4
A particular technique for solving Sudoku puzzles, known as "naked pair", is explained in this easy-to-read article.
9 Weights
Stage: 3 Challenge Level:
You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance?
Masterclass Ideas: Working Systematically
Stage: 2 and 3 Challenge Level:
A package contains a set of resources designed to develop students’ mathematical thinking. This package places a particular emphasis on “being systematic” and is designed to meet. . . .
Coins
Stage: 3 Challenge Level:
A man has 5 coins in his pocket. Given the clues, can you work out what the coins are?
Twin Corresponding Sudokus II
Stage: 3 and 4 Challenge Level:
Two sudokus in one. Challenge yourself to make the necessary connections.
More on Mazes
Stage: 2 and 3
There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper.
Multiplication Equation Sudoku
Stage: 4 and 5 Challenge Level:
The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
Colour Islands Sudoku
Stage: 3 Challenge Level:
An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine.
Integrated Sums Sudoku
Stage: 3 and 4 Challenge Level:
The puzzle can be solved with the help of small clue-numbers which are either placed on the border lines between selected pairs of neighbouring squares of the grid or placed after slash marks on. . . .
Peaches Today, Peaches Tomorrow....
Stage: 3 and 4 Challenge Level:
Whenever a monkey has peaches, he always keeps a fraction of them each day, gives the rest away, and then eats one. How long could he make his peaches last for?
American Billions
Stage: 3 Challenge Level:
Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...
Stage: 3 Challenge Level:
Rather than using the numbers 1-9, this sudoku uses the nine different letters used to make the words "Advent Calendar".
Stage: 3, 4 and 5 Challenge Level:
You need to find the values of the stars before you can apply normal Sudoku rules.
First Connect Three
Stage: 2 and 3 Challenge Level:
The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
Stage: 3 and 4 Challenge Level:
This is a variation of sudoku which contains a set of special clue-numbers. Each set of 4 small digits stands for the numbers in the four cells of the grid adjacent to this set.
Magnetic Personality
Stage: 2, 3 and 4 Challenge Level:
60 pieces and a challenge. What can you make and how many of the pieces can you use creating skeleton polyhedra?
The Great Weights Puzzle
Stage: 4 Challenge Level:
You have twelve weights, one of which is different from the rest. Using just 3 weighings, can you identify which weight is the odd one out, and whether it is heavier or lighter than the rest?
Bochap Sudoku
Stage: 3 and 4 Challenge Level:
This Sudoku combines all four arithmetic operations.
Inky Cube
Stage: 2 and 3 Challenge Level:
This cube has ink on each face which leaves marks on paper as it is rolled. Can you work out what is on each face and the route it has taken?
Sociable Cards
Stage: 3 Challenge Level:
Move your counters through this snake of cards and see how far you can go. Are you surprised by where you end up?
Twin Chute-swapping Sudoku
Stage: 4 and 5 Challenge Level:
A pair of Sudokus with lots in common. In fact they are the same problem but rearranged. Can you find how they relate to solve them both?
Stage: 3, 4 and 5 Challenge Level:
Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas.
Crossing the Town Square
Stage: 2 and 3 Challenge Level:
This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares.
First Connect Three for Two
Stage: 2 and 3 Challenge Level:
First Connect Three game for an adult and child. Use the dice numbers and either addition or subtraction to get three numbers in a straight line.
When Will You Pay Me? Say the Bells of Old Bailey
Stage: 3 Challenge Level:
Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring?
Sticky Numbers
Stage: 3 Challenge Level:
Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number?
Twin Corresponding Sudoku
Stage: 3, 4 and 5 Challenge Level:
This sudoku requires you to have "double vision" - two Sudokus for the price of one.
Colour Islands Sudoku 2
Stage: 3, 4 and 5 Challenge Level:
In this Sudoku, there are three coloured "islands" in the 9x9 grid. Within each "island" EVERY group of nine cells that form a 3x3 square must contain the numbers 1 through 9.
Making Maths: Double-sided Magic Square
Stage: 2 and 3 Challenge Level:
Make your own double-sided magic square. But can you complete both sides once you've made the pieces?
Summing Consecutive Numbers
Stage: 3 Challenge Level:
Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?
Crossing the Bridge
Stage: 3 Challenge Level:
Four friends must cross a bridge. How can they all cross it in just 17 minutes?
|
{}
|
# Confusion in the position of image coordinates
I am struggling to work out how image coordinates are calculated and how to be consistent when finding particular points in an image using these coordinates for further analysis. I don't think it is necessarily a bug, but it makes it tricky to work with and modify images.
This is best shown by the following example.
We make a test image:
We then replace one of the pixels (at position 20,3) with the value 2 (so it is easy to find.)
testIm2=ReplacePixelValue[testIm, {20, 3} -> 2]
We now try to find the coordinates of this pixel with the value 2.
coords=Position[ImageData[testIm2], 2 | 2.]
which gives the result {{13, 20}}. What is going on here? How can we be sure of choosing the right pixel values?
The resolution: image coordinates $$(x_i, y_i)$$ have their origin at the bottom-left of the image, while ImageData is a matrix indexed $$\{row, column\}$$ from the top-left. A pixel at image coordinate $$(x_i, y_i)$$ therefore appears at ImageData position $$(1 + y_{dim} - y_i,\ x_i)$$, where $$y_{dim}$$ is the height of the image (here $$y_{dim} = 15$$, so $$(20, 3)$$ maps to $$\{13, 20\}$$). I hope this helps other people dealing with this problem.
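The mapping can be captured in a small helper (a Python sketch of the convention, not Wolfram Language code; the image height 15 is inferred from the {13, 20} result above):

```python
def image_to_array_pos(x, y, height):
    # Map a 1-based image coordinate (x, y), origin at the bottom-left,
    # to the {row, column} position of a top-left, row-major pixel array
    # such as the one ImageData returns.
    return (height + 1 - y, x)

def array_pos_to_image(row, col, height):
    # inverse mapping: array {row, column} back to image (x, y)
    return (col, height + 1 - row)
```

The two helpers are inverses of each other, so round-tripping any coordinate returns it unchanged.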
|
{}
|
# Enabling “Update” button for installed packages
I found that if you navigate to the "Installed Add-Ons" page in the "Wolfram Documentation Center", accessed by following
Help ▸ Wolfram Documentation ▸ Add-ons and Packages
you will find a listing of all installed packages. Then if you open up the cell for one of the packages and open the sub-cell "MANAGE", you'll find a button titled "UPDATE", but it seems to be greyed out:
1. How do you enable this button (so that it's clickable) for your installed packages?
2. If you are the developer of an installed package, what needs to be done so that this button is enabled, and correctly updates the package from a given source?
Presumably PacletInfo.m should be appropriately modified.
• E.g. for the Shortcuts package this has to be true: PacletManager`Package`updateButtonEnabled[{ "Shortcuts-0.0.1", "C:\\Users\\user\\AppData\\Roaming\\Mathematica\\Applications\\Shortcuts"}] but I don't think this feature is ready since RawBoxes@System`Dump`updateButtonGraphicsBox is as gray as RawBoxes@System`Dump`updateButtonDisabledGraphicsBox – Kuba Oct 4 '16 at 18:21
• @Kuba Thanks! But, feature not ready? That button is there since at least 8.0.4. How long could it possibly take to make it ready? – QuantumDot Oct 4 '16 at 18:31
• Don't ask me, have you seen any official tutorial about more than basics of PacletManager`? mathematica.stackexchange.com/a/120976/5478 – Kuba Oct 4 '16 at 18:33
• QuantumDot and @Kuba: Maybe the paclet needs to be downloaded from a paclet server for this to work. See here: community.wolfram.com/groups/-/m/t/968760 I won't work on this. If you figure it out, please post a self-answer. – Szabolcs Nov 30 '16 at 16:43
|
{}
|
Table 3—
Multivariate associations between beliefs, adherence, and health status
Medication beliefs (as dependent variables):

| Independent variables predicting medication beliefs | Antihyperglycemic: Necessity | Antihyperglycemic: Concerns | Antihypertensive: Necessity | Antihypertensive: Concerns |
|---|---|---|---|---|
| n | 803 | | 573 | |
| Age (years) | −0.12*** | −0.17*** | −0.08 | −0.17*** |
| Sex (male) | −0.02 | −0.05 | −0.10* | 0.05 |
| Ethnic minority | −0.03 | 0.12*** | 0.03 | 0.09* |
| Household income bracket | −0.08* | −0.05 | −0.04 | −0.07 |
| Number of prescription medications | 0.12** | −0.02 | 0.07 | −0.06 |
| Whether prescribed insulin | 0.23*** | 0.03 | −0.06 | −0.04 |
| No. of medical conditions | 0.09 | 0.11* | 0.14** | 0.08 |
| Satisfaction with medication information | 0.05 | −0.15*** | 0.11*** | −0.14*** |
| Low functional health literacy | −0.02 | 0.12*** | 0.01 | 0.22*** |
| Out-of-pocket prescription costs >$50/month | 0.05 | 0.09* | 0.07 | 0.04 |
Medication beliefs (as independent variables)§:

| Dependent variables | Antihyperglycemic: Necessity | Antihyperglycemic: Concerns | Antihypertensive: Necessity | Antihypertensive: Concerns |
|---|---|---|---|---|
| Binary dependent variables (medication underuse) | | | | |
| Cost-related underuse vs. no underuse | 1.4 | 2.8*** | 1.2 | 2.6*** |
| Noncost-related underuse vs. no underuse | 1.0 | 1.7*** | 1.2 | 1.9*** |
| Cost-related underuse vs. non-cost-related underuse | 1.4 | 1.7*** | 1.0 | 1.3 |
| Continuous dependent variables (health status) | | | | |
| A1C (%) | 0.03 | 0.08* | — | — |
| SBP (mmHg) | — | — | 0.09* | 0.16*** |
| DBP (mmHg) | — | — | 0.07 | 0.12** |
*P < 0.05 (NS with Bonferroni correction).
**P < 0.01.
***P < 0.005.
Each column represents a separate ordinary least-squares regression model, with dependent variables listed as column headers and independent variables listed in rows. Cell entries represent standardized regression coefficients.
Cell entries are odds ratios (P value) of the adjusted association between medication beliefs (independent variable, in columns) and underuse of the corresponding medication (dependent variable, in rows), with the second type of underuse as the reference group.
§All models adjusted for age, sex, ethnic minority status, household income, number of prescription medications, insulin use, number of comorbid conditions, out-of-pocket prescription costs, and FHL.
Cell entries are standardized β coefficients (P value) of the adjusted association between medication beliefs and the medical outcome variable.
|
{}
|
Home » Uncategorized » Gate 2015 - Set 1 - EE Question 1 - Solution
# Gate 2015 - Set 1 - EE Question 1 - Solution
Gate 2015 - Set 1 - EE
Question 1 - Solution
$\int_{-\infty }^{\infty }f(x)\, dx =1 \Rightarrow \int_{0}^{1}(a+bx)\, dx =1 \Rightarrow \left [ax +\frac{bx^{2}}{2} \right ]_{0}^{1}=1 \Rightarrow a+\frac{b}{2}=1 \Rightarrow 2a+b=2$
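As a quick numerical cross-check (our own sketch, not part of the official solution): any pair $(a, b)$ satisfying $2a + b = 2$ makes $f(x) = a + bx$ integrate to $1$ on $[0, 1]$.

```python
def integral_01(a, b, n=100000):
    # midpoint-rule integral of f(x) = a + b*x over [0, 1];
    # the midpoint rule is exact for linear integrands
    h = 1.0 / n
    return sum((a + b * (i + 0.5) * h) * h for i in range(n))
```

Trying a few $(a, b)$ pairs on the line $2a + b = 2$ confirms the normalization condition.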
|
{}
|
# How is confidence interval computed in statsmodels for ARIMA(p,d,q) parameters?
I am using statsmodel package for fitting ARIMA(p,d,q) model to a time series. My question is how exactly does this package estimate confidence intervals of the parameters of this model? statsmodels documentation says that
"The confidence interval is based on the standard normal distribution if self.use_t is False. If self.use_t is True, then uses a Student’s t with self.df_resid_inference (or self.df_resid if df_resid_inference is not defined) degrees of freedom."
Then the question is how is the variance of different parameters estimated to apply the standard Normal or t-distribution method?
Edit: I used the Hessian matrix method to compute the covariance matrix. But the confidence interval obtained using my approach are much wider than those produced by statsmodels. Which means that statsmodels is not using the Hessian matrix approach. Also, I noticed that as I increase the length of my time-series, the confidence intervals obtained by these approaches become similar.
The documentation for the cov_type argument to fit method describes the options for computing the covariance matrix associated with the parameter estimates.
The default method is the outer product of gradients estimator (cov_type='opg').
If you want to use the numerically approximated Hessian, you can choose cov_type='approx', but note that by default this will use complex step differentiation rather than finite differences, so it may still be different from what you compute by hand.
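Schematically, the default OPG estimator stacks the per-observation score vectors $g_i$ (gradients of the log-likelihood at the MLE) and uses $\widehat{\mathrm{Cov}} = (\sum_i g_i g_i^\top)^{-1}$, with normal-approximation intervals on top. A hypothetical numpy sketch of that recipe (not the statsmodels internals):

```python
import numpy as np

def opg_cov(scores):
    # Outer-product-of-gradients covariance estimate: `scores` is an
    # (n_obs, k) array of per-observation score vectors at the MLE.
    G = np.asarray(scores, dtype=float)
    return np.linalg.inv(G.T @ G)

def conf_int(params, cov, z=1.96):
    # normal-approximation 95% intervals, as used when use_t is False
    se = np.sqrt(np.diag(cov))
    return np.column_stack([params - z * se, params + z * se])
```

This also explains the observation about series length: as the sample grows, the OPG and Hessian-based covariance estimates converge, so the intervals become similar.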
|
{}
|
# Univariate, Bivariate and Trivariate Entropies In netropy: Statistical Entropy Analysis of Network Data
```r
knitr::opts_chunk$set(out.width = "100%", cache = FALSE)
```

The univariate entropy for a discrete variable $X$ with $r$ outcomes is defined by
$$H(X) = \sum_x p(x) \log_2\frac{1}{p(x)}$$
with which we can check for redundancy and uniformity: a discrete random variable with minimal zero entropy has no uncertainty and is always equal to the same single outcome. Thus, it is a constant that contributes nothing to further analysis and can be omitted. Maximum entropy is $\log_2 r$ and it corresponds to a uniform probability distribution over the outcomes.

The bivariate entropy for discrete variables $X$ and $Y$ is defined by
$$H(X,Y) = \sum_x \sum_y p(x,y) \log_2\frac{1}{p(x,y)}$$
with which we can check for redundancy, functional relationships and stochastic independence between pairs of variables. It is bounded according to
$$H(X) \leq H(X,Y) \leq H(X)+H(Y)$$
where we have

- equality to the left iff there is a functional relationship $Y = f(X)$ such that each unique outcome of $X$ yields a unique outcome of $Y$
- equality to the right iff $X$ and $Y$ are stochastically independent, $X \bot Y$, such that the probability of any bivariate outcome is the product of the probabilities of the univariate outcomes.

Note that when the bivariate entropy of two variables is equal to the univariate entropy of either one alone, then one of these variables should be omitted as they are redundant, providing no additional information. These results on bivariate entropies are directly linked to joint entropies and association graphs. Similarly, trivariate entropies (and higher order entropies) allow us to check for functional relationships and stochastic independence between three (or more) variables.

The trivariate entropy of three variables $X$, $Y$ and $Z$ is defined by
$$H(X,Y,Z) = \sum_x \sum_y \sum_z p(x,y,z) \log_2\frac{1}{p(x,y,z)}$$
and bounded by
$$H(X,Y) \leq H(X,Y,Z) \leq H(X,Z) + H(Y,Z) - H(Z).$$
The results on bivariate and trivariate entropies are directly linked to prediction power and expected conditional entropies. Examples of computing univariate, bivariate and trivariate entropies are given in the following.

## Example: univariate and bivariate entropies

```r
library(netropy)
```

We create a dataframe dyad.var consisting of dyad variables as described and created in "variable domains and data editing". Similar analyses can be performed on observed and/or transformed dataframes with vertex or triad variables.

```r
data(lawdata)
adj.advice <- lawdata[[1]]
adj.friend <- lawdata[[2]]
adj.cowork <- lawdata[[3]]
df.att <- lawdata[[4]]
att.var <- data.frame(
  status    = df.att$status - 1,
  gender    = df.att$gender,
  office    = df.att$office - 1,
  years     = ifelse(df.att$years <= 3, 0, ifelse(df.att$years <= 13, 1, 2)),
  age       = ifelse(df.att$age <= 35, 0, ifelse(df.att$age <= 45, 1, 2)),
  practice  = df.att$practice,
  lawschool = df.att$lawschool - 1
)
dyad.status    <- get_dyad_var(att.var$status, type = 'att')
dyad.gender    <- get_dyad_var(att.var$gender, type = 'att')
dyad.office    <- get_dyad_var(att.var$office, type = 'att')
dyad.years     <- get_dyad_var(att.var$years, type = 'att')
dyad.age       <- get_dyad_var(att.var$age, type = 'att')
dyad.practice  <- get_dyad_var(att.var$practice, type = 'att')
dyad.lawschool <- get_dyad_var(att.var$lawschool, type = 'att')
dyad.cwk <- get_dyad_var(adj.cowork, type = 'tie')
dyad.adv <- get_dyad_var(adj.advice, type = 'tie')
dyad.frn <- get_dyad_var(adj.friend, type = 'tie')
dyad.var <- data.frame(cbind(status    = dyad.status$var,
                             gender    = dyad.gender$var,
                             office    = dyad.office$var,
                             years     = dyad.years$var,
                             age       = dyad.age$var,
                             practice  = dyad.practice$var,
                             lawschool = dyad.lawschool$var,
                             cowork    = dyad.cwk$var,
                             advice    = dyad.adv$var,
                             friend    = dyad.frn$var))
head(dyad.var)
```

The function entropy_bivar() computes the bivariate entropies of all pairs of variables in the dataframe. The output is given as an upper triangular matrix with cells giving the bivariate entropies of row and column variables. The diagonal thus gives the univariate entropies for each variable in the dataframe:

```r
entropy_bivar(dyad.var)
```

## Example: redundant variables

Bivariate entropies can be used to detect redundant variables that should be omitted from the dataframe for further analysis. When calculating bivariate entropies, one can check whether the diagonal values are equal to any of the other values in their rows and columns. As seen above, the dataframe dyad.var has no redundant variables. This can also be checked using the function redundancy(), which yields a binary matrix as output indicating which row and column variables hold the same information:

```r
redundancy(dyad.var)
```

To illustrate an example with redundancy, we use the dataframe att.var with node attributes as described and created in "variable domains and data editing". Note however that we now keep the variable senior in this dataframe:

```r
att.var <- data.frame(
  senior    = df.att$senior,
  status    = df.att$status - 1,
  gender    = df.att$gender,
  office    = df.att$office - 1,
  years     = ifelse(df.att$years <= 3, 0, ifelse(df.att$years <= 13, 1, 2)),
  age       = ifelse(df.att$age <= 35, 0, ifelse(df.att$age <= 45, 1, 2)),
  practice  = df.att$practice,
  lawschool = df.att$lawschool - 1
)
head(att.var)
```

Checking redundancy on this dataframe yields the following output:

```r
redundancy(att.var)
```

As seen, senior has been flagged as a redundant variable, which is not surprising since it only consists of unique values. This redundancy can also be noted by computing the bivariate entropies and noting that the univariate entropy of this variable is equal to the bivariate entropies of pairs including it:

```r
entropy_bivar(att.var)
```

## Example: trivariate entropies

Trivariate entropies can be computed using the function entropy_trivar(), which returns a dataframe with the first three columns representing possible triples of variables V1, V2 and V3 from the dataframe in question, and their entropies H(V1,V2,V3) as the fourth column. We illustrate this on the dataframe dyad.var:

```r
entropy_trivar(dyad.var)
```
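The defining formulas and both bivariate bounds can also be checked directly in a few lines; a Python sketch (our own, independent of the R package), on toy data chosen to hit both bounds exactly:

```python
from collections import Counter
from math import log2

def entropy(*columns):
    # joint Shannon entropy (in bits) of one or more aligned discrete columns
    joint = list(zip(*columns))
    n = len(joint)
    return sum(c / n * log2(n / c) for c in Counter(joint).values())

x = [0, 0, 1, 1, 2, 2]
y = [v % 2 for v in x]        # functional relationship y = f(x): lower bound
z = [0, 1, 0, 1, 0, 1]        # empirically independent of x: upper bound
```

Here entropy(x, y) equals entropy(x) (redundancy, equality on the left) and entropy(x, z) equals entropy(x) + entropy(z) (independence, equality on the right).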
## References
Frank, O., & Shafie, T. (2016). Multivariate entropy analysis of network data. Bulletin of Sociological Methodology/Bulletin de Méthodologie Sociologique, 129(1), 45-63. link
Nowicki, K., Shafie, T., & Frank, O. (Forthcoming 2022). Statistical Entropy Analysis of Network Data.
## Try the netropy package in your browser
Any scripts or data that you put into this service are public.
netropy documentation built on Feb. 2, 2022, 9:07 a.m.
|
{}
|
• Subject:
...
• Topic:
...
In van der Waals' equation of state of the gas, the constant 'b' is a measure of:
(a) intermolecular collisions per unit volume
(b) intermolecular attraction
(c) volume occupied by molecules
(d) intermolecular repulsions
# Problem with the Hermitian adjoint of an operator $\dagger$
Normally it is enough just to write \dag like this: \hat A^{\dag}, but it doesn't work here! $$\hat A^{\dag}$$
If you type $\newcommand\dag\dagger$ at the beginning of your post, you can type $$a^\dag$$ and $$b^\dag$$ later. It will work as expected.
I actually use $\newcommand\ket[1]{\left| #1 \right>}$ quite often in quantum physics related posts, so I can confirm it works.
Not all LaTeX commands are supported. We use MathJax for LaTeX rendering, and the list of macros it implements is given on their website. In this case you can use \dagger instead.
process91/redmine_latex_mathjax
A plugin to enable mathjax on redmine
Ruby
Release: v0.3.0
Redmine LaTeX MathJax Macro Plugin
This is a simple little plugin which allows mathematical notation to be used within Redmine.
Requirements
Redmine 3.0.x or 3.1.x. Other versions are not tested but may work.
Installation
1. Download archive and extract to /your/path/to/redmine/plugins/
2. If you downloaded the zipball (https://github.com/vDorst/redmine_latex_mathjax/zipball/master), rename the extracted directory to 'redmine_latex_mathjax'
3. (Optional) Modify init.rb, for example if you host MathJax yourself: you can point it to your own MathJax location.
4. Restart Redmine (e.g. by restarting Apache)
Login to Redmine and go to Administration->Plugins. You should now see 'Redmine LaTeX MathJax Macro'. Enjoy!
Usage
Anywhere on a wiki or issue page write for formulas:
{{mj(\sum_i x_i\$)}}
or a multiline MathJax Syntax:
{{mj P_{POWER} = \cfrac {U_{POWER}} {I_{POWER}} }}
Hit 'Preview' or 'Save' to let them show up.
FAQ
Q: Why do formulas not show up in exported PDFs?
A: They do not show up in PDFs that are generated using the PDF link below the wiki pages. See the closed issue: Formula will not display in PDF export (https://github.com/process91/redmine_latex_mathjax/issues/1#issuecomment-4850823). Because of the macro approach this may change in the future, by switching from client-side to server-side rendering.
Q: Why a macro? MathJax don't need that!
A: The macro is needed to bypass Redmine's markup-language parser; Markdown, for example, tries to convert underscores to em HTML tags. It also enables us to do server-side rendering in the future.
## devil in the details [Software]
Dear Helmut,
...exactly the same formulas, exactly the same equations, exactly the same numerical results, just more efficient code...
well, something is not exact
As usual, the devil is in the details (and the extrema)
Kind regards,
Mittyri
5.2 Alkaline earth metals

Group 2 elements are known as alkaline earth metals because they are found in the earth's crust and form alkaline (basic) solutions when they react with water. The group is composed of beryllium, magnesium, calcium, strontium, barium and radium. Most of the group's chemistry has been observed only for the first five members; the chemistry of radium is not well established due to its radioactivity. All alkaline earth metals have two valence electrons, which they lose to form cations with a 2+ charge. Reactivity increases down the group as atomic size increases, because the valence electrons are farther from the nucleus and therefore easier to remove (recall ionization energy trends).

Reaction with water. The alkaline earth metals react with water to form the metal hydroxide and hydrogen gas:

M + 2H2O → M(OH)2 + H2

Beryllium does not react with water even at higher temperatures. Magnesium has only a very slight reaction with cold water, and even finely powdered magnesium reacts very slowly: the magnesium hydroxide formed is almost insoluble in water and forms a barrier on the metal that soon stops the reaction. Magnesium reacts with hot water to form magnesium hydroxide and hydrogen, and with steam to form magnesium oxide and hydrogen:

Mg(s) + H2O(g) → MgO(s) + H2(g)

Calcium, strontium and barium react readily with cold water to form the hydroxide and hydrogen gas; the reaction of calcium with water is exothermic. The resulting solutions are alkaline (phenolphthalein appears pink in them), which is the evidence for the alkaline nature of these metals.

Reaction with oxygen. The reactions of the alkaline earth metals with oxygen are less complex than those of the alkali metals. All group 2 elements except barium react directly with oxygen to form the simple oxide MO; barium forms barium peroxide (BaO2) because the larger O2 2− ion is better able to separate the large Ba2+ ions in the crystal lattice. Be and Mg are passivated by an impervious layer of oxide.

Reaction with halogens and hydrogen. The alkaline earth metals react with halogens to give the corresponding hydrated halides: $M + X_2 \rightarrow MX_2$. Because the metals tend to lose electrons and halogen atoms tend to gain them, these halides are ionic, except for those involving beryllium (the least metallic member of the group). The metals also react with hydrogen to give saline hydrides that are unstable in water, for example:

Ca + H2 → CaH2

Related studies. When iodine is reacted with aqueous alkaline earth metal oxides (MgO, CaO, SrO and BaO), both the metal iodate and the metal iodide are formed, although the reactions are never complete; the percentage of products increases with excess I2 and water, and with higher molecular weight of the oxide (Mason, Farr and Bowman, "The reaction of the alkaline earth metal oxides with iodine in the presence of water as part of a thermochemical hydrogen cycle", J. Inorg. Nucl. Chem. 42, 799-803). Gas-phase reactions between singly charged alkaline earth metal ions (Mg+, Ca+, Sr+ and Ba+) and methanol clusters have also been studied in a pick-up cluster source, in much the same way as the alkaline earth metal ion-water cluster systems studied by Fuke et al. (J. Am. Chem. Soc. 1995, 117, 747).
#### Archived
This topic is now archived and is closed to further replies.
# easy question about constructors......
This topic is 5045 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
hi, i understand that the purpose of a constructor is to initialize values of an object when you declare it. for example
health = 20;
mana = 30;
stamina = 100;
entity player(health,mana,stamina);
will send the 3 values to my constructor, and in my constructor i would have to put
this->health = health;
this->mana = mana;
this->stamina = stamina; (note i know you don't HAVE to put this->, it just clears things up for me)
BUT, do i HAVE to initialize everything through parameters??? for example, could i have just done :
entity player(); //no () needed here???
then inside of the constructor
this->health = 20;
this->mana = 30;
this->stamina = 100;
i realize it would be a bit more hard-coded, but for my purposes it doesn't matter. in fact, in the first example i should have just sent it the values (20,30,100) instead of declaring variables and sending them. but anyway, could i do this instead of using parameters??? or do i HAVE to use parameters?? thanks for any help you can give me!!! [edited by - graveyard filla on February 20, 2004 12:58:51 PM] [edited by - graveyard filla on February 20, 2004 12:59:29 PM]
##### Share on other sites
No, you don''t have to pass parameters. You should be sure to initialize every data member in your constructor. Otherwise, you could spend hours tracking down a strange bug caused by garbage in an uninitialized data member.
##### Share on other sites
if i'm not mistaken (Fruny since you showed me this ...feel free to correct):
class Example
{
public:
    Example();
private:
    int data;
    std::string name;
    float height;
    int health;
};

Example::Example()
    // this is where you initialize your private members!
    : data(25), name("Alcor"), height(6.25f), health(100)
{
    // blah blah blah
}
[edited by - Alpha_ProgDes on February 20, 2004 1:08:46 PM]
##### Share on other sites
hey,
point of clarification. you said, "(note i know you dont HAVE to put this-> it just clears things up for me)". but in the case that the class's member variables have exactly the same names as the parameters, you most certainly DO have to use this->. otherwise you will be setting parameter = parameter rather than member var = parameter. function parameter names, if the same as member variable names, take precedence when the compiler tries to resolve which one you mean. so always use this-> if you have a parameter or local variable with the same name as a member variable.
you could always have your constructor set up like:
Player()
{
    health = DEFAULT_HEALTH;
    mana = DEFAULT_MANA;
    stamina = DEFAULT_STAMINA;
}

// or

Player( int health = DEFAULT_HEALTH, int mana = DEFAULT_MANA, int stamina = DEFAULT_STAMINA )
{
    this->health = health;
    this->mana = mana;
    this->stamina = stamina;
}
in the latter case the passing of parameters is essentially optional. if you don't pass parameters they will be set to the DEFAULT_ series of values. that way you only need to pass them for special cases.
you could also set it up so that character stats are read in from a text file that you can more easily change. so the Player constructor would look something like:
Player( char * fileName )
{
    loadStatsFromFile( fileName );
}
this last example is a great way to go (data-driven design) because it means that you don't need to recompile your code to set the various values for your in-game objects.
-me
##### Share on other sites
1) You can do this, but that means you have to make your data members public, which defeats the purpose of encapsulation (if it's a class), and you have to ALWAYS remember to initialize all your members.
2) Yes, you can send in literal values, you don't have to create variables to send to your constructor.
3) You should use initializer lists instead :
class MyClass {
private:
int i;
float f;
public:
MyClass(int _i, float _f) : i(_i), f(_f) { }
};
This is conceptually the same as
MyClass(int _i, float _f) {
this->i = _i;
this->f = _f;
}
but it means your variables inside the class are only set to a value once (it becomes more useful for non-primitive types).
The benefit of initializing things in a constructor are obvious:
- you localize all your initialization code
- you can't "forget" to initialize some part of your object
Regards,
Jeff
[ Little Devils Inc. ]
##### Share on other sites
You should always initialize every data member of the class in your constructor. There is, however, a shorthand way to do some of the initialization.
// somewhere in the header file
Player(); // prototype

// somewhere in the CPP file
// constructor for player class
Player() : health(20), mana(30), stamina(100)
{
}
the : after the closing parenthesis tells the compiler that you are going to set values of the data members using the syntax...
variable(value)
The only thing you cannot do with the : is run arbitrary statements; it can only initialize members. You should use this technique whenever possible because the compiler will optimize the value copying for you.
Now about passing parameters to the constructor. There is a nifty trick you can use to make your function look like it isn't taking parameters. In your prototype, set the parameters equal to the value you want them to default to.
// somewhere in the header file
Player(int h=20, int m=30, int s=100);

// somewhere in the CPP file
Player(int h, int m, int s) : health(h), mana(m), stamina(s)
{
}
##### Share on other sites
If your class has some default values that could be assigned, but you also want to be able to pass in parameters, you can just overload the constructor, like so:
// Player.h
class Player
{
public:
    Player();
    Player(int Health, int Mana, int Stamina);
    Player(const std::string& File);
    Player(const Player& OrigPlayerCopy);
private:
    int mHealth;
    int mMana;
    int mStamina;
};

// Player.cpp
Player::Player()
{
    mHealth = DEFAULT_HEALTH;
    mMana = DEFAULT_MANA;
    mStamina = DEFAULT_STAMINA;
}

Player::Player(int Health, int Mana, int Stamina)
{
    mHealth = Health;
    mMana = Mana;
    mStamina = Stamina;
}

Player::Player(const std::string& File)
{
    // open file, get health, mana, stamina from that file
}

Player::Player(const Player& OrigPlayerCopy)
{
    mHealth = OrigPlayerCopy.mHealth;
    mMana = OrigPlayerCopy.mMana;
    mStamina = OrigPlayerCopy.mStamina;
}
That way, you have multiple different ways to create a new player. The first is the default constructor. If you define other constructors, but don't define the default constructor, then certain things won't work with your class, such as static arrays of the class. (If you define no constructors at all, then a default is made for you automatically.)
The second and third are custom constructors that allow you to specify where the constructor will get the initialization values, whether it be directly from parameters, or by looking them up in a file, etc.
The fourth is another special constructor, like the first. It's a copy constructor. It is used whenever an object of this class is being copied directly to another object of this same class, usually through the = operator. A basic copy constructor is defined for you, if you don't make one yourself, but it just does a basic copy of all of the members. If you have pointers in your class, then it will just copy the value of the pointers, which could lead to serious problems. In those cases, you should probably make your own copy constructor.
##### Share on other sites
for the second one it should be
Player::Player(int Health, int Mana, int Stamina)
    : mHealth(Health), mMana(Mana), mStamina(Stamina)
{
}
always prefer this syntax to doing the inits inline, it even works for structs passed in by reference (and i assume reference-passed objects with copy constructors, not that i've tried)
The reason isn't so much to do with the compiler optimizing the assignment but with how things are initialized.
With the version above, when you create the variable on class construction it is assigned the value passed to it (in the case of a class or a struct the copy constructor is invoked)
Now, if its done like this :
Player::Player(int Health, int Mana, int Stamina)
{
    mHealth = Health;
    mMana = Mana;
    mStamina = Stamina;
}
First the vars are created (and in the case of objects the default constructor is invoked), then we get into the constructor its self and the assignment operator is called to assign the values to the vars (be they ints or objects)
As you can see, this is a potentialy very wasteful extra step and should be avoided if at all possible.
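The "wasteful extra step" described above can be made visible with an instrumented member type. This is only an illustrative sketch (the `Probe`, `ViaBody`, and `ViaInitList` names are made up for the demonstration, not from the thread):

```cpp
#include <cassert>

// Member type that counts how often it is default-constructed and assigned.
struct Probe {
    static int default_ctors;
    static int assignments;
    int v;
    Probe() : v(0) { ++default_ctors; }
    explicit Probe(int x) : v(x) {}
    Probe& operator=(const Probe& o) { v = o.v; ++assignments; return *this; }
};
int Probe::default_ctors = 0;
int Probe::assignments = 0;

// Assignment in the constructor body: the member is default-constructed
// first, then assigned -- two steps.
struct ViaBody {
    Probe p;
    ViaBody(int x) { p = Probe(x); }
};

// Initializer list: the member is constructed directly from the value -- one step.
struct ViaInitList {
    Probe p;
    ViaInitList(int x) : p(x) {}
};
```

Constructing a `ViaBody` bumps both counters once; constructing a `ViaInitList` bumps neither, which is exactly the saving the initializer list buys.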
##### Share on other sites
quote:
Original post by Onemind
You should always initialize every data member of the class in your constructor. There is, however, a shorthand way to do some of the initialization.
// somewhere in the header file
Player(); // prototype
thanks for your help (everyone else too!) , but i dont understand what this part is. what do you mean by prototype? what is this here for, and why would it go in a header? is this here just so your other classes can use this class?
also, im a little confused with the example palidine showed me
where the contructor looks like this
Player::Player(int health = DEFAULTNUMBER, int mana = DEFAULTNUMBER2, int stamina = DEFAULTNUMBER3)
{
    this->health = health;
    this->mana = mana;
    this->stamina = stamina;
}
could someone explain whats going on behind the scenens for this example? this looks like the one i like the most, and yes onemind i realize you did the same thing only shorthand, but i want to take one step at a time . anyway, so basically, with this contructor, you could call it by either doing
Player playerone(); //this will do the default numbers???
or you could do
Player playerone(20,30,100);
this will load 20 30 and 100 into health/stamina/mana??? if im wrong, could you show me how to declare a object of the player class if im using this^^^^ contructor???
thank you for all your help!!!
##### Share on other sites
Yes, a prototype is needed in the header so that other files can use your class. [Edit - Unless everything is in the header, and thus most likely inlined. Either way, other files need to know the form of your class, and its member functions, including the constructor, and they are told what its form is by #include-ing the header file, which should contain all relevant information.]
As for calling a function with default values, you can specify none of them, all of them, or even just the first n parameters. For example, all of these should be fine:
Player playerone();
Player playerone(20);
Player playerone(20, 30);
Player playerone(20, 30, 100);
If a parameter isn't specified, its default value is taken. But if I remember right, you only specify the default values in the prototype, if you have both a prototype and a separate definition.
And as for using the initializer list over just putting the initializations into the body of the constructor, I have learned to do it the correct way (thanks to GameDev, actually), but it's not a habit yet. I apologize for suggesting the obviously less acceptable form.
[edited by - Agony on February 20, 2004 2:41:17 PM]
##### Share on other sites
quote:
Original post by Agony
Yes, a prototype is needed in the header so that other files can use your class. [Edit - Unless everything is in the header, and thus most likely inlined. Either way, other files need to know the form of your class, and its member functions, including the constructor, and they are told what its form is by #include-ing the header file, which should contain all relevant information.]
As for calling a function with default values, you can specify none of them, all of them, or even just the first n parameters. For example, all of these should be fine:
Player playerone();
Player playerone(20);
Player playerone(20, 30);
Player playerone(20, 30, 100);
If a parameter isn't specified, its default value is taken. But if I remember right, you only specify the default values in the prototype, if you have both a prototype and a separate definition.
And as for using the initializer list over just putting the initializations into the body of the constructor, I have learned to do it the correct way (thanks to GameDev, actually), but it's not a habit yet. I apologize for suggesting the obviously less acceptable form.
[edited by - Agony on February 20, 2004 2:41:17 PM]
thanks for the response, just one last question :
Player playerone(); //will init all default values
Player playerone(20); //will init first value to 20 and the other 2 will be default???
Player playerone(20, 30); //will init first 2 and the other will be default???
Player playerone(20, 30, 100); //init all to these 3 values, dont use any default
so am i right? and what if i want to init the last 2 or 3 but want the first one by default. not possible? thanks for your help!!!
[edited by - graveyard filla on February 20, 2004 4:08:55 PM]
##### Share on other sites
quote:
Original post by graveyard filla
Player playerone(); //will init all default values
Player playerone(20); //will init first value to 20 and the other 2 will be default???
Player playerone(20, 30); //will init first 2 and the other will be default???
Player playerone(20, 30, 100); //init all to these 3 values, dont use any default
so am i right?
Correct.
quote:
and what if i want to init the last 2 or 3 but want the first one by default. not possible?
Correct, it isn't possible, unfortunately. Other languages can do this, but depending on how it's implemented, it can cause function calls to be less efficient, and C++ strives for efficiency. Although I do remember in VB you can say Method(, 5, 3, , 4) and those empty parameters, indicated by the blank commas, will be filled in with default values, and it seems that a compiler could make that just as efficient as what C++ does. But C++ won't do this.
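Putting the pieces of this thread together, here is a sketch of a Player class that combines an initializer list with default arguments (the member names and DEFAULT_* values are made up for illustration). One caveat worth adding to the thread: `Player a;` without parentheses constructs with all defaults, while `Player a();` actually declares a function returning a Player:

```cpp
#include <cassert>

// Hypothetical default values, standing in for DEFAULTNUMBER1/2/3 above.
const int DEFAULT_HEALTH  = 100;
const int DEFAULT_MANA    = 50;
const int DEFAULT_STAMINA = 75;

class Player
{
public:
    // Default arguments fill in from the right: callers may omit a trailing
    // suffix of the parameters, but never one in the middle or at the front.
    Player(int health = DEFAULT_HEALTH,
           int mana = DEFAULT_MANA,
           int stamina = DEFAULT_STAMINA)
        : mHealth(health), mMana(mana), mStamina(stamina)
    {
    }

    int health()  const { return mHealth; }
    int mana()    const { return mMana; }
    int stamina() const { return mStamina; }

private:
    int mHealth;
    int mMana;
    int mStamina;
};

// Usage:
//   Player a;              // all three defaults (NOT "Player a();")
//   Player b(20);          // health = 20, mana and stamina default
//   Player c(20, 30);      // health = 20, mana = 30, stamina default
//   Player d(20, 30, 100); // no defaults used
```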
# 5040 (number)
← 5039 5040 5041 →
Cardinal: five thousand forty
Ordinal: 5040th (five thousand fortieth)
Factorization: 2⁴ × 3² × 5 × 7
Divisors: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 24, 28, 30, 35, 36, 40, 42, 45, 48, 56, 60, 63, 70, 72, 80, 84, 90, 105, 112, 120, 126, 140, 144, 168, 180, 210, 240, 252, 280, 315, 336, 360, 420, 504, 560, 630, 720, 840, 1008, 1260, 1680, 2520, 5040
Greek numeral: ,ΕΜ´
Roman numeral: VXL
Binary: 1001110110000₂
Ternary: 20220200₃
Octal: 11660₈
Duodecimal: 2B00₁₂
5040 is a factorial (7!), a superior highly composite number, abundant number, colossally abundant number and the number of permutations of 4 items out of 10 choices (10 × 9 × 8 × 7 = 5040). It is also one less than a square, making (7, 71) a Brown number pair.
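Both factorial claims can be verified in one line:
$$7! = 7\cdot 6\cdot 5\cdot 4\cdot 3\cdot 2\cdot 1 = 5040, \qquad 5040 + 1 = 5041 = 71^2,$$
which gives the Brown number pair $(7, 71)$.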
## Philosophy
Plato mentions in his Laws that 5040 is a convenient number to use for dividing many things (including both the citizens and the land of a city-state or polis) into lesser parts, making it an ideal number for the number of citizens (heads of families) making up a polis. He remarks that this number can be divided by all the (natural) numbers from 1 to 12 with the single exception of 11 (however, it is not the smallest number to have this property; 2520 is). He rectifies this "defect" by suggesting that two families could be subtracted from the citizen body to produce the number 5038, which is divisible by 11. Plato also took notice of the fact that 5040 can be divided by 12 twice over. Indeed, Plato's repeated insistence on the use of 5040 for various state purposes is so evident that Benjamin Jowett, in the introduction to his translation of Laws, wrote, "Plato, writing under Pythagorean influences, seems really to have supposed that the well-being of the city depended almost as much on the number 5040 as on justice and moderation."[1]
Jean-Pierre Kahane has suggested that Plato's use of the number 5040 marks the first appearance of the concept of a highly composite number, a number with more divisors than any smaller number.[2]
## Number theoretical
If $\sigma(n)$ is the divisor function and $\gamma$ is the Euler–Mascheroni constant, then 5040 is the largest of the known numbers (sequence A067698 in the OEIS) for which this inequality holds:
$$\sigma(n)\geq e^{\gamma}\,n\log\log n.$$
This is somewhat unusual, since in the limit we have:
$$\limsup_{n\rightarrow\infty}\frac{\sigma(n)}{n\log\log n}=e^{\gamma}.$$
Guy Robin showed in 1984 that the inequality fails for all larger numbers if and only if the Riemann hypothesis is true.
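Using the factorization $5040 = 2^4\cdot 3^2\cdot 5\cdot 7$, the inequality can be checked numerically for $n = 5040$ (approximate values):
$$\sigma(5040) = 31\cdot 13\cdot 6\cdot 8 = 19344, \qquad e^{\gamma}\,n\log\log n \approx 1.781\cdot 5040\cdot 2.143 \approx 19237,$$
so $\sigma(5040)\geq e^{\gamma}\,n\log\log n$ holds, though only barely.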
## Interesting notes
• 5040 has exactly 60 divisors, counting itself and 1.
• 5040 is the largest factorial (7! = 5040) that is also a highly composite number. All factorials smaller than 8! = 40320 are highly composite.
• 5040 is the sum of 42 consecutive primes (23 + 29 + 31 + 37 + 41 + 43 + 47 + 53 + 59 + 61 + 67 + 71 + 73 + 79 + 83 + 89 + 97 + 101 + 103 + 107 + 109 + 113 + 127 + 131 + 137 + 139 + 149 + 151 + 157 +163 + 167 + 173 + 179 + 181 + 191 + 193 + 197 + 199 + 211 + 223 + 227 + 229).
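The divisor count in the first bullet follows directly from the prime factorization $5040 = 2^4\cdot 3^2\cdot 5\cdot 7$:
$$d(5040) = (4+1)(2+1)(1+1)(1+1) = 5\cdot 3\cdot 2\cdot 2 = 60.$$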
## Notes
1. ^ Laws, by Plato, translated By Benjamin Jowett, at Project Gutenberg; retrieved 7 July 2009.
2. ^ Kahane, Jean-Pierre (February 2015), "Bernoulli convolutions and self-similar measures after Erdős: A personal hors d'oeuvre" (PDF), Notices of the American Mathematical Society, 62 (2): 136–140.
Not logged in [Login - Register]
Sciencemadness Discussion Board » Fundamentals » Beginnings » The Slime craze!! Select A Forum Fundamentals » Chemistry in General » Organic Chemistry » Reagents and Apparatus Acquisition » Beginnings » Miscellaneous » The Wiki Special topics » Technochemistry » Energetic Materials » Biochemistry » Radiochemistry » Computational Models and Techniques » Prepublication Non-chemistry » Forum Matters » Legal and Societal Issues » Detritus » Test Forum
Author: Subject: The Slime craze!!
barbs09
Hazard to Self
Posts: 88
Registered: 22-1-2009
Location: Australia
Member Is Offline
Mood: No Mood
The Slime craze!!
Hello, couldn’t see any earlier relevant posts. Many of you dads (or mums) out there may have noticed a latest craze among their kids: slime….. I am totally over it and the waste of polyvinyl acetate (PVA glue) and other crap they leave lying around . But more recently I have discovered that my kids may have made an error in their interpretation of “PVA”, which some sites e.g.
https://www.stevespanglerscience.com/lab/experiments/slimes-... state it should be polyvinyl alcohol and not PV acetate.
PVA powder seems freely available on Ebay, but before I order a 100g or so, does anyone else have any recipes that kids can make, other than that on Steve Spanglers site? Does the alcohol make a better material than the acetate?? Any other interesting recipes to keep the kids interest in nascent chemistry up? (we have played with cornflour many times).
p.s. my kids reckon Cold Power, which contains heaps of crap but includes sodium borate, acts better than borax alone as a cross-linking activator..
Cheers
[Edited on 5-10-2017 by barbs09]
Rhodanide
National Hazard
Posts: 319
Registered: 23-7-2015
Location: The Pseudo-boondocks
Member Is Offline
Mood: High Thermal Inertia
Quote: Originally posted by barbs09 Hello, couldn’t see any earlier relevant posts. Many of you dads (or mums) out there may have noticed a latest craze among their kids: slime….. I am totally over it and the waste of polyvinyl acetate (PVA glue) and other crap they leave lying around . But more recently I have discovered that my kids may have made an error in their interpretation of “PVA”, which some sites e.g. https://www.stevespanglerscience.com/lab/experiments/slimes-... state it should be polyvinyl alcohol and not PV acetate. PVA powder seems freely available on Ebay, but before I order a 100g or so, does anyone else have any recipes that kids can make, other than that on Steve Spanglers site? Does the alcohol make a better material than the acetate?? Any other interesting recipes to keep the kids interest in nascent chemistry up? (we have played with cornflour many times). p.s. my kids reckon Cold Power, which contains heaps of crap but includes sodium borate, acts better than borax alone as a cross-linking activator.. Cheers [Edited on 5-10-2017 by barbs09]
I'm not sure if PVAc makes a better "slime" than PVOH, but I DO know that PVAc tends to smell over time (Acetic acid). I'd probably go with PVOH/Polyvinyl Alcohol, but that's just me. As for the Borate substitute, that's a good question. I'm not too familiar with "Slime" or how it exactly works, I just get exhausted from seeing people calling Borax 'Boric Acid', when it's not... (Na2B4O7 · 10H2O) It's one of those circumstances where I don't know whether I find people throwing around chemical names hither and thither, to be annoying or just funny.
Morgan
International Hazard
Posts: 1213
Registered: 28-12-2010
Member Is Offline
Mood: No Mood
Steve Spangler loses his mind over Borax!
I made some guar gum and boric acid slime that became a "solid gel" but shaking/slapping it back and forth in a jar it transitions back to a flowing slime. Upon resting it will again become a single slug of jello that doesn't flow.
http://onlinelibrary.wiley.com/doi/10.1002/app.45037/abstrac...
[Edited on 5-10-2017 by Morgan]
j_sum1
Super Moderator
Posts: 4064
Registered: 4-10-2014
Location: Oz
Member Is Online
Mood: Metastable, and that's good enough.
I have used both polyvinyl acetate glue and polyvinyl alcohol before. The two end up being quite different textures. Both are cool, but it depends on what you are after.
VSEPR_VOID
International Hazard
Posts: 520
Registered: 1-9-2017
Member Is Offline
Mood: Fullerenes
Quote: Originally posted by Morgan Steve Spangler loses his mind over Borax! https://www.youtube.com/watch?v=mpSJdLyqibs [Edited on 5-10-2017 by Morgan]
It's a shame that our society is not scientifically literate. Borax is just the visible tip of the iceberg.
Of all the so-called natural human rights that have ever been invented, liberty is least likely to be cheap and is never free of cost
barbs09
Hazard to Self
Posts: 88
Registered: 22-1-2009
Location: Australia
Member Is Offline
Mood: No Mood
Cheers guys, few ideas there. Steve certainly loses his mind, but as you say VV, tip of the iceberg.
NEMO-Chemistry
International Hazard
Posts: 1560
Registered: 29-5-2016
Location: UK
Member Is Offline
Mood: No Mood
I got nowhere using PVA glue sold in the UK for making slime; it just went lumpy then hard when mixed with Borax.
Foeskes
Hazard to Others
Posts: 151
Registered: 25-2-2017
Member Is Offline
Mood: No Mood
Quote: Originally posted by NEMO-Chemistry I got nowhere using PVA glue sold in the UK for making slime; it just went lumpy then hard when mixed with Borax.
I think you need to mix the glue with water 1:1
Morgan
International Hazard
Posts: 1213
Registered: 28-12-2010
Member Is Offline
Mood: No Mood
I wonder if boric acid would work better than borax with the glue?
barbs09
Hazard to Self
Posts: 88
Registered: 22-1-2009
Location: Australia
Member Is Offline
Mood: No Mood
NEMO, I watched my daughter put a ca. 1/2 cup of PVA glue into a bowl and add maybe a teaspoon of saturated borax solution to it and stir. She added a "dash" or so more if required. Turned into reasonable gloop.
NEMO-Chemistry
International Hazard
Posts: 1560
Registered: 29-5-2016
Location: UK
Member Is Offline
Mood: No Mood
Quote: Originally posted by barbs09 NEMO, I watched my daughter put a ca. 1/2 cup of PVA glue into a bowl and add maybe a teaspoon of saturated borax solution to it and stir. She added a "dash" or so more if required. Turned into reasonable gloop.
BINGO, I spotted my mistake. Thx
I used YouTube as a reference; they added powdered Borax, as did I.
Thinking about it, that was really dumb!! Just tried with solution and while it's not perfect (pretty sure it's the glue I am using), it is much much better than previously.
Thanks for mentioning SOLUTION, no idea why the hell I blindly followed a YouTube video without switching my brain on. Soon as I read your post a light went on lol, now I feel pretty stupid
barbs09
Hazard to Self
Posts: 88
Registered: 22-1-2009
Location: Australia
Member Is Offline
Mood: No Mood
Well, I guess helping someone make better slime is something...
Glad to be doing my bit for the scientific community!!
Texium (zts16)
Super Moderator
8-10-2017 at 09:11
unionised
International Hazard
Posts: 3738
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
Quote: Originally posted by Morgan Steve Spangler loses his mind over Borax! https://www.youtube.com/watch?v=mpSJdLyqibs [Edited on 5-10-2017 by Morgan]
Now that's funny- but for the wrong reason. He rants about people who didn't do the research and don't know anything.
Firstly, he is simply wrong about the reason why borax (along with other borates) is restricted (in the EU at least).
It was nothing to do with that girl who managed to burn her hands.
This is the reason- there is a fair chance it rots your balls.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1566649/
http://dissemination.echa.europa.eu/Biocides/ActiveSubstance...
And then he goes on about the fact that it's sold as contact lens solution.
Well, if he's a scientist, he really ought to be vaguely aware of the doctrine of Paracelsus.
It seems he's forgotten that what makes something a poison isn't the material, but the dose.
So, a very dilute borate solution isn't likely to do any harm, but a box full of the neat compound might do.
Yep, it looks like he lost his mind.
Incidentally, I recently offered to use my credentials as a scientist to get hold of the stuff so that I could provide it to a friend who is a guide/scout leader so they could use it to make slime.
The risk is real, but small- too small, in my opinion- to be worth worrying about, compared to the benefits of entertainment and education.
[Edited on 9-10-17 by unionised]
NEMO-Chemistry
International Hazard
Posts: 1560
Registered: 29-5-2016
Location: UK
Member Is Offline
Mood: No Mood
So the lesson is dont dip your balls in Borax if your a rat!
Lets be honest, living is dangerous, 100% guaranteed to be fatal at some point. And this is in a human model not a rat. Although rats also have a 100% death rate
[Edited on 9-10-2017 by NEMO-Chemistry]
unionised
International Hazard
Posts: 3738
Registered: 1-11-2003
Location: UK
Member Is Offline
Mood: No Mood
Quote: Originally posted by NEMO-Chemistry So the lesson is dont dip your balls in Borax if your a rat! [Edited on 9-10-2017 by NEMO-Chemistry]
... or a beagle.
Rhodanide
National Hazard
Posts: 319
Registered: 23-7-2015
Location: The Pseudo-boondocks
Member Is Offline
Mood: High Thermal Inertia
Quote: Originally posted by unionised
Quote: Originally posted by NEMO-Chemistry So the lesson is dont dip your balls in Borax if your a rat! [Edited on 9-10-2017 by NEMO-Chemistry]
... or a beagle.
I've got a Beagle. Should I keep my Borax away from him?
Why even? hahaha
NEMO-Chemistry
International Hazard
Posts: 1560
Registered: 29-5-2016
Location: UK
Member Is Offline
Mood: No Mood
Quote: Originally posted by Tetra
Quote: Originally posted by unionised
Quote: Originally posted by NEMO-Chemistry So the lesson is dont dip your balls in Borax if your a rat! [Edited on 9-10-2017 by NEMO-Chemistry]
... or a beagle.
I've got a Beagle. Should I keep my Borax away from him?
Why even? hahaha
Well the reason for the rat is, I found a paper where they discovered Borax 'could' give testicular cancer to rats exposed to borax.
I assume they have therefore also used a beagle as a model. i wondered about the Beagle myself, a bit of google foo disclosed that apparently beagles were often used as animal models in days gone by.
No idea why the beagle was used as a breed, but seems a huge amount of work on smoking was done in the 70's and 80's using beagles forced to smoke cigs non stop for hours.
Pretty horrible really, i am not anti animal uses in science, as long as the research is significant. I dont consider beauty products to fit significant or justified.
Thats just my opinion and others may have a differing opinion.
Found this....
Got to be honest and say surely at this level, the only people in any kind of risk group. would surely be those working with it without protection?
[Edited on 10-10-2017 by NEMO-Chemistry]
Attachment: weir1972.pdf (944kB)
Rhodanide
National Hazard
Posts: 319
Registered: 23-7-2015
Location: The Pseudo-boondocks
Member Is Offline
Mood: High Thermal Inertia
Quote: Originally posted by NEMO-Chemistry
Quote: Originally posted by Tetra
Quote: Originally posted by unionised
Quote: Originally posted by NEMO-Chemistry So the lesson is dont dip your balls in Borax if your a rat! [Edited on 9-10-2017 by NEMO-Chemistry]
... or a beagle.
I've got a Beagle. Should I keep my Borax away from him?
Why even? hahaha
Well the reason for the rat is, I found a paper where they discovered Borax 'could' give testicular cancer to rats exposed to borax.
I assume they have therefore also used a beagle as a model. i wondered about the Beagle myself, a bit of google foo disclosed that apparently beagles were often used as animal models in days gone by.
No idea why the beagle was used as a breed, but seems a huge amount of work on smoking was done in the 70's and 80's using beagles forced to smoke cigs non stop for hours.
Pretty horrible really, i am not anti animal uses in science, as long as the research is significant. I dont consider beauty products to fit significant or justified.
Thats just my opinion and others may have a differing opinion.
Found this....
Got to be honest and say surely at this level, the only people in any kind of risk group. would surely be those working with it without protection?
[Edited on 10-10-2017 by NEMO-Chemistry]
Wow, that's messed up.
Gotta wonder why them.
Morgan
International Hazard
Posts: 1213
Registered: 28-12-2010
Member Is Offline
Mood: No Mood
Posted on Reddit today ...
"How to increase glue sales by twelve thousand percent."
https://i.imgur.com/OuJPdDr.jpg
For review - the people who hate borax.
https://www.stevespanglerscience.com/lab/experiments/glue-bo...
mayko
International Hazard
Posts: 798
Registered: 17-1-2013
Location: Carrboro, NC
Member Is Offline
Mood: anomalous
Quote: ABSTRACT: Poly(vinyl alcohol) (PVA) precipitates in many kinds of aqueous salt solutions. While sodium sulfate, a coagulant for PVA fiber, precipitates PVA to yield a white rigid gel, coagulation of PVA with aluminum sulfate, a coagulant for water treatment, yields a slime-like viscoelastic fluid. One type of homemade slime is prepared under basic conditions with borate. The method reported in this paper is carried out under acidic conditions with materials that are used for water treatment. Through this demonstration, students can learn about chemical interactions through the coagulation and diffusion of slime. This demonstration can be carried out in 30 min.
Also some general results on PVA coagulation under various salts:
Code:
No.  Salt                  Precipitant properties
1    NaCl                  low viscosity fluid
2    CaCl2·2H2O            low viscosity fluid
3    Na2SO4·10H2O          rigid
4    MgSO4·7H2O            low viscosity fluid
5    Al2(SO4)3·14−18H2O    slime-like viscoelastic fluid
6    Na2B4O7·10H2O         slime
Isokawa, N., Fueda, K., Miyagawa, K., & Kanno, K. (2015). Demonstration of the Coagulation and Diffusion of Homemade Slime Prepared Under Acidic Conditions without Borate. Journal of Chemical Education, 92, 1886–1888. http://doi.org/10.1021/acs.jchemed.5b00272
Attachment: Demonstration of the Coagulation and Diffusion of Homemade Slime Prepared Under Acidic Conditions without Borate.pdf (2.5MB)
# A first look at energy minimization, and calling graph-cuts energy minimization
Gradient descent
Simulated annealing
Graph cuts
2. What are the connections and differences between this and solving optimization problems?
3. Is this energy viewpoint similar to information entropy, i.e. making the system's entropy minimal?
/* energy.h */
/* Vladimir Kolmogorov (vnk@cs.cornell.edu), 2003. */
/*
This software implements an energy minimization technique described in
What Energy Functions can be Minimized via Graph Cuts?
To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI).
Earlier version appeared in European Conference on Computer Vision (ECCV), May 2002.
More specifically, it computes the global minimum of a function E of binary
variables x_1, ..., x_n which can be written as a sum of terms involving
at most three variables at a time:
E(x_1, ..., x_n) = \sum_{i} E^{i} (x_i)
+ \sum_{i,j} E^{i,j} (x_i, x_j)
+ \sum_{i,j,k} E^{i,j,k}(x_i, x_j, x_k)
The method works only if each term is "regular". Definitions of regularity
for terms E^{i}, E^{i,j}, E^{i,j,k} are given below as comments to functions
This software can be used only for research purposes. IF YOU USE THIS SOFTWARE,
YOU SHOULD CITE THE AFOREMENTIONED PAPER IN ANY RESULTING PUBLICATION.
In order to use it, you will also need a MAXFLOW software which can be
obtained from http://www.cs.cornell.edu/People/vnk/software.html
Example usage
(Minimizes the following function of 3 binary variables:
E(x, y, z) = x - 2*y + 3*(1-z) - 4*x*y + 5*|y-z|):
///////////////////////////////////////////////////
#include <stdio.h>
#include "energy.h"
void test_energy()
{
// Minimize the following function of 3 binary variables:
// E(x, y, z) = x - 2*y + 3*(1-z) - 4*x*y + 5*|y-z|
Energy::Var varx, vary, varz;
Energy *e = new Energy();
varx = e -> add_variable();
vary = e -> add_variable();
varz = e -> add_variable();
e -> add_term1(varx, 0, 1); // add term x
e -> add_term1(vary, 0, -2); // add term -2*y
e -> add_term1(varz, 3, 0); // add term 3*(1-z)
e -> add_term2(varx, vary, 0, 0, 0, -4); // add term -4*x*y
e -> add_term2(vary, varz, 0, 5, 5, 0); // add term 5*|y-z|
Energy::TotalValue Emin = e -> minimize();
printf("Minimum = %d\n", Emin);
printf("Optimal solution:\n");
printf("x = %d\n", e->get_var(varx));
printf("y = %d\n", e->get_var(vary));
printf("z = %d\n", e->get_var(varz));
delete e;
}
///////////////////////////////////////////////////
*/
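Since the example energy in the header comment involves only three binary variables, its global minimum can be verified by exhaustive enumeration, independently of the graph-cut machinery. This is only a sanity-check sketch (the function names `E` and `brute_force_min` are made up here; only the energy formula comes from the header):

```cpp
#include <cassert>
#include <cstdlib>

// The example energy from the header comment above:
//   E(x, y, z) = x - 2*y + 3*(1-z) - 4*x*y + 5*|y-z|
int E(int x, int y, int z)
{
    return x - 2*y + 3*(1 - z) - 4*x*y + 5*std::abs(y - z);
}

// Enumerate all 2^3 = 8 binary assignments and return the minimum energy.
int brute_force_min()
{
    int best = E(0, 0, 0);
    for (int x = 0; x <= 1; ++x)
        for (int y = 0; y <= 1; ++y)
            for (int z = 0; z <= 1; ++z)
                if (E(x, y, z) < best)
                    best = E(x, y, z);
    return best;
}
```

Enumeration gives a minimum of -5, attained at x = y = z = 1, which is the value the graph-cut minimizer should report.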
Boykov and Kolmogorov proposed a new max-flow/min-cut algorithm in 2001. It is based on the augmenting-path approach: it repeatedly grows search trees, marks nodes, and updates the marked nodes to form new search trees.
# Proving a Sequence Diverges
I'm required to prove $a_{n}=\frac{n+1}{\sqrt{n}}$ diverges by using the negation of the definition of convergence, that is:
$(\forall\epsilon >0)(\exists m \in \mathbb{N})$ such that $\forall (n>m) \in \mathbb{N}\:\:\left | f_{n}-L\right |<\epsilon$
The negation of this is: $(\exists \epsilon>0)(\forall m \in \mathbb{N})$ such that $\exists(n>m)\in \mathbb{N}\:\: \left | f_{n}-L\right |\geq \epsilon$
Now I've done proofs like this before for oscillatory sequences like $a_{n}=(-1)^{n}$ where you can split it up into two cases, one for $L<0$ and one for $L \geq 0$ and then choose n to be an arbitrary even or odd number greater than m respectively and then work straight from the definition, so this is what I tried first.
For $L\leq0$ I chose $n_{0}=4m^{2}-1>m$ and $\epsilon = 1$
Then, since $f_{n_0}>0$ and $L\leq 0$, we have $\left|f_{n_{0}}-L\right|=f_{n_{0}}+\left|L\right|>2m+\left|L\right|\geq2>\epsilon=1$, because $f_{n_0}=\frac{4m^2}{\sqrt{4m^2-1}}>2m$.
So for every natural number $m$ there exists a natural number $n>m$ such that the distance between the $n$'th term of $f_{n}$ and any $L\leq 0$ is at least $\epsilon=1$, so $f_{n}$ doesn't converge to any non-positive limit.
I'm not 100% confident with this, but I think its mostly okay, its the $L>0$ case that I'm completely lost for, I'd really appreciate some help if possible. Thanks!
Hint: Divide the sequence into two parts: $$a_n=\frac{n}{\sqrt n}+\frac{1}{\sqrt{n}}=\sqrt n+\frac{1}{\sqrt{n}}$$ Now you get one term which is convergent and another which is easy to check diverges. Let's focus on the divergent term, $g_n=\sqrt{n}$.
Edited
Let's imagine that $\exists L,\epsilon\in \mathbb{R},m\in \mathbb{N}$ such that $|g_m-L|<\epsilon$. For $L$ to be the limit of the sequence, the sequence has to satisfy $\forall(n>m)\in\mathbb{N}\;|g_n-L|<\epsilon$. To show that $L$ is not the limit, it is enough to find one $n>m$ such that $|g_n-L|\ge\epsilon$. Let's take for example $n=2m$, so that $g_{2m}=\sqrt{2}\,g_m$. Now, going back to the inequality, the condition reads $$|\sqrt2 g_{m}-L|<\epsilon\to\left|g_m-\frac{L}{\sqrt{2}}\right|<\frac{\epsilon}{\sqrt{2}}$$ You can call $L/\sqrt2=L'$ and $\epsilon/\sqrt2=\epsilon'$. This condition would mean that $L'$ is also a limit of the sequence, which is impossible unless $L=L'=0$. And you can demonstrate that $L=0$ is not a limit either, using similar reasoning.
Strictly speaking, the reasoning before shows that the sequence has no real limit. But it can be demonstrated that the limit is actually infinity. Similarly to your definition of limit, we know that the limit is infinity if $\forall \epsilon\in\mathbb{R}^+\ \exists m\in\mathbb{N}$ such that $\forall n>m,\;|g_n|>\epsilon$. This definition implies that the sequence is unbounded. Firstly, you can determine $m$ from $$|g_m|=\sqrt{m}>\epsilon\to m>\epsilon^2$$ Now, if you take any $g_n$ with $n>m$ you know that $g_n=\sqrt{n}>g_m>\epsilon$. Then, the limit is infinity.
Finally, the full sequence is a sum of two: one convergent and one divergent, so the full one will be divergent.
Please, let me know if I missed something or the answer is not clear enough!
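For completeness, the same conclusion can be phrased directly in terms of the negated definition of convergence from the question. Take $\epsilon = 1$; then for any candidate limit $L\in\mathbb{R}$ and any $m\in\mathbb{N}$, choose $n>\max\{m,(|L|+1)^2\}$, so that
$$a_n=\frac{n+1}{\sqrt{n}}>\sqrt{n}>|L|+1 \quad\Longrightarrow\quad |a_n-L|\geq a_n-|L|>1=\epsilon.$$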
• I'm really sorry, I've been working at this for a solid two hours more and I still can't see how to proceed. I just haven't seen any examples of proofs of this kind for sequences which diverge to infinity so I'm finding it really hard to construct one. I think I could manage it by contradiction or maybe induction, but I just cant seem to figure out how to do it strictly by the above definition. Could you possibly elaborate or maybe show me how to get started? I tried splitting it up, but again couldn't seem to make it conform to the definition. thanks heaps for your help! – CoffeeCrow Apr 21 '16 at 12:33
• @CoffeeCrow I have included some more details. Please, let me know if the proof is uncomplete, or if there is something missing – seoanes Apr 21 '16 at 13:04
• So if I understand correctly this is pretty much a proof by contradiction, showing that if the sequence isn't divergent it would have to converge to two different limits? This makes sense and I think I can follow the reasoning, but I still don't see how to prove this using just the negated definition of convergence. – CoffeeCrow Apr 21 '16 at 13:53
• Check out my last edit. I think it is more in the philosophy of what you are looking for, but I am not sure that it is completely the same you are asking – seoanes Apr 21 '16 at 14:07
A study of vertical pathways in the upper ocean at scales from $\mathcal{O}(10\,\mathrm{m})$ to $\mathcal{O}(1\,\mathrm{km})$ is proposed, where the high relative vorticities of surface flows lead to high Rossby number processes such as filaments and vortices. These processes are not quasigeostrophic and their vertical velocities cannot be determined using established quasigeostrophic methods (i.e. the omega equation). They arise in the forward cascade of energy from low Rossby number features, and vertical velocities associated with them are expected to have larger magnitudes but shorter timescales than those governed by quasigeostrophic dynamics. This work will address outstanding questions relating to the role of high Rossby number processes and their associated vertical velocities on advective pathways in the upper ocean, using both observations and numerical models.

The observational component of the proposed work will employ ship-based bow chain surveys in the western Alboran Sea to make measurements of temperature, salinity, fluorescence and oxygen at extremely high horizontal resolution ($\sim$1 m) in the upper ocean at relatively high ship speeds (10 kn), reducing issues of space-time aliasing that have challenged prior studies using this technology. Concurrent work will examine high Rossby number processes in a series of nonhydrostatic idealized numerical models of a baroclinically unstable front, with parameters close to those observed in the western Alboran Sea. The simulations will both aid interpretation of the bow chain measurements and also quantify the influence of model resolution and nonhydrostatic processes on vertical pathways in frontal regions. Through collaboration with other DRI members, particle trajectories, Lagrangian pathways and barriers, and particle clustering and dispersion will be assessed in these simulations.
The comparison of Lagrangian metrics at different resolutions will enable an assessment of the accuracy of vertical pathways determined in lower-resolution, regional numerical models.
|
{}
|
Nonassociative Algebra
An Algebra which does not satisfy the associativity condition $(xy)z = x(yz)$
is called a nonassociative algebra. Bott and Milnor (1958) proved that the only real Division Algebras occur in dimensions $n = 1$, 2, 4, and 8. Each gives rise to an Algebra with particularly useful physical applications (which, however, is not itself necessarily nonassociative), and these four cases correspond to Real Numbers, Complex Numbers, Quaternions, and Cayley Numbers, respectively.
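The contrast with the associative cases can be made concrete: the Quaternions are associative but not commutative, which a few lines of Python verify via the Hamilton product (the function name is mine; genuine nonassociativity first appears for the Cayley Numbers).

```python
def hprod(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0); j = (0, 0, 1, 0); k = (0, 0, 0, 1)
assert hprod(hprod(i, j), k) == hprod(i, hprod(j, k))  # associative
assert hprod(i, j) != hprod(j, i)                      # but not commutative: ij = k, ji = -k
```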
See also Algebra, Cayley Number, Complex Number, Division Algebra, Quaternion, Real Number
References
Bott, R. and Milnor, J. "On the Parallelizability of the Spheres." Bull. Amer. Math. Soc. 64, 87-89, 1958.
|
{}
|
# Revision history [back]
### Upgrade sage: build from sources?
I'm using an older version of Sage (4.5.3) in Ubuntu 10.04. I like it, but I'd like to upgrade to a more recent version. From the command prompt, calling
\$ sage -upgrade
throws a scary warning at me:
** WARNING: This is a source-based upgrade, which could take hours,
** fail, and render your Sage install useless!! This is a binary
** install, so upgrading in place is *not* recommended unless you are
** an expert. Please consider installing a new binary from
|
{}
|
# Non-intersecting Brownian Bridges in the Flat-to-Flat Geometry – Archive ouverte HAL
### Jacek Grela, Satya N. Majumdar 1, Grégory Schehr 2
#### Jacek Grela, Satya N. Majumdar, Grégory Schehr. Non-intersecting Brownian Bridges in the Flat-to-Flat Geometry. J.Statist.Phys., 2021, 183 (3), pp.49. ⟨10.1007/s10955-021-02774-6⟩. ⟨hal-03260827⟩
We study N vicious Brownian bridges propagating from an initial configuration $\{a_1< a_2< \ldots < a_N \}$ at time $t=0$ to a final configuration $\{b_1< b_2< \ldots < b_N \}$ at time $t=t_f$, while staying non-intersecting for all $0\le t \le t_f$. We first show that this problem can be mapped to non-intersecting Dyson Brownian bridges with Dyson index $\beta =2$. For the latter we derive an exact effective Langevin equation that allows one to generate the vicious bridge configurations very efficiently. In particular, for the flat-to-flat configuration in the large N limit, where $a_i = b_i = (i-1)/N$, for $i = 1, \ldots , N$, we use this effective Langevin equation to derive an exact Burgers’ equation (in the inviscid limit) for the Green’s function and solve this Burgers’ equation for arbitrary time $0 \le t\le t_f$. At certain specific values of intermediate times t, such as $t=t_f/2$, $t=t_f/3$ and $t=t_f/4$ we obtain the average density of the flat-to-flat bridge explicitly. We also derive explicitly how the two edges of the average density evolve from time $t=0$ to time $t=t_f$. Finally, we discuss connections to some well known problems, such as the Chern–Simons model, the related Stieltjes–Wigert orthogonal polynomials and the Borodin–Muttalib ensemble of determinantal point processes.
• 1. LPTMS - Laboratoire de Physique Théorique et Modèles Statistiques
• 2. LPTHE - Laboratoire de Physique Théorique et Hautes Energies
|
{}
|
# Comment on “Direct Mapping of the Finite Temperature Phase Diagram of Strongly Correlated Quantum Models”
This is the pre-published version harvested from ArXiv.
#### Abstract
In their Letter, Zhou, Kato, Kawashima, and Trivedi claim that finite-temperature critical points of strongly correlated quantum models emulated by optical lattice experiments can generically be deduced from kinks in the derivative of the density profile of atoms in the trap with respect to the external potential, $\kappa = -dn(r)/dV(r)$. In this comment we demonstrate that the authors failed to achieve their goal: to show that under realistic experimental conditions critical densities $n_c(T,U)$ can be extracted from density profiles with controllable accuracy.
#### Suggested Citation
L Pollet, N Prokof’ev, and B Svistunov. "Comment on “Direct Mapping of the Finite Temperature Phase Diagram of Strongly Correlated Quantum Models”" Physics Department Faculty Publication Series (2010).
Available at: http://works.bepress.com/boris_svistunov/47
|
{}
|
# fundamental solution of an ODE of second order
Let us consider the following homogeneous ODE of second order: $$x''(t)+a_1(t)x'(t)+a_2(t)x(t)=0$$ where $$a_1$$ and $$a_2$$ are continuous functions. Are there conditions on these functions such that one fundamental solution is increasing and the other fundamental solution is decreasing?
Any ideas or sources on this are highly appreciated. Thanks in advance!
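One concrete special case worth probing: with constant coefficients and $a_2 < 0$, the characteristic roots of $r^2 + a_1 r + a_2 = 0$ are real with opposite signs (since the discriminant $a_1^2 - 4a_2 > a_1^2$), so the fundamental pair $e^{r_1 t}$, $e^{r_2 t}$ consists of one increasing and one decreasing solution. A quick numerical sketch (the helper name is illustrative, not from any source):

```python
import math

def char_roots(a1, a2):
    """Real characteristic roots of x'' + a1 x' + a2 x = 0 (assumes a1^2 - 4*a2 >= 0)."""
    s = math.sqrt(a1 * a1 - 4 * a2)
    return (-a1 + s) / 2, (-a1 - s) / 2

# x'' + x' - 6x = 0: a2 = -6 < 0 gives roots 2 and -3,
# i.e. one growing mode e^{2t} and one decaying mode e^{-3t}.
r1, r2 = char_roots(1.0, -6.0)
assert r1 > 0 > r2
```

For genuinely time-dependent $a_1, a_2$ the question is subtler, but this suggests sign conditions on $a_2$ as a natural place to start.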
|
{}
|
# Cryptarithms revisited!!
This problem is similar to the cryptarithm problem given by Apoorv, but is somewhat more difficult. I was not able to solve it, so please give me a genuine solution to the question.
ACID+BASE=SALT+H20
Note: '0' is a digit, not a letter.
Same way to answer the question.
Note by Malay Pandey
4 years, 3 months ago
Can any letter be 2 or 0, or are these digits considered taken? The first letter of every word is definitely not 0, right? For anyone unfamiliar with cryptarithms these problems need a better definition.
- 2 years, 6 months ago
|
{}
|
# Graph Data Structures¶
At the core of TorchDrug, we provide several data structures to enable common operations in graph representation learning.
## Create a Graph¶
To begin with, let’s create a graph.
import torch
from torchdrug import data
edge_list = [[0, 1], [1, 2], [2, 3], [3, 4], [4, 5], [5, 0]]
graph = data.Graph(edge_list, num_node=6)
graph.visualize()
This will plot a ring graph like the following.
Internally, the graph is stored as a sparse edge list to reduce the memory footprint. For an intuitive comparison, a scale-free graph may have 1 million nodes and 10 million edges. The dense adjacency matrix takes about 4TB, while the sparse version only requires 120MB.
Here are some commonly used properties of the graph.
print(graph.num_node)
print(graph.num_edge)
print(graph.edge_list)
print(graph.edge_weight)
In some scenarios, the graph may also have type information on its edges. For example, molecules have bond types like single bond, while knowledge graphs have relations like "consists of". To construct such a relational graph, we can pass the edge type as the third element of each triplet in the edge list.
triplet_list = [[0, 1, 0], [1, 2, 1], [2, 3, 0], [3, 4, 1], [4, 5, 0], [5, 0, 1]]
graph = data.Graph(triplet_list, num_node=6, num_relation=2)
graph.visualize()
Alternatively, we can also use adjacency matrices to create the above graphs.
The normal graph uses a 2D adjacency matrix $$A$$, where non-zero $$A_{i,j}$$ corresponds to an edge from node $$i$$ to node $$j$$. The relational graph uses a 3D adjacency matrix $$A$$, where non-zero $$A_{i,j,k}$$ denotes an edge from node $$i$$ to node $$j$$ with edge type $$k$$.
adjacency = torch.zeros(6, 6)
for i, j in edge_list:  # mark each directed edge in the dense matrix
    adjacency[i, j] = 1
graph = data.Graph.from_dense(adjacency)
For molecule graphs, TorchDrug supports creating instances from SMILES strings. For example, the following code creates a benzene molecule.
mol = data.Molecule.from_smiles("C1=CC=CC=C1")
mol.visualize()
Once the graph is created, we can transfer it between CPU and GPUs, just like torch.Tensor.
graph = graph.cuda()
print(graph.device)
graph = graph.cpu()
print(graph.device)
## Graph Attributes¶
A common practice in graph representation learning is to add some graph features as the input of neural networks. Typically, there are three types of features, node-level, edge-level and graph-level features. In TorchDrug, these features are stored as node/edge/graph attributes in the data structure, and are automatically processed during any graph operation.
Here we specify some features during the construction of the molecule graph.
mol = data.Molecule.from_smiles("C1=CC=CC=C1", node_feature="default",
edge_feature="default", graph_feature="ecfp")
print(mol.node_feature.shape)
print(mol.edge_feature.shape)
print(mol.graph_feature.shape)
There are a bunch of popular feature functions provided in torchdrug.data.feature. We may also want to define our own attributes. This only requires wrapping the assignment lines with a context manager. The following example defines edge importance as the reciprocal of node degrees.
node_in, node_out = mol.edge_list.t()[:2]
with mol.edge():
    mol.edge_importance = 1 / mol.degree_in[node_in] + 1 / mol.degree_out[node_out]
We can use mol.node() and mol.graph() for node- and graph-level attributes respectively.
Note that in order to support batching and masking, attributes should always have the same length as their corresponding components. This means the size of the first dimension of the tensor should be num_node, num_edge or 1.
## Batch Graph¶
Modern deep learning frameworks employ batched operations to accelerate computation. In TorchDrug, we can easily batch graphs of the same kind with arbitrary sizes. Here is an example of creating a batch of 4 graphs.
graphs = [graph, graph, graph, graph]
batch = data.Graph.pack(graphs)
batch.visualize(num_row=1)
This returns a PackedGraph instance with all attributes automatically batched. The essential trick behind this operation is based on a property of graphs. A batch of $$n$$ graphs is equivalent to a large graph with $$n$$ connected components. The equivalent adjacency matrix for a batch is
$\begin{split}A = \begin{bmatrix} A_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & A_n \end{bmatrix}\end{split}$
where $$A_i$$ is the adjacency of $$i$$-th graph.
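The block-diagonal identity above is easy to see with plain tensors; torch.block_diag (available in recent PyTorch releases) builds exactly this packed adjacency:

```python
import torch

# A 2-node graph and a 3-node cycle, as dense adjacency matrices.
A1 = torch.tensor([[0., 1.], [1., 0.]])
A2 = torch.tensor([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])

# Packing the two graphs corresponds to the block-diagonal matrix.
A = torch.block_diag(A1, A2)
assert A.shape == (5, 5)
# Off-diagonal blocks are zero: there are no edges between components.
assert float(A[:2, 2:].sum()) == 0.0 and float(A[2:, :2].sum()) == 0.0
```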
To get a single graph from the batch, use the conventional index or PackedGraph.unpack.
graph = batch[1]
graphs = batch.unpack()
One advantage of this batching mechanism is that it does not distinguish between a single graph and a batch. In other words, we only need to implement single-graph operations, and they can be directly applied as batched operations. This reduces the pain of writing batched operations.
The graph data structure also provides a bunch of slicing operations to create subgraphs or masked graphs in a sparse manner. Some typical operations include
g1 = graph.subgraph([1, 2, 3, 4])
g1.visualize()
g2 = graph.node_mask([1, 2, 3, 4])
g2.visualize()
g3 = graph.edge_mask([0, 1, 2, 3])
g3.visualize()
g4 = g3.compact()
g4.visualize()
All the above operations accept either integer node indexes or binary node masks. subgraph() extracts a subgraph based on the given nodes. The node ids are re-mapped to produce a compact index. node_mask() keeps edges among the given nodes. edge_mask() keeps edges of the given edge indexes. compact() removes all isolated nodes.
The same operations can also be applied to batches. In this case, we need to convert the index of a single graph into the index in a batch.
graph_ids = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])
node_ids = torch.tensor([1, 2, 3, 4, 0, 1, 2, 3, 4, 5])
node_ids += batch.num_cum_nodes[graph_ids] - batch.num_nodes[graph_ids]
batch = batch[[0, 1]]
|
{}
|
# MLSN and salinity
When this question arrived, I thought I could respond by showing a blog post I’d written in the past with the answer.
“What kind of adjustments would you recommend when trying to use the MLSN on a calcareous sand green profile, with saline irrigation water at 1500 ppm, Pure Dynasty seashore paspalum, and pH 8????”
But I didn’t find a blog post with the answer in any detail, so here’s a new one, with the answer to that specific question. This answer is broadly applicable to the question of MLSN and salinity in general.
What kind of adjustments would I recommend?
### 1. Manage the salinity
Salt accumulation in the soil, beyond the tolerance level of the grass, can kill the grass. So that needs to be dealt with. To do that, start by calculating the leaching requirement.
$$LR = \frac{EC_w}{5EC_e - EC_w}$$
LR is leaching requirement, $$EC_w$$ is electrolytic conductivity of the irrigation water, and $$EC_e$$ is electrolytic conductivity of the soil extract at a level the grass can tolerate.
The $$EC_w$$ in this case is 2.3 dS/m. To convert between total dissolved solids (TDS) and electrolytic conductivity, remember that a TDS of 640 ppm $$\approx$$ 1 dS/m. Thus we get $$\frac{1500}{640} \approx 2.3$$ dS/m.
For seashore paspalum, I usually use an $$EC_e$$ threshold of 12 dS/m. See Harivandi’s concise guide to Interpreting Turfgrass Irrigation Water Test Results for more about this.
The leaching requirement in this case is 0.04.
$$\frac{2.3}{5(12) - 2.3} = 0.04$$
Use the leaching requirement to find the irrigation water requirement as:
$$\text{Water Requirement} = \frac{ET}{1 - LR}$$
Let’s say the evapotranspiration (ET) in a month is 150 mm. The water requirement would be $$\frac{150}{1 - 0.04} = 156.25$$.
That’s not too bad, because 1,500 TDS is relatively low salt content for seashore paspalum.
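The arithmetic above is easy to script; the variable names below are mine, not from any turf library:

```python
# Sketch of the leaching-requirement calculation from the post.
tds = 1500                      # irrigation water TDS, ppm
ec_w = tds / 640                # dS/m; 640 ppm ~ 1 dS/m
ec_e = 12                       # tolerance threshold for seashore paspalum, dS/m
lr = ec_w / (5 * ec_e - ec_w)   # leaching requirement, ~0.04
et = 150                        # monthly evapotranspiration, mm
water = et / (1 - lr)           # irrigation water requirement, mm (~156)
print(round(ec_w, 2), round(lr, 2), round(water, 1))
```

(The post rounds LR to 0.04 before dividing, giving 156.25 mm; carrying full precision gives about 156.4 mm — the same answer for practical purposes.)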
### 2. Next consider the nutrients
With salts being deliberately leached from the rootzone, I'm not too concerned with soil nutrient levels in this case, and I'd start by disregarding the soil and applying 100% of plant use. This is the precision fertilisation approach.
This is almost certainly more than is required, but it is a starting point that guarantees ample nutrients are supplied. That’s the basic approach I suggested in the MLSN cheat sheet.
Adding 100% of plant use is more than required because 1) it disregards the nutrients in the soil completely and 2) it disregards the probably substantial amount of K and Mg and Ca and etc. in the irrigation water. It’s likely that the irrigation water, with 1,500 TDS, has a lot of plant available nutrients in it.
After you’ve gotten to this point, with the salts being leached systematically, and with a baseline nutrient supply set at 100% of plant use, then I would start reducing the nutrient supply.
I’d now look at specific ion supply in the irrigation water, and I’d look at soil test data. I’d make the calculations about nutrient supply as described in the MLSN cheat sheet, and I’d gradually reduce the nutrient supply to whatever those levels calculated by MLSN are, all the while observing carefully the turfgrass conditions. If the turf performance ever declines while reducing the nutrient supply, then stop, increase nutrient supply a little bit, and there you have the optimum amount for your site and that grass and irrigation water combination.
### 3. I wouldn’t worry about calcareous sand
That means there is a lot of calcium in the soil. Seashore paspalum should grow fine. If the grass lacks vigor, you might check the P in the soil and the P in the irrigation water. If both are low, then you might add some P. If the grass is chlorotic, you might look at Fe and Mn or a complete micronutrient fertilizer.
|
{}
|
# Thread: Prove that the order of Z(G) is either <e> or G
1. ## Prove that the order of Z(G) is either <e> or G
Let $\displaystyle G$ be a group of order $\displaystyle pq$, with $\displaystyle p$ and $\displaystyle q$ (not necessarily distinct) primes. Prove that the center $\displaystyle Z(G) = <e>$ or $\displaystyle Z(G) = G$.
I started off by doing a proof by contradiction.
Suppose that $\displaystyle Z(G) \neq <e>$, $\displaystyle Z(G) \neq G$
Then $\displaystyle |Z(G)|$ divides either $\displaystyle p$ or $\displaystyle q$, which implies that $\displaystyle |Z(G)| = p$ or $\displaystyle |Z(G)| = q$
We know that $\displaystyle Z(G)$ is a normal subgroup.
Consider the quotient group $\displaystyle G/Z(G)$.
Using Lagrange's Theorem, we know that $\displaystyle |Z(G)|$ divides $\displaystyle |G|$; consider the quotient $\displaystyle |G|/|Z(G)|$ (*).
I broke it into two cases: 1: $\displaystyle |Z(G)| = p$, 2: $\displaystyle |Z(G)| = q$.
Under case 1, (*) evaluates to $\displaystyle q$; under case 2, (*) evaluates to $\displaystyle p$.
This is as far as I've gotten and I'm pretty stumped as how else to follow up after.
2. Originally Posted by crushingyen
Let $\displaystyle G$ be a group of order $\displaystyle pq$, with $\displaystyle p$ and $\displaystyle q$ (not necessarily distinct) primes. Prove that the center $\displaystyle Z(G) = <e>$ or $\displaystyle Z(G) = G$.
I started off by doing a proof by contradiction.
Suppose that $\displaystyle Z(G) \neq <e>$, $\displaystyle Z(G) \neq G$
Then $\displaystyle |Z(G)|$ divides either $\displaystyle p$ or $\displaystyle q$, which implies that $\displaystyle |Z(G)| = p$ or $\displaystyle |Z(G)| = q$
We know that $\displaystyle Z(G)$ is a normal subgroup.
Consider the quotient group $\displaystyle G/Z(G)$.
Using Lagrange's Theorem, we know that $\displaystyle |Z(G)|$ divides $\displaystyle |G|$; consider the quotient $\displaystyle |G|/|Z(G)|$ (*).
I broke it into two cases: 1: $\displaystyle |Z(G)| = p$, 2: $\displaystyle |Z(G)| = q$.
Under case 1, (*) evaluates to $\displaystyle q$; under case 2, (*) evaluates to $\displaystyle p$.
This is as far as I've gotten and I'm pretty stumped as how else to follow up after.
Lemma: If $\displaystyle G/\mathcal{Z}(G)$ is cyclic then, we have that $\displaystyle G$ is abelian.
Proof: Let $\displaystyle x,y\in G$, then clearly $\displaystyle x\mathcal{Z}(G),y\mathcal{Z}(G)\in G/\mathcal{Z}(G)$. But, since $\displaystyle G/\mathcal{Z}(G)$ is cyclic we have that
$\displaystyle x\mathcal{Z}(G)=g^m\mathcal{Z}(G)$ and $\displaystyle y\mathcal{Z}(G)=g^n\mathcal{Z}(G)$ for some $\displaystyle g\in G$. Thus, we have that $\displaystyle x=g^m\text{ } z_1,y=g^n\text{ }z_2$ for some $\displaystyle z_1,z_2\in\mathcal{Z}(G)$ (this is because cosets form a partition). And so
$\displaystyle xy=g^m\text{ }z_1\text{ }g^n\text{ }z_2=z_2 \text{ }g^{m+n}\text{ }z_1=z_2\text{ }g^{n+m}\text{ }z_1=z_2\text{ }g^n\text{ }g^m\text{ }z_1$$\displaystyle =g^n\text{ }z_2\text{ }g^m\text{ }z_1=yx$ (since $\displaystyle z_1,z_2\in\mathcal{Z}(G)$). The conclusion follows. $\displaystyle \blacksquare$
Now since $\displaystyle \mathcal{Z}(G)\lhd G$ we have, by Lagrange's theorem, that $\displaystyle \left|\mathcal{Z}\left(G\right)\right|\mid\left|G\right|=pq\implies \left|\mathcal{Z}\left(G\right)\right|=1,p,q,pq$. If it's the first or the last we're done. So, assume that it could be either $\displaystyle p$ or $\displaystyle q$. Then
$\displaystyle \left|G/\mathcal{Z}(G)\right|=p,q$ and since all prime ordered groups are cyclic it follows from the lemma that $\displaystyle G$ is abelian and so $\displaystyle \mathcal{Z}(G)=G$ which contradicts the assumption that $\displaystyle \left|\mathcal{Z}(G)\right|=p,q$.
The conclusion follows.
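For a concrete instance of the dichotomy, take $G = S_3$, a group of order $6 = 2 \cdot 3$: a short script (helper names are mine) confirms its center is trivial, i.e. $Z(S_3) = \langle e \rangle$.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

S3 = list(permutations(range(3)))  # all 6 permutations of {0, 1, 2}
center = [g for g in S3 if all(compose(g, h) == compose(h, g) for h in S3)]
assert center == [(0, 1, 2)]       # only the identity commutes with everything
```

An abelian example of order $pq$, such as $\mathbb{Z}/6$, lands in the other case: there the center is the whole group.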
|
{}
|
Coding
Published on
# Maximum Consecutive Sum of Length K coding solution
Authors
### Question
Find the maximum consecutive sum from an array with a given length k.
### Solution One:
The first method uses brute force: check every window of length k.
def max_sum(arr, k):
    if len(arr) < k:
        return 'Invalid operation'
    # float('-inf') so that all-negative arrays are handled correctly
    maximum = float('-inf')
    # first loop goes through each window start, up to length - k
    for i in range(len(arr) - k + 1):
        window_sum = 0
        # second loop finds the sum of the k items in this window
        for j in range(i, i + k):
            window_sum += arr[j]
        maximum = max(maximum, window_sum)
    return maximum

arr = [80, -50, 90, 100, -80]
k = 3
print(max_sum(arr, k))  # 140
### Solution Two:
Secondly, we can use the Sliding Window method: slide the window one step forward each time, subtracting the element that leaves and adding the element that enters, so the inner summation loop is not needed. Keep the running window sum separate from the maximum, so each slide starts from the true previous window sum.
def max_sum_sliding_window(arr, k):
    if len(arr) < k:
        return 'Invalid operation'
    window_sum = sum(arr[:k])  # sum of the first window
    best = window_sum
    for i in range(1, len(arr) - k + 1):
        # drop arr[i-1], add arr[i+k-1]
        window_sum = window_sum - arr[i - 1] + arr[i + k - 1]
        best = max(best, window_sum)
    return best

arr = [80, -50, 90, 100, -80]
k = 3
print(max_sum_sliding_window(arr, k))  # 140
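As a sanity check, the sliding-window idea can be cross-checked against a direct definition on random inputs (the helper names below are illustrative):

```python
import random

def max_window_sum(arr, k):
    """Reference: maximum over all length-k slices, computed directly."""
    return max(sum(arr[i:i + k]) for i in range(len(arr) - k + 1))

def sliding(arr, k):
    """Sliding-window version: O(n) instead of O(n*k)."""
    s = best = sum(arr[:k])
    for i in range(1, len(arr) - k + 1):
        s += arr[i + k - 1] - arr[i - 1]
        best = max(best, s)
    return best

random.seed(0)
for _ in range(200):
    n = random.randint(3, 30)
    k = random.randint(1, n)
    arr = [random.randint(-100, 100) for _ in range(n)]
    assert sliding(arr, k) == max_window_sum(arr, k)
```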
Previous Article
Next Article
|
{}
|
# Time and distance word problems

"Time and distance" word problems, often also called "uniform rate" problems, involve something travelling at some fixed steady pace, or else at some average speed. Whenever you read a problem that involves "how fast", "how far", or "for how long", you should think of the distance equation, d = rt, where d stands for distance, r stands for (constant or average) rate of speed, and t stands for time.

I can ride my bike to work in an hour and a half. If I drive 40 mph faster than I bike and it takes me 30 minutes to drive the same distance, how far is it to work?

Using (rt)bike = (rt)drive:
1.5x = 0.5(x + 40)
1.5x = 0.5x + 20
1.5x - 0.5x = 20
x = 20
Then d = rt = 20 · 1.5 = 30 miles.

Suppose that a car goes from town A to town B at 100 km/h and a bus goes from town B to town A at 80 km/h. If they both set out at the same time, how long will it be before they meet? (The distance from A to B equals 360 km.)

To solve this problem we find the time needed to close the 360 km gap at the combined speed of 180 km/h (100 + 80). This can be written as $t=\frac{d}{r}=\frac{360}{180}=2$. Therefore they will meet two hours after they both set out.

At 10 o'clock in the morning a runner begins to run at an average speed of 10 km/h. Half an hour later another runner starts out on the same route at an average speed of 12 km/h. How long will it be before the second runner catches up with the first one? Over what distance will they have travelled?

The second runner closes on the first at a relative speed of 12 - 10 = 2 km/h. In the first thirty minutes the first runner travels 5 km, so we need the time to cover 5 km at 2 km/h. That is: $t=\frac{d}{r}=\frac{5}{2}=2.5$. So the second runner takes two and a half hours, during which time he runs d = r·t = 12 · 2.5 = 30 km.

Suppose that a train goes from town A to town B at 100 km/h and another one goes from town B to town A at 110 km/h.
If they both set out at the same time, what is the distance the first train will have travelled when they meet? (The distance from A to B equals 210 km.)

Solution: The trains close the 210 km gap at 100 + 110 = 210 km/h, so they meet after $t=\frac{210}{210}=1$ hour, by which time the first train has travelled d = rt = 100 · 1 = 100 km.
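These meeting problems all reduce to d = rt with a combined closing speed; a short script (the helper name is illustrative) confirms the two meeting problems above:

```python
def meeting_time(distance, speed_a, speed_b):
    """Time until two movers heading toward each other meet: d / (r_a + r_b)."""
    return distance / (speed_a + speed_b)

# Car (100 km/h) and bus (80 km/h), 360 km apart: meet after 2 hours.
assert meeting_time(360, 100, 80) == 2.0

# Trains at 100 km/h and 110 km/h, 210 km apart: meet after 1 hour,
# so the first train has covered 100 * 1 = 100 km.
t = meeting_time(210, 100, 110)
print(100 * t)  # 100.0
```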
|
{}
|
# auto-pst-pdf problem - no cropped <file>-autopp.pdf
I'm trying to get some PSTricks pictures to PDF. I'm using the information provided on http://tug.org/PSTricks/main.cgi?file=pdf/pdfoutput#autopstpdf
Well, never change a running system ... yesterday it worked, but due to HDD problems I had to reinstall my system (Win XP SP3, MiKTeX 2.9, TeXnicCenter 2.02). The thing is, the final .pdf is created; LaTeX->PDF, XeLaTeX->PDF, ... it doesn't matter. But what I would need is the cropped .pdf of the picture itself, the <'file'-autopp.pdf>.
Perl (5.18) is installed (although before the reinstallation I didn't have it). I got the "typical" two warnings, which I found quite often in forums, but those posters just needed the final .pdf, whereas I need the cropped pictures.
Using:
\documentclass[12pt]{article}
\usepackage[pdf]{pstricks}
\usepackage[crop=off]{auto-pst-pdf}
\usepackage{pst-pdf}
\begin{document}
\input{a.pstricks}
\end{document}
Getting (umformen.log): (... .tex-file is called "umformen")
[...]
runsystem(echo "-------------------------------------------------")...executed.
runsystem(echo "auto-pst-pdf: Auxiliary LaTeX compilation")...executed.
runsystem(echo "-------------------------------------------------")...executed.
runsystem(del "umformen-autopp.log")...executed.
runsystem(latex -disable-write18 -jobname="umformen-autopp" -interaction=batchmode "\let \APPmakepictures \empty \input
umformen.tex")...executed.
Package auto-pst-pdf Warning:
Creation of umformen-autopp.dvi failed.
This warning occured on input line 124.
Package auto-pst-pdf Warning:
Could not create umformen-pics.pdf. Auxiliary files not deleted.
This warning occured on input line 124.
runsystem(echo "-------------------------------------------------")...executed.
runsystem(echo "auto-pst-pdf: End auxiliary LaTeX compilation")...executed.
runsystem(echo "-------------------------------------------------")...executed.
[...]
Because the whole system is reinstalled, I think the error is maybe somewhere else? Anyone any clues?
Greetings, Florian
-
Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. – Heiko Oberdiek Apr 10 '14 at 9:39
There can be clues in umformen-autopp.log if it exists. Otherwise the DVI generation can be done manually using the quoted command line latex -disable-write18 -jobname=umformen-autopp ... umformen.tex". – Heiko Oberdiek Apr 10 '14 at 9:59
Using the command line I get a .dvi, but I'm not quite sure what to do with that? The umformen-autopp.log does exist. – Florian Kogler Apr 10 '14 at 12:17
that cannot work, because a package is not loaded twice. \usepackage[pdf]{pstricks} already loads auto-pst-pdf. Try
\documentclass[12pt]{article}
\usepackage{ifpdf}
\usepackage[crop=off]{auto-pst-pdf}
\ifpdf\else\usepackage{pstricks}\fi
\begin{document}
\input{a.pstricks}
\end{document}
and also:
\documentclass[12pt]{article}
\usepackage{ifpdf}
\usepackage{auto-pst-pdf}
\ifpdf\else\usepackage{pstricks}\fi
\begin{document}
\input{a.pstricks}
\end{document}
I tried the first example and an up-to-date MiKTeX 2.9 on a Windows 7.0 with a file a.pstricks:
\begin{pspicture}
\psframe*[linecolor=blue](10,10)
\end{pspicture}
and had no problems (running TeXmaker with enabled --shell-escape)
-
I tried both, both not working, same two warnings and no ..-autopp.pdf – Florian Kogler Apr 10 '14 at 12:04
ok, then use the example with \usepackage[cleanup={}]{auto-pst-pdf}. What files were created? – Herbert Apr 10 '14 at 12:44
Using this line I get a u*.aux, u*.log, u*.pdf, u*.synctex, u*-autopp.log - and of course the two warnings occur – Florian Kogler Apr 10 '14 at 14:56
it looks like that you have no write access for that directory where the external file is written. – Herbert Apr 10 '14 at 15:16
Using XP, especially as a single user, I didn't think the problem was there. Today I tried from the beginning, and it worked. So I tried to find the error. I'm not completely sure, but I think it's a \begin{picture} instead of \begin{pspicture} which caused all the trouble. – Florian Kogler Apr 11 '14 at 6:58
\begin{pspicture} instead of \begin{picture}
I was using the program "JPicEdt" to actually draw the picture. One can choose the type of picture: "Emulated Latex", "PSTricks", ... Unfortunately the type was wrongly configured, also in the minimal example I made (a.pstricks), which in fact wasn't minimal but one step beyond minimal, and included the error.
Changing the type does change the output file (pspicture -> picture), although the "save" button stays gray. Maybe that is how the error got in there.
Hoping to help somebody, thanks for helping me!
Florian
-
|
{}
|
# Difference between revisions of "Infinite Impulse Response Filter"
Infinite impulse response (IIR) filters are like finite impulse response filters (FIR filters); the difference lies in the fact that IIR filters use feedback, for example $y(n)=x(n)+0.5y(n-1)$, which results in an infinite impulse response.
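The feedback term is what makes the response infinite: feeding a unit impulse into a first-order recurrence of this form (a minimal sketch, assuming the input term x(n) alongside the 0.5·y(n−1) feedback) produces an output that decays geometrically but never becomes exactly zero, unlike an FIR filter whose response ends after finitely many taps.

```python
def iir_impulse_response(n_samples):
    """Impulse response of y(n) = x(n) + 0.5*y(n-1)."""
    x = [1.0] + [0.0] * (n_samples - 1)  # unit impulse
    y = []
    prev = 0.0
    for n in range(n_samples):
        prev = x[n] + 0.5 * prev         # feedback on the previous output
        y.append(prev)
    return y

h = iir_impulse_response(8)
print(h[:4])  # [1.0, 0.5, 0.25, 0.125] -- halves forever, never reaching zero
```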
|
{}
|
## Stream: maths
### Topic: finite free over PID
#### Johan Commelin (Mar 11 2021 at 05:00):
@Anne Baanen your PR about submodules of finite free modules over a PID came just at the right time. We need that kind of stuff (over $\Z$) for the liquid project. How hard do you think it is to lean the fact that over a PID exists_finite_basis is equivalent to fg + torsion_free?
#### Anne Baanen (Mar 11 2021 at 12:18):
Hmm, it wouldn't surprise me if it isn't too hard actually, depending on how much we already have about f.g. for modules (rather than vector spaces). The direction ∃ b, is_basis b → fg ∧ torsion_free should be no problem, the other way might be a bit trickier but you can probably go a long way with induction on the generating set? I'll have to see the state of the library to be sure.
#### Johan Commelin (Mar 11 2021 at 12:46):
@Anne Baanen Note that there isn't a definition of torsion_free yet, afaik
#### Anne Baanen (Mar 11 2021 at 12:47):
Isn't docs#no_zero_smul_divisors equivalent to torsion-free over a domain?
#### Johan Commelin (Mar 11 2021 at 12:50):
Ooh, probably yes
#### Anne Baanen (Mar 11 2021 at 12:52):
Still, it would be worth it to have the "real" torsion-free definition as well, and prove the equivalence (presumably as a set of typeclass instances?)
#### Johan Commelin (Mar 11 2021 at 12:53):
we probably can't have instances in every direction, but yes we should certainly have the definition.
#### Kevin Buzzard (Mar 12 2021 at 07:03):
Isn't the proof of this quite messy involving explicit matrix calculations and grotty normal forms?
#### Johan Commelin (Mar 17 2021 at 09:22):
yes, I think it's somewhat ugly
#### Johan Commelin (Mar 17 2021 at 09:23):
For our application in LTE we might just want to construct a basis of a quotient directly, although that will require doing the grotty normal forms manually in that specific case
Last updated: May 09 2021 at 11:09 UTC
|
{}
|
2014
02-27
# Network
There is a network which is a complete binary tree. In the network, there are 2^N users numbered 1, 2, 3, 4, …, 2^N, which are the leaves of the network-tree. For example, Figure 1 is a network-tree. In the tree, there are 8 users (leaf nodes) and 7 transmitters (gray nodes).
Using the network-tree means that you have to pay a cost for it. The charging scheme is interesting, named "pair charging": the cost you must pay is the sum of the cost over every pair of user i and user j (1 ≤ i < j ≤ 2^N). Every user can choose mode A or mode B to pay for the charge.
For every two users i, j (1 ≤ i < j ≤ 2^N), first find their LCA (Least Common Ancestor): transmitter P. Then count the number of mode A users in the subtree rooted at transmitter P, and do the same for mode B.
Next, call these counts nA and nB, and charge the cost accordingly. Here f[i][j] is the flow between user i and user j. The charging works as follows:
At first, every user has an initial mode, A or B. However, they want to reduce the cost, so a part of them would change the mode from A to B, or from B to A. Every change will have an extra cost. If user i changes his mode, he will pay Ci cost.
Now, you should find the minimum cost, that is, the sum of the cost having to pay for and the extra cost of changing mode.
First line, an integer N (N ≤ 10).
Second line, 2^N integers; the ith integer stands for the ith user's initial mode. 0 is mode A, 1 is mode B.
Third line, 2^N integers; the ith integer stands for Ci, the cost for the ith user to change his mode. (0 ≤ Ci ≤ 500 000)
The next 2^N – 1 lines describe the flow between every pair of users: the jth number of the (i + 3)th line (in the whole input) stands for f[i][j + i]. (1 ≤ i ≤ 2^N, 1 ≤ j ≤ 2^N – i, 0 ≤ f[i][j] ≤ 500)
2
1 0 1 0
2 2 10 9
10 1 2
2 1
3
8
Explanation: Changing the user 1's mode from B to A will minimumize the cost.
#include<cstdio>
#include<iostream>
#include<cmath>
#include<cstring>
#include<cstdlib>
#include<vector>
#include<queue>
#include<list>
#include<stack>
#include<set>
#include<map>
#include<string>
#include<algorithm>
using namespace std;
#define eps 1e-12
#define maxn 1100
#define maxm 1100000
#define inf 0x3f3f3f3f
#define PB push_back
#define MP make_pair
typedef struct{
int len,sta;
}node;
node r[10],t[10];
int next[10][70000],pck[1010][3];
int mp[120][5];
bool f[5];
int main()
{
//freopen("input.txt","r",stdin);
//freopen("output.txt","w",stdout);
int n,m,i,j,k,l,p,q,ans,tmp,now,cnt=1;
memset(f,true,sizeof(f));
i=0;
// enumerate all 5! = 120 permutations of {0,1,2,3,4} into mp[][]
for(j=0;j<5;j++)
{
f[j]=false;
for(k=0;k<5;k++)
if(f[k])
{
f[k]=false;
for(l=0;l<5;l++)
if(f[l])
{
f[l]=false;
for(p=0;p<5;p++)
if(f[p])
{
f[p]=false;
for(q=0;q<5;q++)
if(f[q])
{
mp[i][0]=j;
mp[i][1]=k;
mp[i][2]=l;
mp[i][3]=p;
mp[i][4]=q;
i++;
}
f[p]=true;
}
f[l]=true;
}
f[k]=true;
}
f[j]=true;
}
while(scanf("%d %d",&n,&m)!=EOF)
{
if(n==0&&m==0) break;
for(i=0;i<n;i++)
{
scanf("%d",&r[i].len);
r[i].sta=1;
}
for(i=0;i<m;i++)
{
scanf("%d %d %d",&pck[i][0],&pck[i][1],&pck[i][2]);
pck[i][0]--;
}
ans=1000000000;
for(i=0;i<120;i++)
{
memcpy(t,r,sizeof(node)*10);
memset(next,0,sizeof(next));
tmp=now=0;
for(j=0;j<n;j++) f[j]=true;
for(;j<5;j++) f[j]=false;
for(j=0;j<5;j++)
if(f[mp[i][j]])
{
p=mp[i][j];
break;
}
for(k=0;k<m;k++)
{
next[pck[k][0]][pck[k][1]]=pck[k][2];
if(pck[k][0]==p&&pck[k][1]==t[p].sta)
{
for(q=pck[k][2]+1;next[p][q];q=next[p][q]+1);
now-=(q-pck[k][2]-1);
t[p].sta=q;
if(q<=t[p].len) continue;
f[p]=false;
while(1)
{
for(j+=1;j<5;j++)
if(f[mp[i][j]])
{
p=mp[i][j];
break;
}
if(j==5) break;
for(q=1;next[p][q];q=next[p][q]+1);
now-=(q-t[p].sta);
t[p].sta=q;
if(q<=t[p].len) break;
f[p]=false;
}
}
else{
now+=(pck[k][2]-pck[k][1]+1);
tmp=max(tmp,now);
}
}
ans=min(ans,tmp);
}
printf("Case %d: %d\n\n",cnt++,ans);
}
}
1. Why is the i found by the for loop necessarily prime? The factorization theorem says n = p1^a1 * p2^a2 * p3^a3 * … * pk^ak, yet each time you take the remainder you use the original m, which is n.
2. The Josephus problem doesn't need such a long write-up — it is a very mature problem; a divide-and-conquer approach solves it in O(n). If you are interested, see the first chapter of Concrete Mathematics, which derives a whole series of results about the Josephus problem. Very elegant.
|
{}
|
Chapter 8.2, Problem 18ES
### Discrete Mathematics With Applicat...
5th Edition
EPP + 1 other
ISBN: 9781337694193
Textbook Problem
# In 9-33, determine whether the given relation is reflexive, symmetric, transitive, or none of these. Justify your answer. Define a relation Q on R as follows: For all real numbers x and y, x Q y ⇔ x − y is rational.
To determine
To justify whether the given relation is reflexive, symmetric, transitive, or none of these.
Explanation
Given information:
Define a relation Q on R as follows: For all real numbers x and y, x Q y ⇔ x - y is rational.
Calculation:
Let us consider the set below.
Q = {(x, y) ∈ R × R | x − y is rational}
Reflexive:
The relation Q is reflexive if (a, a) ∈ Q for every element a ∈ R.
We note that Q contains (x, x) for every real x, because x − x = 0 and 0 is rational (since 0 = 0/1). Thus x Q x for all x ∈ R, and Q is reflexive.
Symmetric:
The relation Q on the set R is symmetric if (b, a) ∈ Q whenever (a, b) ∈ Q.
Let us assume that (a, b) ∈ Q. By definition of Q:
a − b is rational
By the definition of rational, there exist integers c and d, with d ≠ 0, such that:
a − b = c/d
Multiply each side by −1:
−(a − b) = −c/d
Distribute the minus sign:
b − a = −c/d
This last equality implies that b − a is rational, since −c/d is rational (−c is an integer and d is a nonzero integer). Hence (b, a) ∈ Q, and thus Q is symmetric.
Transitive:
Suppose (a, b) ∈ Q and (b, c) ∈ Q, so a − b and b − c are both rational. Then a − c = (a − b) + (b − c) is a sum of two rationals and hence rational, so (a, c) ∈ Q and Q is transitive.
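The properties can also be spot-checked numerically. A small illustrative sketch: encode each real number as p + q√2 with p, q rational, so that x − y is rational exactly when the √2-coefficients agree (the encoding and sample values are my own, chosen only for the demonstration):

```python
from fractions import Fraction
from itertools import product

# Encode a real number p + q*sqrt(2) as the pair (p, q) with p, q rational.
# x - y is rational exactly when the sqrt(2)-coefficients of x and y agree.
def related(x, y):
    return x[1] == y[1]

samples = [
    (Fraction(1, 2), Fraction(0)),  # 1/2
    (Fraction(3), Fraction(1)),     # 3 + sqrt(2)
    (Fraction(-1), Fraction(1)),    # -1 + sqrt(2)
    (Fraction(0), Fraction(2)),     # 2*sqrt(2)
]

assert all(related(x, x) for x in samples)                          # reflexive
assert all(related(y, x)
           for x, y in product(samples, repeat=2) if related(x, y))  # symmetric
assert all(related(x, z)
           for x, y, z in product(samples, repeat=3)
           if related(x, y) and related(y, z))                       # transitive
print("reflexive, symmetric, transitive all hold on the samples")
```

Of course, a finite check is not a proof; it only confirms the argument above on sample values.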
|
{}
|
SetVelocity bypasses physics (see the call to SetVelocityMaximal underneath), so forces were not computed based on the target velocity. To do this in a physically realistic fashion, write a force-based PID controller that takes the velocity target as feedback; GetForceTorque should then give you more physically meaningful readings.
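A rough sketch of the suggested approach (not actual Gazebo API code — the toy plant and the gain values here are my own illustrative assumptions; a real implementation would read the measured joint velocity from the physics engine each step and apply the computed force through the joint's force interface):

```python
class VelocityPID:
    """Force-based PID controller: drives a body toward a target velocity."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target_vel, measured_vel, dt):
        error = target_vel - measured_vel
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: a unit mass whose velocity responds to the applied force.
pid = VelocityPID(kp=5.0, ki=2.0, kd=0.0)
vel, dt = 0.0, 0.01
for _ in range(1000):
    force = pid.step(target_vel=2.0, measured_vel=vel, dt=dt)
    vel += force * dt  # F = m*a with m = 1
print(round(vel, 3))  # converges near the 2.0 target
```

Because the force is now what drives the motion, a force/torque sensor on the joint reports the actual actuation effort rather than a value bypassed by the kinematic velocity override.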
|
{}
|
Variational Bayesian Independent Component Analysis
Neil D. Lawrence, University of Sheffield
Christopher M. Bishop, Microsoft Research, Cambridge
Abstract
Blind separation of signals through the info-max algorithm may be viewed as maximum likelihood learning in a latent variable model. In this paper we present an alternative approach to maximum likelihood learning in these models, namely Bayesian inference. It has already been shown how Bayesian inference can be applied to determine latent dimensionality in principal component analysis models @Bishop:bayesPCA98. In this paper we derive a similar approach for removing unnecessary source dimensions in an independent component analysis model. We present results on a toy data-set and on some artificially mixed images.
@TechReport{lawrence-ica99, title = {Variational Bayesian Independent Component Analysis}, author = {Neil D. Lawrence and Christopher M. Bishop}, year = {2000}, month = {00}, edit = {https://github.com/lawrennd//publications/edit/gh-pages/_posts/2000-01-01-lawrence-ica99.md}, url = {http://inverseprobability.com/publications/lawrence-ica99.html}, abstract = {Blind separation of signals through the info-max algorithm may be viewed as maximum likelihood learning in a latent variable model. In this paper we present an alternative approach to maximum likelihood learning in these models, namely Bayesian inference. It has already been shown how Bayesian inference can be applied to determine latent dimensionality in principal component analysis models @Bishop:bayesPCA98. In this paper we derive a similar approach for removing unecessary source dimensions in an independent component analysis model. We present results on a toy data-set and on some artificially mixed images.}, key = {Lawrence:ICA99}, linkpsgz = {http://www.thelawrences.net/neil/bica_report.ps.gz}, OPTgroup = {} }
Lawrence, N.D. & Bishop, C.M. (2000). Variational Bayesian Independent Component Analysis.
|
{}
|
# Do all liquids boil in a vacuum?
Water boils at positive temperatures when put into a vacuum. Is this the case with all liquids, e.g. mercury?
-
Superfluid liquids technically do not boil with bubbles, but they nevertheless evaporate from the surface. When helium stops boiling, it is a clear sign of the superfluid transition. – firtree Aug 24 '14 at 11:44
Is there any substance with condensed (solid or liquid) equilibrium phase at zero pressure?
No, because of statistical physics.
Let's consider two things. (1) The potential energy of interaction between molecules. (2) The thermal energy distribution for molecules.
The potential energy of interaction can generally take any form, with attraction, repulsion, and extrema, but it is always $0$ at infinite distance $r\to\infty$, which means two molecules become free when they are far enough apart. Any bound state then has energy below $0$, and any state with energy above $0$ is unbound, such that the molecules move away from each other even if they were close at some instant.
The thermal energy distribution at equilibrium always has some kind of "tail" in high energies, in the form of $\exp(-E/k_B T)$. It is the common feature of Fermi-Dirac, Bose-Einstein and Maxwell-Boltzmann distributions, while all differences lie in low energies. That said, there is no energy limit above which the probability for a molecule would be $0$.
These two facts together say that any condensed phase at zero pressure and $T>0$ would lose molecules, never reaching equilibrium. Though the rate of this process could be extremely slow and experimentally irrelevant.
What makes a condensed phase stable at some non-zero pressure? There, incoming external molecules always compensate for the loss of evaporated ones.
Two other cases are worth mentioning. The first is atoms and atomic nuclei: aren't they stable? The same reasoning applies to them, but their binding energy is rather high, so the probability of evaporating even one particle at room temperature is extremely low (the exponential diminishes very quickly). At higher temperatures, though, they reach equilibrium in plasma and nucleonic plasma respectively. These plasmas can be made at arbitrarily low pressure, and then there would be no atoms or nuclei.
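To get a feel for how steeply the $\exp(-E/k_B T)$ tail suppresses particle loss, here is a rough order-of-magnitude sketch (the binding energies below are illustrative round numbers, not precise values for any particular substance):

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def boltzmann_factor(binding_energy_ev, temperature_k):
    """Relative probability of a particle having enough thermal energy to escape."""
    return math.exp(-binding_energy_ev / (K_B * temperature_k))

# ~0.4 eV: rough scale for evaporating a molecule from a liquid
print(boltzmann_factor(0.4, 300))   # roughly 2e-7 at room temperature
# ~8 MeV: typical nuclear binding energy per nucleon
print(boltzmann_factor(8e6, 300))   # underflows to 0.0 -- effectively never
```

The first number is small but non-zero, which is why liquids evaporate at an observable rate; the second is so small it underflows a float, which is why nuclei are stable for all practical purposes at room temperature.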
The second case is quarks in a nucleon. Here the energy of interaction does not vanish at infinity, so nucleons are true bound systems (at finite temperature). This is not independent of temperature, though: at some very high temperature there exists a sea of free gluons and quark-antiquark pairs, and the energy of interaction changes, becoming non-binding at infinity, as happens in a quark-gluon plasma.
-
So, no solids or liquids can exist in space. Which is nice to know. Everything from water to a piece of tungsten, the moon, a neutron star, even a black hole and an atomic nucleus will evaporate eventually. However, it'll take some time. In fact, any of the above except for water are for all practical means "stable". Are there any liquids that are "stable" for a very long time in vacuum? – mic_e Nov 6 '15 at 19:10
@mic_e Well, that question is much harder :-) Actually the liquid phase itself is "unexpected" from the physical-theoretical point of view, and so it is hard to discuss anything detailed about liquids. So, I don't know :-) – firtree Nov 7 '15 at 20:04
The boiling point of a liquid depends on temperature and pressure. If the pressure of the medium surrounding the liquid increases, the boiling point of the liquid increases as well. Since a perfect vacuum has no pressure, all liquids boil in a perfect vacuum. However, there is no such thing as a perfect vacuum. If you are asking whether all liquids boil in space, the answer is no: if there is not sufficient heat, some liquids won't boil in space. In addition, if the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known, the boiling point can be calculated by using the Clausius–Clapeyron equation thus:
$T_B = \Bigg(\frac{1}{T_0}-\frac{\,R\,\ln(\frac{P}{P_0})}{\Delta H_{vap}}\Bigg)^{-1}$
where:
$T_B$= the boiling point at the pressure of interest (in K)
$R$ = the ideal gas constant, 8.314 J · K−1 · mol−1
$P$ = the vapour pressure of the liquid at the pressure of interest, either atm or kPa depending on the standard pressure used
$P_0$ = some pressure at which the corresponding boiling temperature $T_0$ is known (data are usually available at 1 atm or 100 kPa)
$\Delta H_{vap}$ = the heat of vaporization of the liquid, J · mol−1, at $P_0$
$T_0$ = the known boiling temperature at $P_0$, in K
$\ln$ = the natural logarithm
You can calculate the boiling point of the liquid this way; then you should convert it to heat, since the vacuum has no molecules (and thus no temperature of its own). If the heat present in the medium is larger than the heat you've calculated, then you can say the liquid boils in that medium.
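Plugging in standard reference values for water ($T_0 = 373.15$ K at $P_0 = 101\,325$ Pa, $\Delta H_{vap} \approx 40\,660$ J/mol — textbook numbers, used here only as a sketch) shows how the boiling point drops with pressure:

```python
import math

R = 8.314  # ideal gas constant, J/(mol*K)

def boiling_point(p, p0, t0, dh_vap):
    """Clausius-Clapeyron estimate of the boiling point at pressure p."""
    return 1.0 / (1.0 / t0 - R * math.log(p / p0) / dh_vap)

# Water: boils at 373.15 K at 1 atm; dH_vap ~ 40660 J/mol
t = boiling_point(p=611.0, p0=101325.0, t0=373.15, dh_vap=40660.0)
print(round(t, 1))  # roughly 268 K: near triple-point pressure, water boils around freezing
```

Note that the Clausius–Clapeyron form assumes $\Delta H_{vap}$ is constant over the temperature range, so the estimate gets rougher the further the pressure is from $P_0$.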
-
In the limit $P\to 0$ the formula gives $T_B\to 0$. The liquid itself does have some heat so it boils until the rest is frozen into ice, and then it continues to sublimate (evaporate being dry). – firtree Aug 24 '14 at 12:04
|
{}
|
# Two Constraints, One Inequality by Qing Song
### Solution 1
Clearly, none of $a,b,c$ is zero. Let $P(x)=(x-a)(x-b)(x-c)$ and $\displaystyle f(x)=\frac{P(x)}{x},$ the latter defined on $(-\infty,0)\cup (0,\infty),$ both with the roots $a,b,c.$
By Viète's theorem, $P(x)=x^3+2x^2+qx+4,$ for some real $q.$
Observe that $\displaystyle f'(x)=\frac{2(x-1)(x^2+2x+2)}{x^2},$ implying that $f'(x)\lt 0$ for $x\lt 0,$ making $f$ strictly decreasing on $(-\infty,0).$ With $f(-\infty)=\infty$ and $f(0^{-})=-\infty,$ $f$ has a unique root on $(-\infty,0),$ so that on $(0,\infty)$ it has two roots, separated by $x=1$ and hence distinct. Since $f(0^+)=f(\infty)=\infty,$ $f(1)\lt 0.$ (For the record, also $P(1)\lt 0$.)
So, WLOG, assume $a,b\gt 0$ and $c\lt 0.$ Then $c=-2-a-b,$ implying $|c|=a+b+2.$ The required inequality reduces to $a+b\ge 2.$ Now, verifiably, $P(-4)=-4P(1)\gt 0$ and, subsequently, $c\le -4.$
Finally, $-2=a+b+c\le a+b-4,$ so that $a+b\ge 2,$ and the required inequality follows.
Equality is attained for $a=b=1$ and $c=-4$ and permutations.
### Solution 2
If all $a,b,c$ are negative (none can be zero), then $2=|a|+|b|+|c|\ge 3\sqrt[3]{|abc|},$ implying that $|abc|\le 1,$ in contradiction to $abc=-4.$ Hence, two are positive and one is negative. Assume $a,b\gt 0$ and $c\lt 0.$
If $a+b\lt 2$ then $ab\lt 1$ so that $c\le -4.$ But from $a+b+c=-2$ it follows that $a+b\gt 2.$ A contradiction. Hence, $a+b\ge 2,$ implying $c\le -4.$ And, finally, $|a|+|b|+|c|\ge 6.$
### Solution 3
All three $a,b,c$ could not be negative. Indeed, using $x,y,z$ as their absolute values, on one hand $\displaystyle \frac{x+y+z}{3}=\frac{2}{3}\lt 1,$ while, on the other hand, $(xyz)^{\frac{1}{3}}=4^{\frac{1}{3}}\gt 1.$
Continue as above.
### Solution 4
First replace $a,b,c$ by their negatives, $x,y,z,$ so that the constraints become
$x+y+z=2$ and $xyz=4$
And let $F(x,y,z)=|x|+|y|+|z|.$ Note that, by the AM-GM inequality, $x,y,z$ cannot all be positive. WLOG, assume $x\gt 0\gt y,z.$ Then $F(x,y,z)=x-y-z=2x-2.$
Claim: $x\ge 4.$
Assume to the contrary that $x\lt 4.$ Then $0 \lt -y-z=x-2\lt 2$ and
$yz=(-y)(-z)\le (-y)(2+y)\le 1,$
for all $0\lt -y\le 2.$ Multiplying this by $x$ gives $xyz\le x\lt 4,$ contrary to the constraint. Thus, $x\ge 4$ and $F=2(x-1)\ge 6.$
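The claim admits a quick numerical sanity check. A sketch (the parametrization is my own: take the two positive values $a,b$ with $c=-2-a-b$, enforce $abc=-4$ by solving the resulting quadratic $ab^2+a(2+a)b-4=0$ for $b$, and scan $a$ over a grid):

```python
import math

def total(a):
    """Given a > 0, solve a*b*(2 + a + b) = 4 for b > 0 and return |a|+|b|+|c|."""
    # abc = -4 with c = -(2 + a + b) gives the quadratic a*b^2 + a*(2+a)*b - 4 = 0
    disc = (a * (2 + a)) ** 2 + 16 * a
    b = (-a * (2 + a) + math.sqrt(disc)) / (2 * a)
    c = -2 - a - b          # so that a + b + c = -2
    return a + b + abs(c)   # a, b > 0 and c < 0 here

best = min(total(1 + k / 10000) for k in range(-5000, 20000))
print(round(best, 6))  # minimum is 6, attained at a = b = 1, c = -4
```

The grid minimum lands exactly on $6$ at $a=1$ (where $b=1$, $c=-4$), matching the equality case found analytically.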
### Acknowledgment
This problem by Qing Song (Beijing) was kindly posted at the CutTheKnotMath facebook page by Leo Giugiuc, along with a solution of his own (Solution 1). Solution 2 is by Christopher D. Long; Solution 3 is by Dan Davies; Solution 4 is by Sam Walters.
|
{}
|
Main content
# Solving equations graphically
CCSS Math: HSA.REI.D.11
Learn a clever method for approximating the solution of any equation.
|
{}
|
## On generalized symmetric Finsler spaces.(English)Zbl 1211.53090
Authors’ abstract: We study generalized symmetric Finsler spaces. We first study some existence theorems, then we consider their geometric properties and prove that any such space can be written as a coset space of a Lie group with an invariant Finsler metric. Finally we show that each generalized symmetric Finsler space is of finite order and those of even order reduce to symmetric Finsler spaces and hence are Berwaldian.
|
{}
|
$$\require{cancel}$$
# 5.4.1: Field of a Point Mass
Equation 5.3.1, together with the definition of field strength as the force experienced by unit mass, means that the field at a distance $$r$$ from a point mass $$M$$ is
$g = \frac{GM}{r^2} \quad \text{N kg}^{-1} \text{ or m s}^{-2} \label{5.4.1} \tag{5.4.1}$
In vector form, this can be written as
$\textbf{g} = -\frac{GM}{r^2} \hat{\textbf{r}} \quad \text{N kg}^{-1} \text{ or m s}^{-2} \label{5.4.2} \tag{5.4.2}$
Here $$\hat{\textbf{r}}$$ is a dimensionless unit vector in the radial direction.
It can also be written as
$\textbf{g} = -\frac{GM}{r^3} \textbf{r} \quad \text{N kg}^{-1} \text{ or m s}^{-2} \label{5.4.3} \tag{5.4.3}$
Here $$\textbf{r}$$ is a vector of magnitude $$r$$ − hence the $$r^3$$ in the denominator.
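Equation 5.4.3 translates directly into code. A small sketch (using the commonly quoted value $GM \approx 3.986\times10^{14}\ \text{m}^3\,\text{s}^{-2}$ for Earth, stated here only for illustration):

```python
import math

GM_EARTH = 3.986e14  # m^3 s^-2

def field(r_vec, gm=GM_EARTH):
    """Gravitational field g = -(GM / r^3) * r_vec, as in equation 5.4.3."""
    r = math.sqrt(sum(x * x for x in r_vec))
    return tuple(-gm / r**3 * x for x in r_vec)

# At Earth's surface (r ~ 6.371e6 m, along the x-axis):
g = field((6.371e6, 0.0, 0.0))
print(round(-g[0], 2))  # about 9.82 m/s^2, directed back toward the mass
```

The minus sign gives the field its inward (attractive) direction, consistent with the unit vector $\hat{\textbf{r}}$ pointing radially outward.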
|
{}
|
# Observational constraints on the origin of the elements II. 3D non-LTE formation of Ba ii lines in the solar atmosphere
### Abstract
Context. The pursuit of more realistic spectroscopic modelling and consistent abundances has led us to begin a new series of papers designed to improve current solar and stellar abundances of various atomic species. To achieve this, we have begun updating the three-dimensional (3D) non-local thermodynamic equilibrium (non-LTE) radiative transfer code, Multi3D, and the equivalent one-dimensional (1D) non-LTE radiative transfer code, MULTI. Aims. We examine our improvements to these codes by redetermining the solar barium abundance. Barium was chosen for this test as it is an important diagnostic element of the s-process in the context of galactic chemical evolution. New Ba II + H collisional data for excitation and charge exchange reactions computed from first principles had recently become available and were included in the model atom. The atom also includes the effects of isotopic line shifts and hyperfine splitting. Method. A grid of 1D LTE barium lines was constructed with MULTI and fit to the four Ba II lines available to us in the optical region of the solar spectrum. Abundance corrections were then determined in 1D non-LTE, 3D LTE, and 3D non-LTE. A new 3D non-LTE solar barium abundance was computed from these corrections. Results. We present for the first time the full 3D non-LTE barium abundance of $A({\rm Ba})=2.27\pm0.02\pm0.01$, which was derived from four individual fully consistent barium lines. Errors here represent the systematic and random errors, respectively.
|
{}
|
## Elementary Geometry for College Students (7th Edition) Clone
(a) $RN = 7.3$ (b) $RP = 5.3$
(a) The diagonals of a parallelogram bisect each other. Therefore the point R is the midpoint of $\overline{QN}$ We can find $RN$: $RN = QR = 7.3$ (b) The diagonals of a parallelogram bisect each other. Therefore the point R is the midpoint of $\overline{MP}$ We can find $RP$: $RP = \frac{MP}{2} = \frac{10.6}{2} = 5.3$
|
{}
|
2015-06-19 Andrew B. Collier
So, after upgrading to R 3.2.0 on my EC2 instance, I was installing newer versions of various packages and I ran into a problem with dplyr: virtual memory exhausted!
Seemed like a good time to add some swap.
Adding Swap and Turning it On
First make some swap files. I am in favour of creating a few smaller swap files rather than a single monolithic one.
sudo dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
sudo dd if=/dev/zero of=/var/swap.2 bs=1M count=1024
sudo dd if=/dev/zero of=/var/swap.3 bs=1M count=1024
Another way to create swap files is by using fallocate, which actually provides a more intuitive interface.
sudo fallocate -l 1G /var/swap.1
To make sure that these files are secure, change the access permissions.
sudo chmod 600 /var/swap.[123]
Next you’ll set up a swap area on each of these files.
sudo /sbin/mkswap /var/swap.1
sudo /sbin/mkswap /var/swap.2
sudo /sbin/mkswap /var/swap.3
Finally activate as many of the swap files as you require to give you sufficient virtual memory. I just needed one for starters.
sudo /sbin/swapon /var/swap.1
If you want the swap space to be activated again after reboot then you will need to add an entry to /etc/fstab. More information can be found here.
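For reference, the /etc/fstab entry for the first swap file could look like this (standard fstab syntax; adjust the path to match your own swap files):

```
/var/swap.1  none  swap  sw  0  0
```

The last two fields (dump and fsck pass) are 0 because swap space is neither backed up by dump nor checked by fsck.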
Turning it Off Again
When you are done with the memory intensive operations you might want to disable the swap files.
sudo /sbin/swapoff /var/swap.1
Here’s everything in a Gist (using just a single swap file and setting its size from an environment variable).
Next: Excel: Copying with Relative Links.
|
{}
|
International
Tables for
Crystallography
Volume B
Reciprocal space
Edited by U. Shmueli
International Tables for Crystallography (2006). Vol. B, ch. 2.5, pp. 283–284.
## Section 2.5.2.7. Imaging of very thin and weakly scattering objects
J. M. Cowleya
#### 2.5.2.7. Imaging of very thin and weakly scattering objects
• (a) The weak-phase-object approximation. For sufficiently thin objects, the effect of the object on the incident-beam amplitude may be represented by the transmission function (2.5.2.16) given by the phase-object approximation. If the fluctuations, , about the mean value of the projected potential are sufficiently small so that , it is possible to use the weak-phase-object approximation (WPOA) where is referred to the average value, . The assumption that only first-order terms in need be considered is the equivalent of a single-scattering, or kinematical, approximation applied to the two-dimensional function, the projected potential of (2.5.2.16). From (2.5.2.42), the image intensity (2.5.2.35) becomes where the spread function s(xy) is the Fourier transform of the imaginary part of T(uv), namely .
The optimum imaging condition is then found, following Scherzer (1949), by specifying that the defocus should be such that is close to unity for as large a range of as possible. This is so for a negative defocus such that decreases to a minimum of about before increasing to zero and higher as a result of the fourth-order term of (2.5.2.33) (see Fig. 2.5.2.3). This optimum, 'Scherzer defocus' value is given by or
Figure 2.5.2.3: The functions , the phase factor for the transfer function of a lens given by equation (2.5.2.33), and for the Scherzer optimum defocus condition, relevant for weak phase objects, for which the minimum value of is .
The resolution limit is then taken as corresponding to the value of when becomes zero, before it begins to oscillate rapidly with U. The resolution limit is then For example, for mm and Å (200 keV), Å.
Within the limits of the WPOA, the image intensity can be written simply for a number of other imaging modes in terms of the Fourier transforms and of the real and imaginary parts of the objective-lens transfer function , where r and u are two-dimensional vectors in real and reciprocal space, respectively.
For dark-field TEM images, obtained by introducing a central stop to block out the central beam in the diffraction pattern in the back-focal plane of the objective lens, Here, as in (2.5.2.42), should be taken to imply the difference from the mean potential value, .
For bright-field STEM imaging with a very small detector placed axially in the central beam of the diffraction pattern (2.5.2.39) on the detector plane, the intensity, from (2.5.2.41), is given by (2.5.2.43).
For a finite axially symmetric detector, described by , the image intensity is where is the Fourier transform of (Cowley & Au, 1978).
For STEM with an annular dark-field detector which collects all electrons scattered outside the central spot of the diffraction pattern in the detector plane, it can be shown that, to a good approximation (valid except near the resolution limit) Since is the intensity distribution of the electron probe incident on the specimen, (2.5.2.48) is equivalent to the incoherent imaging of the function .
Within the range of validity of the WPOA or, in general, whenever the zero beam of the diffraction pattern is very much stronger than any diffracted beam, the general expression (2.5.2.36) for the modifications of image intensities due to limited coherence may be conveniently approximated. The effect of integrating over the variables , may be represented by multiplying the transfer function T(u, v) by so-called 'envelope functions' which involve the Fourier transforms of the functions and .
For example, if is approximated by a Gaussian of width (at of the maximum) centred at and is a circular aperture function the transfer function for coherent radiation is multiplied by where
• (b) The projected charge-density approximation. For very thin specimens composed of moderately heavy atoms, the WPOA is inadequate. Within the region of validity of the phase-object approximation (POA), more complicated relations analagous to (2.5.2.43) to (2.5.2.47) may be written. A simpler expression may be obtained by use of the two-dimensional form of Poisson's equation, relating the projected potential distribution to the projected charge-density distribution . This is the PCDA (projected charge-density approximation) (Cowley & Moodie, 1960),
This is valid for sufficiently small values of the defocus , provided that the effects of the spherical aberration may be neglected, i.e. for image resolutions not too close to the Scherzer resolution limit (Lynch et al., 1975). The function includes contributions from both the positive atomic nuclei and the negative electron clouds. For underfocus ( negative), single atoms give dark spots in the image. The contrast reverses with defocus.
### References
Cowley, J. M. & Au, A. Y. (1978). Image signals and detector configurations for STEM. In Scanning electron microscopy, Vol. 1, pp. 53–60. AMF O'Hare, Illinois: SEM Inc.Google Scholar
Cowley, J. M. & Moodie, A. F. (1960). Fourier images. IV. The phase grating. Proc. Phys. Soc. London, 76, 378–384.Google Scholar
Lynch, D. F., Moodie, A. F. & O'Keefe, M. A. (1975). n-Beam lattice images. V. The use of the charge-density approximation in the interpretation of lattice images. Acta Cryst. A31, 300–307.Google Scholar
Scherzer, O. (1949). The theoretical resolution limit of the electron microscope. J. Appl. Phys. 20, 20–29.Google Scholar
|
{}
|
Chapter 9 - Section 9.3 - Exponential Functions - Vocabulary, Readiness & Video Check: 3
yes
Work Step by Step
Yes; $y=2^{x}$ is a function because its graph passes the vertical line test.
|
{}
|
# f – Block Elements
## f – Block Elements
f-Block Elements :
→ Twenty-eight elements of the periodic table, i.e. atomic numbers 58 to 71 (14 elements) and atomic numbers 90 to 103 (14 elements), which are placed at the bottom of the periodic table in two horizontal rows, are known as f-block elements. In these elements the last differentiating electron enters the f-orbitals of the (n – 2) shell.
→ These elements are also known as inner-transition elements because the antepenultimate energy shell (the one before the penultimate), i.e. the (n – 2) f-orbitals, lies comparatively deep within the kernel. In these elements the f-subshell is incompletely filled both in the atomic and in the ionic state. The f-block has two series:
• Lanthanide series (fourteen elements after Lanthanum)
• Actinide series (fourteen elements after Actinium)
→ There are close similarities between the properties of lanthanum (57La) and the fourteen lanthanide elements, hence lanthanum is also included in the study of the lanthanides. The general symbol for these elements is Ln.
→ Similarly, the properties of actinium and the actinides are so similar that the fifteen elements from Ac to Lr can be considered together. If these elements were assigned separate positions in the main body of the table in order of their increasing atomic numbers, the symmetry of the whole arrangement would be disrupted.
→ For this reason, the two series, i.e. the 4f-series and the 5f-series, are placed at the bottom of the periodic table. These two series, the 4f-series (lanthanides) and the 5f-series (actinides), constitute one block of elements, i.e. the f-block. The general electronic configuration of f-block elements is
(n – 2)f^(1–14) (n – 1)d^(0–1) ns^2
|
{}
|
# Circumference
Area and Circumference of Circles
You can use regular polygons with an increasing number of sides to help explain why a circle of radius 1 unit has an area of $\pi \ un^2$ . Where does the “ $r^2$ ” come from in the formula for the area of a circle?
#### Watch This
http://www.youtube.com/watch?v=FwAcdhnphPM James Sousa: Introduction to Circles
#### Guidance
A circle is a set of points equidistant from a given point. The radius of a circle, $r$, is the distance from the center of the circle to the circle. All circles are similar.
A regular polygon is a closed figure that is both equilateral and equiangular. As the number of sides of a regular polygon increases, the polygon looks more and more like a circle.
Previously you have learned that the area of a circle with radius $r$ is given by $\pi r^2$ and the circumference of a circle with radius $r$ is given by $2 \pi r$ . In the examples and guided practice, you will derive these formulas by looking at the area and perimeter of regular polygons.
Example A
Find the area of a regular octagon inscribed in a circle with radius 1 unit.
Solution: Break the octagon into 8 congruent triangles. You will find the area of one triangle and multiply that by 8 to find the area of the whole octagon. Draw in the height for one of the triangles. The angle formed by the height and the radius of the circle is $\frac{360^\circ}{16}=22.5^\circ$ . Remember that $360$ is the number of degrees in a full circle.
Now, you can use trigonometry to find the height and base of the triangle.
• $\sin 22.5=\frac{0.5b}{1} \rightarrow b=2 \sin 22.5 \rightarrow b \approx 0.7654$
• $\cos 22.5=\frac{h}{1} \rightarrow h=\cos 22.5 \rightarrow h \approx 0.9239$
The area of the triangle is:
$A=\frac{bh}{2}=\frac{(0.7654)(0.9239)}{2} \approx 0.3536 \ un^2$
Therefore, the area of the octagon is:
$A=8(0.3536) \approx 2.8288 \ un^2$
Example B
Find the area of a regular 60-gon inscribed in a circle with radius 1 unit and the area of a regular 180-gon inscribed in a circle with radius 1 unit.
Solution: While you can't accurately draw a regular 60-gon or a regular 180-gon, you can use the method from Example A to find the area of each.
Regular 60-gon: Divide the polygon into 60 congruent triangles. Consider one of those triangles and draw in its height. You will focus on finding the area of this triangle. The angle formed by the height and the radius of the circle is $\frac{360^\circ}{120}=3^\circ$ .
Use trigonometry to find the height and base of the triangle.
• $\sin 3=\frac{0.5b}{1} \rightarrow b=2 \sin 3 \rightarrow b \approx 0.1047$
• $\cos 3=\frac{h}{1} \rightarrow h=\cos 3 \rightarrow h \approx 0.9986$
The area of the triangle is:
$A=\frac{bh}{2}=\frac{(0.1047)(0.9986)}{2} \approx 0.0523 \ un^2$
Therefore, the area of the 60-gon is:
$A=60(0.0523) \approx 3.138 \ un^2$
Regular 180-gon: Divide the polygon into 180 congruent triangles. Consider one of those triangles and draw in its height. You will focus on finding the area of this triangle. The angle formed by the height and the radius of the circle is $\frac{360^\circ}{360}=1^\circ$ .
Use trigonometry to find the height and base of the triangle.
• $\sin 1=\frac{0.5b}{1} \rightarrow b=2 \sin 1 \rightarrow b \approx 0.0349048$
• $\cos 1=\frac{h}{1} \rightarrow h=\cos 1 \rightarrow h \approx 0.9998477$
The area of the triangle is:
$A=\frac{bh}{2}=\frac{(0.0349048)(0.9998477)}{2} \approx 0.0174482 \ un^2$
Therefore, the area of the 180-gon is:
$A=180(0.0174482) \approx 3.141 \ un^2$
Example C
Using Examples A and B, what happens to the area of the regular polygon as the number of sides increases? Make a conjecture about the area of a circle with radius 1 unit.
Solution: From Examples A and B, you have the following information:
| Number of Sides | 8 | 60 | 180 |
|---|---|---|---|
| Area | 2.8288 | 3.138 | 3.141 |
As the number of sides increases, the regular polygon will get closer and closer to the circle that inscribes it. Therefore, the area of the regular polygon will get closer and closer to the area of the circle. Notice that as the number of sides went from 60 to 180, the area barely changed, staying around 3.14. If you increase the number of sides to 1000, you will find that the area of the regular polygon is still approximately 3.14157. You should recognize these numbers as approximately the value of $\pi$ .
As the number of sides increases, the polygon becomes closer and closer to a circle, and the area gets closer and closer to $\pi \ un^2$ . A conjecture would be that the area of a circle with radius 1 unit is $\pi \ un^2$ .
Concept Problem Revisited
You can use regular polygons with an increasing number of sides to help explain why a circle of radius 1 unit has an area of $\pi \ un^2$ . Remember that all circles are similar. To create another circle with radius $r$ from a circle with radius 1 unit, apply a similarity transformation with scale factor $k=r$ .
The area of the transformed circle is $k^2$ times the area of the original circle. Because the area of the original circle is $\pi$ and the scale factor is equal to $r$ , the area of the transformed circle is $\pi r^2$ . This is one way to explain why the formula for the area of a circle is $\pi r^2$ .
#### Vocabulary
A circle is the set of points equidistant from a given point.
The radius of a circle, $r$ , is the distance from the center of the circle to the circle.
Two figures are similar if a set of transformations will carry one figure onto the other figure.
The circumference of a circle is the perimeter of a circle, the distance around the circle.
#### Guided Practice
1. Find the perimeter of a regular octagon inscribed in a circle with radius 1 unit.
2. Find the perimeter of a regular 60-gon inscribed in a circle with radius 1 unit.
3. Find the perimeter of a regular 180-gon inscribed in a circle with radius 1 unit.
4. Make a conjecture about the circumference of a circle with radius 1 unit.
1. From Example A, the base of one triangle was $b=2 \sin 22.5 \rightarrow b \approx 0.7654$ . Therefore, the perimeter of the octagon is $P \approx 8(0.7654)=6.1232 \ un$ .
2. From Example B, the base of one triangle was $b=2 \sin 3 \rightarrow b \approx 0.1047$ . Therefore, the perimeter of the 60-gon is $P \approx 60(0.1047)=6.282 \ un$ .
3. From Example B, the base of one triangle was $b=2 \sin 1 \rightarrow b \approx 0.0349048$ . Therefore, the perimeter of the 180-gon is $P \approx 180(0.0349048)=6.283 \ un$ .
4. The perimeters from #1, #2, and #3 are approaching $6.283 \approx 2 \pi$ . A conjecture would be that the circumference of a circle with radius 1 unit is $2 \pi \ un$ .
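The computations above generalize to any $n$ : each of the $n$ triangles has half-angle $\frac{180^\circ}{n}$ , base $2 \sin \frac{180^\circ}{n}$ , and height $\cos \frac{180^\circ}{n}$ . Here is a short illustrative sketch (not part of the original lesson) that reproduces the convergence toward $\pi$ and $2\pi$ :

```python
import math

# Area and perimeter of a regular n-gon inscribed in a circle of radius r,
# built from n congruent isosceles triangles as in Examples A and B.
def ngon_area(n, r=1.0):
    half = math.radians(180.0 / n)  # angle between the height and the radius
    base = 2 * r * math.sin(half)
    height = r * math.cos(half)
    return n * base * height / 2

def ngon_perimeter(n, r=1.0):
    half = math.radians(180.0 / n)
    return n * 2 * r * math.sin(half)

for n in (8, 60, 180, 1000):
    print(n, round(ngon_area(n), 4), round(ngon_perimeter(n), 4))
# As n grows, the areas approach pi and the perimeters approach 2*pi.
```

Small differences from the worked values in the examples come from rounding the base and height before multiplying.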
#### Practice
Consider a regular $n$ -gon inscribed in a circle with radius 1 unit for questions 1-8.
1. What's the measure of the angle between the radius and the height of one triangle in terms of $n$ ?
2. What's the length of the base of one triangle in terms of sine and $n$ ?
3. What's the height of one triangle in terms of cosine and $n$ ?
4. What's the area of one triangle in terms of sine, cosine, and $n$ ?
5. What's the area of the polygon in terms of sine, cosine, and $n$ ?
6. What's the perimeter of the polygon in terms of sine, cosine, and $n$ ?
7. Let $n=10,000$ . What is the area of the polygon? What is the perimeter of the polygon? Use your calculator and your answers to #5 and #6.
8. Let $n=1,000,000$ . What is the area of the polygon? What is the perimeter of the polygon? Are you convinced that the area of a circle with radius 1 unit is $\pi$ and the circumference of a circle with radius 1 unit is $2 \pi$ ?
9. Explain why the area of a regular polygon inscribed in a circle with radius 1 unit gets closer to $\pi$ as the number of sides increases.
10. Explain why the perimeter of a regular polygon inscribed in a circle with radius 1 unit gets closer to $2 \pi$ as the number of sides increases.
11. Use similarity and the fact that the circumference of a circle with radius 1 unit is $2 \pi$ to explain why the formula for the circumference of a circle with radius $r$ is $2 \pi r$ .
12. A circle with radius 3 units is transformed into a circle with radius 5 units. What is the ratio of their areas? What is the ratio of their circumferences?
13. The ratio of the areas of two circles is $25:4$ . The radius of the smaller circle is 2 units. What's the radius of the larger circle?
14. The ratio of the areas of two circles is $25:9$ . The radius of the larger circle is 10 units. What's the radius of the smaller circle?
15. The ratio of the areas of two circles is $5:4$ . The radius of the larger circle is 8 units. What's the circumference of the smaller circle?
|
{}
|
# 0.1 Stoichiometry: laws to moles to molarity (Page 3/3)
Page 3 / 3
## Materials list
3M hydrochloric acid (HCl) solution
sodium bicarbonate ${\text{NaHCO}}_{3}$
methyl orange indicator
## Part 2
• Ask your TA for your assigned molarity; it will range from 0.7 M to 1.2 M.
• First, you need to know the formula for the solute.
• You need the molecular weight of the solute in g/mol.
• Note the volume of solution: 100 mL.
• Remember to ensure that all the solute is dissolved before finally filling with deionised water to the mark on the volumetric flask.
• Take your solution to your TA to check the molarity by titration; record the value and your percent error on your report form.
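The mass of solute needed follows from moles $=$ molarity $\times$ volume and mass $=$ moles $\times$ molecular weight. A minimal sketch of that arithmetic; the NaCl example is purely hypothetical, since your solute and molarity are assigned by your TA:

```python
def grams_needed(molarity, volume_ml, molar_mass):
    """Mass of solute (g): moles = M (mol/L) * V (L); mass = moles * MW (g/mol)."""
    moles = molarity * volume_ml / 1000.0
    return moles * molar_mass

# Hypothetical example: 100 mL of 1.0 M NaCl (molar mass ~58.44 g/mol)
print(round(grams_needed(1.0, 100, 58.44), 2))  # -> 5.84 g
```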
## Pre-lab 2: stoichiometry
(Total 10 points)
Note: In preparing this Pre-Lab you are free to use references and consult with others. However, you may not copy from other students’ work (including your laboratory partner) or misrepresent your own data (see honor code).
Name(Print then sign): ___________________________________________________
Lab Day: ___________________Section: ________TA__________________________
1) Which one of the following is a correct expression for molarity?
A) mol solute/L solvent
B) mol solute/mL solvent
C) mmol solute/mL solution
D) mol solute/kg solvent
E) μmol solute/L solution
2) What is the concentration (M) of KCl in a solution made by mixing 25.0 mL of 0.100 M KCl with 50.0 mL of 0.100 M KCl?
A) 0.100
B) 0.0500
C) 0.0333
D) 0.0250
E) 125
3) How many grams of ${\text{CH}}_{3}\text{OH}$ must be added to water to prepare 150 mL of a solution that is 2.0 M ${\text{CH}}_{3}\text{OH}$ ?
A) 9.6× ${\text{10}}^{3}$
B) 4.3× ${\text{10}}^{2}$
C) 2.4
D) 9.6
E) 4.3
4) The concentration of species in 500 mL of a 2.104 M solution of sodium sulfate is __________ M sodium ion and __________ M sulfate ion.
A) 2.104, 1.052
B) 2.104, 2.104
C) 2.104, 4.208
D) 1.052, 1.052
E) 4.208, 2.104
5) Oxalic acid is a diprotic acid. Calculate the percent of oxalic acid ${H}_{2}{C}_{2}{O}_{4}$ in a solid given that a 0.7984 g sample of that solid required 37.98 mL of 0.2283 M NaOH for neutralization.
A) 48.89
B) 97.78
C) 28.59
D) 1.086
E) 22.83
6) A 31.5 mL aliquot of ${H}_{2}{\text{SO}}_{4}$ (aq) of unknown concentration was titrated with 0.0134 M NaOH (aq). It took 23.9 mL of the base to reach the endpoint of the titration. The concentration (M) of the acid was __________.
A) 0.0102
B) 0.0051
C) 0.0204
D) 0.102
E) 0.227
7) What are the respective concentrations (M) of ${\text{Fe}}^{3+}$ and ${I}^{-}$ afforded by dissolving 0.200 mol ${\text{FeI}}_{3}$ in water and diluting to 725 mL?
A) 0.276 and 0.828
B) 0.828 and 0.276
C) 0.276 and 0.276
D) 0.145 and 0.435
E) 0.145 and 0.0483
8) A 36.3 mL aliquot of 0.0529 M ${H}_{2}{\text{SO}}_{4}$ (aq) is to be titrated with 0.0411 M NaOH (aq). What volume (mL) of base will it take to reach the equivalence point?
A) 93.6
B) 46.8
C) 187
D) 1.92
E) 3.84
9) A 13.8 mL aliquot of 0.176 M ${H}_{3}{\text{PO}}_{4}$ (aq) is to be titrated with 0.110 M NaOH (aq). What volume (mL) of base will it take to reach the equivalence point?
A) 7.29
B) 22.1
C) 199
D) 66.2
E) 20.9
10) A solution is prepared by adding 1.60 g of solid NaCl to 50.0 mL of 0.100 M ${\text{CaCl}}_{2}$ . What is the molarity of chloride ion in the final solution? Assume that the volume of the final solution is 50.0 mL.
A) 0.747
B) 0.647
C) 0.132
D) 0.232
E) 0.547
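Several of the titration questions above (6, 8, and 9) rest on the same equivalence-point relationship: moles of NaOH equal the number of acidic protons times the moles of acid. A sketch of that calculation, checked against question 6 (H2SO4 is diprotic):

```python
def acid_molarity(base_molarity, base_ml, acid_ml, protons=1):
    # At the equivalence point: moles of NaOH = protons * moles of acid
    moles_base = base_molarity * base_ml / 1000.0
    moles_acid = moles_base / protons
    return moles_acid / (acid_ml / 1000.0)

# Question 6: 31.5 mL of H2SO4 titrated with 23.9 mL of 0.0134 M NaOH
print(round(acid_molarity(0.0134, 23.9, 31.5, protons=2), 4))  # -> 0.0051
```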
## Report 2: stoichiometry
(Total 80 points)
Note: In preparing this report you are free to use references and consult with others. However, you may not copy from other students’ work (including your laboratory partner) or misrepresent your own data (see honor code). This is only an advisory template of what needs to be included in your complete lab write-up.
Name(Print then sign): ___________________________________________________
Lab Day: ___________________Section: ________TA__________________________
## Data table
| Mass | Grams |
|---|---|
| Empty 150-mL beaker | |
| ${\text{NaHCO}}_{3}$ in beaker | |
| Mass of ${\text{NaHCO}}_{3}$ | |

| Mass | Grams |
|---|---|
| NaCl plus beaker, first weighing | |
| NaCl plus beaker, second weighing | |
| NaCl plus beaker, third weighing | |
1) The grams of ${\text{NaHCO}}_{3}$ you had in your beaker was ________
2) Calculate how many moles of ${\text{NaHCO}}_{3}$ this mass represents ________
3) Write the molar ratio of ${\text{NaHCO}}_{3}$ to NaCl _______
4) Write the number of moles of NaCl you predict were produced in your experiment.
5) Calculate the mass of NaCl you predict will be produced.
6) Determine, by subtraction, the actual mass of NaCl produced in your experiment.
a) first weighing
b) second weighing
c) third weighing
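Steps 2–5 use the fact that ${\text{NaHCO}}_{3}$ and NaCl are in a 1:1 molar ratio in this reaction. A sketch of the prediction with a hypothetical sample mass (the molar masses below are standard values, not from the lab handout):

```python
MW_NAHCO3 = 84.01  # g/mol, standard molar mass of NaHCO3
MW_NACL = 58.44    # g/mol, standard molar mass of NaCl

def predicted_nacl_grams(nahco3_grams):
    moles_nahco3 = nahco3_grams / MW_NAHCO3
    return moles_nahco3 * MW_NACL  # 1:1 molar ratio of NaHCO3 to NaCl

# Hypothetical 2.00 g sample of NaHCO3
print(round(predicted_nacl_grams(2.00), 3))  # -> 1.391 g
```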
## Discussion questions
1. Compare the numerical value of the observed ratio for maximum yield to the best ratio
## Part 2
Complete the equation for the titration of
${\text{NaHCO}}_{3\text{aq}}+{\text{HCl}}_{\text{aq}}\to$
|
{}
|
# Monogame resolution problem (without scaling sprites)
So I'm working on a project and started working on a scaling system for different resolutions.
I know that if you change the MonoGame resolution the textures won't scale, but I noticed deformed pixels. I just changed PreferredBackBufferWidth to 1920 and PreferredBackBufferHeight to 1080, but my pixels are deformed. I did some research and tested some resolutions, and the only ones that don't work are 1920x1080 and higher (1920x1080 works in fullscreen).
Draw method for sprites:
public virtual void Draw(GameTime gameTime, SpriteBatch spriteBatch, Color color)
{
spriteBatch.Draw(textureImage, position, new Rectangle(
(currentFrame.X * frameSize.X),
(currentFrame.Y * frameSize.Y),
frameSize.X, frameSize.Y),
color, 0, Vector2.Zero, 1, SpriteEffects.None, 0);
}
spriteBatch.Begin:
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.PointClamp, DepthStencilState.Default, RasterizerState.CullCounterClockwise, null);
The first picture is the player head with the new resolution (without scaling the textures!) and the second picture is the player head with the default resolution (you can't see all the pixels at the bottom).
You can see that in the first picture a few pixels are deformed.
Question: Why doesn't 1920x1080 work in windowed mode (it works in fullscreen though) without deformation (I use a 1920x1080 monitor)? All I do is just enlarging the game window, I'm not scaling sprites...
-> is this because I have a 1920x1080 monitor?
• Need more information about how exactly you are drawing this sprite. Specifically, what is the exact Draw() call you are making? – JonBee Dec 31 '15 at 18:29
• @JonBee The question is edited. But I don't think it's a draw method problem, because it works with the default monogame resolution. – Jelle Dec 31 '15 at 18:47
• Alrighty; Is scaleSize set to 1.0f? And do you call ApplyChanges() after setting backbuffer dimensions? – JonBee Dec 31 '15 at 19:01
• @JonBee My project is compatible with both scaleSize 1 and 2 (I can switch between the two), and there's no pixel deformation for both 1 and 2 for the default resolution. There's deformation when the resolution is 1920x1080 with both scaleSize 1 and 2. And yes, I call Graphics.ApplyChanges(); after setting the backbuffer dimensions. (Graphics is my GraphicsDeviceManager.) EDIT: I found out that there's also no deformation for 1920x1000 (not a standard PC resolution), which means that the deformation only occurs for certain resolutions. – Jelle Dec 31 '15 at 22:54
• EDIT: ScaleSize is removed (always 1 now), but the textures are just drawn 2x bigger. – Jelle Jan 3 '16 at 20:28
-> is this because I have a 1920x1080 monitor?
Yes it is. On a 1920x1080 monitor you can't have a windowed game and also have a 1920x1080 canvas. The window title and edges also take up space. The graphics backbuffer is initialized at 1920x1080, but the internal Windows logic scales the canvas to fit inside the window, causing the scaling artifacts. In windowed mode Windows also handles the graphics viewport so it knows where the window is on the desktop; this is a result of that.
You can see this by 'hiding' the windows border:
using System.Windows.Forms; // also add a reference to System.Windows.Forms in the project
Form MyGameForm = (Form)Form.FromHandle(Window.Handle);
MyGameForm.FormBorderStyle = FormBorderStyle.None;
It may look like fullscreen but it still is in windowed mode (just without showing the window border). But now the viewport can take up the entire screen and the artefacts are gone.
Note: This is default behaviour since XP/Vista. There may be code out there that overrides this but that is beyond this question.
• Thanks for the answer. I wasn't sure if the reason you give me now was right (I did think it was because of this). – Jelle Jan 5 '16 at 16:59
The true cause of the artifacts you are seeing is going to be SamplerState.PointClamp. Due to the way this filtering works, you will see some texels taking up more or less space than neighboring ones as the texture is stretched over it's area.
What's more, is that the texels being deformed will likely change as the sprite or your camera moves about on screen.
There are various fixes for this that you can use, but one that I've recently implemented for a tiny pixel-art type game was to render to a smaller-resolution RenderTarget and then scale it to fit the window. The tricky bit becomes determining the size of your RenderTarget. If you want to keep your sharp edges, it needs to be the size of your screen divided by a round number.
So in your case of 1920x1080, to scale the pixels evenly by a factor of 2, you would create a RenderTarget at 960x540, render your sprites unscaled to that, then draw the RenderTarget to the screen, scaling it times 2.
• Thanks for the answer. I use no scaling right now, because I want the game to be pixel perfect on each resolution. So I just enlarge the screen (so smaller resolutions will not see so many as big resolutions). But thanks for the answer regarding pixel scaling. I think I can use this in the future! :) – Jelle Jan 5 '16 at 17:02
|
{}
|
# Incorrect plot with pgfplots and gnuplot
I'm attempting to plot atanh(x), but there seems to be a giant straight line linking the top and bottom points. Am I doing something wrong?
\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[no markers,samples=1001]
\addplot gnuplot {atanh(x)};
\end{axis}
\end{tikzpicture}
\end{document}
You should probably limit the domain to avoid winding back, which seems to be the case here. – percusse Apr 15 '12 at 10:06
Indeed, adding ,domain=-0.99:0.99 to the options helps. – percusse Apr 15 '12 at 10:18
@percusse Please make that an answer. – Joseph Wright Apr 15 '12 at 10:43
@JosephWright Thanks for the reminder, done. – percusse Apr 15 '12 at 12:06
A – perhaps unsatisfactory – solution could be to restrict the y values to a certain domain.
\documentclass{standalone}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
no markers,
samples=1000,
restrict y to domain=-2:2
]
\addplot gnuplot {atanh(x)};
\end{axis}
\end{tikzpicture}
\end{document}
Derived from Section 4.21 of the »pgfplots« manual (p. 272).
Thanks, this is what I did – Harrison Apr 16 '12 at 5:24
I think it's not a problem with gnuplot because tkz-fct uses gnuplot and I get
with
\documentclass{standalone}
\usepackage{tkz-fct}
\begin{document}
\begin{tikzpicture}
\tkzInit[xmin=-1,xmax=1,xstep=.2,
ymin=-5,ymax=5,ystep=1]
\tkzGrid[color=brown,sub,subxstep=.1](-1,-5)(1,5)
\tkzAxeXY
\tkzFct[color=red,samples=1001,domain = -1:1]%
{atanh(\x)}
\end{tikzpicture}
\end{document}
Perhaps there is an option in pgfplots to avoid this problem. The author of pgfplots Christian F. will surely help you.
gnuplot marks the point 0,0 as invalid in the .tkzfct.table (and in the .pgf-plot.table) with an i in the third column. It seems that tkz-fct (and plain TikZ/PGF) uses that column to exclude the point from the plot, while pgfplots does not. – Jake Apr 15 '12 at 10:46
Here is a comparison of different ways of doing the same job with the resulting tables as Jake commented:
First, let's draw a few atanh(x) plots
\documentclass{article}
\usepackage{pgfplots}
\begin{document}
\begin{tikzpicture}
\begin{axis}[no markers,samples=202,restrict y to domain=-2:2]
\addplot gnuplot {atanh(x)};
\end{axis}
\begin{scope}[domain=-2:2,shift={(10,2.5)}]
\draw[ultra thick,red] plot[raw gnuplot,id=raw-example] function{set samples 202; plot atanh(x)};
\end{scope}
\begin{scope}[shift={(0,-7)}]
\begin{axis}[no markers,samples=202,domain=-0.999:0.999]
\addplot gnuplot {atanh(x)};
\end{axis}
\end{scope}
\end{tikzpicture}
\end{document}
What I want to emphasize here is along the lines of Jake's comment. If you look at the resulting .table files of the first two methods, most of the sampling points are discarded, and even in the original pgfplots part the curve is drawn with lines to the origin. Here is a snapshot from the .table file of the raw-example. u and i are the identifiers that confuse pgfplots, and it draws those points anyway.
As we can see, only a limited number of points are of interest; the rest are either outside the domain of the function, -1<x<1, or numerical noise, which is the reason why I manually tuned to 202 samples. However, in my last example I restrict the domain of the function such that all samples are valid and increase the resolution.
Long story short, it's not a good idea to rely on the plotter's ability to identify the domain of the function and filter the points, since that results in dummy sampling points, increased compile time and increased file sizes for no good reason. Although gnuplot and pgfplots are marvellous tools, we should help them a little anyway :)
Thanks, I learnt a lot from your answer – Harrison Apr 18 '12 at 4:12
|
{}
|
# Tools for working with Scrum
Português | Source
Here at the company where I work, we participate in an internal project with a team of 8 people. After attending a lecture on Scrum, we decided to adopt some agile practices in our development team.
We were very excited about the methodology; however, the physical space where we work does not allow for the main tools Scrum suggests (whiteboard, pen, paper and post-its). As a disclaimer, our team has not fully adopted Scrum; our idea is to incrementally adopt some practices so we can improve the way we develop software. This post is the result of research I did to find tools to support our team's work. Let's take a look at the tools.
We will start with a tool that stands for simplicity and ease of use, and I'm a fan of simplicity.
To create a project in Scrumy, just type the URL in your browser: scrumy.com/<project_name> and you will have created your space. The tool lets you create user stories and follow them through the taskboard. There is also a version called Scrumy Pro that allows you to coordinate multiple sprints and multiple backlogs, and provides burndown charts.
Like Scrumy, Pango Scrum is also an online tool, but it is much more complete. It assists with almost all major activities, such as the Product Backlog, the Sprint Backlog, appointment scheduling and sprint planning. The tool does not provide functionality to create the taskboard, because its authors insist that this should really be done with paper and post-its. Pango Scrum is free and, like many Web 2.0 tools, is "in beta."
This is perhaps the best of the tools listed here; after all, it was created by the company of one of the greatest evangelists of agile methodologies, Martin Fowler.
This tool has everything you can expect from a good Scrum tool, and even better, it is all well implemented, with usability that is evident from first contact. In addition to the expected features, the tool provides an integrated wiki, email alerts, RSS feeds and more. However, as one might expect for a tool of that level, it is paid and, according to the price list, not cheap.
This is a very famous tool on the web. It has two versions, Basic (free) and Pro (paid). Unlike the tools above, this one works offline and needs to be installed. ScrumWorks, once installed, consists of two programs: a server and a client. The server is used by the Scrum Master and is where all the Scrum planning tasks are done. The client portion is for team members so they can get task updates.
It seems to be a very good tool, though it lacks a more intuitive interface compared to the other tools already mentioned. The advantage of this tool is that it seems to have a very interesting level of customization for teams that are continually improving their development process.
The last tool is the only one that is open source software. IceScrum is at version 2. It's a very complete tool that has all the features I've mentioned in this post, such as the Product Backlog, the Sprint Backlog, release planning, features for all Scrum roles, planning poker support, many charts and more.
The tool appears to have been very well implemented and seems to be constantly evolving. It is available to anyone interested in a tool without acquisition costs that can add a lot of value to your team's development process.
#### In conclusion...
I really enjoyed all the tools described above, from the simplest to the most complete. I'll try to test them again, especially IceScrum and Scrumy.
I hope I have helped someone who, like me, has always sought to improve the performance of their team when it comes to developing better software. As always, the comments are open for questions, remarks, new suggestions and whatever else you want to put there.
#### Update: August, 18 2011
This tool deserves an honorable mention, as it is the only Brazilian tool that is gaining ground in the national market in the area of agile development. I haven't had a chance to test it yet, but at first glance it seems to be a very good tool.
Scrum Half was developed by GPE, a Brazilian company focused on agile project management. One of the advantages of the tool, besides the most obvious one (the language), is the fact that it is online and thus does not require installation. It is worth taking a look for those interested; the tool is free to use for up to 3 people, and beyond that it has a price that can be found here.
- Stack Overflow: Best Scrum Tools, http://stackoverflow.com/questions/35760/best-scrum-tools
- All Product Backlog and User Story Management Tools for Agile and Scrum Projects, http://www.userstories.com/products
|
{}
|