## College Algebra 7th Edition
$x=-27$
We solve the equation $\sqrt[3]{x}=-3$ by cubing both sides: $x=(-3)^{3}=-27$.
---
## Random Image Segmentation
I’ve been playing around today with an image decomposition method. Given some observed image $\bf y$, one seeks an image $\bf x$ so as to minimize
$\sum_i (x_i - y_i)^2 + \lambda \sum_{(i,j)} |x_i-x_j|$
where the first sum is over all pixels $i$, and the second sum is over all neighboring pairs of pixels $(i,j)$. (For simplicity here, I imagine that images are just long vectors; the grid structure enters only through which pairs are considered neighbors.) This minimization has the interesting property that, at a solution, neighboring pixels in $\bf x$ often have exactly the same value. Thus, one can see the optimization as segmenting the image into piecewise-constant regions.
I tried using the (excellent) CVX toolbox to do the optimization. Though this worked, it was quite slow. I then derived a little iteratively reweighted least-squares algorithm that is much faster.
(Update Jan 9, 2009: Code for doing the optimization is here.)
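For concreteness, here is a minimal sketch of the kind of IRLS iteration described — my own reconstruction in Python/SciPy, not the linked code, and `segment_irls` with its parameters is an invented name. It assumes a grayscale image stored as a 2-D NumPy array with intensities in [0,1]:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def segment_irls(y, lam=1.0, n_iter=30, eps=1e-4):
    """Approximately minimize sum_i (x_i - y_i)^2 + lam * sum_(i,j) |x_i - x_j|
    over horizontally/vertically neighboring pixels via IRLS."""
    h, w = y.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    # One row of D per neighboring pair (i,j): (D x)_k = x_i - x_j
    pairs = np.vstack([
        np.stack([idx[:, :-1].ravel(), idx[:, 1:].ravel()], axis=1),  # horizontal
        np.stack([idx[:-1, :].ravel(), idx[1:, :].ravel()], axis=1),  # vertical
    ])
    m = len(pairs)
    rows = np.concatenate([np.arange(m), np.arange(m)])
    cols = np.concatenate([pairs[:, 0], pairs[:, 1]])
    vals = np.concatenate([np.ones(m), -np.ones(m)])
    D = sp.coo_matrix((vals, (rows, cols)), shape=(m, n)).tocsr()
    x = y.ravel().copy()
    I = sp.identity(n, format='csr')
    for _ in range(n_iter):
        # Majorize each |d| by d^2 / (2(|d_prev| + eps)) + const and minimize,
        # which gives the sparse linear system (I + (lam/2) D' W D) x = y
        wts = 1.0 / (np.abs(D @ x) + eps)
        A = (I + 0.5 * lam * (D.T @ sp.diags(wts) @ D)).tocsc()
        x = spla.spsolve(A, y.ravel())
    return x.reshape(h, w)

As eps shrinks this approaches the original objective; each pass is just one sparse linear solve, which is where the speedup over a general-purpose solver like CVX comes from.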
Anyway, the standard use of the decomposition is for "denoising". Here is an example from an image in the Street Scenes database. At top is the original image $\bf y$, followed by the result $\bf x$. At the bottom, I use random colors for each segment to give a better idea of the discontinuities.
Throughout, intensities are in the $[0,1]$ interval, and only immediate horizontal and vertical pairs are considered neighbors.
$\lambda=1$
$\lambda=1$
However, more interesting (to me) images come from segmenting random images.
Uniform noise, $\lambda=0.1$
Uniform noise, $\lambda=0.2$
Uniform noise, $\lambda=0.3$
Uniform noise, $\lambda=0.4$
Uniform noise, $\lambda=0.5$
Uniform noise, $\lambda=0.6$
Gaussian noise, variance $\sigma^2=.1$, $\lambda=0.3$
Gaussian noise, variance $\sigma^2=.1$, $\lambda=0.4$
Gaussian noise, variance $\sigma^2=.1$, $\lambda=0.5$
---
# Solve an integral equation in Stan
I need to solve an integral equation; the integral is:
For example, I need the value of $\theta$ for which the result of the integral is 0.6. How can I implement this in Stan?
That should be attackable with the algebra solver in Stan… @charlesm93?
@wds15 is correct. Stan’s algebraic solver should be able to handle this. You need to specify an algebraic equation of the form
$f(\theta) = 0$.
So if I understand you correctly,
$$f(\theta) = 1 + 4\,\frac{\frac{1}{\theta} \int_0^\theta \frac{x}{e^x - 1}\, dx}{\theta} - 0.6.$$
The ODE solver should be able to handle the integral in the numerator, with the integrand providing the RHS of the differential equation – you'll need to work out the initial condition, which should be feasible with a tool like Wolfram Mathematica. Alternatively, if $\theta$ is a scalar, you might try the 1D integrator.
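For what it's worth, here is a rough, untested sketch of how the 1D integrator and the algebra solver might fit together, using the $f(\theta)$ above. Since the original integral was posted as an image and is missing from the question, the integrand and signs may need adapting; `my_integrand`, `f_system`, the guard near x = 0, the initial guess, and passing the target via `x_r` are all illustrative choices, and the exact solver signatures should be checked against the Stan manual for your version:

functions {
  // Integrand x / (exp(x) - 1); its limit as x -> 0 is 1.
  real my_integrand(real x, real xc, real[] theta, real[] x_r, int[] x_i) {
    if (x < 1e-8)
      return 1 - x / 2; // series value near the removable singularity at 0
    return x / expm1(x);
  }
  // Algebraic system: a root of f is a solution of the equation.
  vector f_system(vector y, vector p, real[] x_r, int[] x_i) {
    real t = y[1];
    real integral = integrate_1d(my_integrand, 0, t, {t}, x_r, x_i);
    return [1 + 4 * (integral / t) / t - x_r[1]]'; // x_r[1] holds the 0.6
  }
}
transformed data {
  real x_r[1] = {0.6}; // target value
  int x_i[0];
  vector[1] y_guess = [1.0]';
  vector[0] p; // no extra parameters
}
generated quantities {
  vector[1] sol = algebra_solver(f_system, y_guess, p, x_r, x_i);
  real theta = sol[1];
}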
---
# Build reversed No-Fit-Polygon
I need a robust algorithm to optimally fit one non-convex polygon into another. The destination polygon can contain holes.
Recently I found scholarly articles on this subject:
One of them describes a way to fit a list of polygons into another polygon; building the no-fit polygon is mentioned there as one of the steps.
The other describes a robust and concrete way of building the no-fit polygon with good complexity.
The only issue I struggle with is that these papers consider different things to be the no-fit polygon. In the first, it lies inside the polygon, like this, while in the second it is outside, like on this picture, and has a different meaning.
I understand that the actual "no-fit polygon" notion is the one described in the second article, but how can I get the "reversed" no-fit polygon, as in the first picture? Maybe it is possible to adjust the algorithm from the second paper for this case?
I'd also love the solution to be implementable in code.
Any help appreciated.
Chazelle, Bernard. The Polygon Containment Problem. Carnegie-Mellon University, Department of Computer Science, 1981. This proves that the problem can be solved in polynomial time, about $$O(n^7)$$ in the general case, for polygons of $$n$$ vertices.
---
## tlife (B3/S2-i34q)
For discussion of other cellular automata.
BlinkerSpawn
Posts: 1906
Joined: November 8th, 2014, 8:48 pm
Location: Getting a snacker from R-Bee's
### Re: tlife (B3/S2-i34q)
A for awesome wrote:Three variants of the same S-to-B...
As I've said before, these won't mean anything until we can construct an X-to-S.
LifeWiki: Like Wikipedia but with more spaceships. [citation needed]
A for awesome
Posts: 1901
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
Contact:
### Re: tlife (B3/S2-i34q)
A for awesome wrote:Three variants of the same S-to-B...
As I've said before, these won't mean anything until we can construct an X-to-S.
True, but it would be nice to have a head start if such a thing is found. An S-to-T:
Code: Select all
x = 26, y = 32, rule = TLifeHistory
13.3B$13.3B$13.3B$13.3B$12.4B$9.13B$8.15B.B$9.15B2C$8.14B.B2C$8.14B2. B$9.13B$8.14B$7.15B$2C5.15B$.C4.16B.B$.C.CB.17B2C$2.2CB.17B2C$4.17B. 2B$4.6BC10B$4.6B2C8B$5.4BC2BC7B$8.3BC7B$8.10B$10.8B$11.7B$12.6B$13.4B
$14.5B$17.2C$17.C$18.3C$20.C!
Not too much clearance, though, insofar as that means anything yet.
Edit: An S-to-T5 (tandem):
Code: Select all
x = 28, y = 31, rule = TLifeHistory
13.3B2.3B$13.3B2.3B5.2C$13.3B2.3B5.C$12.4B2.3B2.BC.C$9.13B.B2C$8.16B$9.15B$8.15B$8.16B$9.14B$8.14B$7.15B$2C5.15B$.C4.16B.B$.C.CB.17B2C$2.
2CB.17B2C$4.17B.2B$4.6BC10B$4.6B2C8B$5.4BC2BC7B$8.3BC7B$8.10B$10.8B$
11.7B$12.6B$13.4B$14.5B$17.2C$17.C$18.3C$20.C!
Edit 2: S-to-G:
Code: Select all
x = 30, y = 31, rule = TLifeHistory
22.3B$21.4B$21.4B$12.B6.8B$9.18B$8.20B$9.19B$8.19B$8.18B$9.17B$8.18B$
7.19B$2C5.17B.B2C$.C4.17B2.BC.C$.C.CB.18B4.C$2.2CB.18B4.2C$4.19B$4.6B
C10B.B2C$4.6B2C9B.BC.C$5.4BC2BC7B5.C$4.7BC7B6.2C$3.4B.10B$2.4B4.8B$.
4B6.7B$4B8.6B$3B10.4B$2B12.5B$B16.2C$17.C$18.3C$20.C!
Edit 3: A genuine, if useless, S-to-S:
Code: Select all
x = 30, y = 31, rule = TLifeHistory
27.2C$18.DB7.C$16.D2BD2B2.BC.C$12.B3.B2D4B.B2C$9.8BD7B$8.18B$9.16B$8.
18B$8.18B$9.17B$8.18B$7.19B$2C5.17B.B2C$.C4.17B2.BC.C$.C.CB.18B4.C$2.
2CB.18B4.2C$4.19B$4.6BC10B.B2C$4.6B2C9B.BC.C$5.4BC2BC7B5.C$8.3BC7B6. 2C$8.10B$10.8B$11.7B$12.6B$13.4B$14.5B$17.2C$17.C$18.3C$20.C!
Edit 4: Another S-to-G:
Code: Select all
x = 30, y = 33, rule = TLifeHistory
10.4B8.2C$11.4B7.C$12.4B7.C$13.4B5.2C$14.4B4.B$12.B2.4B2.2B$9.16B$8.
17B$9.16B$8.18B$8.18B$9.17B$8.18B$7.19B$2C5.17B.B2C$.C4.17B2.BC.C$.C. CB.18B4.C$2.2CB.18B4.2C$4.19B$4.6BC10B.B2C$4.6B2C9B.BC.C$5.4BC2BC7B5.
C$8.3BC7B6.2C$8.10B$10.8B$11.7B$12.6B$13.4B$14.5B$17.2C$17.C$18.3C$20.C!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
BlinkerSpawn
Posts: 1906
Joined: November 8th, 2014, 8:48 pm
Location: Getting a snacker from R-Bee's
### Re: tlife (B3/S2-i34q)
A for awesome wrote:Edit 3: A genuine, if useless, S-to-S:
Code: Select all
x = 30, y = 31, rule = TLifeHistory
27.2C$18.DB7.C$16.D2BD2B2.BC.C$12.B3.B2D4B.B2C$9.8BD7B$8.18B$9.16B$8.
18B$8.18B$9.17B$8.18B$7.19B$2C5.17B.B2C$.C4.17B2.BC.C$.C.CB.18B4.C$2.
2CB.18B4.2C$4.19B$4.6BC10B.B2C$4.6B2C9B.BC.C$5.4BC2BC7B5.C$8.3BC7B6. 2C$8.10B$10.8B$11.7B$12.6B$13.4B$14.5B$17.2C$17.C$18.3C$20.C!
Better than your first one, at any rate.
LifeWiki: Like Wikipedia but with more spaceships. [citation needed]
AbhpzTa
Posts: 475
Joined: April 13th, 2016, 9:40 am
Location: Ishikawa Prefecture, Japan
### Re: tlife (B3/S2-i34q)
Another quadratic growth:
Code: Select all
x = 295, y = 494, rule = B3/S2-i34q
24bo$22bo2bo$5b3o14bo2bo$5bob5o10b3o$7bobob2o23bo$9bobo12b2o7bob2obo$3b2o4bobo11bo2bo2b2ob3obo2bo$9b2o12b3o11bobo$21b3o3bo7bo2bo21bo$21b2o
7bobo3bo2bo21bo$27bobo3b2ob2o22b4o$35bo27b2o$60bo$61bob2o$33b2o25bobo$
33b2o19b2o23b3o$54b2o23b2obo$28bo27b3o19bo3bo$28b2o26b3o3b2o7b2o6b3o$
28bo29b3ob2o7b2o6b3o$5bo55b3o$3b3o54b2o$6bo$bo6bo$bo6b2o37b3o$o7bo38b
3o$bobo44bo$bobo$4b2obo$6bo3$11b3o83bo$10bob2o81b4o$10bo3bo79bo3bo73bo$11b3o80b5o71bo2b2o$11b3o84bo73bo$97b3o$99bo$98b2o$95bo$95bo3$30b2o$
30b2o34bo$65b3o$19bo25b3o51bo$17bo2b2o22bob2o49b2ob2o$19bo24bo3bo$6b3o bo34b3o6b2o39bo4b2o$6b6o33b3o6b2o39bo6bo$6b2o2b3o81b2o6bo$9bo2bo70b2o
11bo6bo$9b4o70b2o17bo$3bobo2bob2o85b3o$4bo2b3o88bo$5b2ob2o28bo$6b3o27b 2ob2o$36b2ob2o$22bo12bo2b2obo$23bo12b2ob2o$20b3o13b4o$21bo16bo$77bo46b o$75b2ob2o43b3o$75b2ob2o14bo27bo3bo$74bob2o2bo12b3o25bobob3o$75b2ob2o 12bo28bo2bob2o$76b4o13bo31b2o$77bo44b3o3$33bo$32b2o$32bobo$30bo2bo$29b
ob2o27bo$28b2ob2o4bo21b3o75bo$35bo2b2o97bo$37bo98bobo2$137bo3$93b3o$
93b3o$28b2o62bo3bo$26b2ob3obo58bob2o$26bo4bobo59b3o$25bo6bo$26bo3b2o$
26bo$28bobo203b2o$29bo204b2o$92b3o$91b2ob2o$90bo2b3o$89bobo2bob2o$95b 4o20bo$95bo2bo19bobo$92b2o2b3o$92b6o20b3o$92b3obo82bo$82bo95b3o$44bo 36bobo93b2o2bo$30bo12b3o34bo95b3o2bo$31bo10bo37b2obo92bo2b6o2bo$26b2o
2bobo10bo34b2obo4b2o88bobob3obo2b2o$28bobo47bo6bo92bobo2b2o2b2o$26b3o
49bobo4b3o19bo$107bo68b2o$82b2o22bobo66bo59bo$93b2o81bobo55b3o$82bobo
8b2o12bo69b3o$83bo8bo$91bob2o$89b3ob2o$20b3o32bo33bo2b2o3bo$19b2obobo 28b2ob2o32b2o5b2o$18b3o3bobo25bo3b2o39b2o74b3o$18b2o3bobo26bob2obo116b o$18b2o4bo26bo2bo119bo$20bob3o27b2o3b2o24b2o9b2o126b3o$22bo29b2o29b2o
137b3o$22bo31bo168bo4$69b3o$70bo11bo$70bo10bobo10bo30bo$93bo30bobo$81b
3o7b2ob2o$91bobo30b3o56b3o$89b3o3b2o86b3o$18b2o76bo87bo$17bobo71bo2b3o
$17bob2o10b2o59b4o$18bo2bo9b2o59b3o$18b3ob2o234bo$13bo5b2ob2o232b2obo$13bo242b5o$255bo2bob2o$16b3o222bo14b2ob2o16bo$16b2o222b3o13b2ob2o16b2o
13b2o$51bo206bo16bobo13bob2o$49b2o44bob2o177bo14bobo$49b2o29b2o14b2o 194bo$32b3o14b2o29b2o13bobo97bo$31bo48b2o12b2o99bo$31bo4b2o21bobo17bo
14bobo31b3o63bobo$30bo6b2o20bo2bo65b3o$22b2o7b2obo24bobo67bo65bo$21bo 2bo6b2o5bo179bo$20b2ob2o8b2ob2o178b2ob2o69bo$21bobo9b2ob2o178b2ob2o44b o22b2ob2o$22bo12bo180bo2b2o43b2o22b2o3bo$217b2o46bo21bo2b2obo$78b3o
171bo38bo2bo$77bo173bobo38b2o$66bo10bo4b2o205bob3o$65bobob2o5bo6b2o 166b3o35bo$52bob2o9bo3bob2o4b2obo38bo$53b2o11b2o4bo4b2o5bo33bobo$52bob
o16bo7b2ob2o$51b2o15b2obo7b2ob2o34b3o56b3o$51bobo15b2o10bo95b3o$178bo 4$286bo$285bo2bo$107bo176bo3b2o$94bobo10bo177bo4b2o$78bo15bobo9bobo57b
o68bo49b2obo$76b2ob2o14bo69bobo66b3o53b2o$75bo3bo2bo24bo$78bo2bo14bobo b2o63b3o117bo$67bo6bo5bobo13bobob2o182bobo$67b3o5b3obo2bo12bo3b2o$66bo
9bob2obo13bob2o$64bo6bo7bo15bo171b2o$63b2o6bo24b2o169b2o$64bo7bo$69bob
o130bobo17b2o50bo$69bobo129b2o2b2o14bo2bo47b2o2bo$65bob2o132bo3b2o14bo
bo50bo$66bo140bo11bo68b2o$199bo3bob2o11b2o2b2o67b2o$199bo5b2o11bo4bo 65bo2bo$199bo3b2o14b2obo42bo10b2o$79bobo29bobo24b3obo57b2ob2o16b2o42bo 9b3o$79bo2bo28bo2bo23b6o58bo61bobo7bo$65b2o12bobo29bobo11bo12b2o2b3o 126b2o$64b2obo56bobo14bo2bo120bo5b3o5b2o$64b3o74b4o115bo2bo7b2o5b3o$
64bobo57b3o8bobo2bob2o39b3o30bo43bo17b3o$66b2o68bo2b3o41b3o30bo43bo14b 2o$64b4o2b2o65b2ob2o42bo30b3o42bo14b3o$65b2o71b3o118bo3bo12bo$263bo$204b2o54b3o$204b5o3$151bo$149b2ob2o87bo$92bo147b3o29b3o$68bob3o18b3o
55b2o4bo116b3o$67b6o75bo6bo117bo$66b3o2b2o75bo6b2o118bo$66bo2bo77bo6bo 40bo73b2o5bo$66b4o78bo46bo72b3o5b2o$67b2obo2bobo75b3o40bobo35b2o22b2o 11bo6b2o$69b3o2bo77bo55bo22b3o20b3o14b2o$69b2ob2o121bo10b2ob2o21b2o18b 2ob2o14b3o$70b3o133b2ob2o60b3o$206b2o2bo$208b2o2$68b3o$67b2ob2o$66bob 2obo$66b2o2b2o30bob3o24bobo$66b3o2bo29b6o23bo2bo$68bobo29b3o2b2o12bo
11bobo$69bo2bo27bo2bo14bobo32b3obo$70b2o28b4o48b3o2b2o115bo$101b2obo2b obo8b3o31bob5o18b3o92bo2b4o$103b3o2bo44bo2b2o19b3o82b2o8bo$81bo21b2ob 2o70bo83b2o8bo$79b2ob2o6b2o12b3o166b2o3bo$79b2ob2o5b3o4bo60bo$78bo2b2o
bo3b3o5b2o59bo$79b2ob2o4b2o3b2o$79b4o5b2o3b2o$81bo11b2o113b2o$207bo$97bo44b2o22bo40b2o26bo$93b4o40bo4b3o20bobo39bobo24b3o$93b3o40b2o5b3o 116bobo$139b2o3b2o6bo12b3o88b2o7bo$139b2o3b2o5b4o101b3o3bo$139b2o9b2ob
2o103b3o5bo4b2o$149bob2o2bo102bo2bo2b2o4bo3bo$136bo13b2ob2o104b2o13b2o
$137b4o9b2ob2o114bo5bo$90bobo45b3o11bo104b3o9bo3b3o$90b2o165bo2bo9b2ob 2o$91bobo163bo2bo10b3o$92b2o111bo53bo$91bob2o109bobo$122bo80b2o2bo$
122b2o28b3o48bo3bo$122bo28b2ob2o47b2ob2o12bo$151b3o2bo47bobo11b2ob2o8b
o$149b2obo2bobo47bo12bo12bo$148b4o65bo5bo6bobo26b2obo$148bo2bo65b2o5bo 35b2o$148b3o2b2o63bo2bobo7bo$149b6o28b3o32b2obo33b2o15b2o$131b2o17bob
3o28b3o34bo24bobo9bo4b2ob2o7bo3b3o$100bo29b2o52bo33b2o26b2o14b2obo6b2o 2b2o3bo$89bo10bo31bo111bobo14bo2bo14bobo$89b2o8bobo142b2o14bobo13b2obo$88bobo152b2obo14b2o15bo$89bo2bo7bo160bo14b3o$90b2obo182b2o$85bo4b2ob 2o49b2o$83b2o59b2o2$88b2o159b3o$87b2obo159bo2$156b2o$155bo2bo36bo28b2o
28bobo$154bobo38bo27bobo4b2o5b2o14bo2bo22bo$152b3o2bo36bobo26bobo11b2o
15bobo22bo$152b2o2b2o64b2obobo50bo$152bob2obo37bo14bo12b5obo48b2o$102b o34bo15b2ob2o50b2ob2o14b3o47b3o$86bo14b4o31bo17b3o49bo2bo3bo35b3o25bo$85bo14b2ob2o30bobo2b2o65bo2bo38b3o$86b3o10bob2o2bo31bobo66bobo5bo35bo$87bo12b2ob2o34b3o64bo2bob3o$100b2ob2o102bob2obo$102bo106bo$114b2o103bo
$217bo2b2o$112bo106bo11b2o$148b3o80b2o$114b2o34b2o122bo$114bob2ob2o56b 3o93bo2bo$116bobobo5b2o22bobo24b3o56bo35bo3b2o$82b2o28bo2bo2bobo5b3o 23bo25bo55b2o37bo4b2o$81bobo34b2o6b2o80bo25b2o3b2o32b2obo$81bob2o27b2o 5bo86bo2bo23bo2bo31b2o8b2o$82bo2bo120bo2bo14b2o8bob2obo28b3o$82b3ob2o 118b3o14bo2bo7bo3b2o28b2o$77bo5b2ob2o135b3o9b2ob2o$77bo130b2o14bo12bo$
207bo2bo2b2o$80b3o124b3o5bo$80b2o66b2o55b3o3bo$148b2o55b2o7bo65b3o$
114b2o34bo60bobo65b2ob2o$113bo2bo31b2obo113b3o10bobob2o$114b2o32b2ob3o
110b2ob2o8b4o$94b2o49bo3b2o2bo109bobob2o8bob6o$93b3o48b2o5b2o109b4o11b
o2bob3o$79bo14b2o48b2o27b3o72bo13bob6o10bo2b2o$79bo94bo71bo2bo12bo2bob
3o9b3o$76bob2o60b3o31bo70b2o3bo14bo2b2o$78bo62bo5b2o94b2o4bo14b3o$75b 2o64bo104bob2o$216b3o16bo7b2o$216bo2b2o10b2obobo$217b3o9b2obo3bo12bo$112b2o90b3o9bo12bo4b2o12bobo$110bo3bo90bo2b2o6b2o12bo$109bo4b2o38bo47b o2bob3o5bo2bo11bob2o$109bo5bo19b3o17b2o45bob5o6b2o14b2o$113b3o18b2ob2o 16b2o45b4o14bo$110b2ob2o18b3o19b2o46bobob2o4bo4bobo$78bo32b3o19bo5bo 43b3o18b2ob2o$76b2ob2o52b2o4bo43b3o19b3o$76b2ob2o17b2o34bo3bo45bo$74b
2o5bo15b2obo24b2o8b2o$74b2obo22bo24b2o80b2o$73bo6b2o13b2o4bo105b2o$74b o4b2o13bo3bob2o$74bo19bobob2o65bo46b2o$75b3o17bo68bo47b3o$162b2ob2o45b
2o$162bobo71bo$160b3o3b2o67b3o$111bo32b2o21bo68b2o$112b2o29b2obo15bo2b
3o64b3o$112b2o13bo18bo16b4o65b3o5b2o$112b2o12b3o12b2o4bo15b3o29bo36b2o
5b3o$127b2o11bo3bob2o47bo44b2o$123b3o14bobob2o48bobo41bo$123b3o5b2o8bo 93b3o$123b2o5b3o62bo39b2o$131b2o$129bo99bo$126b3o99bobo$126b2o99b2o2bo
$227bo3bo$227b2ob2o$228bobo$229bo5$147bob2o26b3o$148b2o27b3o$178bo$
153b2o$143b2ob2o4bo9bobo$144bob2o14b2o$145bo2bo14bobo$147bobo14b2o68bo
$147b2o14bob2o63b2o3bo$148bo80bo6bo$229bo5bo$228bo3b2o$229bo3bo$229bo
4bo2bo$158b3o76bo$147b3o9bo42b3o31bo$146b4o51b2ob2o26b2ob2o$145b3o52bo
3b3o27bo$145b2o53bo5bo$205b2o11bo$148b3o2b2o46bo3bo13b3o$148b3o2b2o47b
2o15b3o$153b2o$146b2o5bo$147bo4bo8$183b3o26bo$183b3o24b2o$169bobo12bo
11b3obo9b2o$169bo2bo23b6o8b2o$169bobo24b2o2b3o$147bo51bo2bo$148bo50b4o
$146b2ob2o42bobo2bob2o$148bobo43bo2b3o$145b2o3b3o42b2ob2o$145bo50b3o$145b3o2bo$146b4o$147b3o$219b2o$217bo3bo$216bo4b2o$216bo5bo$205b2o13b3o
$205b2o10b2ob2o$218b3o3$156bobo$150b2o7bo$150b3o3bo$152b3o5bo56bo$152b o2bo2b2o56bob2o$153b2o65bobo$220bobo$151b3o61bo7bo$151bo2bo45b2o12b2o 6bo$151bo2bo22b3o19bob2o12bo6bo$153bo23b3o20b3o14bo$178bo21bobo15b3o$199b2o17bo$195b2o2b4o$200b2o5$152bo$150bob2obo$149bo2bob3o$149bobo5bo$
150bo2bo$149bo2bo3bo$151b2ob2o$153bo7b2ob2o57bo$163b3o55b2ob2o$165b2o 21bobo30b2ob2o$188b2obo29b2o2bo$188bo2b2o10b3o17b2o$188b2o2bo10bo2bo$186b4ob2o10b2obo$185b3o3bo13bo$186b2ob2o$187b3o5$225bo$172bo30bo22bo$173bo27b2o2bo19bobo$168b2o2bobo28bo16b3o4b2o$170bobo47b3o3bobo$168b3o
49b2o3b4o$221bob2ob2o$220b3ob2o$149b2o70b2ob2o$149b2o71b3o$149bo$149bo
$149bo$158bo$156b3o$158bo15b2o41bo$174b2o40bobo$215b2o2bo$163b2o50bo3b o$163bob2o28b2o18b2ob2o$162bo4bo27b2o19bobo$162b2o2b2o15b3o31bo$166bo 16b3o$149b2o11bobo18b2o$144b2o2b4o9bo2bo16bo6b2o$148b2o12b2o16b3o5b2o$149bobo29b2o5bo$149b3o35bo$148bob2o33bo$149b2o33b3o$184b3o3$220b2o$191bo27bo$190bobo12b2o12b2o$189bo2b2o10bo14bobo$189bo3bo10b2o$189b2ob 2o10bobo$190bobo$191bo! Iteration of sigma(n)+tau(n)-n [sigma(n)+tau(n)-n : OEIS A163163] (e.g. 16,20,28,34,24,44,46,30,50,49,11,3,3, ...) : 965808 is period 336 (max = 207085118608). A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: ### Re: tlife (B3/S2-i34q) 3-glider one-sided snake synthesis: Code: Select all x = 12, y = 6, rule = B3/S2-i34q 5bo4bo$4bo4bo$4b3o2b3o$2o$obo$o!
EDIT: 3G bipond:
Code: Select all
x = 19, y = 15, rule = B3/S2-i34q
13bo$12bo$12b3o$17bo$16bo$16b3o7$3o$2bo$bo!
4G paperclip:
Code: Select all
x = 16, y = 23, rule = B3/S2-i34q
13bo$12bo$12b3o6$3o$2bo$bo$7b2o$6b2o$8bo7$13b3o$13bo$14bo!
3G super pond:
Code: Select all
x = 17, y = 17, rule = B3/S2-i34q
15bo$13b2o$14b2o8$bo$b2o$obo2$15b2o$14b2o$16bo!
Another 3G snake:
Code: Select all
x = 13, y = 16, rule = B3/S2-i34q
10bobo$10b2o$11bo9$bo$b2o$obo7bo$9b2o$9bobo!
Another 4G paperclip:
Code: Select all
x = 20, y = 18, rule = B3/S2-i34q
13bo$11b2o$12b2o5$b2o$obo$2bo6$18bo$17b2o$17bobo!
Exceedingly simple 3G carrier:
Code: Select all
x = 18, y = 17, rule = B3/S2-i34q
16bo$15bo$15b3o11$bo$b2o10b2o$obo9b2o$14bo!
3G shillelagh:
Code: Select all
x = 22, y = 18, rule = B3/S2-i34q
20bo$19bo$19b3o8$bo$b2o$obo3$10b2o$9b2o$11bo!
3G beehive at beehive:
Code: Select all
x = 16, y = 18, rule = B3/S2-i34q
15bo$13b2o$14b2o8$b2o$obo$2bo3$13b3o$13bo$14bo!
Code: Select all
x = 12, y = 7, rule = B3/S2-i34q
9bobo$9b2o$10bo$6b2o$b2o2b2o$obo4bo$2bo!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
A for awesome
Posts: 1901
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
Contact:
### Re: tlife (B3/S2-i34q)
Sorry for double-posting. Here's a 3G paperclip:
Code: Select all
x = 17, y = 15, rule = B3/S2-i34q
5bobo$5b2o$6bo6$b2o$obo$2bo2$14b2o$14bobo$14bo!
3G super pond inserter:
Code: Select all
x = 6, y = 21, rule = B3/S2-i34q
5bo$3b2o$4b2o3$3bo$2bo$2b3o11$2o$obo$o!
Edgy 3G carrier:
Code: Select all
x = 13, y = 10, rule = B3/S2-i34q
10bo$10bobo$10b2o$2o$b2o$o2$10b2o$9b2o$11bo!
There are a lot of 3G longboats. Here's one that can be used as an inserter:
Code: Select all
x = 16, y = 15, rule = B3/S2-i34q
15bo$13b2o$14b2o8$b2o$obo$2bo10b2o$12b2o$14bo!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
BlinkerSpawn
Posts: 1906
Joined: November 8th, 2014, 8:48 pm
Location: Getting a snacker from R-Bee's
### Re: tlife (B3/S2-i34q)
Any possibility for a p13 hassler?
Code: Select all
x = 11, y = 11, rule = tlife
5b3o3$o3b3o$o2bo3bo$o2bo3bo2bo$3bo3bo2bo$4b3o3bo3$3b3o!
LifeWiki: Like Wikipedia but with more spaceships. [citation needed]
A for awesome
Posts: 1901
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
Contact:
### Re: tlife (B3/S2-i34q)
4G beehive at loaf:
Code: Select all
x = 28, y = 21, rule = B3/S2-i34q
o$b2o$2o$21bo$20bo$20b3o10$26bo$25b2o$25bobo$b2o$obo$2bo!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
A for awesome
Posts: 1901
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
Contact:
### Re: tlife (B3/S2-i34q)
4G loop:
Code: Select all
x = 30, y = 18, rule = B3/S2-i34q
27bo$26bo$obo11bo11b3o$b2o9b2o$bo11b2o11$27b2o$27bobo$27bo!
There are probably less obtrusive versions.
Code: Select all
x = 18, y = 28, rule = B3/S2-i34q
13bo$13bobo$13b2o5$bo$2bo8bobo$3o8b2o$12bo15$16b2o$15b2o$17bo!
4G block and cap inserter:
Code: Select all
x = 12, y = 24, rule = B3/S2-i34q
9bobo$5bo3b2o$3b2o5bo$4b2o4$obo$2o$bo12$3o$o$bo!
(3,4)?G twit (with surplus T):
Code: Select all
x = 19, y = 11, rule = B3/S2-i34q
$5bo6bo$3b2o6bo$4b2o5b3o4$3o$o$bo!
Edit: One of the gliders in wildmyron's original very long boat synthesis is entirely unnecessary:
Code: Select all
x = 36, y = 22, rule = B3/S2-i34q
33bobo$33b2o$34bo2$bo$2bo$3o4$24bo$24bobo$24b2o7$21b2o$21bobo$21bo!
Yet another 4G method:
Code: Select all
x = 24, y = 27, rule = B3/S2-i34q
22bo$21bo$21b3o$16bo$16bobo$16b2o19$bo17b3o$b2o16bo$obo17bo!
4G trans-boat with tail:
Code: Select all
x = 12, y = 33, rule = B3/S2-i34q
8bo$6b2o$7b2o12$5bo$4bo4bobo$4b3o2b2o$10bo13$bo$2o$obo!
Code: Select all
x = 36, y = 16, rule = B3/S2-i34q
34bo$o32bo$b2o19bo10b3o$2o19bo$21b3o9$26b2o$26bobo$26bo!
4G loaf siamese loaf:
Code: Select all
x = 13, y = 19, rule = B3/S2-i34q
11bo$10bo$5bo4b3o$5bobo$5b2o12$b2o6b2o$2o7bobo$2bo6bo!
4G trans-bun at mango (I'm not sure if that's the right name):
Code: Select all
x = 33, y = 30, rule = B3/S2-i34q
30bo$28b2o$29b2o3$31bo$30bo$30b3o16$22b2o$21b2o$23bo2$3o$2bo$bo!
4G block and two tails:
Code: Select all
x = 12, y = 31, rule = B3/S2-i34q
3bobo$3b2o$4bo$10bo$9bo$9b3o13$2o$obo$o8$3b2o$3bobo$3bo!
4G 9-snake (!):
Code: Select all
x = 28, y = 25, rule = B3/S2-i34q
bo$2bo$3o3$27bo$25b2o$26b2o14$4b2o$3bobo16b2o$5bo16bobo$22bo!
There are only 5 on Catagolue at the moment.
Edit 2: 5G boat with long tail:
Code: Select all
x = 13, y = 29, rule = B3/S2-i34q
12bo$10b2o$11b2o10$4bo$4bobo$4b2o6$b2o$2o$2bo4$5b2o$4b2o$6bo!
It might be reducible to 4G.
Edit 3: 4G beehive with tail:
Code: Select all
x = 26, y = 32, rule = B3/S2-i34q
22bo$21bo$21b3o3$25bo$23b2o$24b2o15$2o$b2o$o5$22b3o$22bo$23bo!
6G (?) loaf siamese barge:
Code: Select all
x = 17, y = 22, rule = B3/S2-i34q
14bo$14bobo$14b2o5$7bo$5b2o$6b2o10$3o2b3o$o4bo$bo4bo!
Edit 4: 4G beehive at eater, not at all related to the 6G synth of the same SL in normal life posted yesterday here:
Code: Select all
x = 20, y = 17, rule = B3/S2-i34q
15bo$14bo$14b3o$2bo$obo$b2o9$7bo9b2o$7b2o8bobo$6bobo8bo!
4G python:
Code: Select all
x = 23, y = 31, rule = B3/S2-i34q
4bo16bo$2bobo14b2o$3b2o15b2o17$b2o$obo$2bo7$20b2o$20bobo$20bo!
Code: Select all
x = 26, y = 25, rule = B3/S2-i34q
23bo$18bo4bobo$16b2o5b2o$17b2o19$2o20bo$b2o18b2o$o20bobo!
4G xs14_2552sga6/beehive with tail with claw (?):
Code: Select all
x = 28, y = 31, rule = B3/S2-i34q
19bo$19bobo$19b2o2$o24bo$b2o21bo$2o22b3o22$25b2o$25bobo$25bo!
There is only one occurrence of this on Catagolue.
4G loaf siamese barge:
Code: Select all
x = 16, y = 23, rule = B3/S2-i34q
14bo$13bo$13b3o9$12bobo$12b2o$13bo2$5bo$4bo$4b3o3$3o$o$bo!
5G long shillelagh:
Code: Select all
x = 28, y = 26, rule = B3/S2-i34q
25bobo$25b2o$26bo2$6bo$7b2o$6b2o13$19bo$18b2o$18bobo2$bo$b2o$obo!
4G boat with long tail:
Code: Select all
x = 17, y = 26, rule = B3/S2-i34q
14bo$14bobo$14b2o4$2bo$2bobo$2b2o5$2o$obo$o8$13bo$12b2o$12bobo!
4G bun cis-shift trans-bookend (?):
Code: Select all
x = 24, y = 23, rule = B3/S2-i34q
23bo$21b2o$22b2o$18bo$16b2o$17b2o13$2o$b2o$o19b2o$19b2o$21bo!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
A for awesome
Posts: 1901
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
Contact:
### Re: tlife (B3/S2-i34q)
Sorry, double-posting yet again. That last post was starting to get a bit too long, though. Bun at honeycomb in 4G:
Code: Select all
x = 24, y = 21, rule = B3/S2-i34q
21bo$21bobo$21b2o4$obo$b2o$bo8$20bo$19b2o$4bo14bobo$4b2o$3bobo!
Only one occurrence on Catagolue.
Edit: 4G block on table:
Code: Select all
x = 26, y = 26, rule = B3/S2-i34q
7bobo$8b2o$8bo5$23bo$22bo$22b3o3$23b3o$23bo$24bo9$b2o$obo$2bo!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
A for awesome
Posts: 1901
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
Contact:
### Re: tlife (B3/S2-i34q)
Non-monotonic c/3 ship:
Code: Select all
x = 11, y = 13, rule = B3/S2-i34q
obo$b2o5bobo$o8b2o$8bo$b2o$9b2o$2bo$2bo4bo2bo$2bo4bo2bo$obo7bo$
2b2o2b4o$3bo2bo$4bo!
Random and somewhat sparky c/4 ship:
Code: Select all
x = 18, y = 17, rule = B3/S2-i34q
b3o2bob2obo2b3o$b2ob3o4b3ob2o$b2o2bobo2bobo2b2o$2bo12bo$4bo2bo2b
o2bo$2bo3bo4bo3bo$4bo2bo2bo2bo$4b3o4b3o$6bob2obo$8b2o$5bob4obo$5b obo2bobo2$obo2b2o4b2o2bobo$3ob2o6b2ob3o$b2ob2o6b2ob2o$2b3o8b3o!
Tagalong:
Code: Select all
x = 20, y = 26, rule = B3/S2-i34q
3b2o10b2o$3b2o10b2o$5bo2bo2bo2bo$7b2o2b2o$8b4o$9b2o2$3b2o10b2o$
2b5ob4ob5o$7b6o$2b2ob2o6b2ob2o$3b2obo6bob2o$4bobobo2bobobo$3b2ob 3o2b3ob2o$4bo10bo$4b2obob2obob2o$9b2o$7bob2obo$7b2o2b2o$7bo4bo$4b
obo6bobo$2bo2b3o4b3o2bo$o3b3o6b3o3bo$bo16bo$2b4o8b4o$3b3o8b3o!
Pseudo-tagalong:
Code: Select all
x = 18, y = 29, rule = B3/S2-i34q
2bo12bo$3bob2o4b2obo$3bo10bo$3b2obo4bob2o$5bo6bo$bo14bo$2bo12bo$o3bo8bo3bo$bob2o8b2obo$bobo10bobo3$b3o2bob2obo2b3o$b2ob3o4b3ob2o
$b2o2bobo2bobo2b2o$2bo12bo$4bo2bo2bo2bo$2bo3bo4bo3bo$4bo2bo2bo2b o$4b3o4b3o$6bob2obo$8b2o$5bob4obo$5bobo2bobo2$obo2b2o4b2o2bobo$3o
b2o6b2ob3o$b2ob2o6b2ob2o$2b3o8b3o!
Completely useless giant tagalong:
Code: Select all
x = 18, y = 45, rule = B3/S2-i34q
2b2o2bo4bo2b2o$b4obo4bob4o$bo14bo$5bo6bo$2bo2bo6bo2bo$b3o10b3o$
bo14bo$bo14bo$b2o12b2o$3o12b3o$o3bo8bo3bo$o16bo$3bobo6bobo$3bobo 6bobo$2o2bo8bo2b2o$o3bo8bo3bo$4bo2bo2bo2bo$4bo3b2o3bo$o3bo8bo3bo
$4b3o4b3o$2bo2bo6bo2bo$2bo2bobo2bobo2bo$3b5o2b5o$6b2o2b2o2$8b2o3$b3o2bob2obo2b3o$b2ob3o4b3ob2o$b2o2bobo2bobo2b2o$2bo12bo$4bo2bo2b o2bo$2bo3bo4bo3bo$4bo2bo2bo2bo$4b3o4b3o$6bob2obo$8b2o$5bob4obo$5b
obo2bobo2$obo2b2o4b2o2bobo$3ob2o6b2ob3o$b2ob2o6b2ob2o$2b3o8b3o!
Tagalongs for the small c/4:
Code: Select all
x = 10, y = 23, rule = B3/S2-i34q
b2o3b2o$b2o3b2o$b3obo2$3bobo$5bo$3bo2b2o$4b2ob2o$4bob3o3$7b2o$7b o$7bo2$5bo3bo$6bobo$5bobo$ob5o$o$o$b2ob2o$3bo!
Code: Select all
x = 10, y = 34, rule = B3/S2-i34q
obo2b2o$2bo2b2o$5obo$3b3o$3bob2o$2obo$3bobo$2o2bo$bobo$3b3o$3bo
bo2$3o$3b2ob2o$2o4b2o$b2o3bo$bob2o3bo$4bobo2bo$2bobo3bo$2b2o3bo$7b2o$6b3o$6b3o3$6b2o$6b3o$8bo$2bo3bo$o2bo2bo$2o3bo$bo2bo$bo2bo$3b
o!
A random and reasonably sparky bilaterally-symmetric c/4:
Code: Select all
x = 19, y = 34, rule = B3/S2-i34q
4b3o5b3o$5bobo3bobo$7bo3bo$2bo2bo7bo2bo$4bo2bo3bo2bo$bo3bo2bobo 2bo3bo$b2o4b2ob2o4b2o$bo3bob2ob2obo3bo$5bo7bo$2bobo9bobo$2bo2bo7b
o2bo$6bobobobo$3bobob2ob2obobo$5bobo3bobo$5bobo3bobo$2b2ob2o5b2o b2o$2b3o9b3o$3bobo7bobo$5bo7bo$3bo2bob3obo2bo$5bob5obo$3bo3bobob o3bo$bo3bob2ob2obo3bo$3bo2bo5bo2bo$3b2o9b2o$3b2o9b2o$3b2o9b2o$4o 11b4o$2o15b2o$5bo7bo$2obo4b3o4bob2o$2b2o2bob3obo2b2o$3b2o4bo4b2o
$4bo9bo!
Two small c/5 ships; I think one of them is new, but I'm not sure which:
Code: Select all
x = 13, y = 11, rule = B3/S2-i34q
3bobobobo$4b2ob2o2$3bobobobo$3bobobobo$2b3o3b3o$6bo$bo9bo$bo3bo
bo3bo$3o3bo3b3o$6bo!
Code: Select all
x = 9, y = 12, rule = B3/S2-i34q
3b3ob3o2$2bo2bobo2bo$3bob3obo$3b2o3b2o$2b2o5b2o2$4bo3bo2$3bobobo
bo$4bo3bo$4bo3bo!
I have negative results with gfind for width-11 3c/7 orthogonal (all symmetries) and width-11 c/5 diagonal (all symmetries), as well as width-12 asymmetric c/5 diagonal. I am currently running both width-12 searches over all symmetries.
By the way, @BlinkerSpawn, would you be willing to update the synthesis collection with my new discoveries, and @wildmyron, would you be willing to update the spaceship stamp collection with all of the new discoveries since October 27th of last year (I think each of us have some, and there are some others as well)?
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
BlinkerSpawn
Posts: 1906
Joined: November 8th, 2014, 8:48 pm
Location: Getting a snacker from R-Bee's
### Re: tlife (B3/S2-i34q)
A for awesome wrote:By the way, @BlinkerSpawn, would you be willing to update the synthesis collection with my new discoveries?
Freshly updated, reorganized, and just as messy as ever.
How did you find these? Has Bob Shemyakin's glider-bombarding script been adapted to other rules?
(If it's not Python, can I have it?)
LifeWiki: Like Wikipedia but with more spaceships. [citation needed]
Bullet51
Posts: 536
Joined: July 21st, 2014, 4:35 am
### Re: tlife (B3/S2-i34q)
P14:
Code: Select all
x = 11, y = 11, rule = B3_S2-i34q
6b2ob2o$6b2ob2o$5bobo$4bobo$3bobo$2bobo$2obo$3o2$2o$2o! Still drifting. muzik Posts: 3500 Joined: January 28th, 2016, 2:47 pm Location: Scotland ### Re: tlife (B3/S2-i34q) Maybe "teraunix" or "skipping rope"? Bored of using the Moore neighbourhood for everything? Introducing the Range-2 von Neumann isotropic non-totalistic rulespace! A for awesome Posts: 1901 Joined: September 13th, 2014, 5:36 pm Location: 0x-1 Contact: ### Re: tlife (B3/S2-i34q) BlinkerSpawn wrote: A for awesome wrote:By the way, @BlinkerSpawn, would you be willing to update the synthesis collection with my new discoveries? Freshly updated, reorganized, and just as messy as ever. How did you find these? Has Bob Shemyakin's glider-bombarding script been adapted to other rules? (If it's not Python, can I have it?) I actually wrote my own glider bombardment script for that. It's Python, sorry, but it's standalone, too -- not a Golly base. It's also very slow, very messy, probably still buggy, and has no interface (you have to make non-trivial changes to the source code), but I could fix it up and post it if you want. x₁=ηx V ⃰_η=c²√(Λη) K=(Λu²)/2 Pₐ=1−1/(∫^∞_t₀(p(t)ˡ⁽ᵗ⁾)dt) $$x_1=\eta x$$ $$V^*_\eta=c^2\sqrt{\Lambda\eta}$$ $$K=\frac{\Lambda u^2}2$$ $$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$ http://conwaylife.com/wiki/A_for_all Aidan F. Pierce wildmyron Posts: 1274 Joined: August 9th, 2013, 12:45 am ### Re: tlife (B3/S2-i34q) A for awesome wrote:<snip>Lots of new ships</snip> I have negative results with gfind for width-11 3c/7 orthogonal (all symmetries) and width-11 c/5 diagonal (all symmetries), as well as width-12 asymmetric c/5 diagonal. I am currently running both width-12 searches over all symmetries. Some great results there. A for awesome wrote:By the way, @BlinkerSpawn, would you be willing to update the synthesis collection with my new discoveries, and @wildmyron, would you be willing to update the spaceship stamp collection with all of the new discoveries since October 27th of last year (I think each of us have some, and there are some others as well)? Working on it now... The latest version of the 5S Project contains over 221,000 spaceships. Tabulated pages up to period 160 are available on the LifeWiki. Bullet51 Posts: 536 Joined: July 21st, 2014, 4:35 am ### Re: tlife (B3/S2-i34q) Precursor of the skipping rope: Code: Select all x = 17, y = 17, rule = B3_S2-i34q 16bo$12b2o$12b2obo$12b2ob2o$14bo4$11bo$10bobo$9bobo$8bobo$b3o5bo$b3o$
4bo$2b2o$o2bo!
Still drifting.
Bullet51
Posts: 536
Joined: July 21st, 2014, 4:35 am
### Re: tlife (B3/S2-i34q)
Statorless P5:
Code: Select all
x = 12, y = 14, rule = B3_S2-i34q
4bo2bo$4bo2bo$4bo2bo$5b2o$4bo2bo$obo2b2o2bobo$obo6bobo$obo6bobo$obo2b
2o2bobo$4bo2bo$5b2o$4bo2bo$4bo2bo$4bo2bo!
Still drifting.
A for awesome
Posts: 1901
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
Contact:
### Re: tlife (B3/S2-i34q)
Bullet51 wrote:Statorless P5:
Code: Select all
rle
I have to ask, how exactly are you finding all of these things?
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
wildmyron
Posts: 1274
Joined: August 9th, 2013, 12:45 am
### Re: tlife (B3/S2-i34q)
wildmyron wrote:
A for awesome wrote:By the way, @BlinkerSpawn, would you be willing to update the synthesis collection with my new discoveries, and @wildmyron, would you be willing to update the spaceship stamp collection with all of the new discoveries since October 27th of last year (I think each of us have some, and there are some others as well)?
Working on it now...
... Done.
A for awesome wrote:Two small c/5 ships; I think one of them is new, but I'm not sure which:
The first is new, the second I posted in September.
The latest version of the 5S Project contains over 221,000 spaceships. Tabulated pages up to period 160 are available on the LifeWiki.
A for awesome
Posts: 1901
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
Contact:
### Re: tlife (B3/S2-i34q)
wildmyron wrote:... Done.
Thank you so much! That will make it so much easier to determine whether or not I should actually post something.
P.S. Thank you to @BlinkerSpawn as well for updating the synthesis collection.
EDIT: I noticed two minor omissions in the spaceship collection: the 2c/6 ships I posted here, and the two-disemiMWSS cleanup of the 4c/8 puffer. Apart from that, looks great!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
wildmyron
Posts: 1274
Joined: August 9th, 2013, 12:45 am
### Re: tlife (B3/S2-i34q)
A for awesome wrote:EDIT: I noticed two minor omissions in the spaceship collection: the 2c/6 ships I posted here, and the two-disemiMWSS cleanup of the 4c/8 puffer. Apart from that, looks great!
Whoops, I even fixed the 2c/6 patterns in Golly but somehow they got left out. [Several of the rle patterns in that post aren't decoded correctly by Golly because of the way it deals with linebreaks - I'm sure this has been discussed somewhere but not sure if it's a bug in Golly or invalid rle.]
I've included the 4c/8 puffer/ship but was a bit reluctant to do so, as I'm sure the collection could become overwhelmed if every such ship was included. For example, here's a p32 c/2 ship based on Sokwe's back rake.
Code: Select all
x = 44, y = 71, rule = B3/S2-i34q
14bo$14bo$bobo10bo$bo2bo9bo$o2b2o7b2o$bob2obo4bo6b4o$2bob3o4bo6bo$11b
5obob3o3$32bo2bo3bo2bo$35bo3bo$31bo3bo3bo3bo$4b2o29bo3bo$4b2o26bo2bo3b o2bo$26bo6b3o3b3o$18bo7b2o$4b2o10bo2bo5bobo$4b2o$15bo5bo$14b3o$4b2o9b
2o6bo$4b2o10b3o5bo$18bo$18b3o2bo$4b2o13b3o$4b2o14bo3$4b2o15b2o$4b2o14b o2bo$21b2o2$4b2o$4b2o$17bo$15bo3b2o$4b2o11b3ob2o$4b2o13b2o2bo$21b2o2$
4b2o$4b2o2$19bo$4b2o4bo2bo4bobo$4b2o4bo2bo4bobo$9b2o2b2o4bo3b2o5b2o$
22b4o3b4o$7bobo4bobo5b2o7b2o$7b3o4b3o5b2o2bobo2b2o$4b3o2bo4bo2b3o3b3o 3b3o$4bo2b2o6b2o2bo4bo5bo$5b3o8b3o$6bo10bo$11b2o2$8b3o2b3o$6b12o$6b2o
8b2o$6b3o6b3o$7bobo4bobo$6bo10bo$9bo4bo$7b3o4b3o3$6b2o8b2o$6bo2b2o2b2o 2bo$7b3o4b3o$8bo6bo!
Adjusting the disemiMWSS placement gives a p64 puffer:
Code: Select all
x = 43, y = 71, rule = B3/S2-i34q
14bo$14bo$bobo10bo$bo2bo9bo$o2b2o7b2o$bob2obo4bo6b4o$2bob3o4bo6bo$19b
5obob3o3$33b2o5b2o$32b4o3b4o$32b2o7b2o$4b2o26b2o2bobo2b2o$4b2o27b3o3b 3o$26bo7bo5bo$18bo7b2o$4b2o10bo2bo5bobo$4b2o$15bo5bo$14b3o$4b2o9b2o6bo
$4b2o10b3o5bo$18bo$18b3o2bo$4b2o13b3o$4b2o14bo3$4b2o15b2o$4b2o14bo2bo$
21b2o2$4b2o$4b2o$17bo$15bo3b2o$4b2o11b3ob2o$4b2o13b2o2bo$21b2o2$4b2o$4b2o2$19bo$4b2o4bo2bo4bobo$4b2o4bo2bo4bobo$9b2o2b2o4bo3b2o5b2o$22b4o3b
4o$7bobo4bobo5b2o7b2o$7b3o4b3o5b2o2bobo2b2o$4b3o2bo4bo2b3o3b3o3b3o$4bo
2b2o6b2o2bo4bo5bo$5b3o8b3o$6bo10bo$11b2o2$8b3o2b3o$6b12o$6b2o8b2o$6b3o 6b3o$7bobo4bobo$6bo10bo$9bo4bo$7b3o4b3o3$6b2o8b2o$6bo2b2o2b2o2bo$7b3o
4b3o$8bo6bo!
which can be converted into all manner of p64 c/2 ships (here's one based on a T-ship siderake):
Code: Select all
x = 70, y = 348, rule = B3/S2-i34q
48b3o2b3o$46b2ob6ob2o$46b2o3b2o3b2o$47bo2b4o2bo$46bo10bo$48b3o2b3o$48b o6bo2$41b2o7bo2bo$bobobobo33b2o3b2o2bo2bo2b2o$bobobobo39b3o4b3o$bobobo bo40bo6bo$ob2ob2obo$3o3b3o$bo5bo5$16bobo$15bo2bo$16bobo2$36b2o$36b3o$
36b2obo2$41b2o$41b2o12$41b2o$41b2o10$30bo$28b2o2bo$30bo$43b2o$41b2o3bo$39b3ob4o$38b2o3b3obo$38b2ob2o3b2o$39b3o3$39bo2bo$38bo3bo$39bobo$40bo 7$38b3o$37b4o$36bo4bo$36bobo$35b2o$36bo$36b2obo$38bo7$34b2o$35b2obo$
34bo2bo$36b3o$38bo17$40bo$39bobo$39bobo$40bo8$36bo$35bobo$35bobo$36bo
6$45bo$44b2o$44bobo7$50b3o2b3o$48b2ob6ob2o$48b2o3b2o3b2o$40bo8bo2b4o2b o$39bobo6bo10bo$39bobo8b3o2b3o$40bo9bo6bo2$52bo2bo$48b2o2bo2bo2b2o$49b 3o4b3o$50bo6bo3$36bo$35bobo$35bobo$36bo18bo2bo3bo2bo$58bo3bo$54bo3bo3b
o3bo$58bo3bo$55bo2bo3bo2bo$56b3o3b3o3$43b2o$43b2o13bo2bo3bo2bo$61bo3bo
$57bo3bo3bo3bo$61bo3bo$58bo2bo3bo2bo$59b3o3b3o$51b2o$51b2o2$40bo$39bob
o$39bobo10bo2bo$40bo11bo2bo$51bo$51bo4bo$51bo4bo$52b2ob2o$53b2o2$55bo$36bo17bobo$35bobo16bobo$35bobo17bo$36bo8$43b2o$43b2o9$40bo$39bobo$39bo bo$40bo3$48b2o$47b2obo$47b2o2b2o$49bob2o$48bo3b2o$36bo7bo4bo2bo$35bobo 5b3o4b2o$35bobo4bo3bo$36bo4b2o3b2o$42bo3bo$43b3o$44bo6$45bo$43bob2o$42b2o2bobo$46b2o$46bobo$46b2o3$44b2o$40bo3b2o$39bobo$39bobo$40bo8$36bo
$35bobo$35bobo$36bo7$37b2o$37b2o$48bo$47b3o$39b3o4b2o2bo$37b3ob2o6b3o$
37bo3b2ob2o4bo$33bo3b6o6bo$29bo9bo2bo$30bo6bobobo$38b3o$37b3o4$38b2o$38b2o2$34b2o$36bo$33bo$22bo$22bo$9bobo10bo12b2o$9bo2bo9bo13b2o$8bo2b2o 7b2o$9bob2obo4bo6b4o$10bob3o4bo6bo$19b5obob3o3$41b2o5b2o$40b4o3b4o$40b 2o7b2o$12b2o26b2o2bobo2b2o$12b2o27b3o3b3o$34bo7bo5bo$26bo7b2o$12b2o10b
o2bo5bobo$12b2o$23bo5bo$22b3o$12b2o9b2o6bo$12b2o10b3o5bo$26bo$26b3o2bo$12b2o13b3o$12b2o14bo3$12b2o15b2o$12b2o14bo2bo$29b2o2$12b2o$12b2o$25bo$23bo3b2o$12b2o11b3ob2o$12b2o13b2o2bo$29b2o2$12b2o$12b2o2$27bo$12b2o4b o2bo4bobo$12b2o4bo2bo4bobo$17b2o2b2o4bo3b2o5b2o$30b4o3b4o$15bobo4bobo 5b2o7b2o$15b3o4b3o5b2o2bobo2b2o$12b3o2bo4bo2b3o3b3o3b3o$12bo2b2o6b2o2b
o4bo5bo$13b3o8b3o$14bo10bo$19b2o2$16b3o2b3o$14b12o$14b2o8b2o$14b3o6b3o$15bobo4bobo$14bo10bo$17bo4bo$15b3o4b3o3$14b2o8b2o$14bo2b2o2b2o2bo$15b
bo$5bo!
Still drifting.
BlinkerSpawn
Posts: 1906
Joined: November 8th, 2014, 8:48 pm
Location: Getting a snacker from R-Bee's
### Re: tlife (B3/S2-i34q)
Bullet51 wrote:...And a new p8 sparker:
Code: Select all
x = 11, y = 11, rule = B3_S2-i34q
5bo$4bobo$2bo5bo$bo2bobo2bo$4b3o$o2bo3bo2bo$4b3o$bo2bobo2bo$2bo5bo$4bo
bo$5bo!
This was found by wildmyron just over a year ago.
LifeWiki: Like Wikipedia but with more spaceships. [citation needed]
A for awesome
Posts: 1901
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
Contact:
### Re: tlife (B3/S2-i34q)
wildmyron wrote:I've included the 4c/8 puffer/ship but was a bit reluctant to do so, as I'm sure the collection could become overwhelmed if every such ship was included.
My reasoning behind that is that since the 4c/8 puffer is a non-trivial stabilization of a front end with the same period, and it has no p2 or p4 parts at all, it seems as notable as the other 4c/8 ships, arguably more so. I also do feel that only one stabilization (at the minimum possible period, 8) should be included in the collection, so as not to become redundant. If you disagree, feel free to remove it.
EDIT: No results on the c/5 diagonal l120 (w12) search, at least for p5. I'm too impatient to run the l130 one yet. 3c/7 orthogonal l170 (w12) is still going.
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
---
# kleemans.ch
## Guitar songbook with LaTeX
March 7, 2022
There was a time in my life when I used LaTeX for everything.
The logo in all its beauty.
Besides the obvious - papers & scripts - I also used LaTeX for homework, game compendiums, course notes, presentations, tutoring exercises, exposés, surveys, CVs, cheat sheets and much more.
A collection of fancy graphs (extracted mainly from course notes) can be found in this blog post.
One project which I only found recently when digging through old files caught my attention - a guitar songbook in LaTeX!
I had totally forgotten about it, and when I rediscovered it, I had a good laugh and thought I should do a write-up. So here we are :-)
You can download the songbook with 8 songs here.
### Creating a song book ...with LaTeX
Most of the time, a simple guitar song to play along to consists of two parts: lyrics and chords. (Sometimes short riffs are helpful too; see the last paragraph.)
With LaTeX, it's possible to add chords directly to words - or, more precisely, to syllables - so they "stick" to them, and you don't actually have to care about aligning lines of chords with lines of text (as you would when they're just on different lines).
With the package songbook, it was actually quite simple to start: just install the package and add a header to your document for the desired edition of the songbook, and you're good to go. It supports a words-only edition, a words & chords edition, and a base layout called the Overhead Transparency edition - no idea what that was :-)
Here are the different headers:
\documentclass[12pt]{book}
\usepackage{latexsym,fancyhdr}
\usepackage[compactsong,chordbk]{songbook} %% Words & Chords edition.
%%\usepackage[wordbk]{songbook} %% Words Only edition.
%%\usepackage[overhead]{songbook} %% Overhead Transparency edition.
I checked out the package on CTAN and the last update seems to be from 2010, so I wasn't sure if it would still work.
To check, I created a new project on Overleaf, my favorite "LaaS" (LaTeX as a service, so no local installation is needed), and to my surprise, the original file from 2008 still compiles with the latest versions. If you're interested, you can download the file here.
Here's how (part of) an actual song looks:
\begin{SBOpGroup}
\Ch{C}{How} many \Ch{F}{roads} must a \Ch{C}{man} walk \Ch{Am}{down},
\Ch{C}{before} you can \Ch{F}{call} him a \Ch{G7}{man}?
\Ch{C}{How} many \Ch{F}{seas} must a \Ch{C}{white} dove \Ch{Am}{sail},
\Ch{C}{before} she \Ch{F}{sleeps} in the \Ch{G7}{sand}?
And \Ch{C}{how} many \Ch{F}{times} must the \Ch{C}{can}onballs \Ch{Am}{fly,}
\Ch{C}{before} they're \Ch{F}{for}ever \Ch{G}{banned}?
\newline
\end{SBOpGroup}
Which gets then rendered to this:
I remember having some custom fonts because I didn't like the defaults.
The output is clean as always, quite nice-looking and customizable, and the chords stick to the correct part of the lyrics.
### Aftermath
There were multiple reasons why the approach did not stick.
The most obvious was that laying out the songs was quite time-consuming, and I kept piling up drafts and copy-pasted documents with songs I was playing without ever transforming them into LaTeX.
Also, every time a new song was to be added, I would have to do the typesetting, print a single page, and then add that page to the collection of pages. I would never be able to print a casebound booklet or something similar, and that also kept me from using the system.
Another reason was that adding riffs or passages with single notes was not supported (or at least I couldn't find out how). So sometimes a song would have a nice riff which couldn't be typeset in pure LaTeX, and resorting to some kind of picture would invalidate the reason to use LaTeX in the first place.
I now like to keep my songs in a big notebook, which is perfect for when I'm occasionally playing on my guitar.
Thanks for reading!
---
What I Wore: a \$10 tunic TWO ways
*This is my entry for LizzieBtv’s Get Your Vlog On. If you are a vlogger (or want to be) just jump in and start! It is so much fun to connect with people this way. Here is where you can find the tunic. I’m wearing a medium even though I normally wear small on top…
---
# Proof confirmation: If $\gcd(u,v)=1$ and $uv$ is a square, then $u$ and $v$ are squares.
This is a problem from my workbook (not homework), and I can tell that it is true simply upon observation (they share no factors other than one, and their product has only even exponents in its unique prime factorization, hence so must each of them, which makes $u,v$ squares).
I am not completely practiced at proofs, so I just want to check that this is rigorous, and if it isn't, please advise me as to how I can make it rigorous. Enough of my ramblings:
If $\gcd(u,v)=1$ and $uv$ is a square, then $u$ and $v$ are squares.
Proof
If $\gcd(u,v) = 1$ then $u=p_1^{a_1}p_2^{a_2} \dots p_k^{a_k}$, $v=q_1^{b_1}q_2^{b_2} \dots q_l^{b_l}$, where the $p_i$ and $q_j$ are distinct primes.
$uv =p_1^{a_1}q_1^{b_1}p_2^{a_2}q_2^{b_2} \dots p_k^{a_k}q_l^{b_l}$, but $uv$ is a square, hence $u = p_1^{2c_1}p_2^{2c_2} \dots p_k^{2c_k} \;\; , v=q_1^{2d_1}q_2^{2d_2} \dots q_l^{2d_l}$ and thus $u,v$ are clearly both squares.
Thank you for your time, and please try to understand the pain I endured coding those prime factors in $\LaTeX$ on a tablet pc.
• Your proof is basically correct. I'd stress the fact that since $\;uv\;$ is a square, then all of $\;a_i,b_i\;$ must be even... – DonAntonio May 3 '14 at 12:07
• But $\gcd(-4,-9)=1$ and $(-4)(-9)$ is a square and yet neither $-4$ nor $-9$ are squares. – Hagen von Eitzen May 3 '14 at 12:22
• @Hagen They are squares in $\Bbb Z[i]$ ;-) – ajotatxe May 3 '14 at 12:28
• @HagenvonEitzen, I'm guessing the question was intended in the naturals, not in the integers... – DonAntonio May 3 '14 at 13:09
• @HagenvonEitzen Yes, the naturals, my apologies. – Display Name May 3 '14 at 23:25
Your proof is correct. But perhaps it can be shortened/clarified using a function that often comes in handy in number theory. For each prime $p$, let $\nu_p(n)$ be the greatest integer $\alpha$ such that $p^\alpha$ divides $n$.
So, we only have to prove that $\nu_p(u)$ is even for each prime $p$ (since if $uv$ and $u$ are perfect squares, clearly so is $v$). So let $p$ be any prime.
$$\nu_p(uv)=\nu_p(u)+\nu_p(v)$$
If $\nu_p(u)=0$, it is even. If not, then $\nu_p(v)=0$ since $\gcd(u,v)=1$; hence $\nu_p(u)=\nu_p(uv)$, which is even. q.e.d.
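As a quick empirical sanity check (no substitute for the proof, and restricted to the naturals per the comments above), the claim is easy to test by brute force; a short Python sketch:

from math import gcd, isqrt

def is_square(n: int) -> bool:
    r = isqrt(n)
    return r * r == n

# Check every coprime pair (u, v) below 200 whose product is a square.
for u in range(1, 200):
    for v in range(1, 200):
        if gcd(u, v) == 1 and is_square(u * v):
            assert is_square(u) and is_square(v), (u, v)
print("claim holds for all coprime pairs below 200")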
---
• Volume 87, Issue 9
September 1978, pages 161-263
• An integer arithmetic method to compute generalized matrix inverse and solve linear equations exactly
An algorithm that uses integer arithmetic is suggested. It transforms an $m \times n$ matrix to a diagonal form (of the structure of Smith Normal Form). It then computes a reflexive generalized inverse of the matrix exactly and hence solves a system of linear equations error-free.
• On a class of nonlinear integral equations
In this paper we consider abstract equations of the type $K_v v + v = w_0$ in a closed convex subset of a separable Hilbert space $H$. For each $v$ in the closed convex subset, $K_v : H \to H$ is a bounded linear map. As an application of our abstract result we obtain an existence result for nonlinear integral equations of the type $v(s) + v(s)\int_0^1 k(s,t)\,v(t)\,dt = w_0(s)$ in the space $L^2[0,1]$.
• Large and high subgroup
It is proved that a high subgroup of a large subgroup is closed in a high subgroup of the group itself. Also a necessary and sufficient condition for a subgroup to be a pure absolute summand is obtained.
• A cylindrically symmetric universe in general relativity
A cylindrically symmetric universe with two degrees of freedom which is of Petrov Type I degenerate, has been derived. Various physical and geometrical properties of the model have also been discussed.
• Preservation of stability and asymptotic behaviour of perturbed integrodifferential equations in a Banach space
In this paper, stability and asymptotic behaviour of solutions of the integrodifferential system $$x'(t) = A(t)\,x(t) + f\left(t, x(t), \int_{t_0}^t k(t,s,x(s))\,ds\right) + g(t, x(t))$$ in a Banach space are related to those of the integrodifferential system $$y'(t) = A(t)\,y(t) + f\left(t, y(t), \int_{t_0}^t k(t,s,y(s))\,ds\right)$$ in a Banach space. The results obtained constitute a generalization of similar results for ordinary differential equations in a Banach space, which motivate the approach and proofs.
• On some fundamental integrodifferential inequalities and their discrete analogues
Some fundamental integrodifferential inequalities and their discrete analogues, which can be used as handy tools in the analysis of a class of integrodifferential and sum-difference equations, are discussed.
• Heat transfer for laminar flow through parallel porous disks of different permeability
The problem of temperature distribution and heat transfer for laminar flow through two parallel porous disks of different permeability, has been investigated when the flow is entirely due to injection and/or suction at the two disks. Viscous dissipation terms have been included in the energy equation and the injection and/or suction velocities at the two disks are assumed to be small. The boundaries are kept at constant temperatures. The variation of temperature and Nusselt numbers at the two disks has been graphically depicted for various values of the injection and suction velocities.
• Magnetohydrodynamic flow of a rarefied gas near an accelerated porous plate
An investigation is made of the flow of an electrically conducting rarefied gas due to the time-varying motion of an infinite porous plate, the gas being permeated by a transverse magnetic field. The suction is taken to be a constant and the magnetic lines of force are taken to be fixed relative to the fluid. The effects of magnetic field, rarefaction parameter, suction parameter are shown by means of some tables. The expressions of the skin friction for the two particular cases have also been obtained.
• Steady flow and heat transfer in a visco-elastic fluid between two coaxial rotating disks
Flow and heat transfer of an elastico-viscous liquid between two parallel uniformly porous disks rotating about a common axis, are studied by perturbation technique. It is seen that for such a fluid, drag increases with suction and decreases with injection, while the rate of heat transfer increases in the presence of either suction or injection.
• Convective flow and heat transfer of a viscous heat generating fluid in the presence of a moving, infinite, vertical, porous plate
The analysis of convective flow and heat transfer of a viscous heat generating fluid past a uniformly moving, infinite, vertical, porous plate has been made systematically with a view to throw adequate light on the effects of the plate-motion and the presence of heat generation/absorption on the flow and heat transfer characteristics. The equations of conservation of momentum and energy which govern the flow and heat transfer of the said problem have been solved numerically by the method of Runge-Kutta-Gill. The numerical results thus obtained for the flow and heat transfer characteristics have revealed many an interesting behaviour, of the skin friction and the rate of heat transfer coefficient at the plate.
• Point force in a stratified fluid in a cylinder under a magnetic field
The flow induced by a body moving in an inviscid incompressible density stratified fluid in an infinite circular cylinder under the influence of a uniform axial magnetic field is studied using the method of replacing the body by an isolated point force. This method was adopted by Childress and others in discussing the body effects in a viscous fluid. The solution is obtained using the Fourier transformation and the Lighthill’s radiation condition. The cases of weak and strong magnetic fields are discussed.
• Aerodynamic noise from wave-turbulence interaction
In order to estimate the acoustic energy scattered when a unit volume of free turbulence, such as in free jets, interacts with a plane steady sound wave, theoretical expressions are derived for two simple models of turbulence: eddy model and isotropic model. The effect of convection by mean motion of the energy-bearing eddies on the incident sound wave and on the sound generated from wave-turbulence interaction is taken into account. Finally, by means of a representative calculation, the directionality pattern and Mach number dependence of the noise so generated is discussed.
• # Proceedings – Mathematical Sciences
Volume 130, 2020
All articles
Continuous Article Publishing mode
• # Editorial Note on Continuous Article Publication
Posted on July 25, 2019
---
# 17.07.2019: Outstanding Paper Runner Up at HPCS 2019.
Daniel Maier was awarded the Outstanding Paper Runner Up at the 2019 International Conference on High Performance Computing & Simulation (HPCS 2019), held in Dublin, Ireland from July 15-19. The paper, titled "Approximating Memory-bound Applications on Mobile GPUs", is co-authored by Nadjib Mammeri, Biagio Cosenza and Ben Juurlink. In their work, the authors investigate approximating memory-bound applications on mobile GPUs depending on the availability of fast local memory. Under the theme of "HPC and Modeling & Simulation for the 21st Century," HPCS 2019 focused on a wide range of state-of-the-art and emerging topics pertaining to high-performance and large-scale computing systems at both the client and backend levels.
## 22.03.2019: Farzaneh Salehiminapour received the best student award at PARS 2019.
AES member Farzaneh Salehiminapour received the best student award (Nachwuchspreis) at the 28th Workshop on Parallel Algorithms, Parallel Computer Structures and Parallel System Software (PARS 2019). The organizers honored Mrs. Salehiminapour for the submission and presentation of her paper "Reducing DRAM Accesses through Pseudo-Channel Mode", co-authored by Jan Lucas, Matthias Goebel and Ben Juurlink. In this paper, the authors present and evaluate a technique that uses the pseudo-channel mode feature of GDDR5X to merge memory requests and thus reduce the number of memory accesses. The PARS workshop is organized by the special interest group on parallel algorithms, parallel computer structures and parallel system software within the German Informatics Society (GI/ITG). Its 28th edition was held at TU Berlin in Berlin, Germany.
## 31.05.2018: Best Presentation Award at SCOPES '18.
AES member Angela Pohl received the Best Presentation Award at the 21st International Workshop on Software and Compilers for Embedded Systems (SCOPES '18). She presented the full paper "Control Flow Vectorization for ARM NEON", which was co-authored by Nicolás Morini, Biagio Cosenza, and Ben Juurlink. In this work, the authors discuss the capabilities of compilers' auto-vectorization passes and present strategies to overcome the lack of masked instructions on ARM NEON platforms, which are critical for vectorizing loops with control flow. The work was selected for the award by attendee vote. The 21st edition of SCOPES was held in St. Goar, Germany, and showcased more than 20 presentations from the field of embedded systems.
## 13.12.2017: AES receives free pass for HiPEAC 2018 in Manchester.
AES member Matthias Göbel has received a free pass for the HiPEAC conference 2018 in Manchester, UK. The HiPEAC network of excellence has thus honored the AES group under the lead of Prof. Ben Juurlink for helping in advertising the HiPEAC Jobs Portal. Matthias will use this opportunity to present his latest research results at the co-located Sixth International Workshop on Power-Efficient GPU and Many-core Computing (PEGPUM).
The AES group is an active member of the HiPEAC network that provides a platform for cross-disciplinary research collaboration, promotes the transformation of research results into products and services, and is an incubator for the next generation of world-class computer scientists.
## 07.04.2017: Best paper award at the 13th International Symposium on Applied Reconfigurable Computing (ARC 2017) in Delft (NL) for the paper "A Quantitative Analysis of the Memory Architecture of FPGA-SoCs".
Matthias Göbel, Ahmed Elhossini and Ben Juurlink received the Best Paper Award at the 13th International Symposium on Applied Reconfigurable Computing (ARC 2017) in Delft, NL for their paper "A Quantitative Analysis of the Memory Architecture of FPGA-SoCs". The work is co-authored by Chi Ching Chi and Mauricio Alvarez-Mesa of Spin Digital, a spin-off of AES.
In this paper, we analyze the various memory and communication interconnects found in FPGA-SoCs, particularly the Zynq-7020 and Zynq-7045 from Xilinx and the Cyclone V SE SoC from Intel. Issues such as different access patterns, cache coherence and full-duplex communication are analyzed, for both generic accesses as well as for a real workload from the field of video coding. Furthermore, the paper shows that by carefully choosing the memory interconnect networks as well as the software interface, high-speed memory access can be achieved for various scenarios.
## December 23, 2015: HiPEAC Paper Award for the paper "High Performance Memory Accesses on FPGA-SoCs: A Quantitative Analysis" (2015).
Matthias Göbel, Chi Ching Chi, Mauricio Alvarez-Mesa and Ben Juurlink received a HiPEAC Paper Award for the paper "High Performance Memory Accesses on FPGA-SoCs: A Quantitative Analysis" which was published at the 2015 IEEE 23rd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM 2015).
The authors analyzed the memory bandwidth of an FPGA-SoC, namely Xilinx's Zynq-7000. Their main focus was on two-dimensional memory accesses, which can often be found in video coding and image processing applications. They implemented various hardware and software components that perform synthetic accesses with a given width and height. Scenarios like combining multiple ports or using cache coherency were evaluated. Furthermore, a memory trace of an HEVC motion compensation unit was used to simulate a real workload. In contrast to other papers, the results showed that the full bandwidth of the memory controller and the DDR chips can be used. Therefore, the FPGA and the memory ports themselves cannot be considered bottlenecks. In addition, the results proved that Full-HD HEVC decoding in real-time on a Zynq-7000 is possible, while 4K decoding is too ambitious without caching or memory compression techniques.
## December 16, 2015: My PhD student Philipp Habermann receives Best Student Paper Award at IEEE ISM 2015.
My PhD student Philipp Habermann presented the paper "Optimizing HEVC CABAC Decoding with a Context Model Cache and Application-specific Prefetching" at the IEEE International Symposium on Multimedia (ISM 2015) in Miami, FL.
He received the Best Student Paper Award for his work, which was co-authored by Chi Ching Chi, Mauricio Alvarez-Mesa and Ben Juurlink.
The authors provide a design space exploration of different cache configurations for HEVC CABAC hardware decoding. It is demonstrated that the decoder throughput can be significantly increased when a cache replaces a bigger context model memory in the critical data path. Furthermore, it is shown that the cache miss rate can be effectively reduced with an application-specific prefetching algorithm and the corresponding optimized memory layout, up to the point where it is not noticeable anymore.
## 2014: HiPEAC Technology Transfer Award (TTA) for transferring some of the proprietary video coding technology to a Greek SME.
## 11.09.13: Best paper award at the 3rd IEEE 2013 ICCE-Berlin.
Mauricio Alvarez-Mesa, Chi Ching Chi and Ben Juurlink of the AES group of TU Berlin have won a best paper award at the Third IEEE International Conference on Consumer Electronics-Berlin (ICCE-Berlin) for the paper "HEVC Performance and Complexity for 4K Video". The paper was a joint effort between the AES group of TU Berlin and Fraunhofer HHI.
## 21.01.2013: “Best Poster Award” at the HiPEAC 2013.
“Best Poster Award” for our joint poster “Nexus++: A Hardware Task Manager for the StarSs Programming Model” at the 8th International Conference on High-Performance and Embedded Architectures and Compilers (HiPEAC 2013), January 2013, Berlin, Germany.
Abstract:
Recently, several programming models have been proposed that try to ease parallel programming. One of these programming models is StarSs. In StarSs, the programmer has to identify pieces of code that can be executed as tasks, as well as their inputs and outputs. Thereafter, the runtime system (RTS) determines the dependencies between tasks and schedules ready tasks onto worker cores. Previous work has shown, however, that the StarSs RTS may constitute a bottleneck that limits the scalability of the system and proposed a hardware task manager called Nexus to eliminate this bottleneck. Nexus has several limitations, however. For example, the number of inputs and outputs of each task is limited to a fixed constant and Nexus does not support double buffering. Here we present Nexus++ that addresses these as well as other limitations. Experimental results show that double buffering achieves a speedup of $54\times$, and that Nexus++ significantly enhances the scalability of applications parallelized using StarSs.
## November 2002: Best paper award at the IASTED.
Awarded best paper in the area of processor architecture at the IASTED International Conference on Parallel and Distributed Computing and Systems (2002).
D. Cheresiz, B.H.H. Juurlink, S. Vassiliadis, H.A.G. Wijshoff, Architectural support for 3D graphics in the complex streamed instruction set (PDF, 55.2 KB) (November 2002), 14th International Conference on Parallel and Distributed Computing Systems (PDCS 2002), 4-6 November 2002, Cambridge, USA, Best paper award in the area of processor architecture.
# 2. Spin-offs
## Spin Digital
Spin Digital is a spin-off of the Technische Universität Berlin. We are specialists in video codecs; in particular, we have developed a highly efficient software implementation of the new HEVC/H.265 video coding standard, capable of 4K and 8K decoding and encoding on standard computing platforms. Our main focus is on high-performance and high-quality video: our software has been extensively optimized for ultra-high-quality video processing on multicore computing platforms. We build on the extensive research done by the AES group of TU Berlin on how to map video codecs to parallel computer architectures. Based on that research, Spin Digital has been able to produce one of the fastest video codecs currently available.
- http://www.spin-digital.com
## Head
+49.30.314-73130/73131
Room E-N 642
## Secretariat
Özlem Ocak
Sekr. EN 12
Room E-N 645
Tel. +49.30.314-73130
## Postal Address
Technische Universität Berlin
Architektur eingebetteter Systeme
Sekr. EN 12
Einsteinufer 17, 6. OG
10587 Berlin
|
{}
|
# If I align my bams using bwa-mem, do I still need to do local indel realignment?
I have ~2,000 whole-genome-sequencing bams that I need to move through an alignment/deduping/recalibration pipeline. I'll be using bwa-mem for the initial alignment, and I'm wondering if you would still recommend carrying out the local realignment step (downstream) for these bams, given bwa-mem does some local/gapped alignment on its own. This is an important question for our group because the local realignment (using RealignerTargetCreator & IndelRealigner) for these bams will take a significant amount of time / computational resources. If I want to call short indels using UnifiedGenotyper (rather than HC, due to number of samples), does bwa-mem for primary alignment reduce the need for local realignment downstream? Any feedback/thoughts would be much appreciated - thanks!
|
{}
|
## Automatic estimation of lexical concreteness in 77 languages
Thompson, B., & Lupyan, G. (2018). Automatic estimation of lexical concreteness in 77 languages. In C. Kalish, M. Rau, J. Zhu, & T. T. Rogers (Eds.), Proceedings of the 40th Annual Conference of the Cognitive Science Society (CogSci 2018) (pp. 1122-1127). Austin, TX: Cognitive Science Society.
We estimate lexical Concreteness for millions of words across 77 languages. Using a simple regression framework, we combine vector-based models of lexical semantics with experimental norms of Concreteness in English and Dutch. By applying techniques to align vector-based semantics across distinct languages, we compute and release Concreteness estimates at scale in numerous languages for which experimental norms are not currently available. This paper lays out the technique and its efficacy. Although this is a difficult dataset to evaluate immediately, Concreteness estimates computed from English correlate with Dutch experimental norms at $\rho$ = .75 in the vocabulary at large, increasing to $\rho$ = .8 among Nouns. Our predictions also recapitulate attested relationships with word frequency. The approach we describe can be readily applied to numerous lexical measures beyond Concreteness.
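The regression framework described above can be sketched in a few lines. Everything in this snippet is an illustrative stand-in: the variable names, the random "embeddings", and the ridge regressor are assumptions for demonstration, not the paper's actual pipeline or data.

```python
# Sketch: fit concreteness norms from word vectors, then transfer the model
# to vectors from another language that live in the same (aligned) space.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
dim = 300                                      # typical word-vector dimensionality
en_vectors = rng.normal(size=(5000, dim))      # stand-in for English embeddings
en_norms = rng.uniform(1, 5, size=5000)        # stand-in for human concreteness norms

model = Ridge(alpha=1.0).fit(en_vectors, en_norms)

# Vectors from another language, assumed already mapped into the same space
# (e.g., via a learned linear alignment); here just random stand-ins.
xx_vectors = rng.normal(size=(2000, dim))
xx_concreteness = model.predict(xx_vectors)    # estimated norms at scale
```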
Publication type: Proceedings paper
Publication date: 2018
|
{}
|
# Interference contributions to gluon initiated heavy Higgs production in the two-Higgs-doublet model
Greiner, Nicolas; Liebler, Stefan; Weiglein, Georg (2016). Interference contributions to gluon initiated heavy Higgs production in the two-Higgs-doublet model. European Physical Journal C - Particles and Fields, 76(3):118.
## Abstract
We discuss the production of a heavy neutral Higgs boson of a $\mathcal {CP}$-conserving two-Higgs-doublet model in gluon fusion and its decay into a four-fermion final state, $gg (\rightarrow VV) \rightarrow e^+e^-\mu ^+\mu ^-/e^+e^-\nu _l\bar{\nu }_l$. We investigate the interference contributions to invariant mass distributions of the four-fermion final state and other relevant kinematical observables. The relative importance of the different contributions is quantified for the process in the on-shell approximation, $gg \rightarrow ZZ$. We show that interferences of the heavy Higgs with the light Higgs boson and background contributions are essential for a correct description of the differential cross section. Even though they contribute below ${\mathcal {O}}(10~\%)$ to those heavy Higgs signal cross sections, to which the experiments at the Large Hadron Collider were sensitive during its first run, we find that they are sizable in certain regions of the parameter space that are relevant for future heavy Higgs boson searches. In fact, the interference contributions can significantly enhance the experimental sensitivity to the heavy Higgs boson.
Item Type: Journal Article, refereed, original work
Community: 07 Faculty of Science > Physics Institute
Dewey Decimal Classification: 530 Physics
Scopus Subject Areas: Physical Sciences > Engineering (miscellaneous); Physical Sciences > Physics and Astronomy (miscellaneous)
Language: English
Date: 2016
Deposited On: 30 Dec 2016 07:17
Last Modified: 26 Jan 2022 11:02
Publisher: Springer
ISSN: 1434-6044
OA Status: Gold
Free access at: Publisher DOI. An embargo period may apply. https://doi.org/10.1140/epjc/s10052-016-3965-4
Project Information: Funder: SNSF; Grant ID: PZ00P2_154829; Project Title: Automated calculations of electroweak corrections at the LHC
|
{}
|
# Estimating upper bound of uniform distribution from max of sample
This is actually part of a problem from All of Statistics:
$X_1, X_2, \ldots, X_n \sim \text{Uniform}(0, \Theta)$. And $Y = \text{Max}\{X_1,\ldots, X_n\}$.
If you're given that $Y > c$, can you estimate the probability of $\Theta>1/2$?
Of course if $c\ge1/2$ the probability is 100%.
Any hint or direction is appreciated.
BTW, is there anywhere I can find answers to the book All of Statistics? It's really a good book except there's no solution to the problems, even part of them.
• If $\Theta$ is random, and $Y>c$ for one sample, then $P(\Theta>1/2)$ is not necessarily 1 if $c>1/2$. Could you state the problem in its entirety? Now it is not clear what does it mean that $Y>c$. Does it mean that $P(Y>c)=1$ or that for given sample $Y>c$. – mpiktas Oct 21 '11 at 3:15
First find the distribution of $Y$ for a given $\Theta$. Then you need to define a prior for $\Theta$. Then calculate the joint distribution of $Y$ and $\Theta$. Then integrate to get Pr($Y > c$ and $\Theta>1/2$) and Pr($Y>c$), and divide. Note that you are calculating the probability rather than estimating it.
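To make the recipe concrete, here is a numerical sketch; the sample size, the threshold, and the Uniform(0, 1) prior on $\Theta$ are illustrative assumptions, since the problem does not fix a prior:

```python
# Numerical version of: find P(Theta > 1/2 | Y > c) by integrating the joint
# density and dividing by P(Y > c).
from scipy import integrate

n, c = 5, 0.3   # number of observations and the observed bound Y > c

def p_y_gt_c(theta):
    """P(Y > c | Theta = theta) for Y = max of n Uniform(0, theta) draws."""
    if theta <= c:
        return 0.0               # the max can never exceed c when theta <= c
    return 1 - (c / theta) ** n  # 1 - P(all n draws fall below c)

prior = lambda theta: 1.0        # assumed Uniform(0, 1) prior density

num, _ = integrate.quad(lambda t: p_y_gt_c(t) * prior(t), 0.5, 1.0)  # joint, theta > 1/2
den, _ = integrate.quad(lambda t: p_y_gt_c(t) * prior(t), 0.0, 1.0)  # P(Y > c)
print(num / den)                 # P(Theta > 1/2 | Y > c)
```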
|
{}
|
## College Physics (4th Edition)
(a) The pole vaulter lands with a speed of $10.8~m/s$ (b) The average force exerted on the pole vaulter during that time interval is $1296~N$
(a) We can find the speed after falling 6.0 meters:
$v_f^2 = v_0^2+2ay$
$v_f = \sqrt{v_0^2+2ay} = \sqrt{0+(2)(9.80~m/s^2)(6.0~m)} = 10.8~m/s$
The pole vaulter lands with a speed of $10.8~m/s$.
(b) The impulse exerted on the pole vaulter is equal to the change in momentum. We can use the magnitude of the change in momentum to find the average force on the pole vaulter due to the padding:
$F~t = \Delta p = m~\Delta v$
$F = \frac{m~\Delta v}{t} = \frac{(60.0~kg)(10.8~m/s)}{0.50~s} = 1296~N$
The average force exerted on the pole vaulter during that time interval is $1296~N$.
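A quick numeric check of both parts (note that carrying the unrounded speed through part (b) gives about 1301 N; the 1296 N figure uses the rounded 10.8 m/s):

```python
# Verify the two results above.
m, g, h, t = 60.0, 9.80, 6.0, 0.50
v = (2 * g * h) ** 0.5      # part (a): speed after a 6.0 m free fall
F = m * 10.8 / t            # part (b), using the rounded speed as in the text
print(round(v, 1), F)       # 10.8 m/s, 1296.0 N
```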
|
{}
|
## amy0799 one year ago find derivative of the algebraic function
1. amy0799
$f(x)=x ^{5}(1-\frac{ 6 }{ x+5 })$
2. welshfella
first expand the bracket - find derivative of x^5 then use the quotient rule to differentiate the fraction
3. amy0799
$\frac{ x ^{5} (x-1)}{ x+5 }$ can I do this?
4. welshfella
or if you prefer you can use the product / chain rules
5. welshfella
how did you get that?
i think he took the lcm
7. amy0799
$1-\left( \frac{ 6 }{ x+5 } \right) = \frac{ x-1 }{ x+5 }$
correct
9. amy0799
so then $\frac{ x ^{6}-x ^{5} }{ x+5 }$ I would just use the quotient rule?
10. welshfella
yes
11. amy0799
the answer would be $f'(x)=\frac{ 5x ^{6}+26x ^{5}-25x ^{4} }{ (x+5)^{2}}$?
correct
:)
14. welshfella
yes i get that too
15. amy0799
cool thanks!
16. welshfella
I'm slowing up in my old age lol!
@amy0799 thats really nice
18. welshfella
yes
19. welshfella
looking at it again I would have used the product rule because the algebra is easier...
@welshfella its okay:)
21. welshfella
lol - it has to be..
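A symbolic check of the thread's answer (an editorial addition, not part of the original conversation):

```python
import sympy as sp

x = sp.symbols('x')
f = x**5 * (1 - 6 / (x + 5))
fprime = sp.diff(f, x)
target = (5*x**6 + 26*x**5 - 25*x**4) / (x + 5)**2
print(sp.simplify(fprime - target))   # 0, confirming f'(x) above
```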
|
{}
|
# Solution for A river 3 m deep and 40 m wide is flowing at the rate of 2 km per hour. How much water will fall into the sea in a minute? - CBSE Class 9 - Mathematics
Concept: Volume of a Cuboid
#### Question
A river 3 m deep and 40 m wide is flowing at the rate of 2 km per hour. How much water will fall into the sea in a minute?
#### Solution
Rate of water flow = 2 km per hour $= \frac{2000}{60}~\text{m/min} = \frac{100}{3}~\text{m/min}$
Depth (h) of river = 3 m
Width (b) of river = 40 m
Volume of water flowed in 1 min $= \left(\frac{100}{3} \times 40 \times 3\right)~\text{m}^3 = 4000~\text{m}^3$
Therefore, in 1 minute, 4000 m³ of water will fall into the sea.
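The same arithmetic in a couple of lines, for anyone who wants to verify it:

```python
# Arithmetic check of the volume computed above.
speed_m_per_min = 2000 / 60            # 2 km/h expressed in metres per minute
volume = speed_m_per_min * 40 * 3      # distance covered in 1 min x width x depth
print(volume)                          # 4000.0 cubic metres
```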
#### APPEARS IN
NCERT Mathematics Textbook for Class 9 (with solutions)
Chapter 13: Surface Area and Volumes
Q: 9 | Page no. 228
R.D. Sharma Mathematics for Class 9 by R D Sharma (2018-19 Session) (with solutions)
Chapter 18: Surface Areas and Volume of a Cuboid and Cube
Q: 8
|
{}
|
apop_update.c (11.7 KB)

```c
/** \file The \ref apop_update function. */
/* Copyright (c) 2006--2009, 2014 by Ben Klemens. Licensed under the GPLv2; see COPYING. */

#include "apop_internal.h"
#include

/* This file in four parts:
   --an apop_model named product, purpose-built for apop_update to send to apop_model_metropolis
   --apop_mcmc settings and their defaults.
   --apop_update and its equipment, which has three cases:
     --conjugates, in which case see the functions
     --call Metropolis
*/

/* This will be used by apop_update to send to apop_mcmc below. To set it up, add a
   more pointer to an array of two models, the prior and likelihood. The total
   likelihood of a data point is (likelihood these parameters are drawn from prior)
   * (likelihood of these parameters and the data set using the likelihood fn) */
static long double product_ll(apop_data *d, apop_model *m){
    apop_model **pl = m->more;
    gsl_vector *v = apop_data_pack(m->parameters);
    apop_data_unpack(v, pl[1]->parameters);
    gsl_vector_free(v);
    return apop_log_likelihood(m->parameters, pl[0]) + apop_log_likelihood(d, pl[1]);
}

static long double product_constraint(apop_data *data, apop_model *m){
    apop_model **pl = m->more;
    gsl_vector *v = apop_data_pack(m->parameters);
    apop_data_unpack(v, pl[1]->parameters);
    gsl_vector_free(v);
    return pl[1]->constraint(data, pl[1]);
}

apop_model *product = &(apop_model){"product of two models",
        .log_likelihood=product_ll, .constraint=product_constraint};

/////////// the conjugate table

static apop_model *betabinom(apop_data *data, apop_model *prior, apop_model *likelihood){
    apop_model *outp = apop_model_copy(prior);
    if (!data && likelihood->parameters){
        double n = likelihood->parameters->vector->data[0];
        double p = likelihood->parameters->vector->data[1];
        *gsl_vector_ptr(outp->parameters->vector, 0) += n*p;
        *gsl_vector_ptr(outp->parameters->vector, 1) += n*(1-p);
    } else {
        gsl_vector *hits = Apop_cv(data, 1);
        gsl_vector *misses = Apop_cv(data, 0);
        *gsl_vector_ptr(outp->parameters->vector, 0) += apop_sum(hits);
        *gsl_vector_ptr(outp->parameters->vector, 1) += apop_sum(misses);
    }
    return outp;
}

double countup(double in){return in!=0;}

static apop_model *betabernie(apop_data *data, apop_model *prior, apop_model *likelihood){
    apop_model *outp = apop_model_copy(prior);
    Get_vmsizes(data); //tsize
    double sum = apop_map_sum(data, .fn_d=countup, .part='a');
    *gsl_vector_ptr(outp->parameters->vector, 0) += sum;
    *gsl_vector_ptr(outp->parameters->vector, 1) += tsize - sum;
    return outp;
}

static apop_model *gammaexpo(apop_data *data, apop_model *prior, apop_model *likelihood){
    apop_model *outp = apop_model_copy(prior);
    Get_vmsizes(data); //maxsize
    *gsl_vector_ptr(outp->parameters->vector, 0) += maxsize;
    apop_data_set(outp->parameters, 1, .val=1./
            (1./apop_data_get(outp->parameters, 1)
             + (data->matrix ? apop_matrix_sum(data->matrix) : 0)
             + (data->vector ? apop_sum(data->vector) : 0)));
    return outp;
}

static apop_model *gammapoisson(apop_data *data, apop_model *prior, apop_model *likelihood){
    /* Posterior alpha = alpha_0 + sum x; posterior beta = beta_0/(beta_0*n + 1) */
    apop_model *outp = apop_model_copy(prior);
    Get_vmsizes(data); //vsize, msize1, maxsize
    *gsl_vector_ptr(outp->parameters->vector, 0) +=
          (vsize  ? apop_sum(data->vector)        : 0)
        + (msize1 ? apop_matrix_sum(data->matrix) : 0);
    double *beta = gsl_vector_ptr(outp->parameters->vector, 1);
    *beta = *beta/(*beta * maxsize + 1);
    return outp;
}

static apop_model *normnorm(apop_data *data, apop_model *prior, apop_model *likelihood){
    /* output \f$(\mu, \sigma) = (\frac{\mu_0}{\sigma_0^2} + \frac{\sum_{i=1}^n x_i}{\sigma^2})
       /(\frac{1}{\sigma_0^2} + \frac{n}{\sigma^2}),
       (\frac{1}{\sigma_0^2} + \frac{n}{\sigma^2})^{-1}\f$
       That is, the output is weighted by the number of data points for the likelihood.
       If you give me a parametrized normal, with no data, then I'll take the weight to be \f$n=1\f$. */
    double mu_like, var_like;
    long int n;
    apop_model *outp = apop_model_copy(prior);
    apop_prep(data, outp);
    long double mu_pri = prior->parameters->vector->data[0];
    long double var_pri = gsl_pow_2(prior->parameters->vector->data[1]);
    if (!data && likelihood->parameters){
        mu_like  = likelihood->parameters->vector->data[0];
        var_like = gsl_pow_2(likelihood->parameters->vector->data[1]);
        n = 1;
    } else {
        n = data->matrix->size1 * data->matrix->size2;
        apop_matrix_mean_and_var(data->matrix, &mu_like, &var_like);
    }
    gsl_vector_set(outp->parameters->vector, 0,
            (mu_pri/var_pri + n*mu_like/var_like)/(1/var_pri + n/var_like));
    gsl_vector_set(outp->parameters->vector, 1, pow((1/var_pri + n/var_like), -.5));
    return outp;
}

/** Take in a prior and likelihood distribution, and output a posterior distribution.

\li This function first checks a table of conjugate distributions for the pair you
sent in. If the models are listed on the table, then the function returns a
corresponding closed-form model with updated parameters.

\li If the parameters aren't in the table of conjugates, and the prior distribution
has a \c p or \c log_likelihood element, then use \ref apop_model_metropolis to
generate the posterior. If you expect MCMC to run, you may add an \ref
apop_mcmc_settings group to your prior to control the details of the search. See also
the \ref apop_model_metropolis documentation.

\li If the prior does not have a \c p or \c log_likelihood but does have a \c draw
element, then make draws from the prior and weight them by the \c p given by the
likelihood distribution. This is not a rejection sampling method, so the burnin is
ignored.

\param data The input data, that will be used by the likelihood function (default = \c NULL.)
\param prior The prior \ref apop_model. If the system needs to estimate the posterior
via MCMC, this needs to have a \c log_likelihood or \c p method. (No default, must not be \c NULL.)
\param likelihood The likelihood \ref apop_model. If the system needs to estimate the
posterior via MCMC, this needs to have a \c log_likelihood or \c p method (ll
preferred). (No default, must not be \c NULL.)
\param rng A \c gsl_rng, already initialized (e.g., via \ref apop_rng_alloc).
(default: an RNG from \ref apop_rng_get_thread)

\return an \ref apop_model struct representing the posterior, with updated parameters.

\li In all cases, the output is a \ref apop_model that can be used as the input to
this function, so you can chain Bayesian updating procedures.

\li Here are the conjugate distributions currently defined:

    Prior        Likelihood        Notes
    -----------  ----------------  -----------------------------------------------
    apop_beta    apop_binomial
    apop_beta    apop_bernoulli
    apop_gamma   apop_exponential  Gamma likelihood represents the distribution of
                                   \f$\lambda^{-1}\f$, not plain \f$\lambda\f$
    apop_gamma   apop_poisson      Uses sum and size of the data
    apop_normal  apop_normal       Assumes prior with fixed \f$\sigma\f$; updates
                                   distribution for \f$\mu\f$

Here is a test function that compares the output via conjugate table and via
Metropolis-Hastings sampling:
\include test_updating.c

\li The conjugate table is stored using a vtable; see \ref vtables for details. If
you are writing a new vtable entry, the typedef new functions must conform to, and
the hash used for lookups, are:

\code
typedef apop_model *(*apop_update_type)(apop_data *, apop_model *, apop_model *);
#define apop_update_hash(m1, m2) ((size_t)(m1).draw + (size_t)((m2).log_likelihood ? (m2).log_likelihood : (m2).p)*33)
\endcode

\li This function uses the \ref designated syntax for inputs.
*/
#ifdef APOP_NO_VARIADIC
apop_model * apop_update(apop_data *data, apop_model *prior, apop_model *likelihood, gsl_rng *rng){
#else
apop_varad_head(apop_model *, apop_update){
    apop_data  *apop_varad_var(data, NULL);
    apop_model *apop_varad_var(prior, NULL);
    apop_model *apop_varad_var(likelihood, NULL);
    gsl_rng    *apop_varad_var(rng, apop_rng_get_thread(-1));
    return apop_update_base(data, prior, likelihood, rng);
}

apop_model * apop_update_base(apop_data *data, apop_model *prior, apop_model *likelihood, gsl_rng *rng){
#endif
    static int setup=0;
    if (!(setup++)){
        apop_update_vtable_add(betabinom,    apop_beta,   apop_binomial);
        apop_update_vtable_add(betabernie,   apop_beta,   apop_bernoulli);
        apop_update_vtable_add(gammaexpo,    apop_gamma,  apop_exponential);
        apop_update_vtable_add(gammapoisson, apop_gamma,  apop_poisson);
        apop_update_vtable_add(normnorm,     apop_normal, apop_normal);
    }
    apop_update_type conj = apop_update_vtable_get(prior, likelihood);
    if (conj) return conj(data, prior, likelihood);

    apop_mcmc_settings *s = apop_settings_get_group(prior, apop_mcmc);
    apop_prep(NULL, prior);      //probably a no-op
    apop_prep(data, likelihood); //probably a no-op
    gsl_vector *pack = apop_data_pack(likelihood->parameters);
    int tsize = pack->size;
    gsl_vector_free(pack);
    Apop_stopif(prior->dsize != tsize,
            return apop_model_copy(&(apop_model){.error='d'}), 0,
            "Size of a draw from the prior does not match "
            "the size of the likelihood's parameters (%i != %i).%s",
            prior->dsize, tsize,
            (tsize > prior->dsize) ? " Perhaps use apop_model_fix_params to reduce the "
                                     "likelihood's parameter count?" : "");
    if (prior->p || prior->log_likelihood){
        apop_model *p = apop_model_copy(product);
        //pending revision, a memory leak:
        p->more = malloc(sizeof(apop_model*)*2);
        ((apop_model**)p->more)[0] = apop_model_copy(prior);
        ((apop_model**)p->more)[1] = apop_model_copy(likelihood);
        p->more_size = sizeof(apop_model*) * 2;
        p->parameters = apop_data_alloc(prior->dsize);
        p->data = data;
        if (s) apop_settings_copy_group(p, prior, "apop_mcmc");
        apop_model *out = apop_model_metropolis(data, rng, p);
        return out;
    }

    Apop_stopif(!prior->draw, return NULL, 0, "prior does not have a .p, "
            ".log_likelihood, or .draw element. I am stumped. Returning NULL.");
    if (!s) s = Apop_model_add_group(prior, apop_mcmc);
    gsl_vector *draw = gsl_vector_alloc(tsize);
    apop_data *out = apop_data_alloc(s->periods, tsize);
    out->weights = gsl_vector_alloc(s->periods);

    apop_draw(draw->data, rng, prior); //set starting point.
    apop_data_unpack(draw, likelihood->parameters);
    for (int i=0; i< s->periods; i++){
        newdraw:
        apop_draw(draw->data, rng, prior);
        apop_data_unpack(draw, likelihood->parameters);
        long double p = apop_p(data, likelihood);
        Apop_notify(3, "p=%Lg for parameters:\t", p);
        if (apop_opts.verbose >=3) apop_data_print(likelihood->parameters);
        Apop_stopif(gsl_isnan(p), goto newdraw, 1, "Trouble evaluating the "
                "likelihood function at vector beginning with %g. "
                "Throwing it out and trying again.\n",
                likelihood->parameters->vector->data[0]);
        apop_data_pack(likelihood->parameters, Apop_rv(out, i));
        gsl_vector_set(out->weights, i, p);
    }
    apop_model *outp = apop_estimate(out, apop_pmf);
    gsl_vector_free(draw);
    return outp;
}
```
|
{}
|
# Loss function explained
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function)[1] is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized.
In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century.[2] In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s.[3] In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss.
## Examples
### Regret
See main article: Regret (decision theory). Leonard J. Savage argued that using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the underlying circumstances been known and the decision that was in fact taken before they were known.
### Quadratic loss function
The use of a quadratic loss function is common, for example when using least squares techniques. It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as the same magnitude of error below the target. If the target is t, then a quadratic loss function is
$\lambda(x) = C(t-x)^2$
for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL).[4]
Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function.
The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used.
### 0-1 loss function
In statistics and decision theory, a frequently used loss function is the 0-1 loss function
$L(\hat{y}, y) = I(\hat{y} \ne y),$
where $I$ is the indicator function.
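A minimal illustration of these two losses in code (the function names are mine):

```python
# Tiny sketch of the quadratic and 0-1 losses defined above.
import numpy as np

def quadratic_loss(x, t, C=1.0):
    return C * (t - x) ** 2                    # lambda(x) = C(t - x)^2

def zero_one_loss(y_hat, y):
    return (np.asarray(y_hat) != np.asarray(y)).astype(float)  # I(y_hat != y)

y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 1])
print(zero_one_loss(y_pred, y_true).mean())    # 0.5: the misclassification rate
```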
## Constructing loss and objective functions
In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation. In other situations, the decision maker’s preference must be elicited and represented by a scalar-valued function (called also utility function) in a form suitable for optimization — the problem that Ragnar Frisch has highlighted in his Nobel Prize lecture.[5] The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences.[6] [7] In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in the models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers.[8] [9] Among other things, he constructed objective functions to optimally distribute budgets for 16 Westfalian universities[10] and the European subsidies for equalizing unemployment rates among 271 German regions.[11]
## Expected loss
In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X.
### Statistics
Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms.
#### Frequentist expected loss
We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, $P_\theta$, of the observed data, X. This is also referred to as the risk function[12] [13] [14] of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by:
$R(\theta,\delta)=\operatorname{E}_\theta\big[L(\theta,\delta(X))\big]=\int_X L\big(\theta,\delta(x)\big)\,dP_\theta(x).$
Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, $\operatorname{E}_\theta$ is the expectation over all population values of X, $dP_\theta$ is a probability measure over the event space of X (parametrized by θ), and the integral is evaluated over the entire support of X.
#### Bayesian expected loss
In a Bayesian approach, the expectation is calculated using the posterior distribution $\pi^*$ of the parameter θ:
$\rho(\pi^*, a)=\int_\Theta L(\theta, a)\,d\pi^*(\theta).$
One then should choose the action $a^*$ which minimises the expected loss. Although this will result in choosing the same action as would be chosen using the frequentist risk, the emphasis of the Bayesian approach is that one is only interested in choosing the optimal action under the actual observed data, whereas choosing the actual frequentist optimal decision rule, which is a function of all possible observations, is a much more difficult problem.
#### Examples in statistics
• For a scalar parameter θ, a decision function whose output $\hat\theta$ is an estimate of θ, and a quadratic loss function (squared error loss) $L(\theta,\hat\theta)=(\theta-\hat\theta)^2$, the risk function becomes the mean squared error of the estimate, $R(\theta,\hat\theta)= \operatorname{E}_\theta(\theta-\hat\theta)^2.$
### Economic choice under uncertainty
In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth. Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized.
## Decision rules
A decision rule makes a choice using an optimality criterion. Some commonly used criteria are:
• Minimax: choose the decision rule with the lowest worst loss, that is, minimize the worst-case (maximum possible) loss: $\min_\delta\,\max_\theta\,R(\theta,\delta).$
• Invariance: choose the decision rule which satisfies an invariance requirement.
• Choose the decision rule with the lowest average loss (i.e. minimize the expected value of the loss function): $\min_\delta\,\operatorname{E}_\theta[R(\theta,\delta)] = \min_\delta \int_\Theta R(\theta,\delta)\,p(\theta)\,d\theta.$
## Selecting a loss function
Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances.[15]
A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances.
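This is easy to verify numerically; the snippet below is an illustrative sketch, not from the article:

```python
# On a skewed sample, the value minimizing average squared loss is the mean,
# and the value minimizing average absolute loss is the median.
import numpy as np
from scipy.optimize import minimize_scalar

x = np.random.default_rng(1).exponential(size=10_000)

best_sq = minimize_scalar(lambda a: np.mean((x - a) ** 2)).x
best_abs = minimize_scalar(lambda a: np.mean(np.abs(x - a))).x
print(best_sq, x.mean())        # essentially equal
print(best_abs, np.median(x))   # essentially equal
```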
In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility.
Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering.
For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable.
Two very commonly used loss functions are the squared loss, $L(a)=a^2$, and the absolute loss, $L(a)=|a|$. However the absolute loss has the disadvantage that it is not differentiable at $a=0$. The squared loss has the disadvantage that it has the tendency to be dominated by outliers: when summing over a set of $a_i$'s (as in $\sum_{i=1}^n L(a_i)$), the final sum tends to be the result of a few particularly large $a$-values, rather than an expression of the average $a$-value.
The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties.[16] Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others.
W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after cannot, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases.[17]
## Notes and References
1. Raschka, Sebastian (2019). *Python Machine Learning: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2*. Birmingham: Packt Publishing. pp. 37–38. ISBN 1-78995-829-6.
2. Wald, A. (1950). *Statistical Decision Functions*. Wiley.
3. Cramér, H. (1930). *On the Mathematical Theory of Risk*. Centraltryckeriet.
4. Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome H. (2001). *The Elements of Statistical Learning*. Springer. p. 18. ISBN 0-387-95284-5.
5. Frisch, Ragnar (1969). "From utopian theory to practical applications: the case of econometrics". The Nobel Prize lecture.
6. Tangian, Andranik; Gruber, Josef (1997). *Constructing Scalar-Valued Objective Functions*. Proceedings of the Third International Conference on Econometric Decision Models, University of Hagen, held in Katholische Akademie Schwerte, September 5–8, 1995. Lecture Notes in Economics and Mathematical Systems 453. Berlin: Springer. doi:10.1007/978-3-642-48773-6. ISBN 978-3-540-63061-6.
7. Tangian, Andranik; Gruber, Josef (2002). *Constructing and Applying Objective Functions*. Proceedings of the Fourth International Conference on Econometric Decision Models, University of Hagen, held in Haus Nordhelle, August 28–31, 2000. Lecture Notes in Economics and Mathematical Systems 510. Berlin: Springer. doi:10.1007/978-3-642-56038-5. ISBN 978-3-540-42669-1.
8. Tangian, Andranik (2002). "Constructing a quasi-concave quadratic objective function from interviewing a decision maker". *European Journal of Operational Research* 141(3): 608–640. doi:10.1016/S0377-2217(01)00185-0.
9. Tangian, Andranik (2004). "A model for ordinally constructing additive objective functions". *European Journal of Operational Research* 159(2): 476–512. doi:10.1016/S0377-2217(03)00413-2.
10. Tangian, Andranik (2004). "Redistribution of university budgets with respect to the status quo". *European Journal of Operational Research* 157(2): 409–428. doi:10.1016/S0377-2217(03)00271-6.
11. Tangian, Andranik (2008). "Multi-criteria optimization of regional employment policy: A simulation analysis for Germany". *Review of Urban and Regional Development* 20(2): 103–122. doi:10.1111/j.1467-940X.2008.00144.x.
12. Berger, James O. (1985). *Statistical Decision Theory and Bayesian Analysis* (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-96098-2.
13. DeGroot, Morris H. (2004) [1970]. *Optimal Statistical Decisions*. Wiley Classics Library. ISBN 978-0-471-68029-1.
14. Robert, Christian P. (2007). *The Bayesian Choice* (2nd ed.). Springer Texts in Statistics. New York: Springer. doi:10.1007/0-387-71599-1. ISBN 978-0-387-95231-4.
15. Pfanzagl, J. (1994). *Parametric Statistical Theory*. Berlin: Walter de Gruyter. ISBN 978-3-11-013863-4.
16. Detailed information on mathematical principles of the loss function choice is given in Chapter 2 of Klebanov, B.; Rachev, Svetlozar T.; Fabozzi, Frank J. (2009). *Robust and Non-Robust Models in Statistics*. New York: Nova Scientific Publishers (and references there).
17. Deming, W. Edwards (2000). *Out of the Crisis*. The MIT Press. ISBN 9780262541152.
|
{}
|
# How to add binary decimals/1s complement
$$0110.10_2 - 0011.01_2 = 0110.10_2 + 1100.10_{1s} = 0011.01_2$$
My working from my understanding ...
```
 1 1
 0110.10
 1100.10 +
----------
10011.00
```
How did the .01 appear?
The .01 comes from the end-around carry: in 1's complement arithmetic, the carry out of the most significant bit (the leading 1 in 10011.00) is dropped and added back at the least significant position, so $0011.00 + 0000.01 = 0011.01_2$.
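For readers who want to experiment, here is a small sketch of the same subtraction in code; the function name and 6-bit layout are illustrative assumptions, not part of the original question:

```python
# 4.2-bit fixed-point subtraction via 1's complement with end-around carry.
BITS = 6  # 4 integer bits + 2 fractional bits

def ones_complement_sub(a, b):
    """Compute a - b, where a and b hold the raw 6-bit patterns."""
    mask = (1 << BITS) - 1
    total = a + (~b & mask)          # add the bitwise complement of b
    if total > mask:                 # carry out of the MSB...
        total = (total & mask) + 1   # ...is added back in (end-around carry)
    return total & mask

a = 0b011010  # 0110.10 = 6.50
b = 0b001101  # 0011.01 = 3.25
print(f"{ones_complement_sub(a, b):06b}")   # 001101 -> 0011.01 = 3.25
```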
|
{}
|
1. ## application 2
The number N of bacteria in a refrigerated food is given by:
$N(T)=25T^2-50T+300, \quad 2 \le T \le 20$
where T is the temperature of the food in degrees Celsius. When the food is removed from refrigeration, the temperature of the food is given by:
$T(t)=2t+1, \quad 0 \le t \le 9$
where t is the time in hours. Find the time when the bacteria count reaches 750.
OK, so I'm thinking you use equation 2, set it equal to 750, and then work out t. Is it that simple, or do you need to differentiate equation 1 and set it equal to equation 2's answer? So confused; I know it's probably easy, it's just that I haven't done application questions in ages.
2. ## Re: application 2
Originally Posted by iFuuZe
$N(t)=25(2t+1)^2-50(2t+1)+300 \Rightarrow 750=25(2t+1)^2-50(2t+1)+300$
Dividing by 25 and writing $u = 2t+1$ gives $u^2 - 2u - 18 = 0$, so $u = 1+\sqrt{19}$ and
$t = \frac{\sqrt{19}}{2} \approx 2.18$
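A symbolic check of this result (an editorial addition; sympy is assumed):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
N = 25*(2*t + 1)**2 - 50*(2*t + 1) + 300   # bacteria count as a function of time
sol = sp.solve(sp.Eq(N, 750), t)
print(sol, float(sol[0]))                   # [sqrt(19)/2] ~ 2.18 hours
```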
|
{}
|
# If the circumference of a circle is 12, and the angle measure of an arc is 60°, what is the length of the arc?
Jun 2, 2018
the length of the arc is 2
#### Explanation:
As a circle is ${360}^{o}$ and ${60}^{o}$ is $\frac{1}{6}$ of that, the length of the arc will also be $\frac{1}{6}$ of the full circle, i.e. $\frac{12}{6} = 2$
Jun 2, 2018
Length of arc $= \theta \times \frac{\pi}{180} \times r$
Circumference $= 2\pi r$, so
$12 = 2\pi r \Rightarrow r = \frac{6}{\pi}$
Length of arc $= 60 \times \frac{\pi}{180} \times \frac{6}{\pi} = \frac{60 \times 6}{180} = 2$
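Either way, the computation reduces to one line, since a 60° arc is one sixth of the circumference:

```python
# A 60 degree arc is 60/360 of the full circumference.
print(60 / 360 * 12)   # 2.0
```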
|
{}
|
Chapter 43
Bradyarrhythmias are most commonly caused by failure of impulse formation (sinus node dysfunction) or by failure of impulse conduction over the atrioventricular (AV) node/His-Purkinje system. Bradyarrhythmias may be caused by disease processes that directly alter the structural and functional integrity of the sinus node, atria, AV node, and His-Purkinje system or by extrinsic factors (autonomic disturbances, drugs, etc.) without causing structural abnormalities (Table 43–1).
### Anatomy of the Sinus Node and Conduction System
#### Sinoatrial Node
Normal electrical activation of the heart arises from the principal pacemaker cells that spontaneously depolarize, located laterally in the epicardial groove of the sulcus terminalis,[1] near the junction of the right atrium and the superior vena cava (Fig. 43–1). The sinus node in adults measures approximately 1 to 2 cm long and 0.5 mm wide. The central zone of the sinus node containing the principal pacemaker cells is small and located within a fibrous tissue matrix. In the periphery of the node along the crista terminalis, transitional cells with pacemaker function are also present. Experimental and clinical evidence now suggests that the sinus node region is less defined than previously appreciated. The principal pacemaker site within this region may migrate, resulting in subtle alterations in P-wave morphology. Once the impulse exits the sinus node and the perinodal tissues, it traverses the atrium to the AV node. The conduction of impulses from right to left atrium has been postulated to occur preferentially via Bachmann bundle, and secondarily across the musculature of the coronary sinus.
###### Figure 43–1
Schematic representation of the cardiac conduction system. AV, atrioventricular; SA, sinoatrial.
#### Atrioventricular Node
Once the sinus node impulse activates the atrium, electrical activation continues through the AV node, with a conduction delay ensuring complete atrial contraction before initiation of ventricular conduction. The AV nodal complex is considered to have three related regions: (1) the transitional cell zone, (2) the compact AV node, and (3) the penetrating AV bundle.[2] The transitional zone consists of the main atrial approaches to the compact AV node. The compact AV node is shaped like a half oval and is located beneath the right atrial endocardium at the apex of the triangle of Koch. The triangle of Koch is formed ...
{}
|
# Normalizing a Wavefunction of a harmonic oscillator
1. Aug 16, 2008
1. At a certain time the wavefunction of a one-dimensional harmonic oscillator is
$\psi(x) = 3\phi_0(x) + 4\phi_1(x)$
where $\phi_0(x)$ and $\phi_1(x)$ are normalized energy eigenfunctions of the ground and first excited states respectively. Normalize the wavefunction and determine the probability of finding the oscillator in the ground state.
3. I'm not really sure if I'm normalizing the wavefunction correctly, I get the normalizing constant as 1/7. However, when I calculate the probability of the ground state and first state combined they don't equal one. Aren't they supposed to and have I normalized correctly?
2. Aug 16, 2008
### Staff: Mentor
Redo this. What's the definition of normalization? (What must equal 1?)
3. Aug 16, 2008
I've got the equation for normalising a wavefunction: one over the square root of the integral of the wavefunction squared,
$N = \left(\int \left[3\phi_0(x) + 4\phi_1(x)\right]^2 dx\right)^{-1/2}$
I still get $\sqrt{1/49}$ when I should be getting $\sqrt{1/25}$; I think I'm squaring the function wrong. The coefficients I get upon squaring are (9 + 12 + 12 + 16); I'm guessing the 12s cancel each other out to leave 9 + 16 = 25, but I don't see how. Do I need to know the energy eigenfunctions to calculate this? I think I'm getting myself confused, can you point me in the right direction of squaring the wavefunction? It's just all previous examples I have give the wavefunction expressed as a single function rather than an addition of two. Thanks very much.
4. Aug 16, 2008
### malawi_glenn
I think you are forgetting that $$\phi_0$$ and $$\phi_1$$ are orthogonal to each other..
5. Aug 16, 2008
### Staff: Mentor
All you need to know is the orthonormality of the eigenfunctions:
$$\int \phi_m^*(x)\,\phi_n(x)\,dx = \delta_{mn}$$
That's the secret.
6. Aug 16, 2008
$$\psi(x) = 3\phi_0(x) + 4\phi_1(x)$$
$$\Rightarrow |\psi(x)|^2 = \left|3\phi_0(x) + 4\phi_1(x)\right|^2$$
$$= 9|\phi_0(x)|^2 + 12\phi_0^*(x)\phi_1(x) + 12\phi_0(x)\phi_1^*(x) + 16|\phi_1(x)|^2$$
Is this right?
When I integrate with limits +-infinity does it give
9+0+0+16?
7. Aug 16, 2008
### Staff: Mentor
Good! Now you've got it.
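For later readers, the computation in compact form (a sketch, using the orthonormality relation above):
$$\int |\psi(x)|^2\,dx = 9 + 16 = 25 \;\Rightarrow\; N = \frac{1}{5}, \qquad \psi_{\text{norm}}(x) = \frac{3}{5}\phi_0(x) + \frac{4}{5}\phi_1(x), \qquad P(\text{ground}) = \left(\frac{3}{5}\right)^2 = \frac{9}{25}.$$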
8. Aug 16, 2008
|
{}
|
Why is Earth's inner core solid?
I have never understood why Earth's inner core is solid, considering that the inner core is made of an iron-nickel alloy (melting point around 1350 °C to 1600 °C) and the temperature of the inner core is approximately 5430 °C (about the temperature of the surface of the Sun). Since Earth's core is at nearly 3-4 times the melting point of iron-nickel alloys, how can it possibly be solid?
• the temperature at which a phase change takes place depends on pressure. For instance, because there is more pressure at low altitudes than at high altitudes, water boils at less than 100 degrees Celsius at high altitudes like Mount Everest. In general, the higher the pressure, the higher the melting and boiling points are. Therefore the melting point of iron-nickel alloys is potentially significantly greater in the Earth's core than on the Earth's surface, due to the enormous pressure within the Earth's core. – Kenshin Apr 24 '14 at 16:15
• As an aside, it doesn't make sense to say "3-4 times the temperature" when you're not even using absolute temperature. – David Richerby Apr 24 '14 at 22:55
Earth's inner core is solid even though the temperature is so high because the pressure is also very high. According to the Wikipedia article on the Earth's inner core, the temperature at the center is $5,700\ \text{K}$ and the pressure is estimated to be $330$ to $360\ \text{GPa}$ ($\sim3\cdot10^{6}\ \text{atm}$).
The phase diagram shown below (taken from this paper) shows the liquid/solid transition, where fcc and hcp are two different crystalline forms of solid iron. You can see clearly from the slope of the line going off toward the upper right that iron should be solid at this temperature and pressure.
• The phase diagram would be more useful for this answer if it went above 100 GPa. Also, I suggest giving the actual citation and/or the DOI for references, in case of link rot. This one is 10.1029/31GD07. – kwinkunks Apr 25 '14 at 11:22
• This is a great answer. But I think it would be useful if another source was used besides Wikipedia, because Wikipedia is edited by the community, and you have no idea what people add to those posts sometimes. – Eevee Feb 22 '18 at 22:37
In addition to the answers below and my comment above, I believe the following phase diagram, from DavePhd's answer here, sourced from here, is more appropriate for the pressure levels near the Earth's core of about 330 to 360GPa.
We can see from the image that for pressures between 330 and 360GPa, the melting temperature ranges from about 6200 K to 6600 K, which is much higher than Earth's inner core temperature of about 5700 K.
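As a rough numerical illustration of that comparison (a sketch in Python; it only interpolates linearly between the two quoted points on the melting curve and uses no data beyond the figures cited above):

```python
import numpy as np

# Quoted points on the iron melting curve near core pressures.
P = np.array([330.0, 360.0])           # pressure, GPa
T_melt = np.array([6200.0, 6600.0])    # melting temperature, K

T_core = 5700.0                         # estimated inner-core temperature, K
for p in (330.0, 345.0, 360.0):
    t = float(np.interp(p, P, T_melt))
    state = "solid" if T_core < t else "liquid"
    print(f"{p:.0f} GPa: melting point ~{t:.0f} K -> iron at 5700 K is {state}")
```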
• For interest, I note that the surface temperature of the Sun is about 5800K. As an aside, in web-accessibility terms it's good practice to use meaningful hypertext for links. – kaberett Apr 29 '14 at 21:01
You are only considering the melting point at atmospheric pressure. Melting point depends upon pressure. The pressure in Earth's core is about 350 GigaPascals. It is important to study the phase diagram of the substance being considered. A phase diagram explains what phase (solid, liquid, gas) a substance is in at various temperatures and pressures.
See Phase diagram of iron, revised-core temperatures for detailed information.
|
{}
|
# Quadratic Functions and Their Graphs
## Identify the intercepts, vertex, and axis of symmetry
## Real World Applications – Algebra I
### Topic
How much better is using solar power for your home?
### Vocabulary
• Photovoltaic Effect: The phenomenon in which the incidence of light or other electromagnetic radiation upon the junction of two dissimilar materials, such as a metal and a semiconductor, generates an electromotive force (electricity!). In essence, it is the generation of electricity through the exposure of a material to light. Photo = light, voltaic = electricity
• Photovoltaic Cell (Solar Cell): A thin semiconductor wafer specially treated to form an electric field, positive on one side and negative on the other. When light energy strikes the PV cell, electrons are knocked loose from the atoms in the semiconductor material. If electrical conductors are attached to the positive and negative sides, forming an electrical circuit, the electrons can be captured in the form of an electric current (electricity). This electricity can then be used to power a load, such as a light or a tool.
• Solar Energy: Solar energy is the solar radiation from the sun's rays that reaches the earth. This energy can be converted into other forms of energy, such as heat and electricity. Passive solar energy can be exploited by positioning a house to allow sunlight to enter through the windows to help heat a space. Active solar energy involves the conversion of sunlight to electrical energy, especially in solar (photovoltaic) cells.
• Solar Panel: A panel designed to absorb the sun's rays as a source of energy for generating electricity or heat.
• Watt: A basic unit of power, equivalent to one joule per second, corresponding to the power in an electric circuit in which the potential difference is one volt and the current one ampere.
### Student Exploration
In this exploration, students will be able to investigate the energy output of solar panel systems through different representations of quadratic functions.
Hook: Watch this video:
** It is advised that students do Activities 1-3 in the link provided below before doing this activity. This will give students a solid foundation of what solar energy is and how it is used.
The problem: A small photovoltaic solar electric panel system has been installed at a new school. The daily energy generated by the system can be modeled by the function $y = 64 + 95x - 7.5x^2$, where $x$ is the month and $y$ is power in kilowatt-hours.
Questions:
1. What observations can you make about the equation? What do the numbers mean? What do any of the coefficients mean? What do the variables represent? What's the input? The output?
2. Create a table to represent this function.
   1. What are realistic values for $x$?
3. Create a graph (either by hand or on a computer or graphing calculator) to represent this function.
   2. Identify the vertex of the graph. What does the vertex represent?
   3. Does the shape of the graph open UP or DOWN?
   4. What is the maximum monthly energy output?
   5. What month will have the maximum energy output?
   6. In which month will the system produce zero energy output?
   7. Does your answer to the previous question make sense in real life? Why or why not?
4. From your observations and calculations above, create a more realistic graph to represent this function (a computational sketch for checking the vertex and root questions follows this list).
   1. Identify specific months on the appropriate axis.
   2. Why does this solar panel system represent a quadratic relationship?
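Here is the computational sketch referenced above (Python; the variable names are mine). The vertex of $y = ax^2 + bx + c$ lies at $x = -b/(2a)$, which answers the maximum-output questions, and the roots answer the zero-output question:

```python
import numpy as np

a, b, c = -7.5, 95.0, 64.0              # y = 64 + 95x - 7.5x^2

x_vertex = -b / (2 * a)                 # month of maximum output, ~6.33
y_vertex = a * x_vertex**2 + b * x_vertex + c  # maximum monthly output, ~364.8 kWh
roots = np.roots([a, b, c])             # zero-output months, ~13.31 and ~-0.64

print(x_vertex, y_vertex, roots)
```

Only the positive root (around month 13.3) could be meaningful, which ties into the "does this make sense in real life?" question.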
### Extension Investigation
(This question has been directly transferred from the resource cited below.)
The energy output of a huge 2-megawatt photovoltaic solar farm in Arizona can be modeled with the quadratic function $f(x) = 71332 + 63662x - 4868x^2$. (Note: $1,000 \ watts = 1 \ kilowatt$ and $1,000 \ kilowatts = 1 \ megawatt$.)
a. What does the 71332 in the function represent in your graph/table?
b. What does the 71332 represent in real life in this problem?
c. Find the vertex of the graph of the function.
d. What is the maximum energy output of the solar farm?
e. In what two months is the energy output at about 250 megawatts per month?
f. What is the least amount of energy the system will produce?
g. The electricity demand per month of a community of 250 average American homes is approximately 230 megawatts. Between what months of the year can this large solar farm meet this community’s electricity demand?
|
{}
|
# Start Mathematica without the menu bar?
Is there any way to start Mathematica without the main menu bar? To be precise:
Sometimes I make standalone mini-applications that I would like to just have in their own little windows without the cognitive weight of the entire Mathematica system bearing down upon my little one-off uses.
I almost asked this question two days ago. In the end, I forgot, hehe. – Rojo May 16 '12 at 18:26
To me the top bar feels like the developers are trying to make Windows into a Mac. In case anyone is wondering Linux doesn't have the bar at the top but in each individual window. – William Feb 22 '13 at 0:12
Reverse question: (57593) – Mr.Wizard May 16 at 7:13
This may not be what you want, but when you mentioned "mini-application" I immediately thought of the CDF notebook format which you can embed in HTML pages. In effect, everything at http://demonstrations.wolfram.com/ is a mini-application of sorts. And there are no Mathematica menu bars... just those of the browser. I know, it's cheating.
However, if you're serious about standalone applications, you may want to look into CDF anyway. The rationale is outlined in this answer.
To push it further, it is even possible to embed the CDF and the JS (and eventually a "no plug-in installed" message image) inside an unique MHT file, which means you end up with a single file that opens in a web browser. And most likely, the HTML code can make the browser menu bar disappear. – P. Fonseca May 16 '12 at 5:51
Unfortunately CDF doesn't allow text fields. (Or the free version doesn't anyway). Either way it's only the bar that bothers me. I quickly made up a program that follows your WebView suggestion and that would work well for setups that don't have text fields, and I could automate the construction if need be. But it's quite disappointing that I have to resort to haxing. And on the topic of CDF, the bar is even more awkward with it as a "document" format (PDFs for example don't plaster a non-sequitur masterbar over your screen). – amr May 18 '12 at 0:07
That's true - it has a clunky feel to it... – Jens May 18 '12 at 0:12
In version 10.0 of Mathematica the main menu bar is gone, just as you want it. And each window has its own menu bar.
¡¡ WE DID IT REDDIT !! – amr Aug 8 '14 at 2:55
|
{}
|
# Number System - Ordinal Number (Magnitude)
An ordinal number, or ordinal, is a natural number used for ordering.
Ordering with numbers is also called ordering by magnitude.
In set theory (Mathematics), an ordinal number, or just ordinal, is the order type of a well-ordered set. Ordinals are an extension of the natural numbers different from integers and from cardinals.
|
{}
|
In social choice and voting theory, the profile of voters' rankings obviously determines all the head-to-head outcomes among n candidates. A theorem by McGarvey states that, given any head-to-head results you want, there is a profile of voters that will produce it. Very cool.
My question is... If you impose the condition that all voters unanimously prefer candidate A to candidate B, how does that limit the possible head-to-head results? Is anyone aware of any work done on this question?
I've come across an $n\times n$ matrix whose entry in row $i$ and column $j$ is $\min(i,j)$. For example, the $4\times 4$ version would be
$\left( \begin{array}{cccc} 1 & 1 & 1 & 1\\ 1 & 2 & 2 & 2\\ 1 & 2 & 3 & 3\\ 1 & 2 & 3 & 4 \end{array} \right)$
Does anybody recognize this class of matrices? Does it have a name? I'd like to research what is known about it.
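In case it helps experimentation while researching it, here is one quick way to build this matrix (a sketch using NumPy):

```python
import numpy as np

def min_matrix(n: int) -> np.ndarray:
    idx = np.arange(1, n + 1)
    return np.minimum.outer(idx, idx)  # entry (i, j) equals min(i, j)

print(min_matrix(4))
```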
A classic non-constructive proof that there exist irrationals $$a$$ and $$b$$ such that $$a^{b}$$ is rational:
$$\sqrt{2}$$ is irrational. Let $$z=\sqrt{2}^{\sqrt{2}}$$. If $$z$$ is rational, the claim is proved. If not, then $$z^{\sqrt{2}}=2$$ proves the claim.
|
{}
|
# Cylinder lying on conveyor belt
#### JD_PM
It depends exactly what you mean by that. Recall that we must choose a reference axis that is either the mass centre or is fixed in space. Since the belt is moving, the "point of contact" moves.
Your other equation should express that it is rolling contact.
I think I am going to think about the problem and see if I am able to get something out of it.
I will post whatever I get in the end
#### jbriggs444
Homework Helper
I guess you are driving me to the vectorial angular momentum equation:
$$\mathbf{L} = \mathbf{r} \times \mathbf{p}$$
What I have in mind is something else. We are picking an axis of rotation that is fixed somewhere on the plane of the belt. With respect to this axis of rotation, we have already argued that angular momentum is always zero. [I am assuming an axis fixed in space rather than an axis fixed to the belt, else the angular momentum conservation argument misses a fictitious torque].
We have the moving cylinder. We can decompose its motion into two parts. One part is its linear motion fore and aft down the belt. The other part is its rotary motion. Since the center of mass of the cylinder does not align horizontally with the chosen reference axis, the horizontal linear motion contributes to angular momentum. The rotary motion does as well. That means that we can write down a useful equation.
The no slip condition should allow us to write down another.
Without having done the algebra, it seems clear that the solution will reduce very nicely.
Last edited:
#### JD_PM
What I have in mind is something else. We are picking an axis of rotation that is fixed somewhere on the plane of the belt. With respect to this axis of rotation, we have already argued that angular momentum is always zero. [I am assuming an axis fixed in space rather than an axis fixed to the belt, else the angular momentum conservation argument misses a fictitious torque].
We have the moving cylinder. We can decompose its motion into two parts. One part is its linear motion fore and aft down the belt. The other part is its rotary motion. Since the center of mass of the cylinder does not align horizontally with the chosen reference axis, the horizontal linear motion contributes to angular momentum. The rotary motion does as well. That means that we can write down a useful equation.
The no slip condition should allow us to write down another.
Without having done the algebra, it seems clear that the solution will reduce very nicely.
OK I see, so you propose:
Linear motion contribution to the angular momentum:
$$L = RMV(t)$$
But what about rotational contribution? Well, I have been thinking how to use:
$$L = I \omega$$
But the moment of inertia gets cumbersome... I mean, we no longer have a cylinder rotating about an axis passing through the CM. What is the MoI now?
#### haruspex
Homework Helper
Gold Member
2018 Award
Linear motion contribution to the angular momentum:
$$L = RMV(t)$$
No, V(t) is the belt speed. You need the cylinder's CoM speed.
Well, I have been thinking how to use:
$$L = I \omega$$
But the moment of inertia gets cumbersome... I mean, we no longer have a cylinder rotating about an axis passing through the CM. What is the MoI now?
You can decompose the cylinder's motion into a linear motion (u(t), say) and a rotation about its centre. You have dealt with the linear motion's contribution to the angular momentum about the belt-level axis above. The angular momentum due to its rotation about its own axis is invariant, i.e. you can choose any axis and it is still the same, just $I_{CM}\omega$.
#### JD_PM
No, V(t) is the belt speed. You need the cylinder's CoM speed.
You can decompose the cylinder's motion into a linear motion (u(t), say) and a rotation about its centre. You have dealt with the linear motion's contribution to the angular momentum about the belt-level axis above. The angular momentum due to its rotation about its own axis is invariant, i.e. you can choose any axis and it is still the same, just $I_{CM}\omega$.
OK So what you mean is:
$$L = RMv_{CM} + I_{CM} \omega$$
Because of AM conservation:
$$L = 0$$
$$v_{CM} = -\frac{I_{CM} \omega}{RM}$$
Regarding vector directions: $V(t)$: points to the right. The belt moves horizontally towards right (I took that decision) and makes the cylinder spin counterclockwise, which means that the friction has to make the cylinder spin clockwise. Thus, friction points to the left.
NOTE: the negative sign makes sense because I regarded the angular velocity as positive (counterclockwise), which makes the cylinder rotate to the left (I regarded the right direction as +ive).
Let me know if you agree with me and we go to analyse the scenario on the non inertial frame.
Lately I have been wondering why $V(t)$ does not appear in my equation. There has to be something missing...
Last edited:
#### haruspex
Homework Helper
Gold Member
2018 Award
$$v_{CM} = -\frac{I_{CM} \omega}{RM}$$.
Right, but you need another equation. Express the fact of rolling contact in terms of the radius, the rotation rate and the two velocities.
#### JD_PM
Right, but you need another equation. Express the fact of rolling contact in terms of the radius, the rotation rate and the two velocities.
Let's take left direction as positive now so as to get a positive velocity.
Why would not be correct stating the following?:
$$v_{CM} = \frac{I_{CM} \omega}{RM} + V(t)$$
It makes sense to me, as if the cylinder were to stop spinning it would continue having $v_{CM}$ because the belt moves horizontally.
#### haruspex
Homework Helper
Gold Member
2018 Award
Let's take left direction as positive now so as to get a positive velocity.
Why would not be correct stating the following?:
$$v_{CM} = \frac{I_{CM} \omega}{RM} + V(t)$$
It makes sense to me, as if the cylinder were to stop spinning it would continue having $v_{CM}$ because the belt moves horizontally.
Why would it be correct? What is your reasoning for it?
It certainly does not represent the rolling condition, which is a purely kinematic relationship.
#### JD_PM
Why would it be correct? What is your reasoning for it?
It certainly does not represent the rolling condition, which is a purely kinematic relationship.
I just think that $V(t)$ has to contribute somehow to $v_{CM}$.
So the equation you suggest is missing is a kinematics one? Didn't we say that we were not going to assume constant acceleration?
#### haruspex
Homework Helper
Gold Member
2018 Award
I just think that $V(t)$ has to contribute somehow to $v_{CM}$.
So the equation you suggest is missing is a kinematics one? Didn't we say that we were not going to assume constant acceleration?
Kinematics, not kinetics. (Anyway, kinetics also applies to non-constant acceleration; it's just that the simple SUVAT equations don't).
Kinematics is unconcerned with masses and forces. It just considers the physical constraints on relative motion, e.g. that the length of a string is constant.
In this case you want one expressing that the contact is rolling, not sliding. That relates the linear velocities, angular velocity and radius - nothing else.
#### JD_PM
Right, but you need another equation. Express the fact of rolling contact in terms of the radius, the rotation rate and the two velocities.
You mean:
$$v_{CM} + V(t) = R \omega$$
#### haruspex
Homework Helper
Gold Member
2018 Award
You mean:
$$v_{CM} + V(t) = R \omega$$
Since both those linear velocities are in the lab frame, and I assume are positive in the same direction, adding them looks wrong.
#### JD_PM
Since both those linear velocities are in the lab frame, and I assume are positive in the same direction, adding them looks wrong.
My bad, regarding left as positive, we would have:
$$v_{CM} - V(t) = R \omega$$
Together with the equation obtained by conservation of AM:
$$v_{CM} = \frac{I_{CM} \omega}{RM}$$
$$v_{CM} = \frac{1}{2}[(\frac{I_{CM}}{R^2M} + 1) R\omega + V(t)]$$
#### haruspex
Homework Helper
Gold Member
2018 Award
My bad, regarding left as positive, we would have:
$$v_{CM} - V(t) = R \omega$$
Together with the equation obtained by conservation of AM:
$$v_{CM} = \frac{I_{CM} \omega}{RM}$$
$$v_{CM} = \frac{1}{2}[(\frac{I_{CM}}{R^2M} + 1) R\omega + V(t)]$$
Two problems there.
Your first two equations are not handling signs consistently. If velocities are positive to the left then for your first equation to be right anticlockwise must be positive for the rotation. The second equation was obtained using a different pair of definitions.
Secondly, you are asked to express $v_{CM}$ in terms of k (where $I=Mk^2$), R, M and V(t). ω should not feature.
#### JD_PM
Two problems there.
Your first two equations are not handling signs consistently. If velocities are positive to the left then for your first equation to be right anticlockwise must be positive for the rotation. The second equation was obtained using a different pair of definitions.
Secondly, you are asked to express $v_{CM}$ in terms of k (where $I=Mk^2$), R, M and V(t). ω should not feature.
OK I redid the problem from scratch:
From second (translation) Newton's law:
$$a_o= \frac{f}{M}$$
From second (rotation) Newton's law:
$$\tau = I \alpha = fa$$
$$k^2 \alpha= a_o a$$
$$\alpha = \frac{a_o a}{k^2}$$
Where $a_o$ is the acceleration of the cylinder measured from the ground.
Assuming that the acceleration of the belt is constant ($a_b$):
$$\frac{V}{t} = a_b$$
The acceleration of the cylinder with respect to the ground accounts for the acceleration of the belt and the angular acceleration of the cylinder (which has a negative sign because I considered the cylinder spinning clockwise i.e. the belt accelerating to the left and regarding that direction as the positive one). Therefore:
$$a_o = a_b - a\alpha$$
$$a_o = \frac{V}{t} -a\frac{a_o a}{k^2}$$
$$a_o = \frac{Vk^2}{(k^2 + a^2)t}$$
We know by kinematics that:
$$v_{cm} = \frac{Vk^2}{k^2 + a^2}$$
The issue here is that I had to assume the acceleration of the belt being constant so as to solve the problem. The answer makes sense to me but I would like to know how to solve it without assuming that $a_b = const$. (Recall that @kuruman suggested on #2 comment that we should not assume it).
#### haruspex
Homework Helper
Gold Member
2018 Award
To avoid assuming constant acceleration you will need to avoid involving acceleration at all.
You were nearly there with post #63. You just need to get your use of signs consistent and express the answer in terms of the variables given in the problem statement.
#### JD_PM
To avoid assuming constant acceleration you will need to avoid involving acceleration at all.
You were nearly there with post #63. You just need to get your use of signs consistent and express the answer in terms of the variables given in the problem statement.
OK, Fixing signs and $k$ issues one gets:
$$v_{CM} = V(t) - R \omega$$
$$v_{CM} = - \frac{k^2 \omega}{R}$$
$$v_{CM} = \frac{1}{2}[-(\frac{k^2}{R^2} + 1) R\omega + V(t)]$$
The only thing still disturbs me is that $\omega$ is still there. I am thinking now
OK It is just about using $v_{CM} = \omega R$ again. Actually doing so I get a pretty nice solution for $v_{CM}$:
$$v_{CM} = \frac{V(t)}{\frac{k^2}{R^2} + 3}$$
In terms of dimensions makes sense because k has dimensions of length, then the right hand side of the equation has dimensions of velocity.
WoW it has been a long journey but I enjoyed it a lot! :)
Last edited:
#### kuruman
Homework Helper
Gold Member
What's counter-intuitive about this problem is that if the belt brakes suddenly, the cylinder will stop moving just as suddenly.
#### JD_PM
What's counter-intuitive about this problem is that if the belt brakes suddenly, the cylinder will stop moving just as suddenly.
Mmm true but I think that is due to not considering an initial velocity on the cylinder. If we were to consider one different from zero, we would get an extra term that would make $v_{CM}$ different from zero even when $V(t) = 0$
#### haruspex
Homework Helper
Gold Member
2018 Award
it has been a long journey
And not quite over. The answer you got in post #65 was correct, but it assumed constant acceleration. You need to get the same answer without that assumption.
In post #63, the signs on ω in the first two equations were inconsistent. In post #67 you have changed the sign on both occurrences of ω, so they are still inconsistent.
Once you have those two equations right, you don't need the third equation in those two posts. In fact, I am not sure how you arrived at it. Just eliminate ω between the first two.
#### kuruman
Homework Helper
Gold Member
Mmm true but I think that is due to not considering an initial velocity on the cylinder. If we were to consider one different from zero, we would get an extra term that would make $v_{CM}$ different from zero even when $V(t) = 0$
The problem specifies that the bottle (cylinder) and the belt are initially at rest.
#### JD_PM
And not quite over. The answer you got in post #65 was correct, but it assumed constant acceleration. You need to get the same answer without that assumption.
In post #63, the signs on ω in the first two equations were inconsistent. In post #67 you have changed the sign on both occurrences of ω, so they are still inconsistent.
Once you have those two equations right, you don't need the third equation in those two posts. In fact, I am not sure how you arrived at it. Just eliminate ω between the first two.
Yeah the following equation is wrong (algebraic mistake):
$$v_{CM} = \frac{1}{2}[-(\frac{k^2}{R^2} + 1) R\omega + V(t)]$$
Then, combining the following two equations so as to solve for $v_{CM}$, one gets the same result as in #65:
$$v_{CM} = V(t) - R \omega$$
$$v_{CM} = - \frac{k^2 \omega}{R}$$
$$v_{cm} = \frac{Vk^2}{k^2 + R^2}$$
#### haruspex
Homework Helper
Gold Member
2018 Award
Now you have arrived.
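For anyone following along, here is a quick symbolic check (a sketch using sympy; I write the two relations with the sign of $\omega$ chosen so that the pair is mutually consistent, i.e. $v_{CM} = V - R\omega$ and $v_{CM} = k^2\omega/R$):

```python
import sympy as sp

v, V, R, k, w = sp.symbols('v V R k omega')

eqs = [sp.Eq(v, V - R * w),       # rolling contact with the belt
       sp.Eq(v, k**2 * w / R)]    # zero angular momentum about a belt-level axis
sol = sp.solve(eqs, [v, w], dict=True)[0]
print(sp.simplify(sol[v]))        # V*k**2/(R**2 + k**2)
```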
#### JD_PM
Now you have arrived.
The problem has been solved.
However I am still interested in the approach @kuruman suggested on 36#; solving the problem from a non-inertial reference frame. I guess this approach is based on the second Newton's Law for rotation:
$$f - F = I \alpha$$
Where $f$ is the friction and $F$ is a fictitious force.
#### kuruman
Homework Helper
Gold Member
It's not much different from what you did. The acceleration of the belt is $\frac{dV}{dt}$. In the non-inertial frame it manifests itself as a fictitious force $Ma$ acting on the CM, opposite in direction to the acceleration of the belt. The torque equation about the point of contact on the belt gives $MaR=(Mk^2+MR^2)\alpha$ (note the use of the parallel axis theorem). Let $a'_{cm}$ be the acceleration of the CM in the non-inertial frame. Then $a'_{cm}=\alpha R$. Put this back in the torque equation, cancel the masses and get
$$a'_{cm}=\frac{R^2}{k^2+R^2}a=\frac{R^2}{k^2+R^2}\left(-\frac{dV}{dt}\right).$$
The negative sign is introduced because $a'_{cm}$ and $\frac{dV}{dt}$ are in opposite directions. In the inertial frame, the acceleration of the CM is
$$a_{cm}=a+a'_{cm}=\frac{dV}{dt}-\frac{R^2}{k^2+R^2}\frac{dV}{dt} =\frac{k^2}{k^2+R^2}\frac{dV}{dt}.$$
The velocity of the CM is
$$v_{cm}(t)=\int a_{cm}\,dt=\frac{k^2}{k^2+R^2}\int \frac{dV}{dt}\,dt=\frac{k^2}{k^2+R^2}V(t).$$
I am not convinced that this is a better way to approach the problem, however.
"Cylinder lying on conveyor belt"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
|
{}
|
# Without Replacement
Last updated: September 5th, 2020
In [2]:
import pandas as pd
import numpy as np
In [3]:
toy_dataset_2 = pd.read_csv('Churn Modeling.csv')
desired_columns = ["CustomerId", "Surname", "Geography", "Gender", "Age"]
toy_dataset_1 = toy_dataset_2[desired_columns]
toy_dataset = toy_dataset_1[1:500]
toy_dataset.shape
Out[3]:
(499, 5)
This is a dataset which contains some customers.
Two people are randomly chosen without replacement; what is the probability that both are French?
In [4]:
data_F = toy_dataset[(toy_dataset["Geography"] == "France")]
(data_F.shape[0])/(toy_dataset.shape[0])*(data_F.shape[0]-1)/(toy_dataset.shape[0]-1)
Out[4]:
0.21566023613492047
Exercise: Four people are randomly chosen without replacement; what is the probability that the first has the surname Shih, the second Burns, the third Kennedy, and the fourth has a different surname from all of the above?
In [5]:
data_S = toy_dataset[(toy_dataset["Surname"] == "Shih")]
data_B = toy_dataset[(toy_dataset["Surname"] == "Burns")]
data_K = toy_dataset[(toy_dataset["Surname"] == "Kennedy")]
SBK = ((data_S.shape[0]-1)+(data_B.shape[0]-1)+(data_K.shape[0]-1))/(toy_dataset.shape[0]-3)  # chance the fourth person is a remaining Shih, Burns or Kennedy
(data_S.shape[0]/toy_dataset.shape[0])*(data_B.shape[0]/(toy_dataset.shape[0]-1))*(data_K.shape[0]/(toy_dataset.shape[0]-2))*(1-SBK)
Out[5]:
9.618221727095405e-08
|
{}
|
## Calculus, 10th Edition (Anton)
$x^{4} - 1 = (x - 1)(x^{3} + x^{2} + x + 1)$, so: $\frac{x^{4}-1}{x-1} = x^{3} + x^{2} + x + 1$. Now we have: $\lim\limits_{x \to 1^{+}}\frac{x^{4}-1}{x-1} = \lim\limits_{x \to 1^{+}}(x^{3} + x^{2} + x + 1) = 1^{3} + 1^{2} + 1 + 1 = 4$
|
{}
|
# Sherman Morrison Formula with a sum of outer-product
I have a question in which I am computing the inverse of a matrix $(A + \sum_{i=1}^{k}u_iv_i^T)$. Typically $k$ is smaller than $5$. And I know that if $k = 1$, we can apply the Sherman Morrison Formula to quickly obtain the matrix inversion. Is there any fast way to compute the inversion when $k > 1$? Note that the Woodbury formula does not seem to work in this case, since $\sum_{i=1}^{k}u_iv_i^T$ cannot be expressed as $UCV$. Thank you.
• What are $A$, $u_i$ and $v_i$? You might want to make your question clearer. – Jack Sep 30 '16 at 22:09
• $A$ is a $r\times r$ invertible matrix and $u, v$ are $r$-dim vectors. – Hongyi Xu Oct 2 '16 at 4:36
I'll write $v_i$ as $v_i^T$ just to indicate it's a row vector. I do not see any problem with expressing the sum as $UCV$: $\sum_{i=1}^k u_i v_i^T = [u_1 \; u_2 \ldots u_k] [v_1 \; v_2 \ldots v_k]^T = UCV$ with $U=[u_1 \; u_2 \ldots u_k]$, $C=I$, $V=[v_1 \; v_2 \ldots v_k]^T$.
• Thanks. However, you will have cross terms $u_i v_j^T$, where $i$ is not equal to $j$. – Hongyi Xu Oct 2 '16 at 4:33
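A quick numerical check of the identity used in the answer, showing that no cross terms appear (a sketch with NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
r, k = 6, 3
U = rng.standard_normal((r, k))   # columns are the u_i
V = rng.standard_normal((r, k))   # columns are the v_i

lhs = sum(np.outer(U[:, i], V[:, i]) for i in range(k))
rhs = U @ V.T
print(np.allclose(lhs, rhs))      # True: sum_i u_i v_i^T == U V^T
```

So the Woodbury formula applies directly with $C = I$.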
|
{}
|
# Ngô Quốc Anh
## August 20, 2012
### A Note On The Almost-Schur Lemma On 4-dimensional Riemannian Closed Manifolds
Filed under: Riemannian geometry — Tags: — Ngô Quốc Anh @ 14:10
Let us continue our posts regarding to the Schur lemma, i.e., the following estimate
$\displaystyle \int_M {|\text{Ric} - \frac{{\overline R}}{n}g{|^2}} \leqslant \frac{{{n^2}}}{{{{(n - 2)}^2}}}\int_M {|\text{Ric} - \frac{R}{n}g{|^2}}$
holds provided $\text{Ric} \geqslant 0$ and $n\geqslant 5$ where $R$ is the scalar curvature and $\overline R=\text{vol(M)}^{-1}\int_M R$ is the average of $R$.
Recently, Ge and Wang improved the above inequality for the case $n=4$. They showed that the above estimate remains valid provided the scalar curvature is non-negative.
Today, we talk about a work by Ezequiel R. Barbosa recently published in Proc. Amer. Math. Soc. 2012 [here]. Following is his main result
Theorem. Let $(M,g)$ be a $4$-dimensional closed Riemannian manifold. Then
$\displaystyle\int_M {|\text{Ric} - \frac{{\overline R}}{4}g{|^2}} \leqslant 4\int_M {|\text{Ric} - \frac{R}{4}g{|^2}} + 9\lambda _g^2 - \frac{{\overline R}}{4}\int_M R$
where $\overline R=\text{vol(M)}^{-1}\int_M R$ is the average of the scalar curvature $R$ of $g$ and $\lambda_g$ is the Yamabe invariant. Moreover, the equality holds if and only if there exists a metric $g_1 \in [g]$ such that $(M,g_1)$ is an Einstein manifold.
As can be seen, the only contribution of the above theorem is to assume no conditions on the Ricci tensor or the scalar curvature.
It is worth noticing that if the Yamabe invariant $\lambda_g$ is nonnegative, then
$\displaystyle 9\lambda _g^2 - \frac{{\overline R}}{4}\int_M R = 9{\left( {\mathop {\inf }\limits_{{g_2} \in [g]} \frac{{\int_M {\frac{{{R_{{g_2}}}}}{6}} }}{{{{(\text{vol}(M,{g_2}))}^{\frac{1}{2}}}}}} \right)^2} - \frac{{\overline R}}{4}\int_M R \leqslant 0.$
Since
$\displaystyle |\text{Ric} - \frac{{\overline R}}{4}g{|^2} = |\text{Ric}{|^2} - \frac{{\overline R}}{2}R + \frac{{{{\overline R}^2}}}{4}$
the above inequality is equivalent to
$\displaystyle\frac{8}{3}\int_M {{\sigma _2}(g)} \leqslant \lambda _g^2.$
We now choose the metric $g_1$ in such a way that $\sigma_1(g_1)$ is constant. Then
$\displaystyle\frac{8}{3}\text{vol}(M,{g_1})\int_M {{\sigma _2}({g_1})} \leqslant \text{vol}(M,{g_1})\int_M {{{({\sigma _1}({g_1}))}^2}} = {\left( {\int_M {{\sigma _1}({g_1})} } \right)^2}.$
Therefore,
$\displaystyle\frac{8}{3}\int_M {{\sigma _2}({g_1})} \leqslant {\left( {\frac{1}{{\text{vol}(M,{g_1})}}\int_M {{\sigma _1}({g_1})} } \right)^2} = \lambda _g^2$
since $g_1$ is a Yamabe solution. Making use of $n=4$, we get
$\displaystyle\frac{8}{3}\int_M {{\sigma _2}(g)} = \frac{8}{3}\int_M {{\sigma _2}({g_1})} \leqslant \lambda _g^2$
which proves the result.
|
{}
|
# Saliency map
A view of the fort of Marburg (Germany) and the saliency Map of the image using color, intensity and orientation.
In computer vision, a saliency map is an image that shows each pixel's unique quality.[1] The goal of a saliency map is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. For example, if a pixel has a high grey level or other unique color quality in a color image, that pixel's quality will show up in the saliency map in an obvious way. Saliency is a kind of image segmentation.
## Saliency as a segmentation problem
Saliency estimation may be viewed as an instance of image segmentation. In computer vision, image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.[2]
## Example implementation
First, we should calculate the distance of each pixel's value from the values of the rest of the pixels in the same frame:
${\displaystyle \mathrm {SALS} (I_{k})=\sum _{i=1}^{N}|I_{k}-I_{i}|}$
${\displaystyle I_{i}}$ is the value of pixel ${\displaystyle i}$, in the range of [0,255]. The following equation is the expanded form of this equation.
${\displaystyle \mathrm {SALS} (I_{k})=|I_{k}-I_{1}|+|I_{k}-I_{2}|+\cdots +|I_{k}-I_{N}|}$
where $N$ is the total number of pixels in the current frame. We can then restructure the formula by grouping together the pixels that share the same intensity value:
${\displaystyle \mathrm {SALS} (I_{k})=\sum _{n=0}^{255}F_{n}\,|I_{k}-I_{n}|}$
where ${\displaystyle F_{n}}$ is the frequency of the intensity value ${\displaystyle I_{n}}$, with $n$ ranging over $[0,255]$. The frequencies are given by the image histogram, which can be computed in ${\displaystyle O(N)}$ time.
### Time complexity
This saliency map algorithm has ${\displaystyle O(N)}$ time complexity, where $N$ is the number of pixels in a frame. Computing the histogram takes ${\displaystyle O(N)}$ time, and the subtraction and multiplication parts of the equation require a constant number of operations (256 per intensity level). Consequently, the time complexity of this algorithm is ${\displaystyle O(N+256)}$, which equals ${\displaystyle O(N)}$.
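A compact implementation of the histogram-based formula above might look like the following (a sketch in Python/NumPy rather than MATLAB; it assumes an 8-bit grayscale frame stored as an integer array):

```python
import numpy as np

def saliency_map(frame: np.ndarray) -> np.ndarray:
    """SALS(I_k) = sum_n F_n * |I_k - I_n|, computed via the 256-bin histogram."""
    hist = np.bincount(frame.ravel(), minlength=256)   # F_n, computed in O(N)
    levels = np.arange(256)
    # Precompute SALS for every possible gray level (constant 256x256 work),
    # then look the values up per pixel for O(N) overall.
    sals = np.abs(levels[:, None] - levels[None, :]) @ hist
    return sals[frame]
```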
### Pseudocode
All of the following code is pseudo matlab code. First, read Data from video sequences.
for k = 2 : 1 : 13 % loop over frames 2 to 13; k increases by one each iteration
I1 = im2single(I); % convert double image into single (required by vl_slic)
I2 = im2single(l); % same conversion for the previous frame l
regionSize = 10; % set the parameter of SLIC this parameter setting are the experimental result. RegionSize means the superpixel size.
regularizer = 1; % set the parameter of SLIC
segments1 = vl_slic(I1, regionSize, regularizer); % get the superpixel of current frame
segments2 = vl_slic(I2, regionSize, regularizer); % get superpixel of the previous frame
numsuppix = max(segments1(:)); % get the number of superpixel all information about superpixel is in this link
regstats1 = regionprops(segments1, 'all');
regstats2 = regionprops(segments2, 'all'); % get the region characteristics based on each segmentation
After we read the data, we apply the superpixel process to each frame. spnum1 and spnum2 represent the superpixel counts of the current frame and the previous frame.
% First, we calculate the value distance of each pixel.
% This is our core code
for i=1:1:spnum1 % from the first superpixel of the current frame to the last one
for j=1:1:spnum2 % from the first superpixel of the previous frame to the last one
centredist(i,j) = sum((center(i)-center(j))); % calculate the center distance
end
end
Then we calculate the color distance of each pixel, a process we call the contrast function.
for i=1:1:spnum1 % from the first superpixel of the current frame to the last one
for j=1:1:spnum2 % from the first superpixel of the previous frame to the last one
posdiff(i,j) = sum((regstats1(j).Centroid'-mupwtd(:,i))); % calculate the color distance
end
end
After these two processes, we get a saliency map; all of these maps are then stored in a new file folder.
### Difference in algorithms
The major difference between functions one and two is the contrast function used. If spnum1 and spnum2 both represent the current frame's superpixel count, then the contrast function belongs to the first saliency function. If spnum1 is the current frame's superpixel count and spnum2 is the previous frame's, then the contrast function belongs to the second saliency function. If we use the second contrast function, which uses pixels of the same frame to get center distances for a saliency map, we then apply this saliency function to each frame and subtract the previous frame's saliency map from the current frame's to get a new image, which is the saliency result of the third saliency function.
Saliency result
## References
1. ^ Kadir, Timor; Brady, Michael (2001). "Saliency, Scale and Image Description". International Journal of Computer Vision. 45 (2): 83–105. CiteSeerX 10.1.1.154.8924. doi:10.1023/A:1012460413855.
2. ^ A. Maity (2015). "Improvised Salient Object Detection and Manipulation". arXiv:1511.02999 [cs.CV].
|
{}
|
# Chapter 36 - Think and Explain: 28
Light does bend, but for surveying purposes, we say that light effectively travels in a straight line.
#### Work Step by Step
The bending of light due to gravity is unnoticeable for short distances. The path of a photon bends downward just as the path of a thrown baseball does, by the same distance in the same time interval. However, light travels so fast (roughly a foot per nanosecond) that in traveling a 1000-meter survey distance, taking a time of $t = 3.3 \times 10^{-6} s$, it would only drop about $5.4 \times 10^{-11} m$, roughly the size of an atom.
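As a quick check of those numbers (a minimal sketch):

```python
t = 1000 / 3.0e8          # time to travel 1000 m, ~3.3e-6 s
drop = 0.5 * 9.8 * t**2   # gravitational drop in that time, ~5.4e-11 m
print(t, drop)
```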
|
{}
|
## The Annals of Statistics
### On the Asymptotic Relation Between $L$-Estimators and $M$-Estimators and their Asymptotic Efficiency Relative to the Cramer-Rao Lower Bound
Constance van Eeden
#### Abstract
This paper gives conditions under which $L$- and $M$-estimators of a location parameter have asymptotically the same distribution, conditions under which they are asymptotically equivalent and conditions under which $L$-estimators are asymptotically efficient relative to the Cramer-Rao lower bound. Our results differ from analogous results of Jung (1955), Bickel (1965), Chernoff, Gastwirth, Johns (1967), Jaeckel (1971) and Rivest (1978, 1982) in that we do not require that the derivative of the density of the observations is absolutely continuous nor that the function defining the $M$-estimator is absolutely continuous.
#### Article information
Source
Ann. Statist., Volume 11, Number 2 (1983), 674-690.
Dates
First available in Project Euclid: 12 April 2007
Permanent link to this document
https://projecteuclid.org/euclid.aos/1176346172
Digital Object Identifier
doi:10.1214/aos/1176346172
Mathematical Reviews number (MathSciNet)
MR696078
Zentralblatt MATH identifier
0526.62038
van Eeden, Constance. On the Asymptotic Relation Between $L$-Estimators and $M$-Estimators and their Asymptotic Efficiency Relative to the Cramer-Rao Lower Bound. Ann. Statist. 11 (1983), no. 2, 674--690. doi:10.1214/aos/1176346172. https://projecteuclid.org/euclid.aos/1176346172
|
{}
|
# Hilbert Polynomial II
My overall goal has not changed, but I definitely have a much clearer picture of where my posts are headed for right now. I recently was working on what happens to dimension when you intersect varieties, and I needed a commutative algebra result that sort of surprised me. So that is my first benchmark on this front. Lucky for me, there is a nice clean way to prove it using the Hilbert polynomial, so I can just continue this course for now.
Let’s now reconstruct the Hilbert polynomial in a different way. As before let $M$ be a finitely generated graded $R$-module. Then $M_n$ is finitely generated as an $A_0$-module.
Let $\lambda$ be an additive function (in $\mathbb{Z}$) on the class of finitely generated $A_0$-modules. We define the Poincare series of $M$ to be the generating function of $\lambda(M_n)$. So we get a power series with coefficients in $\mathbb{Z}$: $P(M, t)=\sum \lambda(M_n)t^n$.
By a remarkably similar argument to the last post we can check by induction that $P(M, t)$ is a rational function in $t$ of the form $\displaystyle \frac{f(t)}{\prod_{i=1}^s (1-t^{k_i})}$ where $f(t)\in\mathbb{Z}[t]$.
Let’s suggestively call the order of the pole at $t=1$, $d(M)$.
We now simplify the situation by taking all $k_i=1$. Then the main idea for today is that $\lambda(M_n)$ is a polynomial of degree $d-1$. In fact, $\lambda(M_n)=H_M(n)$.
Our simplification gives that $P(M, t)=f(t)\cdot (1-t)^{-s}$. So $\lambda(M_n)$ is the coefficient of $t^n$. If we cancel factors of $(1-t)$ out of $f(t)$ we can assume $f(1)\neq 0$ and that $s=d$. Write $f(t)=\sum_{k=0}^N a_kt^k$. Then since $\displaystyle (1-t)^{-d}=\sum_{k=0}^\infty \left(\begin{matrix} d+k-1 \\ d-1 \end{matrix}\right) t^k$ we get that $\lambda(M_n)=\sum_{k=0}^N a_k \left(\begin{matrix} d+n-k-1 \\ d-1 \end{matrix}\right)$ for all $n\geq N$.
Thus we get a polynomial with non-zero leading term. Note the values at integers are integers, but the coefficients in general are only rationals.
Since $\lambda$ was any additive function, this is a bit more general. But taking $\lambda(M_n)=\dim M_n$ we get the Hilbert polynomial from last time.
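For a concrete sanity check, take $M=R=k[x_1,\dots,x_d]$ with the standard grading and $\lambda=\dim_k$. Then $P(M,t)=(1-t)^{-d}$, so the pole order is $d$, and by the series expansion above $\lambda(M_n)=\left(\begin{matrix} d+n-1 \\ d-1 \end{matrix}\right)$, which is indeed a polynomial in $n$ of degree $d-1$.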
Next time we’ll start using this to streamline some proofs about dimension.
Why would $\lambda (M_n) = H_M(n)$?
We can take $\lambda$ to be any additive function, and $H_M(n)$ is defined to be $\dim M_n$, so the additive function $\lambda(M_n)=\dim M_n$ is just one example of a $\lambda$ that works. This $\lambda$ is actually more general.
|
{}
|
Poster
Generalization Bounds for Stochastic Gradient Descent via Localized $\varepsilon$-Covers
Sejun Park · Umut Simsekli · Murat Erdogdu
Wed Nov 30 09:00 AM -- 11:00 AM (PST) @ Hall J #309
In this paper, we propose a new covering technique localized for the trajectories of SGD. This localization provides an algorithm-specific complexity measured by the covering number, which can have dimension-independent cardinality in contrast to standard uniform covering arguments that result in exponential dimension dependency. Based on this localized construction, we show that if the objective function is a finite perturbation of a piecewise strongly convex and smooth function with $P$ pieces, i.e., non-convex and non-smooth in general, the generalization error can be upper bounded by $O(\sqrt{(\log n\log(nP))/n})$, where $n$ is the number of data samples. In particular, this rate is independent of dimension and does not require early stopping and decaying step size. Finally, we employ these results in various contexts and derive generalization bounds for multi-index linear models, multi-class support vector machines, and $K$-means clustering for both hard and soft label setups, improving the previously known state-of-the-art rates.
|
{}
|
# Normality Calculator
Created by Rahul Dhari
Reviewed by Purnima Singh, PhD and Steven Wooding
Last updated: Feb 28, 2023
The normality calculator will help you determine the number of equivalents of solute present in one liter of solution.
Normality in chemistry, otherwise known as equivalent concentration, is a function of molar concentration (see molarity calculator) and equivalence factor. It is helpful in medical science and chemistry to report the concentration of a solution.
This calculator and the accompanying article explain what normality is and the equation to calculate it. The article also discusses molarity vs. normality.
## Normality definition and units
Normality (N), otherwise known as the normal concentration, is a measure of concentration of solute in gram equivalents per volume of the solution. For a solution with 'x' grams of solute, the normality equation is:
$\scriptsize N = \frac{\text{Mass of solute}} {\text{equivalent wt.} \times \text{volume of soln.} },$
where the mass of the solute is in grams.
The units of normality are "eq/L" or "meq/L". A measurement of 1 eq/L is also 1 N. You can calculate the equivalent weight of a solute using its molecular weight and valency of the solute:
$\scriptsize \text{Eq. wt} = \frac{\text{Molecular wt. of solute}} {\text{Valency}}$
Our molecular weight calculator will be handy for calculating the equivalent weight.
## Normality vs. molarity
Molarity (M) and Normality (N) are closely related terms and are often confusing. Both quantities deal with concentration. However, molarity is the moles of substance per volume of solution, whereas normality is the equivalent weight of substance per liter of solution. You can relate normality and molarity using the equation:
$\scriptsize N \times \text{Eq.wt.} = M × \text{Molecular mass}$
The normal concentration is always greater than or equal to its molar counterpart. Both concepts are useful for reporting concentrations during titrations and in medical serums.
## Using the normality calculator
Now that you know what normality is, let's calculate the normality of 1 g of sodium carbonate (Na₂CO₃) dissolved in 3 liters of water.
To calculate normality:
1. Enter the mass of solute in grams as 1 g.
2. Fill in the equivalent weight of Na₂CO₃ as 52.95 g/eq. You can calculate this by adding up the atomic masses in sodium carbonate (about 105.99 g/mol) and dividing by 2, because there are 2 sodium ions for every carbonate ion.
3. The normality of the solution is:
$\qquad \scriptsize N = \frac{1}{52.95 \times 3} = 0.006295~\mathrm{eq/L}$
Alternatively, you can also use this calculator to calculate the amount of solute needed to obtain the concentration of 1 N.
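If you prefer to script the calculation above, it is a one-liner (a sketch; the function name is mine):

```python
def normality(mass_g: float, eq_weight_g_per_eq: float, volume_L: float) -> float:
    """Normal concentration in eq/L: mass / (equivalent weight * volume)."""
    return mass_g / (eq_weight_g_per_eq * volume_L)

print(normality(1.0, 52.95, 3.0))  # ~0.006295 eq/L
```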
## FAQ
### What do you mean by normality?
The normality is the number of gram equivalents of solute per unit volume of solution. The units of normality are eq/L or meq/L; 1 eq/L is 1 N, and 1 meq/L is 0.001 N.
### How do I calculate normality?
To calculate normality:
1. Find the mass and equivalent weight of the solute.
2. Divide the mass of solute by equivalent weight of solute.
3. Divide the resultant by the volume of solution to obtain the normality or normal concentration.
### What is the normality for 2 g of N₂ in 500 ml solution?
The normality is 0.1428 eq/L. For a solute of mass 2 g and equivalent weight 28.014 g/eq, the normality equals N = 2 / (28.014 × 0.5) = 0.1428 eq/L, or 0.1428 N.
### What is the difference between molarity and normality?
The normality deals with the equivalent weight of the solute, whereas the molarity deals with the molar mass of the solute. The normal concentration or normality is always greater than the molarity. The two entities are related by:
Normality × Equivalent weight = Molarity × Molar mass.
{}
|
# the driving wheel of an engine ...
the driving wheel of an engine is 2.1m in radius find the distance covered when it has made 8000 revolution
Naveen Sharma 2 years, 5 months ago
Ans. Radius of Wheel (r) = 2.1m
Distance covered in one Revolution = Circumference of Wheel = $$2{\pi} r = 2\times {22\over 7}\times 2.1 = 13.2 m$$
Distance covered in 8000 Revolutions = $$13.2 \times 8000 = 105600m = 105.6 Km$$
{}
|
# Formatting front matter title and page number in ToC to be single spaced and roman
I want to achieve the following (ignore the artifacts, it's from a university reference for their thesis style),
I have the following MWE,
\documentclass[12pt, a4paper, oneside, parskip=full]{memoir}
\setulmarginsandblock{2.5cm}{3cm}{*} % Upper-Lower margins
\setlrmarginsandblock{3.8cm}{2.5cm}{*} % Left-Right margins
\checkandfixthelayout
%******************************
% Table of con../fig../tab..
%*************************
\renewcommand{\listfigurename}{LIST OF FIGURES}
\renewcommand{\listtablename}{LIST OF TABLES}
\renewcommand{\printtoctitle}[1]{\centering\Large\bfseries #1}
\renewcommand{\printloftitle}[1]{\centering\Large\bfseries #1}
\renewcommand{\printlottitle}[1]{\centering\Large\bfseries #1}
\usepackage{textcase} % provides \MakeTextUppercase, used just below
\renewcommand{\chapternumberline}[1]{\MakeTextUppercase{\chaptername\
\numtoName{#1}: }}
\setcounter{tocdepth}{3}
\renewcommand\cftchaptername{\chaptername~}
\renewcommand{\cftchapterdotsep}{\cftdotsep}% Chapters should use dots in ToC
\usepackage{times} % Use Times font
%******************************
% Spacing
%*************************
\usepackage{parskip}
\setlength{\parindent}{1.2cm} % Paragraph indent
\frenchspacing % Use single space after end of sentence
%******************************
% Others
%*************************
\usepackage{pdfpages}
\usepackage{lipsum}
\begin{document}
%******************************
\frontmatter
%*************************
\tableofcontents*
\listoftables
\listoffigures
%******************************
\mainmatter
%*************************
\DoubleSpacing
\chapter{Introduction}
\lipsum[4-5]
%******************************
\backmatter
%*************************
\end{document}
Here is the code for SamplePage.tex:
\documentclass{article}
\title{Sample Page}
\usepackage{lipsum}
\begin{document}
\maketitle
\lipsum[4-5]
\thispagestyle{empty}
\end{document}
I've got the following so far,
I've already read Customising ToC appearance of chapter titles in front and back matter but I don't think it applies here since I'm using memoir.
My questions:
1. How to make front matter titles and respective numbers be roman (un-bold)? and have trailing dots in between them?
2. How can I have single spacing between the front matter entries in the ToC?
3. How can I have "List of Figures" and "List of Tables" in the ToC but full caps in the \title{} and thus the page?
4. How can I make Chapter headings along with their titles in full caps?
5. and also make chapter, and all sections, subsections, etc. have trailing dots to their respective page numbers?
|
{}
|
# What kind of significant impacts have originated from E=mc^2. Generally, it is regarded as the most famous equation of all time.
What kind of significant impacts have originated from $E=m{c}^{2}$.
Generally, it is regarded as the most famous equation of all time. Except for nuclear energy (fission and fusion) I do not know any other way in which this equation has made an impact on the world.
Can somebody list some developments and impacts based on this equation?
Cremolinoer
While it is famous in popular culture, I don't think that there are any useful phenomena other than fission/fusion that can be explained through this equation. In fact, this equation on its own is quite useless (and not used that much either -- it lets one know that mass and energy are not distinct quantities, but after that there's not much you can do with this; ${E}^{2}={p}^{2}{c}^{2}+{m}^{2}{c}^{4}$ is more useful).
Besides, nuclear fission and fusion aren't really consequences of just this equation. They are consequences of the underlying theory (special relativity) as a whole. However, they can be easily explained via this equation. Similarly, matter-antimatter annihilation can be explained with this equation, but it certainly wasn't discovered from it.
To put this question in perspective, compare with "What developments came from the equation $F=ma$?". Developments came from the underlying theory of Newtonian Mechanics, but there isn't really anything that comes specifically from $F=ma$.
|
{}
|
## anonymous 3 years ago PLEASE HELP: SIMPLE HARMONIC MOTION A 0.77 kg object is attached to one end of a spring hanging vertically from a stand. The object is pushed upward 0.14m from the equilibrium position and released, causing the object to oscillate in simple harmonic motion with a frequency of 0.769Hz. Determine the period of the motion, angular frequency, spring constant, Displacement of the object after 0.13s, and Acceleration of the object after 0.13s
1. anonymous
Hi, you can find $\omega$ from this: $\omega = 2\pi f$, and the spring constant from this: $\omega = \sqrt{k/m}$
2. anonymous
@Yahoo! ..solve this question...
3. anonymous
4. anonymous
this solution is formula based..Explain it
5. anonymous
(four whiteboard drawings, not preserved)
6. anonymous
(whiteboard drawing, not preserved)
7. anonymous
Time period = 1/(linear frequency) = 1/f.
8. anonymous
(two whiteboard drawings, not preserved)
9. anonymous
@Taufique x = A sin(ωt)..... right?
10. anonymous
x = A sin(ωt − φ) and x = A cos(ωt − φ).. both will give you the same result (they differ only by the choice of phase constant)..
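Pulling the thread's formulas together, a quick numeric check (a sketch; it assumes the object is released from rest at the 0.14 m displacement, so x(t) = A·cos(ωt) with displacement measured from equilibrium):
```python
import math

m, A, f = 0.77, 0.14, 0.769      # mass (kg), amplitude (m), frequency (Hz)
t = 0.13                         # time of interest (s)

T = 1 / f                        # period: ~1.30 s
w = 2 * math.pi * f              # angular frequency: ~4.83 rad/s
k = m * w**2                     # spring constant from w = sqrt(k/m): ~18.0 N/m

x = A * math.cos(w * t)          # displacement after 0.13 s: ~0.113 m
a = -w**2 * x                    # acceleration after 0.13 s: ~-2.65 m/s^2

print(f"T={T:.3f} s, w={w:.3f} rad/s, k={k:.2f} N/m, x={x:.4f} m, a={a:.3f} m/s^2")
```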
|
{}
|
Question 1: JEE Main 2022 (Online) 28th July Evening Shift (Marks: +4 / −1)
The function $$f: \mathbb{R} \rightarrow \mathbb{R}$$ defined by
$$f(x)=\lim\limits_{n \rightarrow \infty} \frac{\cos (2 \pi x)-x^{2 n} \sin (x-1)}{1+x^{2 n+1}-x^{2 n}}$$ is continuous for all x in :
A
$$\mathbb{R}-\{-1\}$$
B
$$\mathbb{R}-\{-1,1\}$$
C
$$\mathbb{R}-\{1\}$$
D
$$\mathbb{R}-\{0\}$$
Question 2: JEE Main 2022 (Online) 27th July Evening Shift (Marks: +4 / −1)
If for $$\mathrm{p} \neq \mathrm{q} \neq 0$$, the function $$f(x)=\frac{\sqrt[7]{\mathrm{p}(729+x)}-3}{\sqrt[3]{729+\mathrm{q} x}-9}$$ is continuous at $$x=0$$, then :
A
$$7 p q \,f(0)-1=0$$
B
$$63 q \,f(0)-\mathrm{p}^{2}=0$$
C
$$21 q \,f(0)-\mathrm{p}^{2}=0$$
D
$$7 p q \,f(0)-9=0$$
Question 3: JEE Main 2022 (Online) 26th July Evening Shift (Marks: +4 / −1)
Let $$\beta=\mathop {\lim }\limits_{x \to 0} \frac{\alpha x-\left(e^{3 x}-1\right)}{\alpha x\left(e^{3 x}-1\right)}$$ for some $$\alpha \in \mathbb{R}$$. Then the value of $$\alpha+\beta$$ is :
A
$$\frac{14}{5}$$
B
$$\frac{3}{2}$$
C
$$\frac{5}{2}$$
D
$$\frac{7}{2}$$
Question 4: JEE Main 2022 (Online) 26th July Morning Shift (Marks: +4 / −1)
If the function $$f(x) = \begin{cases} \frac{\log_e(1 - x + x^2) + \log_e(1 + x + x^2)}{\sec x - \cos x}, & x \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) - \{0\} \\ k, & x = 0 \end{cases}$$ is continuous at x = 0, then k is equal to:
A
1
B
$$-$$1
C
e
D
0
|
{}
|
# Find integrals
1. Apr 3, 2009
### -EquinoX-
1. The problem statement, all variables and given/known data
Find the integral of
$$\int{\left(9\cos(t)e^{9\sin(t)} - e^{\cos(t)}\sin(t)\right)} \, dt$$
$$\int{\left(-t^2\sin(t) + 2t\cos(t)\right)} \, dt$$
$$\int{\left(-8\sin^2(t) - 2\cos(t)\sin(2\sin(t))\right)} \, dt$$
2. Relevant equations
3. The attempt at a solution
I got -2tsin(2t) - t^2cos(t) + 2 cos(t) - 2t*sin(t)
is this correct?
Last edited: Apr 3, 2009
2. Apr 3, 2009
### xepma
You calculated the derivative... Not the primitive.
(It's not even possible to write down the primitive in terms of elementary functions)
3. Apr 3, 2009
### -EquinoX-
oops.. that's right, so how do I get the integrals of it
4. Apr 3, 2009
### n!kofeyn
For the first listed integral, break the integral into two integrals and then perform a separate u-substitution for both integrals. I.e., let $u$ be something in the first integral, and let $v$ be something in the second integral.
For the second listed integral, again break the integral into two integrals and then use integration by parts. You will need to do integration by parts twice for the first of the two integrals.
Let's hold off on the third integral for now. Further hints can be given if you are still stuck.
5. Apr 3, 2009
### -EquinoX-
are you saying u = 9cos(t)e^{9sin(t)}, or are you asking me to break this up into a u and a v?
6. Apr 3, 2009
### n!kofeyn
No. That isn't how you apply u-substitution. Break the integral up as:
$$\int \left( 9\cos t e^{9\sin t} - \sin t e^{\cos t} \right) \,dt = \int 9\cos t e^{9\sin t} \,dt + \int (-\sin t) e^{\cos t} \,dt$$
For the integral on the left. Let $u = 9\sin t$. Then $du = 9\cos t \,dt$. Then the integral on the left becomes
$$\int \underbrace{e^{9\sin t}}_{e^u} \, \underbrace{9\cos t \,dt}_{du} = \int e^u \,du = e^u + C = e^{9\sin t} + C$$
Now try the integral on the right by letting v be something, similar to the one I just did.
7. Apr 4, 2009
### -EquinoX-
thanks for the help
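For anyone checking their work later: all three integrands are set up so that substitution (and, for the second one, integration by parts) succeeds, so a CAS should recover the antiderivatives. A small sketch with SymPy (assuming it is installed):
```python
import sympy as sp

t = sp.symbols('t')
integrands = [
    9*sp.cos(t)*sp.exp(9*sp.sin(t)) - sp.exp(sp.cos(t))*sp.sin(t),
    -t**2*sp.sin(t) + 2*t*sp.cos(t),
    -8*sp.sin(t)**2 - 2*sp.cos(t)*sp.sin(2*sp.sin(t)),
]
for f in integrands:
    F = sp.integrate(f, t)                    # antiderivative
    print(sp.simplify(sp.diff(F, t) - f), F)  # first entry should be 0
```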
|
{}
|
Making bed files from fasta
3
0
Entering edit mode
6.2 years ago
peri.tobias ▴ 10
Trying to run a biopython script in Windows cmd to make a bed file from a draft genome downloaded from NCBI. I get the following error. The headers appear to be fine. I have used the biopython script many times with success previously. Can someone see what the error is please?
C:\Python34>python.exe test.fasta make_bed_from_fasta.py >test.bed
File "test.fasta", line 1
>KQ503367.1
^
SyntaxError: invalid syntax
The fasta looks like this (top lines). PS. I am not sure why the post is edited to remove the ">", but it is present in the header like this ">KQ503367.1", with the sequence on the following line.
>KQ503367.1
TGGAAAATTTGGTttgtaattctttttctaaaaaaaacttattttggGGTGTATGATGTGGGTTATTTGGGAGGGGTGAG
AAAAAGTGTGAAACAAATGGTTGAAGggtttttggaagttttttttccaaatacaggttttttgtttcattttaatttaa
aatgggcCTGGGGAAacccttacatgtttttaccaaattggTTAGGTGGGTTTACCAAAGCCCTAAATTGATTAGAACTt
genome • 4.9k views
0
Entering edit mode
6.2 years ago
peri.tobias ▴ 10
My apologies, my biopython script was corrupted. The fasta is okay.
0
Entering edit mode
6.2 years ago
I think you've already figured out that you need to pass the python script as the first argument instead of the fasta file...
0
Entering edit mode
Thanks Matt. Yes I should have looked a bit closer before posting.
3
Entering edit mode
6.0 years ago
I guess I'll also mention that you could use pyfaidx for this as well:
$ pip install pyfaidx
$ faidx --transform bed test.fasta > test.bed
0
Entering edit mode
I want to point out that this feature didn't work as intended until pyfaidx v0.5.2, where someone pointed out that the coordinates weren't 0-based, half-open as expected. This has now been fixed: https://github.com/mdshw5/pyfaidx/releases/tag/v0.5.2
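For reference, a minimal sketch of the kind of script being run here (not the original make_bed_from_fasta.py, which isn't shown; this one uses pyfaidx and writes one 0-based, half-open BED line per record):
```python
from pyfaidx import Fasta

# one BED line per FASTA record: name, 0-based start, half-open end
for record in Fasta("test.fasta"):
    print(f"{record.name}\t0\t{len(record)}")
```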
|
{}
|
TLS-capable transport using OpenSSL for asyncio
## Project description
aioopenssl provides an asyncio Transport which uses PyOpenSSL instead of the built-in ssl module.
The transport has two main advantages compared to the original:
• The TLS handshake can be deferred by passing use_starttls=True and later calling the starttls() coroutine method.
This is useful for protocols with a STARTTLS feature.
• A coroutine can be called during the TLS handshake; this can be used to defer the certificate check to a later point, allowing you e.g. to get user feedback before the starttls() method returns.
This makes it possible to ask users whether to trust a certificate without the application-layer protocol interfering or starting to communicate with the unverified peer.
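A rough usage sketch of the deferred handshake (hedged: the exact create_starttls_connection signature and the form of the context factory are assumptions based on the description above, not verified against the API reference):
```python
import asyncio

import OpenSSL.SSL
import aioopenssl


def ssl_context_factory(transport):
    # assumption: a callable returning a PyOpenSSL context is expected here
    return OpenSSL.SSL.Context(OpenSSL.SSL.TLS_METHOD)


async def main():
    loop = asyncio.get_running_loop()
    transport, protocol = await aioopenssl.create_starttls_connection(
        loop,
        asyncio.Protocol,
        "example.com",
        5222,
        ssl_context_factory=ssl_context_factory,
        use_starttls=True,  # defer the TLS handshake
    )
    # ... negotiate STARTTLS at the application layer here ...
    await transport.starttls()  # then upgrade the connection


asyncio.run(main())
```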
## Documentation
Official documentation can be built with sphinx and is available online on our servers.
|
{}
|
# ReLU, Sigmoid and Tanh: today’s most used activation functions
Today’s deep neural networks can handle highly complex data sets. For example, object detectors have grown capable of predicting the positions of various objects in real-time; timeseries models can handle many variables at once and many other applications can be imagined.
The question is: why can those networks handle such complexity. More specifically, why can they do what previous machine learning models were much less capable of?
There are many answers to this question. Primarily, the answer lies in the depth of the neural network – it allows networks to handle more complex data. However, a part of the answer lies in the application of various activation functions as well – and particularly the non-linear ones most used today: ReLU, Sigmoid and Tanh.
In this blog, we will find out a couple of things:
• What an activation function is;
• Why you need an activation function;
• An introduction to the Sigmoid activation function;
• An introduction to the Tanh, or tangens hyperbolicus, activation function;
• An introduction to the Rectified Linear Unit, or ReLU, activation function.
Are you ready? Let’s go! 🙂
## What is an activation function?
You probably recall the structure of a basic neural network, composed in deep learning terms of densely-connected layers:
In this network, every neuron is composed of a weights vector and a bias value. When a new vector is input, it computes the dot product between the weights and the input vector, adds the bias value and outputs the scalar value.
…until it doesn’t.
Because, put very simply, both the dot product and the scalar addition are linear operations.
Hence, when you have this value as neuron output and do this for every neuron, you have a system that behaves linearly.
And as you probably know, most data is highly nonlinear. Since linear neural networks would not be capable of e.g. generating a decision boundary in those cases, there would be no point in applying them when generating predictive models.
The system as a whole must therefore be nonlinear.
Enter the activation function.
This function, which is placed directly behind every neuron, takes as input the linear neuron output and generates a nonlinear output based on it, often deterministically (i.e., when you input the same value twice, you’ll get the same result).
This way, with every neuron generating in effect a linear-but-nonlinear output, the system behaves nonlinearly as well and by consequence becomes capable of handling nonlinear data.
### Activation outputs increase with input
Neural networks are inspired by the human brain. Although very simplistic, they can be considered to resemble the way human neurons work: they are part of large neural networks as well, with synapses – or pathways – in between. Given neural inputs, human neurons activate and pass signals to other neurons.
The system as a whole results in human brainpower as we know it.
If you wish to resemble this behavior in neural network activation functions, you’ll need to resemble human neuron activation as well. Relatively trivial is the notion that in human neural networks outputs tend to increase when stimulation, or input to the neuron, increases. By consequence, this is also often the case in artificial ones.
Hence, we’re looking for mathematical formulae that take linear input, generate a nonlinear output and increase or remain stable over time (a.k.a.,
### Towards today’s prominent activation functions
Today, three activation functions are most widely used: the Sigmoid function, the Tangens hyperbolicus or tanh and the Rectified Linear Unit, or ReLU. Next, we’ll take a look at them in more detail.
## Sigmoid
Below, you’ll see the (generic) sigmoid function, also known as the logistic curve:
Mathematically, it can be represented as follows:
$$f(x) = \frac{1}{1 + e^{-x}}$$
As you can see in the plot, the function slowly increases over time, but the greatest increase can be found around $$x = 0$$. The range of the function is $$(0, 1)$$; i.e. towards high values for $$x$$ the function therefore approaches 1, but never equals it.
The Sigmoid function allows you to do multiple things. First, as we recall from our post on why true Rosenblatt perceptrons cannot be created in Keras, the step functions used in those ancient neurons are not differentiable, and hence gradient descent cannot be applied for optimization. Second, when we implemented the Rosenblatt perceptron ourselves with the Perceptron Learning Rule, we noticed that in a binary classification problem the decision boundary is optimized per neuron and will find one of the possible boundaries if they exist. This gets easier with the Sigmoid function, since it is smoother (Majidi, n.d.).
Additionally, and perhaps primarily, we use the Sigmoid function because it outputs between $$(0, 1)$$. When estimating a probability, this is perfect, because probabilities have a very similar range of $$[0, 1]$$ (Sharma, 2019). Especially in binary classification problems, when we effectively estimate the probability that the output is of some class, Sigmoid functions allow us to give a very weighted estimate. The output $$0.623$$ between classes A and B would indicate “slightly more of B”. With a step function, the output would have likely been $$1$$, and the nuance disappears.
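For concreteness, a minimal NumPy sketch of the function (not tied to any particular framework):
```python
import numpy as np

def sigmoid(x):
    # logistic curve: squashes any real input into (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-4.0, 0.0, 4.0])))  # ~[0.018, 0.5, 0.982]
```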
## Tangens hyperbolicus: Tanh
Another widely used activation function is the tangens hyperbolicus, or hyperbolic tangent / tanh function:
It works similar to the Sigmoid function, but has some differences.
First, the change in output accelerates close to $$x = 0$$, just as with the Sigmoid function.
It does also share its asymptotic properties with Sigmoid: although for very large values of $$x$$ the function approaches 1, it never actually equals it.
On the lower side of the domain, however, we see a difference in the range: rather than approaching $$0$$ as minimum value, it approaches $$-1$$.
### Differences between tanh and Sigmoid
You may now probably wonder what the differences are between tanh and Sigmoid. I did too.
Obviously, the range of the activation function differs: $$(0, 1)$$ vs $$(-1, 1)$$, as we have seen before.
Although this difference seems to be very small, it might have a large effect on model performance; specifically, how fast your model converges towards the most optimal solution (LeCun et al., 1998).
This is related to the fact that tanh is symmetric around the origin and hence produces outputs that are, on average, close to zero. Outputs close to zero are best: during optimization, they produce the smallest weight swings and hence let your model converge faster. This really helps when your models are very large indeed.
As we can see, the tanh function is symmetric around the origin, where the Sigmoid function is not. Should we therefore always choose tanh?
Nope – it comes with a set of problems, or perhaps more positively, challenges.
## Challenges of Sigmoid and Tanh
The paper by LeCun et al. was written in 1998 and the world of deep learning has come a long way… identifying challenges that had to be solved in order to bring forward the deep learning field.
First of all, we’ll have to talk about model sparsity (DaemonMaker, n.d.). The less complex the model is during optimization, the faster it will converge, and the more likely it is that you’ll find a mathematical optimum in time.
And complexity can be viewed as the number of unimportant neurons that are still in your model. The fewer of them, the better – or sparser – your model is.
Sigmoid and Tanh essentially produce non-sparse models because their neurons pretty much always produce an output value: when the ranges are $$(0, 1)$$ and $$(-1, 1)$$, respectively, the output either cannot be zero or is zero with very low probability.
Hence, if certain neurons are less important in terms of their weights, they cannot be ‘removed’, and the model is not sparse.
Another possible issue with the output ranges of those activation functions is the so-called vanishing gradients problem (DaemonMaker, n.d.). During optimization, data is fed through the model, after which the outcomes are compared with the actual target values. This produces what is known as the loss. Since the loss can be considered to be an (optimizable) mathematical function, we can compute the gradient towards the zero derivative, i.e. the mathematical optimum.
Neural networks however comprise many layers of neurons. We would essentially have to repeat this process over and over again for every layer with respect to the downstream ones, and subsequently chain them. That’s what backpropagation is. Subsequently, we can optimize our models with gradient descent or a similar optimizer.
When the factors in these chains are very small (i.e. $$-1 < \text{value} < 1$$), the chained gradients produced during optimization get smaller and smaller towards the upstream layers. This causes them to learn very slowly, and makes it questionable whether they will converge to their optimum at all: enter the vanishing gradients problem.
A more detailed review on this problem can be found here.
## Rectified Linear Unit: ReLU
In order to improve on these observations, another activation was introduced. This activation function, named Rectified Linear Unit or ReLU, is the de facto first choice for most deep learning projects today. It is much less sensitive to the problems mentioned above and hence improves the training process.
It looks as follows:
And can be represented as follows:
$$f(x) = \begin{cases} 0, & \text{if}\ x < 0 \\ x, & \text{otherwise} \\ \end{cases}$$
Or, in plain English, it produces a zero output for all inputs smaller than zero, and $$x$$ for all other inputs. Hence, for all $$\text{inputs} \leq 0$$, it produces zero outputs.
### Sparsity
This benefits sparsity substantially: in almost half the cases, now, the neuron doesn’t fire anymore. This way, neurons can be made silent if they are not too important anymore in terms of their contribution to the model’s predictive power.
It also reduces the impact of vanishing gradients, because the gradient is always a constant: the derivative of $$f(x) = 0$$ is 0 while the derivative of $$f(x) = x$$ is 1. Models hence learn faster and more evenly.
### Computational requirements
Additionally, ReLU needs far fewer computational resources than the Sigmoid and Tanh functions (Jaideep, n.d.). The function that essentially needs to be executed to arrive at ReLU is a max function: $$\max(0, x)$$ produces 0 when $$x < 0$$ and $$x$$ when $$x \geq 0$$. That's ReLU!
Now compare this with the formulas of the Sigmoid and tanh functions presented above: those contain exponents. Computing the output of a max function is much simpler and less computationally expensive than computing the output of exponents. For one calculation, this does not matter much, but note that in deep learning many such calculations are made. Hence, ReLU reduces your need for computational requirements.
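The same point in code: ReLU is a single elementwise maximum, whereas Sigmoid and Tanh each need an exponential under the hood (a minimal NumPy sketch):
```python
import numpy as np

def relu(x):
    # elementwise max(0, x): zero for negative inputs, identity otherwise
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))     # [0. 0. 3.]
print(np.tanh(x))  # requires exponentials internally
```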
### ReLU comes with additional challenges
This does however not mean that ReLU itself does not have certain challenges:
• Firstly, it tends to produce very large values given its non-boundedness on the upside of the domain (Jaideep, n.d.). Theoretically, infinite inputs produce infinite outputs.
• Secondly, you will face the dying ReLU problem (Jaideep, n.d.). If a neuron's weights are pushed into the regime where the neuron always outputs zero, the gradient through it is zero as well, and it may never recover. It will then continually output zeros. This is especially the case when your network is poorly initialized, or when your data is poorly normalized, because the first rounds of optimization will produce large weight swings. When too many neurons output zero, you end up with a dead neural network – the dying ReLU problem.
• Thirdly: Small values, even the non-positive ones, may be of value; they can help capture patterns underlying the dataset. With ReLU, this cannot be done, since all outputs smaller than zero are zero.
• Fourthly, the transition point from $$f(x) = 0$$ to $$f(x) = x$$ is not smooth. This affects the loss landscape during optimization, which will not be smooth either. This may slightly, but measurably, hamper model optimization and slow down convergence.
To name just a few.
Fortunately, new activation functions have been designed to overcome these problems in especially very large and/or very deep networks. A prime example of such functions is Swish; another is Leaky ReLU. The references navigate you to blogs that cover these new functions.
## Recap
In this blog, we dived into today’s standard activation functions as well as their benefits and possible drawbacks. You should now be capable of making a decision as to which function to use. Primarily, though, it’s often best to start with ReLU; then try tanh and Sigmoid; then move towards new activation functions. This way, you can experimentally find out which works best. However, take notice of the resources you need, as you may not necessarily be able to try all choices.
Happy engineering! 🙂
## References
Panchal, S. (n.d.). What are the benefits of using a sigmoid function? Retrieved from https://stackoverflow.com/a/56334780
Majidi, A. (n.d.). What are the benefits of using a sigmoid function? Retrieved from https://stackoverflow.com/a/56337905
Sharma, S. (2019, February 14). Activation Functions in Neural Networks. Retrieved from https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6
LeCun, Y., Bottou, L., Orr, G. B., & Müller, K.-R. (1998). Efficient BackProp. Lecture Notes in Computer Science, 9-50. doi:10.1007/3-540-49430-8_2
DaemonMaker. (n.d.). What are the advantages of ReLU over sigmoid function in deep neural networks? Retrieved from https://stats.stackexchange.com/a/126362
|
{}
|
# Publications
## Job market papers
### 1. Network regression and supervised centrality estimation
##### [ Abstract ] [ Paper ] [ arXiv ]
Networks are ubiquitous and play a crucial role in our lives. The position of an agent in the network, usually captured by the “centrality”, has implications for the agent’s behaviour and serves as an important intermediary of network effects. Therefore, the centrality is often incorporated in regression models to elucidate the network effect on an outcome variable of interest. In empirical studies, researchers often adopt a two-stage procedure to estimate the centrality and to infer the network effect – they first estimate the centrality from the observed network and then employ the estimated centrality in the regression for estimation and inference. Despite its prevalent adoption, this naive two-stage procedure lacks theoretical backing and can fail in both estimation and inference. We therefore propose a unified framework that combines a network model and a network regression model, under which we prove the shortcomings of the two-stage procedure in centrality estimation and the undesirable consequences in the network regression. We then propose a novel supervised network centrality estimation (SuperCENT) methodology that simultaneously combines the information from the two models. SuperCENT dominates the two-stage procedure universally in the estimation of the centrality and of the true underlying network. In addition, SuperCENT yields superior estimation of the network effect and provides valid and narrower confidence intervals than those from the two-stage procedure. We apply our method to predict the currency risk premium based on the global trade network. We show that a trading strategy based on SuperCENT centrality estimates yields a return three times as high as the two-stage method, and that the inference drawn by SuperCENT verifies an economic theory via rigorous statistical testing while the two-stage procedure cannot.
### 2. Ownership network and firm growth: What do forty million companies tell about the Chinese economy?
##### [ Abstract ] [ Paper ] [ SSRN ]
The finance–growth nexus has been a central question in understanding the unprecedented success of the Chinese economy. Using unique data on all the registered firms in China, we build extensive firm-to-firm equity ownership networks. Entering a network and increasing network centrality lead to higher firm growth, and the effect of global centralities strengthens over time. The RMB 4 trillion stimulus launched by the Chinese government in 2008 partially “crowded out” the positive network effects. Equity ownership networks and bank credit tend to act as substitutes for state-owned enterprises, but as complements for private firms, in promoting growth.
## Journal publications
### 4. Valid post-selection inference in model-free linear regression
##### [ Abstract ] [ Paper ] [ Published version ]
Modern data-driven approaches to modeling make extensive use of covariate/model selection. Such selection incurs a cost: it invalidates classical statistical inference. A conservative remedy to the problem was proposed by Berk et al. (2013) and further extended by Bachoc et al. (2016). These proposals, labeled "PoSI methods", provide valid inference after arbitrary model selection. They are computationally NP-hard and have certain limitations in their theoretical justifications. We therefore propose computationally efficient PoSI confidence regions and prove large-$p$ asymptotics for them. We do this for linear OLS regression allowing misspecification of the normal linear model, for both fixed and random covariates, and for independent as well as some types of dependent data. We start by proving a general equivalence result for the post-selection inference problem and a simultaneous inference problem in a setting that strips inessential features still present in a related result of Berk et al. (2013). We then construct valid PoSI confidence regions that are the first to have vastly improved computational efficiency in that the required computation times grow only quadratically rather than exponentially with the total number $p$ of covariates. These are also the first PoSI confidence regions with guaranteed asymptotic validity when the total number of covariates $p$ diverges (almost exponentially) with the sample size $n$. Under standard tail assumptions, we only require $(\log p)^7 = o(n)$ and $k = o(\sqrt{n/\log p})$, where $k (\le p)$ is the largest number of covariates (model size) considered for selection. We study various properties of these confidence regions, including their Lebesgue measures, and compare them (theoretically) with those proposed previously.
### 5. Statistical theory powering data science
##### [ Abstract ] [ Paper ] [ Published version ]
Statisticians are finding their place in the emerging field of data science. However, many issues considered "new" in data science have long histories in statistics. Examples of using statistical thinking are illustrated, which range from exploratory data analysis to measuring uncertainty to accommodating nonrandom samples. These examples are then applied to service networks, baseball predictions and official statistics.
## Preprints
### 8. Nonparametric empirical Bayes estimation and testing for sparse and heteroscedastic signals
##### [ Abstract ] [ arXiv ]
Large-scale modern data often involves estimation and testing for high-dimensional unknown parameters. It is desirable to identify the sparse signals, the "needles in the haystack", with accuracy and false discovery control. However, the unprecedented complexity and heterogeneity in modern data structure require new machine learning tools to effectively exploit commonalities and to robustly adjust for both sparsity and heterogeneity. In addition, estimates for high-dimensional parameters often lack uncertainty quantification. In this paper, we propose a novel Spike-and-Nonparametric mixture prior (SNP) -- a spike to promote the sparsity and a nonparametric structure to capture signals. In contrast to the state-of-the-art methods, the proposed methods solve the estimation and testing problem at once with several merits: 1) an accurate sparsity estimation; 2) point estimates with shrinkage/soft-thresholding property; 3) credible intervals for uncertainty quantification; 4) an optimal multiple testing procedure that controls false discovery rate. Our method exhibits promising empirical performance on both simulated data and a gene expression case study.
### 9. Centralization or decentralization? The evolution of state-ownership in China
##### [ Abstract ] [ Paper ]
In this paper, we anatomize the state sector and its role in the Chinese economy. We propose a measure of Chinese SOEs (and partial SOEs) based on firm-to-firm equity investment relationships. We are the first to identify all SOEs among the over 40 million registered firms in China. Our measure captures a significantly larger number of SOEs than the existing measure. The aggregated capital of all (partial) SOEs has climbed to 85%, and the total state capital in all SOEs has increased to 31%, both over total capital in the economy, by 2017. State ownership shows parallel trends of decentralization (authoritarian hierarchy) and indirect control (ownership hierarchy) over time. In addition, we find mixed ownership is associated with higher firm growth and performance, while hierarchical distance to governments is associated with better firm performance but lower growth. Drawing a stark distinction between SOEs and privately-owned enterprises (POEs) could lead to misperceptions of the role of state ownership in the Chinese economy.
### 11. All of Linear Regression
##### [ Abstract ] [ Paper ] [ arXiv ]
Least squares linear regression is one of the oldest and most widely used data analysis tools. Although the theoretical analysis of the ordinary least squares (OLS) estimator is as old, several fundamental questions are yet to be answered. Suppose regression observations $(X_1,Y_1),...,(X_n,Y_n)$ (not necessarily independent) are available. Some of the questions we deal with are as follows: under what conditions does the OLS estimator converge, and what is the limit? What happens if the dimension is allowed to grow with $n$? What happens if the observations are dependent, with dependence possibly strengthening with $n$? How to do statistical inference under these kinds of misspecification? What happens to the OLS estimator under variable selection? How to do inference under misspecification and variable selection? We answer all the questions raised above with one simple deterministic inequality which holds for any set of observations and any sample size. This implies that all our results are finite sample (non-asymptotic) in nature. At the end, one only needs to bound certain random quantities under specific settings of interest to get concrete rates, and we derive these bounds for the case of independent observations. In particular, the problem of inference after variable selection is studied, for the first time, when $d$, the number of covariates, increases (almost exponentially) with the sample size $n$. We provide comments on the "right" statistic to consider for inference under variable selection and on efficient computation of quantiles.
## Working papers
### 12. Valid post-selection inference for the average treatment effect with covariate adjustment in randomized experiments
##### [ Abstract ]
Randomized experiments are the fundamental tools to evaluate the treatment effect in many fields. Prior to the treatment assignment, baseline covariates are often collected and can be incorporated into the analysis to improve the estimation efficiency. The efficiency gain from covariate adjustment might encourage attempts to hunt for covariates that maximize the efficiency of the treatment effect estimate. Such "significance hunting" can invalidate statistical inference due to the data-dependent selection. Luckily, randomization makes an exception: we show that, within a class of unbiased estimators of the average treatment effect, the inference remains valid after selecting the estimator with minimum variance, provided a consistent standard error is used. We adopt a model-free approach that does not impose a parametric outcome model and depends solely on the randomization of the treatment assignment.
### 13. Generalized Cp and a predictive model selection test in assumption-lean framework
##### [ Abstract ]
The classical methods of variable selection based on the estimate of the out-of-sample prediction risk are designed under the Gauss-Markov model and thus are not justifiable under misspecification. The customary elbow rule based on the scree plot can be misleading and a formal testing procedure accompanying confidence intervals will be more desirable. We propose a model-free analog of Cp, generalized Cp (GCp), and a predictive model selection test based on GCp. This estimator can be shown to be asymptotically equivalent to the testing error based on an independent sample and is also asymptotically equivalent to the leave-one-out cross-validation estimator of the out-of-sample prediction risk. We are currently pursuing the optimality and properties of the model selection test.
### 14. Computation of PoSI statistics
##### [ Abstract ]
The use of covariate selection in modern data-driven modelling invalidates classical statistical inference. The "PoSI methods" of Berk et al. (2013) and Bachoc et al. (2016) provide valid inference after arbitrary model selection but are computationally inefficient because they involve inference simultaneously over all models. Even in the linear regression problem, the number of operations required is $O(p2^{p-1})$, which is prohibitive for large $p$. We propose a continuum relaxation of the PoSI statistic. This relaxation allows the use of various maximization algorithms for functions on a continuous convex set, which require at most a number of operations logarithmic in the total number of models, with guaranteed approximation error bounds.
### 15. Common versus idiosyncratic risk
##### [ Abstract ]
It is of great interest to dissect the driving forces of common movements, or co-movement, among correlated objects, such as asset prices and product sales. Two models are commonly used to explain the co-movement: a common factor model and a network model. However, no existing literature simultaneously examines the relative importance of these two mechanisms. We develop a flexible model incorporating both common factors and networks. We investigate conditions under which the common factors and the network effects can be simultaneously identified. Applying our model to asset pricing, we evaluate the relative importance of the two mechanisms in the co-movement of asset returns.
### 16. Self-reported Chinese company data: Can it be trusted?
##### [ Abstract ]
The Annual Industrial Survey (AIS), dubbed the "census data", has been used as the golden source for empirical firm-level economic and operational research. It covers a long time span (as early as 1992) and provides rich information including identification information, stocks, and flows. However, its self-reported nature casts doubt on its credibility. The goal of this paper is to determine the reliability of the AIS by comparing it with Orbis, another firm-level data source that has the largest collection of firms with detailed ownership information. Firm ownership is of particular interest to researchers and serves as one of the most important controlling variables in their analyses. We therefore examine the disparities in ownership between the AIS and Orbis, namely state-owned, privately-owned or foreign. Among the firms that have ownership information on both sides, the matching rate of ownership information is as high as 90%, which supports the credibility of the AIS ownership information. Careful comparisons of several controlling variables between the cohort of matched firms and the AIS general population show there is no systematic bias in the matched cohort.
|
{}
|
# How do you calculate square yards?
First, you would measure the length in yards and then measure the width in yards. Now, multiply these two numbers together to get square yards.
Q&A Related to "How do you calculate square yards?"
1. Divide the number of square feet by 9 to calculate the equivalent in square yards. For example, if you have 90 square feet, divide 90 by 9 to get 10 square yards. 2. Divide the… (source: http://www.ehow.com/how_7891906_calculate-square-y...)
Measure the height of the wall from floor to ceiling. Measure the width of the wall. Multiply height × width = square footage. For square yards, divide the square footage by 9. (source: http://wiki.answers.com/Q/How_do_you_calculate_squ...)
Knowing how to calculate square footage will come in handy when it comes time to estimate materials for that next do-it-yourself job. You'll save time, money and extra trips to the… (source: http://www.life123.com/home-garden/building-renova...)
You multiply the width by the length. (source: http://www.chacha.com/question/how-does-one-calcul...)
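The whole calculation in a few lines (a sketch):
```python
def square_yards(length_ft: float, width_ft: float) -> float:
    # area in square feet, divided by 9 (a square yard is 3 ft x 3 ft)
    return length_ft * width_ft / 9

print(square_yards(12, 15))  # a 12 ft x 15 ft room -> 20.0 square yards
```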
|
{}
|
# Decision Forest Classification and Regression (DF)¶
Decision Forest (DF) classification and regression algorithms are based on an ensemble of tree-structured classifiers, known as decision trees. A decision forest is built using the general technique of bagging (bootstrap aggregation) combined with a random choice of features. A decision tree is a binary tree graph. Its internal (split) nodes represent a decision function used to select the child node at the prediction stage. Its leaf, or terminal, nodes represent the corresponding response values, which are the result of the prediction from the tree. For more details, see [Breiman84] and [Breiman2001].
| Operation | Computational methods | Programming Interface |
| --- | --- | --- |
| Training | Dense, Hist | train(…): train_input → train_result |
| Inference | Dense, Hist | infer(…): infer_input → infer_result |
## Mathematical formulation¶
### Training¶
Given $$n$$ feature vectors $$X=\{x_1=(x_{11},\ldots,x_{1p}),\ldots,x_n=(x_{n1},\ldots,x_{np})\}$$ of size $$p$$, their non-negative observation weights $$W=\{w_1,\ldots,w_n\}$$ and $$n$$ responses $$Y=\{y_1,\ldots,y_n\}$$,
• $$y_i \in \{0, \ldots, C-1\}$$, where $$C$$ is the number of classes, for classification
• $$y_i \in \mathbb{R}$$, for regression
the problem is to build a decision forest classification or regression model.
During the training stage, $$B$$ independent classification or regression trees are created using the following:
1. New training set generated from the original one by sampling uniformly and with replacement (bootstrapping).
2. Impurity metric $$I$$ and impurity reduction $$\Delta I$$ for splitting tree’s nodes, calculated as follows:
• Gini impurity for classification:
• without observation weights: $$I(D)=1-\sum_{i=1}^{C}{p_i^2},$$ where $$p_i$$ is the fraction of observations in subset $$D$$ that belong to the $$i$$-th class.
• with observation weights: $$I(D)=1-\sum_{i=1}^{C}{p_i^2},$$ where $$p_i$$ is the weighted fraction of observations in subset $$D$$ that belong to the $$i$$-th class, computed as follows:
$p_i=(\sum_{d \in \{d \in D | y_{d}=i\}}w_d)/\sum_{d \in D}w_d$
where $$w_d$$ is a weight of observation $$d$$.
• Mean-Square Error (MSE) for regression:
• without observation weights: $$I(D)=\frac{1}{N} \sum_{i=1}^{N}{(y_i - \bar{y})^2},$$ where $$N=|D|$$ and $$\bar{y}=\frac{1}{N} \sum_{i=1}^{N}y_i$$.
• with observation weights: $$I(D)=\frac{1}{W(D)} \sum_{i=1}^{N}w_i{(y_i - \bar{y})^2},$$ where $$N=|D|$$, $$\bar{y}=\frac{1}{W(D)}\sum_{i=1}^{N}w_{i}y_{i}$$, $$W(D)=\sum_{i=1}^{N}w_{i}$$, and $$w_i$$ is a weight of observation $$i$$.
• $$\Delta I$$ is computed as follows (see the sketch after this list):
$\Delta I={I} - (\frac{N_{\mathrm{left}}}{N_{\mathrm{parent}}} I_{left} + \frac{N_{\mathrm{right}}}{N_{\mathrm{parent}}} I_{\mathrm{right}})$
where $$N_{\mathrm{left}}$$ and $$N_{\mathrm{right}}$$ are the number of observations in the node on the corresponding side of the split.
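To make the impurity bookkeeping concrete, here is an illustrative helper (a sketch only; this is not part of the oneDAL API) computing the unweighted Gini impurity and the impurity decrease defined above:
```cpp
#include <cstddef>
#include <vector>

// Gini impurity I(D) = 1 - sum_i p_i^2 from per-class observation counts
double gini(const std::vector<std::size_t>& class_counts) {
    std::size_t n = 0;
    for (auto c : class_counts) n += c;
    if (n == 0) return 0.0;
    double sum_sq = 0.0;
    for (auto c : class_counts) {
        const double p = static_cast<double>(c) / static_cast<double>(n);
        sum_sq += p * p;
    }
    return 1.0 - sum_sq;
}

// Delta I = I(parent) - (N_left/N_parent) I(left) - (N_right/N_parent) I(right)
// assumes left.size() == right.size() (one count per class)
double impurity_decrease(const std::vector<std::size_t>& left,
                         const std::vector<std::size_t>& right) {
    std::vector<std::size_t> parent(left.size());
    std::size_t n_left = 0, n_right = 0;
    for (std::size_t i = 0; i < left.size(); ++i) {
        parent[i] = left[i] + right[i];
        n_left += left[i];
        n_right += right[i];
    }
    const double n = static_cast<double>(n_left + n_right);
    return gini(parent) - (n_left / n) * gini(left) - (n_right / n) * gini(right);
}
```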
Let $$S=(X,Y)$$ be the set of observations. Given the training parameters, such as the number of trees in the forest ($$B$$), the fraction of observations used for the training of one tree ($$f$$), and the number of features to try as a possible split per node ($$m$$), the algorithm does the following:
1. For each tree ($$1, \ldots, B$$):
   1. Generate a bootstrapped set of observations with $$f \cdot |S|$$ elements in it.
   2. Start with the tree whose depth is equal to $$0$$.
   3. For each terminal node $$t$$ in the tree:
      • Choose randomly without replacement $$m$$ feature indices $$J_t \in \{0, 1, \ldots, p-1\}$$.
      • For each $$j \in J_t$$, find the best split $$s_{j,t}$$ that partitions subset $$D_t$$ and maximizes impurity decrease $$\Delta I_t$$.
      • Get the best split $$s_t$$ that maximizes impurity decrease $$\Delta I_t$$ among all $$s_{j,t}$$ splits.
      • Split the current node into two based on the best split.
   4. Stop growing the tree when a termination criterion is met.
#### Termination Criteria¶
The library supports the following termination criteria to stop growing the tree:
• Minimal number of observations in a leaf node. Node $$t$$ is not processed if the subset of observations is smaller than the predefined value. Splits that produce nodes with the number of observations smaller than that value are not allowed.
• Maximal tree depth. Node $$t$$ is not processed if its depth in the tree reaches the predefined maximal value.
• Impurity threshold. Node $$t$$ is not processed if its $$I$$ value is smaller than the predefined threshold.
#### Random Numbers Generation¶
To create a bootstrap set and choose feature indices in a performant way, the training algorithm requires a source of random numbers capable of producing sequences of random numbers in parallel.
Initialization of the engine in the decision forest is based on the scheme below:
The state of the engine is updated once the training of the decision forest model is completed. The library provides support to retrieve the instance of the engine with updated state that can be used in other computations. The update of the state is engine-specific and depends on the parallelization technique used as defined earlier:
• Family: the updated state is the set of states that represent individual engines in the family.
• Leapfrog: the updated state is the state of the sequence with the rightmost position on the sequence. The example below demonstrates the idea for case of 2 subsequences (‘x’ and ‘o’) of the random number sequence:
• SkipAhead: the updated state is the state of the independent sequence with the rightmost position on the sequence. The example below demonstrates the idea for case of 2 subsequences (‘x’ and ‘o’) of the random number sequence:
#### Additional Characteristics Calculated by the Decision Forest¶
Decision forests can produce additional characteristics, such as an estimate of generalization error and an importance measure (relative decisive power) of each of p features (variables).
##### Out-of-bag Error¶
The estimate of the generalization error based on the training data can be obtained and calculated as follows:
• For classification:
• For each vector $$x_i$$ in the dataset $$X$$, predict its label $$\hat{y_i}$$ by taking the majority vote over the trees that contain $$x_i$$ in their OOB set.
• Calculate the OOB error of the decision forest $$T$$ as the average of misclassifications:
$OOB(T) = \frac{1}{|{D}^{\text{'}}|}\sum _{y_i \in {D}^{\text{'}}}I\{y_i \ne \hat{y_i}\}\text{,where }{D}^{\text{'}}={\bigcup }_{b=1}^{B}\overline{D_b}.$
• If OOB error value per each observation is required, then calculate the prediction error for $$x_i$$: $$OOB(x_i) = I\{{y}_{i}\ne \hat{{y}_{i}}\}$$
• For regression:
• For each vector $$x_i$$ in the dataset $$X$$, predict its response $$\hat{y_i}$$ as the mean of prediction from the trees that contain $$x_i$$ in their OOB set:
$$\hat{y_i} = \frac{1}{|B_i|}\sum _{b=1}^{|B_i|}\hat{y_{ib}}$$, where $$B_i = \{T_b : x_i \in \overline{D_b}\}$$ and $$\hat{y_{ib}}$$ is the result of predicting $$x_i$$ by $$T_b$$.
• Calculate the OOB error of the decision forest $$T$$ as the Mean-Square Error (MSE):
$OOB(T) = \frac{1}{|{D}^{\text{'}}|}\sum _{{y}_{i} \in {D}^{\text{'}}}{(y_i-\hat{y_i})}^{2}, \text{where } {D}^{\text{'}}={\bigcup}_{b=1}^{B}\overline{{D}_{b}}$
• If OOB error value per each observation is required, then calculate the prediction error for $$x_i$$:
$OOB(x_i) = {(y_i-\hat{y_i})}^{2}$
##### Variable Importance¶
There are two main types of variable importance measures:
• Mean Decrease Impurity importance (MDI).
Importance of the $$j$$-th variable for predicting $$Y$$ is the sum of weighted impurity decreases $$p(t) \Delta i(s_t, t)$$ for all nodes $$t$$ that use $$x_j$$, averaged over all $$B$$ trees in the forest:
$MDI\left(j\right)=\frac{1}{B}\sum _{b=1}^{B} \sum _{t\in {T}_{b}:v\left({s}_{t}\right)=j}p\left(t\right)\Delta i\left({s}_{t},t\right),$
where $$p\left(t\right)=\frac{|{D}_{t}|}{|D|}$$ is the fraction of observations reaching node $$t$$ in the tree $$T_b$$, and $$v(s_t)$$ is the index of the variable used in split $$s_t$$ .
• Mean Decrease Accuracy (MDA).
Importance of the $$j$$-th variable for predicting $$Y$$ is the average increase in the OOB error over all trees in the forest when the values of the $$j$$-th variable are randomly permuted in the OOB set. For that reason, this latter measure is also known as permutation importance.
In more details, the library calculates MDA importance as follows:
• Let $$\pi (X,j)$$ be the set of feature vectors where the $$j$$-th variable is randomly permuted over all vectors in the set.
• Let $$E_b$$ be the OOB error calculated for $$T_b:$$ on its out-of-bag dataset $$\overline{D_b}$$.
• Let $$E_{b,j}$$ be the OOB error calculated for $$T_b:$$ using $$\pi \left(\overline{{X}_{b}},j\right)$$, and its out-of-bag dataset $$\overline{D_b}$$ is permuted on the $$j$$-th variable. Then
• $${\delta }_{b,j}={E}_{b}-{E}_{b,j}$$ is the OOB error increase for the tree $$T_b$$.
• $$Raw MDA\left(j\right)=\frac{1}{B}\sum _{b=1}^{B}{\delta }_{b,j}$$ is MDA importance.
• $$Scaled MDA\left(j\right)=\frac{Raw MDA\left({x}_{j}\right)}{\frac{{\sigma }_{j}}{\sqrt{B}}}$$, where $${\sigma }_{j}^{2}$$ is the variance of $${\delta }_{b,j}$$ over $$b$$
### Training method: Dense¶
In the dense training method, all possible split variants for each feature (from the selected feature subset for the current node) are evaluated when computing the best split.
### Training method: Hist¶
In the hist training method, only a selected subset of splits is considered when computing the best split. This subset of splits is computed for each feature at the initialization stage of the algorithm. After computing the subset of splits, each value from the initially provided data is replaced with the value of the corresponding bin, where bins are the continuous intervals between the selected splits.
### Inference methods: Dense and Hist¶
The dense and hist inference methods perform prediction in the same way:
1. For classification, $$y_i \in \{0, \ldots, C-1\}$$, where $$C$$ is the number of classes, the tree ensemble model predicts the output by selecting the response $$y$$, which is voted for by the majority of the trees in the forest.
2. For regression, the tree ensemble model uses the mean of the $$B$$ trees' results to predict the output, i.e. $$\hat{y}=\frac{1}{B} \sum_{b=1}^B{f_b(x_i)}$$, where $$f_b$$ are the regression trees. In other words, each tree maps an observation to the score of the leaf it reaches, and these scores are averaged.
## Programming Interface¶
All types and functions in this section are declared in the oneapi::dal::decision_forest namespace and are available via inclusion of the oneapi/dal/algo/decision_forest.hpp header file.
### Enum classes¶
enum class error_metric_mode
error_metric_mode::none
Do not compute error metric.
error_metric_mode::out_of_bag_error
Train produces $$1 \times 1$$ table with cumulative prediction error for out of bag observations.
error_metric_mode::out_of_bag_error_per_observation
Train produces $$n \times 1$$ table with prediction error for out-of-bag observations.
enum class variable_importance_mode
variable_importance_mode::none
Do not compute variable importance.
variable_importance_mode::mdi
Mean Decrease Impurity. Computed as the sum of weighted impurity decreases for all nodes where the variable is used, averaged over all trees in the forest.
variable_importance_mode::mda_raw
Mean Decrease Accuracy (permutation importance). For each tree, the prediction error on the out-of-bag portion of the data is computed (error rate for classification, MSE for regression). The same is done after permuting each predictor variable. The differences between the two are then averaged over all trees.
variable_importance_mode::mda_scaled
Mean Decrease Accuracy (permutation importance). This is the MDA_Raw value scaled by its standard deviation.
enum class infer_mode
infer_mode::class_labels
Infer produces a $$n \times 1$$ table with the predicted labels.
infer_mode::class_probabilities
Infer produces $$n \times c$$ table with the predicted class probabilities for each observation.
enum class voting_mode
voting_mode::weighted
The final prediction is combined through a weighted majority voting.
voting_mode::unweighted
The final prediction is combined through a simple majority voting.
### Descriptor¶
template<typename Float = detail::descriptor_base<>::float_t, typename Method = detail::descriptor_base<>::method_t, typename Task = detail::descriptor_base<>::task_t>
class descriptor
Template Parameters
• Float – The floating-point type that the algorithm uses for intermediate computations. Can be float or double.
• Method – Tag-type that specifies an implementation of algorithm. Can be method::v1::dense or method::v1::hist.
• Task – Tag-type that specifies type of the problem to solve. Can be task::v1::classification or task::v1::regression.
Public Methods
auto &set_observations_per_tree_fraction(double value)
auto &set_impurity_threshold(double value)
auto &set_min_weight_fraction_in_leaf_node(double value)
auto &set_min_impurity_decrease_in_split_node(double value)
auto &set_tree_count(std::int64_t value)
auto &set_features_per_node(std::int64_t value)
auto &set_max_tree_depth(std::int64_t value)
auto &set_min_observations_in_leaf_node(std::int64_t value)
auto &set_min_observations_in_split_node(std::int64_t value)
auto &set_max_leaf_nodes(std::int64_t value)
auto &set_max_bins(std::int64_t value)
auto &set_min_bin_size(std::int64_t value)
auto &set_error_metric_mode(error_metric_mode value)
auto &set_memory_saving_mode(bool value)
auto &set_bootstrap(bool value)
auto &set_variable_importance_mode(variable_importance_mode value)
template<typename T = Task, typename None = detail::enable_if_classification_t<T>>
auto &set_class_count(std::int64_t value)
template<typename T = Task, typename None = detail::enable_if_classification_t<T>>
auto &set_infer_mode(infer_mode value)
template<typename T = Task, typename None = detail::enable_if_classification_t<T>>
auto &set_voting_mode(voting_mode value)
#### Method tags¶
struct dense
Tag-type that denotes dense computational method.
struct hist
Tag-type that denotes hist computational method.
using by_default = dense
Alias tag-type for dense computational method.
struct classification
Tag-type that parameterizes entities used for solving classification problem.
struct regression
Tag-type that parameterizes entities used for solving regression problem.
using by_default = classification
### Model¶
template<typename Task = task::by_default>
class model
Template Parameters
Task – Tag-type that specifies the type of the problem to solve. Can be task::v1::classification or task::v1::regression.
Constructors
model()
Creates a new instance of the class with the default property values.
Properties
std::int64_t tree_count = 100
The number of trees in the forest.
Getter & Setter
std::int64_t get_tree_count() const
Invariants
tree_count > 0
std::int64_t class_count = 2
The class count. Used with oneapi::dal::decision_forest::task::v1::classification only.
Getter & Setter
template <typename T = Task, typename None = detail::enable_if_classification_t<T>> std::int64_t get_class_count() const
### Training train(...)¶
#### Input¶
template<typename Task = task::by_default>
class train_input
Template Parameters
Task – Tag-type that specifies type of the problem to solve. Can be task::v1::classification or task::v1::regression.
Constructors
train_input(const table &data, const table &labels)
Creates a new instance of the class with the given data and labels property values.
Properties
const table &data = table{}
The training set $$X$$.
Getter & Setter
const table & get_data() const
auto & set_data(const table &value)
const table &labels = table{}
Vector of labels $$y$$ for the training set $$X$$.
Getter & Setter
const table & get_labels() const
auto & set_labels(const table &value)
#### Result¶
template<typename Task = task::by_default>
class train_result
Template Parameters
Task – Tag-type that specifies type of the problem to solve. Can be task::v1::classification or task::v1::regression.
Constructors
train_result()
Creates a new instance of the class with the default property values.
Properties
const model<Task> &model = model<Task>{}
The trained Decision Forest model.
Getter & Setter
const model< Task > & get_model() const
auto & set_model(const model< Task > &value)
const table &oob_err = table{}
A $$1 \times 1$$ table containing cumulative out-of-bag error value. Computed when error_metric_mode set with error_metric_mode::out_of_bag_error.
Getter & Setter
const table & get_oob_err() const
auto & set_oob_err(const table &value)
const table &oob_err_per_observation = table{}
A $$n \times 1$$ table containing out-of-bag error value per observation. Computed when error_metric_mode set with error_metric_mode::out_of_bag_error_per_observation.
Getter & Setter
const table & get_oob_err_per_observation() const
auto & set_oob_err_per_observation(const table &value)
const table &var_importance = table{}
A $$1 \times p$$ table containing variable importance value for each feature. Computed when variable_importance_mode != variable_importance_mode::none.
Getter & Setter
const table & get_var_importance() const
auto & set_var_importance(const table &value)
#### Operation¶
template<typename Descriptor>
decision_forest::train_result train(const Descriptor &desc, const decision_forest::train_input &input)
Template Parameters
Descriptor – Decision Forest algorithm descriptor decision_forest::desc.
Preconditions
input.data.is_empty == false
input.labels.is_empty == false
input.labels.column_count == 1
input.data.row_count == input.labels.row_count
desc.get_bootstrap() == true || (desc.get_bootstrap() == false && desc.get_variable_importance_mode() != variable_importance_mode::mda_raw && desc.get_variable_importance_mode() != variable_importance_mode::mda_scaled)
desc.get_bootstrap() == true || (desc.get_bootstrap() == false && desc.get_error_metric_mode() == error_metric_mode::none)
### Inference infer(...)¶
#### Input¶
template<typename Task = task::by_default>
class infer_input
Template Parameters
Task – Tag-type that specifies the type of the problem to solve. Can be task::v1::classification or task::v1::regression.
Constructors
infer_input(const model<Task> &trained_model, const table &data)
Creates a new instance of the class with the given model and data property values.
Properties
const model<Task> &model = model<Task>{}
The trained Decision Forest model.
Getter & Setter
const model< Task > & get_model() const
auto & set_model(const model< Task > &value)
const table &data = table{}
The dataset for inference $$X'$$.
Getter & Setter
const table & get_data() const
auto & set_data(const table &value)
#### Result¶
template<typename Task = task::by_default>
class infer_result
Template Parameters
Task – Tag-type that specifies the type of the problem to solve. Can be task::v1::classification or task::v1::regression.
Constructors
infer_result()
Creates a new instance of the class with the default property values.
Properties
const table &labels = table{}
The $$n \times 1$$ table with the predicted labels.
Getter & Setter
const table & get_labels() const
auto & set_labels(const table &value)
const table &probabilities
A $$n \times c$$ table with the predicted class probabilities for each observation.
Getter & Setter
template <typename T = Task, typename None = detail::enable_if_classification_t<T>> const table & get_probabilities() const
template <typename T = Task, typename None = detail::enable_if_classification_t<T>> auto & set_probabilities(const table &value)
#### Operation¶
template<typename Descriptor>
decision_forest::infer_result infer(const Descriptor &desc, const decision_forest::infer_input &input)
Template Parameters
Descriptor – Decision Forest algorithm descriptor decision_forest::desc.
Preconditions
input.data.is_empty == false
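Putting the interface together, a minimal end-to-end sketch assembled only from the signatures documented above (x_train, y_train, and x_test are assumed to be oneapi::dal tables prepared elsewhere; train and infer are found via argument-dependent lookup; error handling omitted):
```cpp
#include "oneapi/dal/algo/decision_forest.hpp"

namespace dal = oneapi::dal;
namespace df = dal::decision_forest;

dal::table classify(const dal::table& x_train,
                    const dal::table& y_train,
                    const dal::table& x_test) {
    // a two-class forest trained with the hist method
    const auto desc =
        df::descriptor<float, df::method::hist, df::task::classification>{}
            .set_class_count(2)
            .set_tree_count(100)
            .set_error_metric_mode(df::error_metric_mode::out_of_bag_error)
            .set_variable_importance_mode(df::variable_importance_mode::mdi);

    // train(...) takes the descriptor and a train_input{data, labels}
    const auto train_res =
        train(desc, df::train_input<df::task::classification>{x_train, y_train});

    // infer(...) takes the descriptor and an infer_input{model, data}
    const auto infer_res = infer(
        desc,
        df::infer_input<df::task::classification>{train_res.get_model(), x_test});

    return infer_res.get_labels();  // n x 1 table of predicted labels
}
```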
|
{}
|
# Conditional expectation brownian motion
Somebody has an idea on how to tackle this quantity $$\mathbb{E}\left[ \left. \frac{\int_{0}^{T}{{{e}^{\alpha {{W}_{t}}}}}dt}{\int_{0}^{T}{{{e}^{-\alpha {{W}_{t}}}}}dt+\int_{0}^{T}{{{e}^{\alpha {{W}_{t}}}}}dt}\,\, \right|\,\,{{W}_{T}} \right]$$
For $\alpha \in \mathbb{R}$. The notation $\mathbb{E}[\,\cdot\,|W_T]$ means the conditional expectation of the stochastic process given $W_T$.
I tried using a Brownian bridge to build independent quantities but I cannot get a tractable result.
Thanks
Maybe not as clean as you would hope for but perhaps you could do the following:
$\mathbb{E}_{W_T}\left[ \frac{\int_0^T e^{\alpha W_t} dt}{\int_0^T e^{-\alpha W_t} dt + \int_0^T e^{\alpha W_t} dt} \right] = \mathbb{E}_{W_T}\left[ \frac{1}{\frac{\int_0^T e^{-\alpha W_t} dt}{\int_0^T e^{\alpha W_t} dt} + 1} \right]$
Then at least formally
$\mathbb{E}_{W_T}\left[ \frac{\int_0^T e^{\alpha W_t} dt}{\int_0^T e^{-\alpha W_t} dt + \int_0^T e^{\alpha W_t} dt} \right] = 1 - \mathbb{E}_{W_T}\left[ \frac{\int_0^T e^{-\alpha W_t} dt}{\int_0^T e^{\alpha W_t} dt}\right] + ...$
Where the ... indicates higher moments of $\frac{\int_0^T e^{-\alpha W_t} dt}{\int_0^T e^{\alpha W_t} dt}$.
Now the expectation here can be expressed as a series in mixed moments of the numerator and denominator. See this http://www.faculty.biol.ttu.edu/rice/ratio-derive.pdf. I think this is more convenient because for moments like that you can pass the expectation through the time integral, and perhaps get something you can evaluate.
If you know that $\alpha$ is large you could probably justify an approximation that only keeps some small number of terms (i.e. hopefully just one correction is okay). You may also have to derive two approximations based on whether $W_T$ is positive or negative. I guess for $|W_T| \ll \alpha$ neither approximation would work, but maybe that's okay for your application.
P.S. to evaluate the expectations inside the time integrals you'll probably want to substitute in a Brownian bridge and notice that you're taking moments of multivariate log-normal variables https://en.wikipedia.org/wiki/Log-normal_distribution#Multivariate_log-normal. The Brownian bridge will involve terms like $W(T)$ and $W(t)$, and you should know the covariance structure of that.
P.P.S. if instead you are interested in the case $\alpha \ll |W_T|$ you could get a different approximation by just expanding the integrand as a series in $\alpha$. I think it will basically be $\frac{1}{2}\left(1 + \frac{\alpha}{T}\mathbb{E}_{W_T}\left[\int_0^T W_t \,\mathrm{d}t\right] \right) + O(\alpha^2) = \frac{1}{2} + \frac{\alpha}{4} W_T + O(\alpha^2)$
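To spell out the conditional expectation in that last step, by the Brownian bridge fact that $\mathbb{E}[W_t \mid W_T] = \frac{t}{T} W_T$ for $0 \le t \le T$,

$$\mathbb{E}\left[\left.\int_0^T W_t \,\mathrm{d}t \,\right|\, W_T\right] = \int_0^T \frac{t}{T}\, W_T \,\mathrm{d}t = \frac{T}{2}\, W_T,$$

which, multiplied by the $\frac{\alpha}{2T}$ coefficient from the expansion, gives the $\frac{\alpha}{4} W_T$ term.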
|
{}
|
# Problem with cross-referencing with numerical lists
I have a problem when applying the \ref command to refer to an item in a numbered list.
Here is an example for my problem:
My list style makes the list appear as: 1., 2., 3.
I put a dot after the number of each item.
When I refer to any item in the list, the dot appears with the number of the item.
The question is: how do I omit the dot after the number in cross-references?
Can you please add a minimal working example (MWE) that demonstrates the problem? This does not happen with a normal enumerate in a document with the article class. – Torbjørn T. Dec 22 '12 at 13:36
I don't know how to make a minimal working example ! I am using the enumitem package – Muhammad Dec 22 '12 at 13:47
If you follow the link, there is some information about how to prepare one. Basically, an MWE is a document where you remove everything (packages, newcommands, content etc.) that doesn't have an influence on the problem, but it should still be possible to compile. So it should start with \documentclass, then include any related packages and customizations, then \begin{document}, a short list, and a cross reference to an item in the list, where the dot should appear. And finally an \end{document}. – Torbjørn T. Dec 22 '12 at 13:56
First, I really want to thank you for your help :) Second, I have solved my problem :) ... I am using the "enumitem" package and I have found the solution of my problem by reading the package documentation. – Muhammad Dec 22 '12 at 14:00
Here is an example to change the way in which the "ref" command appear in your text when you refer to an item in a list: \begin{enumerate}[label=\emph{\alph*}),ref=\emph{\alph*}] ... This remove the right parenthesis when referring to the item – Muhammad Dec 22 '12 at 14:02
You can use ref=\arabic* without a dot in
\setlist[enumerate]{label=\arabic*.,ref=\arabic*}
\documentclass{article}
\usepackage{enumitem}
\setlist[enumerate]{label=\arabic*.,ref=\arabic*}
\begin{document}
\begin{enumerate}
\item First item
\item Second item \label{enu:second}
\item Third item
\end{enumerate}
See item \ref{enu:second}.
\end{document}
|
{}
|
2. We obtain $$\bigwedge(\bigvee(X_0, \neg X_2), \bigvee(X_0, X_3), \bigvee(\neg X_1, \neg X_2), \bigvee(\neg X_1, X_3)) \enspace.$$ Finding Disjunctive Normal Forms (DNF) and Conjunctive Normal Forms (CNF) is really just a matter of using the Substitution Rules until you have transformed your original statement into a logically equivalent statement in DNF and/or CNF. A propositional formula is in conjunctive normal form if it is a conjunction of disjunction terms, where disjunction terms are disjunctions of literals. In the canonical CNF, every variable occurs exactly once in each maxterm, and every Boolean function has exactly one canonical CNF. The Quine–McCluskey procedure can be used to compute the conjunctive normal form of an arbitrary formula. Every propositional formula is equivalent to a formula in conjunctive normal form, although the notation for CNF using $\bigwedge$ and $\bigvee$ is somewhat clumsy. It is indeed true that various modelling tasks can be accomplished in CNF; see, for instance, the modelling of sudoku and the modelling of computations.
To show that CNF is closed under disjunction, assume $\varphi_0$ and $\varphi_1$ are formulas in CNF; we want to show that $\varphi_0 \vee \varphi_1$ is equivalent to a formula in CNF, which follows by distributing the disjunction over the clauses (the example above arises exactly this way from $(X_0 \wedge \neg X_1) \vee (\neg X_2 \wedge X_3)$).
Ivor Spence introduced a method for producing small unsatisfiable formulas of propositional logic that were difficult to solve by most SAT solvers at the time, which we believe was because they were …
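As a purely illustrative sketch (my own, not from the text above): the distribution step for formulas already in NNF, with formulas represented as nested tuples ('and', ...) / ('or', ...) and literals as strings such as 'x0' or '~x1'. It reproduces the example formula at the top of this passage.

from itertools import product

def to_cnf(f):
    # A literal becomes a single one-literal clause.
    if isinstance(f, str):
        return ('and', ('or', f))
    op, *args = f
    args = [to_cnf(a) for a in args]
    if op == 'and':
        # Conjunction of CNFs: just concatenate the clause lists.
        return ('and', *[clause for a in args for clause in a[1:]])
    # Disjunction of CNFs: distribute, i.e. take one clause from each
    # conjunct (cartesian product) and merge them into a single clause.
    merged = [tuple(lit for clause in combo for lit in clause[1:])
              for combo in product(*[a[1:] for a in args])]
    return ('and', *[('or', *c) for c in merged])

# (x0 AND ~x1) OR (~x2 AND x3) distributes to the four clauses above:
print(to_cnf(('or', ('and', 'x0', '~x1'), ('and', '~x2', 'x3'))))
# ('and', ('or', 'x0', '~x2'), ('or', 'x0', 'x3'),
#         ('or', '~x1', '~x2'), ('or', '~x1', 'x3'))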
|
{}
|
# Linear combination of eigenstates problem [closed]
Let's say that we have a system such that
$$\Psi(x,0)=\frac{\sqrt3}{2}\phi_1(x)+\frac12\phi_2(x)$$
where $\phi_1(x)$ and $\phi_2(x)$ are eigenfunctions of the Hamiltonian operator.
I want to find $\Psi (x,t)$ for all $t$ and I want to know if the current state of the system is stationary.
I did the following:
$$\hat{H}|\Psi\rangle=\frac{\sqrt3}{2}\hat{H}|\phi_1\rangle+\frac{1}{2}\hat{H}|\phi_2\rangle=\frac{\sqrt3}{2}E_1|\phi_1\rangle+\frac{1}{2}E_2|\phi_2\rangle$$
So the state would not be an eigenstate, ergo it would not be stationary. Is this correct?
For the time evolution of the whole system, would it be enough to just write the time-dependent exponentials to each of the terms? Namely,
$$\Psi(x,t)=\frac{\sqrt3}{2}\phi_1(x)e^{-iE_1 t/\hbar}+\frac{1}{2}\phi_2(x)e^{-iE_2 t/\hbar}$$
Or is that wrong?
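For what it's worth, computing the probability density from that expression gives (assuming $E_1 \neq E_2$)

$$|\Psi(x,t)|^2=\frac{3}{4}|\phi_1(x)|^2+\frac{1}{4}|\phi_2(x)|^2+\frac{\sqrt3}{2}\,\mathrm{Re}\!\left[\phi_1^*(x)\,\phi_2(x)\,e^{-i(E_2-E_1)t/\hbar}\right],$$

which is explicitly time-dependent, consistent with the state not being stationary.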
## closed as unclear what you're asking by ACuriousMind♦, CuriousOne, user36790, Cosmas Zachos, GertJul 18 '16 at 2:07
Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it’s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
• It's a very straightforward linear algebra exercise to deduce whether the sum of two eigenvectors is an eigenvector. Just apply the Hamiltonian to that state and look whether it's an eigenvector or not. – ACuriousMind Jul 16 '16 at 22:21
• If you know that these are eigenfunctions, then you know that they have an associated eigen-frequency. The time evolution is therefore trivial. If the two frequencies are not the same, then there is no single time-dependent phase factor that makes the sum a stationary eigenstate, and you will see beating. – CuriousOne Jul 16 '16 at 22:23
• @ACuriousMind I've edited my question, could you please check it? – Tendero Jul 17 '16 at 17:33
|
{}
|
### Efficient Hierarchical Identity Based Signature in the Standard Model
Man Ho Au, Joseph K. Liu, Tsz Hon Yuen, and Duncan S. Wong
##### Abstract
The only known constructions of Hierarchical Identity Based Signatures that are proven secure in the strongest model without random oracles are based on the approach of attaching certificate chains or a hierarchical authentication tree with one-time signatures. Both construction methods lead to schemes that are somewhat inefficient and leave open the problem of an efficient direct construction. In this paper, we propose the first direct construction of a Hierarchical Identity Based Signature scheme that is proven secure under the strongest model without relying on random oracles, using the more standard $q$-SDH assumption. It is computationally efficient and the signature size is constant. When the number of hierarchical levels is set to one, our scheme is a normal identity based signature scheme. It enjoys the shortest public parameters and signature size compared with other schemes in the literature at the same security level.
Available format(s)
-- withdrawn --
Category
Public-key cryptography
Publication info
Published elsewhere. Unknown where it was published
Keywords
Identity Based Signature
Contact author(s)
ksliu9 @ gmail com
History
2007-11-02: withdrawn
|
{}
|
Things I wish I’d known about CSS - 878 points by harshamv22 on July 17, 2020 | 330 comments
It always helped me to do an absolute basic concepts course on a new technology I learn. Like, sure I can play around in Photoshop or Eclipse or CSS or JavaScript and find most things. But a good 101 course is worth so much saved time. Most of the stuff in that article was mentioned in a CSS box model course I did 10 years ago. People were always baffled how I learned all this. Well, I read the docs!

They always assume everyone learned like them, by trying stuff out all of the time, until they got something working. Then they iterate from project to project, until they sorted out the bad ideas and kept the good ones. With that approach, learning CSS would probably have taken me 10 times as long. Sure this doesn't teach you everything or make you a pro in a week, but I always have the feeling people just cobble around for too long and should instead take at least a few days for a more structured learning approach.

What I didn't learn about CSS in a basic course, and what cost me multiple weeks to fix, was pointer-events: none. Keep this in mind when your clicks stop working after you pulled some new CSS ;)
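As an illustration of that gotcha (the selectors here are made up): a rule like the first one below silently disables every click in its subtree, and the second shows how descendants can opt back in.

/* Somewhere in an imported stylesheet: clicks inside .toolbar now do nothing. */
.imported-theme .toolbar {
  pointer-events: none;
}

/* The fix: let the interactive descendants receive events again. */
.imported-theme .toolbar button {
  pointer-events: auto;
}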
> I always have the feeling people just cobble around for too long and should instead take at least a few days for a more structured learning approach.

There are two kinds of documentation: there's a list of all the individual things you can do, and there's a "The fundamental abstraction is this". The second is rare, and rarely done correctly, and much more important.

A lot of people assume that X is like Y, but with Z, and they know Y, so they just need to learn about Z. A lot of the time that isn't true, the fundamentals are different. The natural way to perform W in Y might be profoundly wrong in X. Examples:

* git is like subversion, but distributed.
* C++ is like C, but with classes.
* C++ is like Java, but you have to remember to delete stuff.

I love git. I love C++. I think these things are the bee's knees. They're so simple and elegant. But I totally understand how people think they're arcane eldritch horrors when you're holding the sword by the pointy end.
Exactly. Another fundamental abstraction they should have mentioned in the article is [the surprising way z-index actually works](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Positio...), especially being aware of the existence of [stacking contexts](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Positio...).
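To make the stacking-context trap concrete (class names invented for the example): opacity below 1 creates a stacking context, so a child's huge z-index only competes inside it.

/* .card creates a stacking context (opacity < 1), so .tooltip's
   z-index is resolved only among .card's own descendants. */
.card {
  position: relative;
  opacity: 0.99;
}
.card .tooltip {
  position: absolute;
  z-index: 9999; /* cannot escape .card's stacking context */
}
/* .sidebar's z-index of 2 beats the whole .card subtree, 9999 and all. */
.sidebar {
  position: relative;
  z-index: 2;
}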
> there's a "The fundamental abstraction is this"

In most cases, I'd settle for "what problem this is trying to solve" (although this is obvious for CSS, it isn't nearly as obvious for things like React and Vue).
The problem here is that really understanding "what problem this is trying to solve" often requires a passing familiarity with cognitive science.

How many people here could explain the link between Working Memory and React.js?
New to JS world. Would love to know a brief explanation.
Can’t agree more. The first kind of doc is useful from time to time to check the correct argument of a function. The second one is about the big picture and the overall architecture. And sadly this is what is missing with every codebase I’ve worked on. In my project on the contrary, this is something I try to document.
If someone needs to read documentation for a few days in order to figure out which end of the sword to hold, perhaps the sword was just poorly designed and unintuitive.
So you want a dull sword with handles on both ends just so that nobody can possibly cut themselves? Professional tools are meant to be efficient for those trained in using them and if that means they are slightly harder to learn that's OK.
Cobbler here, currently cobbling a frontend (am a backend dev by trade).

The issue for me is the format these courses and resources take. Of all the technologies I've met, CSS is the most jam-packed with non-intuitive behaviour. How many dozens of pages or segments of video would I have to go through in a course to learn what the author in OP summarised in two or three sentences? Any time I've considered the structured approach for something like CSS, the material drones on and on with technically correct explanations and example code, but somehow, against the odds, almost nothing of substance.

Worse yet are the top Google result reference resources. Look no further than the top result for "css grid" if you're in the mood to claw your brain from its stem and flush it down the toilet. Behold, 8 zillion words and symbols across 7 trillion seemingly randomly organised boxes, full of both all of the information and simultaneously none of the information about CSS grids.

Which brings me to where I am now, cobbling. It's more effective for me to cobble than to spend an entire day figuring out CSS grids "the right way". If the going gets tough I'll just use some grid tool, or look up some sandbox examples, and be on my way.

If there was more material like the OP, and someone organised it well, well, I'd actually get around to learning it rather than cobbling. It's the best bit of writing I've ever seen on CSS. This is shocking. Not that there is anything wrong with the writing, but it's very, very simple. What on earth have all of the other CSS wizards with basic writing skills been doing? Confusing the issue, that's what.

Which leads me to a sort of meta frustration with learning CSS. I know it's not hard, I know that if someone just wrote about it even half decently it would take only a moment to digest each concept, but watching learner-disconnected authors create resources I can tell have lost sight of what it's like to learn turns me off.
I had the same perspective on these frontend technologies as a backend/ops guy, and it all finally clicked when it came to me that I just couldn't find a justification for why CSS was the way it was.

This was solved trivially by just reading the history of CSS. It was shocking to finally have made clear all of the quirks and weird aspects of CSS that had always made it difficult for me to connect the dots and left me feeling lost in a messy, tangled-up language.

By far the best source I went through was found here: https://news.ycombinator.com/item?id=22215931 - it's a long read, but extremely enlightening!

https://www.w3.org/Style/CSS20/history.html was also useful.
This is a great comment. I expect it only applies to some folks depending on how they learn, but it resonates a lot with me.

I find it very difficult to learn the whats and hows of a new thing without the whys. I tend to construct mental frameworks of things, revising the inaccurate bits as I go, until theory starts meeting practice. (I always did poorly with math "teachers" whose method was, "doesn't matter, just do it." But it did take a while to realize that they were failing, not me.)

So when I encounter complicated, new things with a history, I usually try to start with at least a bit of the history.
This is so true. Though I'd been writing CSS long enough to remember Netscape 4.0 quirks, it wasn't until I learned some troff a couple years back that I had my moment of CSS enlightenment. The fundamental features are the same! CSS (and HTML) were (originally) made for pages of technical/academic writing, not creating user interfaces.
Couldn't agree more. I had less issues learning multiple programming languages,some challenging concepts around them and yet when it comes to CSS, most resources aren't very useful. You do something and it doesn't work. You check documentation and it show exactly the same thing that does work. 5 hours later it turns out it was some overriding property+ browser incompatibility+ weird undocumented exception. Very few books do deliver on their promise either. I don't know,maybe it's just me but the whole ecosystem is a bit weird.
" but the whole ecosystem is a bit weird."It absolutely is.The web was meant for static documents and some link magic.Now allmost every complex programm can run in the browser with web technologies. And in the time between you had lots of powerful organisations with their own agenda, as well as millions of single developers with a agenda and a loud voice. How could a sane, clear spec be made, under these circumstances?Besides, it would be very hard, to make a clean spec that satisfies the needs of the whole population. So by now I say, all in all, it works pretty good.
> How could a sane, clear spec be made, under these circumstances?

Serious answer: by respecting and supporting the organisation that had for a long time codified realistic web standards reasonably well. When the Web was younger, you could (and many of us did) learn all you needed to know about things like HTML and CSS by reading the documentation produced by the W3C. In many cases, the official recommendations themselves (the documents that were informally called "web standards") were very readable and short enough for normal developers to go through the whole thing.
My experience with the W3C was very complicated written specs, sometimes with no connection to reality (implementation in the browsers). And they moved too slowly.

And no, I don't think you can expect the whole world to wait for some spec committee.
It wasn't that way in the earlier days. The W3C certainly made some questionable decisions later on that contributed to its slide into irrelevance, but it was Google flagrantly ignoring standardisation, absurdities like "living standards", and the way all of the other major browser developers were asleep at the wheel for years that really sealed the W3C's fate. And now, sadly, a new generation of web developers is about to learn the hard way why browser monopolies are a bad idea. :-(
> the whole ecosystem is a bit weird

The whole ecosystem reminds me of why I did not study synthetic biology or medicine. I prefer the systems I work with not to have evolved but to have been designed by some intelligence with a limited memory for random details.
> Behold, 8 zillion words and symbols across 7 trillion seemingly randomly organised boxes full of both all of the information, and simultaneously none of the information, about css grids.It's interesting, because that's my go to resource for CSS Grid. I find it super handy when I forget something. Although, I'm not learning grid, I'm using it.
> Although, I'm not learning grid, I'm using it.

Precisely. What CSS-Tricks calls a guide is actually almost entirely a reference. It makes a tolerable reference if you already know what you're looking for. It's a poor reference if you don't know what you're looking for, because it's insufficiently structured and takes waaaaay too much scrolling to explore. (It used to be more manageable; it used to be more a guide. It has been allowed to grow far beyond a healthy point.) And it's atrocious as a guide.
Reference books aren’t the same as textbooks.

And, typically, there are several levels of textbook, e.g. high school, then freshman physics, undergrad electrodynamics, then Jackson, then specialist references.

My point is you go over the subject completely several times, at increasing levels of sophistication.
Yes, web technology is overloaded, because of the history, backwards-compatibility requirements, etc.
Check out Tachyons or another atomic CSS system. It makes CSS so much easier to use (especially in single-page apps written in JavaScript). https://tachyons.io/ "Create fast loading, highly readable, and 100% responsive interfaces with as little css as possible."

See also: https://css-tricks.com/lets-define-exactly-atomic-css/ "Atomic CSS is the approach to CSS architecture that favors small, single-purpose classes with names based on visual function."

Or from: https://acss.io/ "CSS is a critical piece of the frontend toolkit, but it's hard to manage, especially in large projects. Styles are written in a global scope, which is narrowed through complex selectors. Specificity issues, redundancy, bloat, and maintenance can become a nightmare. And modular, component-based projects only add to the complexity. Atomic CSS enables you to style directly in your components, avoiding the headache of managing stylesheets. Most approaches to styling inside components rely on inline styles, which are limiting. Atomic CSS, like inline styles, offers single-purpose units of style, but applied via classes. This allows for the use of handy things such as media queries, contextual styling and pseudo-classes. The lower specificity of classes also allows for easier overrides. And the short, predictable classnames are highly reusable and compress well."

Mostly though, Atomic CSS is a state of mind or design pattern. So you can do it from scratch without support from something like Tachyons etc. -- but it helps.

You might even find it fits right in with your "cobbling" style of development. As an example, if you want to make some text red, you add the style "red" to it. If you want a border all around something, add a "ba" class. If you develop using HyperScript (like from Mithril) to define your UIs, code might look like this: h("div.red.ba", "This is red and has a border")

I refer to the Tachyons style sheet as essentially a menu of options: https://github.com/tachyons-css/tachyons/blob/master/css/tac...

In the rare case something is missing I do an inline style or add something to a single supplemental stylesheet written in a similar way.

There is also a verbose version of Tachyons without the abbreviations, so "ba" would be "border-all" (though I prefer the abbreviations): https://github.com/tachyons-css/tachyons-verbose/blob/master...
Your evaluation is problematic.

I am a front-end dev by trade and I could really easily say similar things about the tech stack you work with daily.

If you want a good reference and a learning guide, start with this: https://developer.mozilla.org/en-US/docs/Web/CSS - it is what we professionals use.

For us professionals that use CSS in our jobs, that have put the same amount of hours into front-end technology as you have into back-end, CSS is a perfectly cromulent technology. It is easy to work with, it is intuitive, and it is easy to find a good reference (go to MDN, not random blog posts from people that are admittedly not masters of the craft).

If I try to cobble together a back-end I expect to be frustrated with it, I expect not to be able to do everything, I expect I'd need to find poor examples and try to hammer them in to fit my needs, and I don't expect to be able to understand all the example snippets. Why are you on your high horse judging my tech stack this way?

If you need a good, well-crafted front-end for your project, you should hire a professional front-end developer. Or find a friend who is one that is willing to do it in their spare time. Or spend a few thousand hours learning and mastering the craft. Alternatively you can cobble together a good-enough front-end that works, just don't complain about how hard it is because "technology hard wheee wheee". Us professionals use it every day and are fine with it.
"here I am, at the end of the maze, telling people at the beginning how straight forward it is" is basically all I'm getting here - and you call my eval problematic.Multiple accounts of people not in your position telling you a problem exists. Ok, great, you worked through it. Doesn't mean it's as good as it should be.
Well you come across as an outsider telling us that the tools that we use—and do a fine job with—are no good, and that our literature sucks.

I am here to argue that this sort of attitude is problematic. If you browse through this thread you will find no shortage of comments describing how absolutely basic this article is (and a little void of insight, and a little outdated), and that there are people in the industry getting six figures while not knowing this stuff. Frankly it is a little insulting. And I think the attitude that you presented above is part of the problem.

There definitely exists a lack of respect for the craft of front-end development within our industry. This sort of attitude is not helping.
> There definitely exists a lack of respect for the craft of front-end development within our industry. This sort of attitude is not helping.

There is xenophobia involved in people's attitudes toward kebabs. It doesn't change the impact of phosphates on the human body. Your argument doesn't change the impact of poorly-encapsulated complexity on the brains and mental health of your fellow programmers. https://blog.codinghorror.com/programmers-and-chefs/

> Well you come across as an outsider telling us that the tools that we use—and do a fine job with—are no good, and that our literature sucks.

https://en.wikipedia.org/wiki/The_Jungle

Backend programming has this problem too. So does systems programming. The problem of unencapsulated complexity is core to the craft of software engineering. So is joining the fight against this problem. https://tonyarcieri.com/would-rust-have-prevented-heartbleed...
> Or spend a few thousand hours learning and mastering the craft.

Surely the point is that you shouldn't need to spend thousands of hours "mastering" something like CSS? It's not that complicated and it's not that important. If you told a developer working on desktop or mobile applications that they'd need to spend years "mastering" the design tools in whatever GUI framework they were using before they'd be able to independently build a good user interface, you'd be ridiculed.

Anyone with the aptitude and interest to become a web developer, which isn't a particularly high bar as technologies go, should be able to get the hang of all the important parts of modern CSS in at most a week, even starting from scratch. The fact that we don't reliably train new developers up that efficiently is a damning indictment of the current state of the industry.
100% agree. I found this to be a good guide: https://developer.mozilla.org/en-US/docs/Web/CSS

After spending a few hours reading up on topics I thought would be relevant, I find that I am significantly more productive with CSS. Instead of trying to find answers on Google, I find that I'm better equipped to build things and diagnose issues on my own.
Once I got the basic concepts of CSS I came to realize that of the trio of HTML, CSS, and JS (and SVG for that matter), CSS is the one you really miss when you have to make something using another system.

The amount of suffering I've endured in LaTeX, PDF, different native layout systems, or even React Native's slightly restricted version of CSS that would have been resolved by a stylesheet is immense.
I totally agree. A good book that introduces CSS in a methodical perspective is a must. I personally cut my teeth on CSS with the book Web Design in a Nutshell, a 2006 book. I remember having written CSS in a haphazard try-and-see fashion before that, which utterly confused me, and then reading an actual book made everything clear. This is an investment worth making. Many years later I'm still almost always "the guy" on my team to go to when there's a weird CSS issue.
> A good book

One under-appreciated value that actual books have over online documentation is that you know immediately what order you're supposed to read them in.
I have tried both approaches a fair amount, and realized that every time I try the cobbler approach I end up getting blocked on some major thing and thus having to rewrite everything because of some basic things that could have been avoided.

I feel like it's getting both better and worse now, with docs getting better in general (the number of times I end up on Stack Overflow has gone down significantly); however, a lot of times I end up on Medium posts with a list of steps and absolutely zero reasons why. Whenever I try cobbling from those articles, I end up very frustrated.
I completely relate. Other things I learnt "formally":

- regexps (this is a big one, being able to write a complex regexp without thinking about it is amazing)
- Git
- basic JS when it was mainly used to add snow on your page

Taking the time to properly learn things is extremely valuable.
I'm kinda curious how the long-term viability of these strategies looks. Intuitively I think that reading the docs will become less and less viable as there are more and more available technologies and they become more and more featured. But I'm not fully convinced. Maybe it can keep working by only teaching the most important subset.
Learning by course is generally good (Rust book!), but "overlearning" certainly exists in all technologies. I've been hanging out in freenode's ##learnpython for years helping noobs, and I can't count how many times I've met the scenario of a new programmer working their way methodically and studiously through a 2k-page book, stuck on page 1643, because they haven't actually grasped concepts from page 20. They'll be all kinds of worried about how to correctly use slots and metaclasses, when they can't write even basic functions.

CSS suffers massively from this info overload.

What I tell every new programmer who will listen is that they should first grasp the absolute basic building blocks, and then learn to read the docs. That's really all you need. Unless the particular language or technology they are using sucks, you can essentially compose anything from the basics. Once you've done that, the shortcuts and sugar you learn naturally actually make sense, rather than appearing before you as spooky magic boxes and incantations.

My ideal CSS course would just be the absolute basics of syntax and concepts, a primer on understanding and incorporating information from the docs, and then an index of well organised resources, such as the OP's.

This way, it's impossible to overload, and I'm left in a state where I can expand at my own personal rate. Anything that tries to set the pace for you is going to be suboptimal for 99% of readers.

Furthermore, anything that flows easily and without an extremely explicit DO NOT PASS GO UNTIL YOU ARE EXCELLENT AT THE LAST CHAPTER will lead to overload. You can't overload if you don't provide the information one can overload with. Simply ending your course after the basics and docs absolutely guarantees against this problem. "Leave the learner in a known good state" would be my rule of thumb.
There is a learning method that hasn't been mentioned in this thread. Namely, a middle ground between pure cobbling and pure theory. In addition, you have to find a method that is optimal for you.

For myself, that's a feedback loop where bugs during cobbling naturally present themselves as theory questions needing an answer, rather than practical questions needing a solution.

It is rare to find documentation of such high quality that grasping the fundamental abstraction does not require theory-attentive debugging.
After spending years learning on the job, I started reading the documentation books cover to cover, and I have never looked back. Colleagues are often amazed at my "deep knowledge" of the technologies I use, however I mostly know what is written in the manual. It baffles me that people can't be bothered to read the documentation and at the same time complain that "X is too hard! Let's use Y instead!"
Same here. I tried finding a book about deb and rpm packaging the other day and couldn't find one. Left me feeling anxious.
There's a reason RTFM was so prevalent, and we need to make it so, again.
I built my first site in the 90's too (before CSS)! I've been in higher education lately, but have a YouTube channel with mostly CSS related content (both beginner & advanced). Check that out if you're interested in some great starter learning content for CSS and you prefer video format: https://www.youtube.com/followandrew
For me personally, documentation was just step one, helping me to discover that some feature exists, what the options are, etc. - and then "cobbling around", as you said, was the very important second step where I'd really understand how something works and what's the best way to apply it in real life. Everyone learns differently, I guess...
Meanwhile, this is, oh so damn outdated. (I'm not kidding.)

Most people don't know that the flow model totally changed meanwhile, and something like "display: inline-block" actually means "display: inline flow-root"; everything that came after flexbox had an influence on the meanwhile borderline insane display model as a result.

Everything related to inset, margin and padding has gotten an overhaul that is ready for ltr and rtl content, where they switch x/y flow based on "direction: ltr (or) rtl", whereas e.g. "margin-inline" and "margin-block" are the newer properties for the updated margin.

A lot of stuff has changed for the better, too. CSS transforms are now specified in a cleaner way, with a predictable way to render them (e.g. translate rotate will not be different than rotate translate). So all CSS transforms have gotten their own properties like "rotate: 90deg" or "scale: 1.23".

I learned a lot when I read through the actual CSS specifications, because I am implementing my own parser (for my Browser [1] project).

Also, did you know that @media, @supports and @viewport queries can be nested in any order? The media queries 4 [2] spec is kind of insane from an implementor's view.
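To make the newer syntax mentioned above concrete (a small sketch of my own; property support varies by browser, so check before relying on it):

/* Logical properties follow the writing direction (direction: ltr | rtl)
   instead of physical left/right. */
.note {
  margin-inline: 1rem 2rem;  /* start and end margins */
  padding-block: 0.5rem;     /* block-axis padding */
}

/* Individual transform properties (CSS Transforms Level 2) are applied
   in a fixed order - translate, then rotate, then scale - regardless
   of source order. */
.badge {
  rotate: 90deg;
  scale: 1.23;
  translate: 10px 0;
}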
I'm a layout implementor for Planimeter's Grid Engine. Implementing a naive subset of the algorithm for calculating the box sizes in the visual formatting model and calculating layout for normal flow, relative positioning, and absolute positioning is far easier to implement than flexbox. [1] There's nothing insane about this. And you'll learn even more when trying to draw to the screen.

Normal flow also doesn't require multiple passes, whereas flexbox does. [2] Even Yoga doesn't implement a conforming model. [3] I'd even speculate that flexbox performs slower than the standard visual formatting model just because of the differences in the algorithm.

Transform order absolutely matters. Any attempt to coerce order into a standardized sequence means developers have to account for this information. [4]
> Implementing a naive subset of the algorithm

... and suddenly, a wild "overflow" and "text-overflow" appeared.

> Transform order absolutely matters. Any attempts to coerce order into a standardized sequence means developers have to account for this information.

What I meant is that the CSS specification for the new CSS Transforms Level 2 has a fixed transformation matrix and order in which the properties are applied - whereas legacy "transform: rotate() scale() translate() ..." did not do this, and therefore was overly complicated.

In the new specification the transform order does not matter, because the order in which "translate: ...", "rotate: ...", "scale: ..." and "offset: ..." are applied is specified and cannot be changed. See [1]

PS: I'm talking about CSS Transforms Level 2, not Level 1. I assume you are talking about Level 1. "translate: 13% 37%" is a property whereas "transform: translate(13%, 37%)" was the legacy method.

Personally, I prefer CSS Transforms Level 2, because they are implementable in an easy manner. Having to write a complex compositor isn't an easy task, especially on mobile.
Overflow is easy enough to deal with; text-overflow requires more work because it actually requires you to have a model that matches the W3C specification's understanding of generating boxes, instead of matching elements 1:1, which is what most naive implementations do. The same can be said about the white-space property.

Both are easy to implement; in one version, you parse and then push the exact order of the parsed statements to matrix transformations. In Level 2, you coerce them into the standardized order.

That being said, if you want to do anything based off of developer-specified transforms, you have to use Level 1's style of applying the transformations. I would expect that GSAP and other such libraries rely on this behavior. Anyone familiar with shader-based transform code will be thinking in this manner anyway.

I'm not sure why anyone would want to use the Level 2 manner of specifying transforms if you know what you're doing, because presumably, you immediately lose the ability to push additional transforms to the transform stack.
What are your thoughts on CSS Grid? Does it require multiple passes?
If the question is about layout computation, then CSS Grid layout (and ideally all layout) is a typical LP task [1]. And there are known ways of solving these things; e.g. simplex methods or the Cassowary constraint solver (CCS) [2] will work.

In fact flexbox layout is a particular case of an LP task and can also be solved by CCS efficiently.
I have not made any website or UI designs which require it, so I'm not experienced enough with it to say. As a result, I also have not made attempts to implement it.

c-smile is a well known implementor in the space, and his comments are valuable as well.
This was a great article and I learned a few new tricks.

I learned CSS over the years by gradually solving problems I encountered building apps. Compare this to people learning CSS now, as evidenced by the #100DaysOfCode tag on Twitter. The learning technique consists primarily of using gradient-heavy, absolutely positioned HTML elements to create photo-realistic, 3D renderings of objects.

The results are pretty amazing, but I have my doubts about whether these skills are easily translatable to building an interactive, responsive UI. Some examples:

https://twitter.com/bauervadim/status/1282264611912327169
https://twitter.com/mercyoncode/status/1282449080132804609
https://twitter.com/ellie_html/status/1276177277315932161
https://twitter.com/thecoffeejesus/status/128204582508278169...
I learnt CSS in a similar way. I'd take Photoshop designs and try to recreate them in vanilla HTML + CSS:

https://jsfiddle.net/umaar/YNA5V/
https://jsfiddle.net/umaar/fu4TT/

I'd even make 3d graphics of things like the HTML5 logo: https://i.imgur.com/kuEYpSV.png

I posted this all to a community called "Forrst" (think of it like twitter, but curated for developers and designers). I spent time giving feedback on other people's work, I tried to ask insightful questions https://twitter.com/umaar/status/823915022917271552 to have an open discussion, I spent hours replying to comments every few days.

Then one day, Forrst got acquired by Zurb https://zurb.com/blog/zurb-acquires-forrst and later on it got shut down, and with that, I lost access to huge amounts of my work which I hadn't stored anywhere else (some stuff has been archived online, but not everything).

When it comes to web development tips, I now self-host on my own website and it's a really good feeling knowing that it'll be preserved: https://umaar.com/dev-tips/
I like what you have done with the site — very simple and easy to check out the tips. I subscribed.
Hey, thanks for that, appreciate it. Yes, not spending time on fancy things/random enhancements lets me actually focus on creating new content.
I like that you've carved out a space for yourself -- one that cannot just disappear due to an acquisition followed by a "pivot".

Thank you for making and sharing!
Some pseudo-elements, a gradient and some transforms and you can do some really cool stuff. Highly recommend learning how to use these effectively:

https://developer.mozilla.org/en-US/docs/Web/CSS/background-...
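A tiny example of that combination (purely illustrative, my own): a "ribbon" built from a pseudo-element, a gradient, and a transform, with no extra markup.

.ribbon {
  position: relative;
  overflow: hidden;  /* clip the rotated corner */
}
.ribbon::after {
  content: "NEW";
  position: absolute;
  top: 0.75rem;
  right: -2rem;
  padding: 0.2rem 2.5rem;
  background-image: linear-gradient(45deg, #e33, #900);
  transform: rotate(45deg);
  color: #fff;
}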
A lot of people make games or fun projects for #100DaysOfCode when their job might be updating a CRUD MVC app using Y framework. I think as long as it puts them in the "coding" mindset it's worthwhile.
I agree with that 100% — if it isn't fun, it's work. And if you build enough knowledge about how CSS works in general, when it comes to implementing specific layout or responsive design techniques you have a good understanding of the basics to build upon.
I looked at the one with glass and lemon and I want to cry. It's just crazy how much some people push things. I wonder how long it took, though…
I believe that the focal point of CSS understanding is that horrible display property. People just need to get that one, as other stuff is relatively trivial.

display:xxx on some elements defines three things (sorry, that is a terrible architectural mistake the authors of CSS made initially):

1. display defines a "sibling requirement": how that element wants to be placed among its neighbors. div {display:inline-block} tells the container to treat that div as a single glyph placed inline among other glyphs.
2. In some cases it also defines the "layout manager" of the element's content. E.g. display:table and display:inline-table tell the renderer that the content shall be treated as a table having tbody, rows, cells, etc. Same thing for display:flexbox, grid, etc.
3. In some cases it defines other things like display:none; display:list-item; Note: there is still no display:inline-list-item ...

Ideally we should have these instead:

1. display: inline | block; - and just these two.
2. flow: auto | text | table | vertical | horizontal | grid...; - defines the layout manager of the element's content.
3. visibility: visible | hidden | none; - Note: visibility:none instead of display: none;
Don’t forget display: contents, in which the element is instructed not to generate a box at all (neither inline nor block; I guess like display: none without hiding the content).

But I think the spec maintainers are aware of this, and CSS Display Module 3 [1] allows multi-keywords so you can do stuff like:

display: block grid;

or:

display: run-in ruby;

or even:

display: inline flow-root list-item;

1: https://drafts.csswg.org/css-display/#the-display-properties
That is not a fix - just a cosmetic change that does not solve anything. display/flow MUST BE [1] different properties as they define orthogonal concepts. We should be able to define them separately:

main { display: block; flow: grid; }
@media handheld and (width < 100mm) { main { flow: vertical; } }

And the none value should move from display to visibility:none (as in my Sciter [2]).

I've seen errors like this too many times:

main table { display:none; }
main.full table { display:block; }

which is obviously wrong; the element should have display:table or display:inline-table;
You can define those inside/outside display styles separately, like display: inline grid or display: block flex
I find your comment about display: inline | block; interesting, because when implementing the visual formatting model layout algorithm, these are the first two you'd want to implement, and they're the easiest.In my game engine, we only support these two display types from the specification.
Yikes. I've been earning 6 figures as of about two years ago doing mostly HTML and CSS UI design for a bank. And I still did not know about many of these! I guess they are things I wish I'd known about CSS! Better late than never I guess.
Six figures? In the US?

Sometimes I wonder why companies even pay developers so much in the US. I'm a specialised medical embedded software developer in the UK and I'm not even halfway to six figures, and will likely never hit it.

It astounds me that companies don't hire twice as many non-US developers for the same price and get more work for their money. Note, I'm not talking about outsourcing to the absolute cheapest bottom-of-the-barrel developers who cost 1/20 of the price and whose quality reflects the price. Just equally skilled developers in non-US countries.
For context: I'm an American who's currently working as a software engineer in France (for a French company).

I agree that engineering salaries are much higher in the US than in Europe and other non-US countries; however, it's worth considering that there are additional expenses to hiring in other countries. My French salary is ~30-40% lower than I was making in the US. However, my cost to my employer is nearly as high. Employer taxes are much higher, my company is required to reimburse my transit and is all-but-obligated to cover lunch as well. Certainly, many US software companies do some/all of that, but not all.

And beyond those pure costs, there are more liabilities to the employer. For instance, if they want to fire me, they're legally on the hook for four months of severance. And I get ~7 weeks of combined vacation time and "comp time" (based on the fact that I work more than 35 hours a week). In the US, I got 2-3 weeks.

I don't have exact numbers, and I don't disagree that there are potentially savings to be had, but I don't think it's nearly as clear cut that you could get anything close to twice as many developers.
Here is an official simulator for France: https://mon-entreprise.fr/simulateurs/salaire-brut-net

It's not 100% accurate (the exact amounts depend on many variables) but it gives a good idea. For example, a cost to the employer of USD 100k (87460€/year) gives a before-taxes salary of USD 70.5k, and a salary after all taxes of USD 47k.
> before-taxes salary of USD 70.5k, and a salary after all taxes of USD 47k.

What!? That's over 30% of your income gone, and for a relatively average income. That's not even counting VAT and other taxes.... No wonder there's so much "free" stuff in Europe.
€61k before taxes isn't really a relatively average income in France. A junior engineer (with a Masters) starts around €35k ($40k) before taxes. Country-wide median salary is €28k ($32k). But across all those salaries it doesn't change the fact that around 45~55% of the employer's cost isn't going to the employee.

And yes, taxes and cotisations are very high. On the other hand they cover most higher costs and risks in life (education, health, basic retirement…).
Isn't $70.5k USD around €61k EUR? You're right, I mixed up. I edited it.

Still, €61k isn't an average salary. An engineer reaches that after about 10-15 years (from personal experience and some statistics I have access to). It's double the national median.

Wow, that's just crazy to me. It makes sense with the much stronger safety net, it's just a shock. How do rent and food costs compare to America, do you know?

Even with the stronger safety net, it doesn't make up for the difference compared to the US for well-paid employees (such as engineers, or tech in general). US employees are much more well-off, there's no denying that. US levels for engineers compare more to the Swiss level of living. However, the safety net is much more beneficial for less-paid people, at a relatively low cost for them.

I only visited the US, so others might be able to provide a more detailed comparison. It seems to me that rent is generally significantly higher in US cities (might be different in lower-CoL areas), and groceries and other smaller budgets were slightly higher in France (restaurants, tech purchases, cars…).

I have worked in the US and Europe, and may give a direct comparison. US healthcare and education consume a vast portion of your pay. We paid cash for our daughter's college (biomedical engineering) at a total of over $300K. That buys her freedom of choice when it comes time to take a job, get an advanced degree, etc. Her colleagues will have more than $500K USD debt hanging over their heads when you factor in interest payments. The company pays for a portion of health insurance, but much of the cost is picked up by the employee, either through co-payments or deductibles. We live modestly, drive low-end cars and don't dine out often. We see our European friends taking great vacations and their children bounce in and out of college programs. We see free healthcare, mountain cottages, and other perks unavailable to us.

Interesting, thank you. It seems the calculation becomes very different when you factor in the cost of a child (especially if they go to college). Without going into numbers, all I can say about US vs Europe: for people with low income, Europe is much, much better. For the middle class, it boils down to some difficult choices (kids, education, leisure) and is probably equal. While the upper middle class or highly paid professionals have it better in the US.

I'm in Scotland, my salary is about £45-£48k. I take home about £31k. The median salary here is about £21k nationwide.

I bought my home for £170k, now worth about £230k. It's a decent-sized apartment in a nice area in the city where I work. For an indication of size, my living room is about 23ft by 28ft, 2 bedrooms, 2 bathrooms. Some photos to show what I got for my money: https://imgur.com/gallery/6oNLRtl

My outgoings each month average out to around £1200. Mortgage is under £450 a month. Healthcare is all free, university is free, prescriptions are free.

This is a nice looking apartment!

Some perspective on the costs that look "scary" for the people outside of the USA. I'll share some personal data based on my 20 years since moving here. I started with $60K and now I am making close to $250K.
All this time I was living in a median-cost area (one of the top 20 US cities, east coast).

My average effective tax rate over the last 20 years is 19.6% (this includes federal, state, social security and medicare taxes).

My average medical expenses for a family of four are at 3.6% of my gross salary (includes health/rx/dental/vision insurance premiums and out-of-pocket expenses).

I've paid cash for my oldest child's education at a top 10 public university - ~1.5% of my earnings over the last 20 years. I am expecting to pay a similar cash amount for my other child.

We live in a 4000 sqft McMansion near the best public schools in the state. My mortgage is ~$820 at 2.5% APR.

It does look like your effective tax rate is 31-35%, which is more than my all-inclusive rate of ~24.7%.
Did you have a large down payment? That looks like a small monthly cost for such a mortgage. Or maybe the interest on such a loan in Scotland is very small.
My down payment was £70k, interest rate is 2.14%.

When I renew at the end of the year it should be around 1.1%.
Well damn, I don't think there is anything below 3.00% (3.50% more common) where I live.

That's nominal - 3.5% - 4.0% is the effective interest rate. I don't know much about mortgages, so not sure if I'm missing something. Congrats on your place - it looks great!
Is that right? I think inflation makes the effective interest rate lower than the nominal rate.

You could imagine that inflation pays ~1% of your mortgage for you because each dollar that you owe is becoming less and less valuable.
Effective interest rate is, to my understanding, not related to inflation. It's just a way to make loans with various compounding periods easily comparable (I'm mostly citing this from Wikipedia, I'm far from an expert here: https://en.wikipedia.org/wiki/Effective_interest_rate ).

I just mentioned it for the sake of providing more information to whoever was reading my comment.
You're right, I confused effective and real rates.
For fun, here are the figures for Croatia (yearly take-home salary and total cost for employer, all in USD for easier comparison):

Average salary (take-home): $11874
Average salary (total cost for employer): $19875
Median salary (take-home): $10225
Median salary (total cost for employer): $16526
Starting junior dev salary (take-home): $14800
Starting junior dev salary (total cost for employer): $25800
Senior dev salary (they can go higher, but that's relatively rare for local employers) (take-home): $27700
Senior dev salary (total cost for employer): $52000

VAT is 25%. Safety net is a joke. Health care is universal, but not that good. Education is mediocre at best.
Notably, I doubt you could get family healthcare coverage in the US that reduced your financial risk as much as France’s does for under $25,000/yr. Even at that rate I bet your exposure’s still worse.

> I doubt you could get family healthcare coverage in the US that reduced your financial risk as much as France’s does for under $25,000/yr.

I'm not so sure. It looks like the average liability for a family (12 monthly premiums plus deductible) comes in at just under $23k [1]. That's the max the average family would pay. Obviously for just a single person total liability is a lot lower.

So for a person like me who makes good money being an engineer, I am still better off in America, where I can make a lot of money but also pay a little more in COL / healthcare stuff.

Wow, that’s... much cheaper than I’ve seen. In my non-coastal, middling-wealth state, you’re looking at $1500/m for an HSA family plan that still leaves you with tens of thousands in risk per year, and reducing that risk gets expensive fast (often it’s even worse, and you end up guaranteed to pay more per year than you might pay if something goes wrong under one of the cheaper plans).
The US salaries in IT are huge, but you need to take into account that they have to pay a lot to have a quality of life comparable to what a minimum-wage worker is getting in Western Europe.

If you have a few kids, I would bet you are not that badly off in the UK with one fewer digit.
Yes, this is the main difference. In Germany education is free and medical insurance is not something to worry about (I have no idea what deductibles, deletables etc. mean). This is where the difference lies.
As a simple rule of thumb, an employer in the EU has to pay roughly double the net salary amount, so on top of the salary pocketed by the employee, yet another similar amount goes to taxes. The education/insurance aren't "free", they're just managed better and give more direct benefits to the taxpayer.
It depends. When I was a student, I did not have to spend a penny on medical bills, and my wife got paid by the government for some courses (+ cost of rent etc.) that would have cost us around 20K € at that time. Now that we are earning enough, I have no problem paying back, so that others who need it will also be able to achieve their dreams.
Deductible is a scary word but a simple concept: it represents the amount of money you must pay before your insurance begins to pay bills for you :)
That is scary. When my child had to stay in the hospital for a month, the main costs for me were the parking tickets.
I know about the difference in the cost of living in the US, in Silicon Valley; of course, not everyone lives in areas like that. But I'm talking from the employer's side, not the employee's, and why they would employ US devs who cost twice as much.
The employer pays roughly the same amount. It's just that the employee sees much less in Europe due to all the taxes.
In the United States, you can spend 6k on health insurance and 12k out of pocket for unlimited family health care. So as long as a US job pays 18k more, it's pretty similar.

People talk about vacation, but my job pays so much I'll take months off between contracts. (Engineering)

Also it's hard to compare lifestyles in the US vs Europe. Homes in the US are gigantic, often newish, and have air conditioning. Cars in the United States are more feature-rich and have higher safety standards (when I was an airbag engineer in 2015)
I'm sure that in terms of money, working in the US as an IT person is a good deal. But you have to think not only about yourself but also about others. I don't know whether you have children or grandchildren, but if you do, I'm sure knowing that they will never be in a bad situation is pretty nice. You can't expect all your children and grandchildren to have a well-paid job, and you can't finance all of them forever.
Workforce mobility means people can always move to a system that makes more sense for them. If grandpa was a good IT engineer and made some nice money in the US, but the grandchild is the village idiot, then moving to Germany or France is a solution. Germany has an open-door policy to foreigners (there are millions of Turkish descendants of "guest workers" invited after WW2 for reconstruction), East Europeans, and now "refugees" from all over the Middle East and Africa. No idea how immigration works in France, but there are huge Maghrebi communities and lots of other foreigners as well, so I think it is doable.
Well, if your child can't make it in the USA, they will not make it in Europe either. You can't just land in France and become a citizen without a job. But do you want your child to emigrate because you live in a country of selfish people? I'm sure you would find the situation difficult. By the way, the communities you are mentioning in France have been French for a few generations now; they are not foreigners.
One can live in Western Europe on welfare. Most of my colleagues in France were not born in France; they are foreigners. The trend is growing in Germany too: in our department in Germany there are no Germans, but people from Brazil, India, or Eastern Europe.
> In the United States, you can spend 6k on health insurance and 12k out of pocket for unlimited family health care.

Where? How? I... this total is about what I'm seeing for shit-tier covers-nothing insurance. Like, there are plans out there that are ACA non-conforming and not that far under $18k/year just for the premiums. Who do I talk to to get total max healthcare spending of $18,000 without an employer-provided plan, in the US? Who's offering $12,000 annual max-out-of-pocket family plans for $500/m?
Just ran some rates in my area for an EPO with reasonable copays and zero co-insurance:

$3,600/yr individual, $0 deductible, $8k OOPL
$9,600/yr self + spouse + kid, $0 deductible, $16k OOPL
$12,000/yr self + spouse + 2 kids, $0 deductible, $16k OOPL

Although I don't really think it makes sense to compare costs based on OOPLs. For most people, the deductibles, coinsurance, and copays are much more relevant to actual costs.
> Sometimes I wonder why companies even pay developers so much in the US.

Presenteeism.

The office "has to" be in Silicon Valley. The employees "have to" be in that office, in an open plan, so the manager can physically see them working. Therefore they have to be paid huge wages to compete against the other employers doing the same.
When they are taxed to death on a lower base wage, it is expected that some will be apathetic. The saying is "they pretend to pay us, we pretend to work".
Sometimes I wonder why companies even pay developers so much in the US. It boils down to real estate prices combined with some companies really, really, really wanting those people to be on site. I had my highest rate ever during a brief, beautiful moment in Switzerland, where a studio apartment costs ~$2000 per month. I choose to think that this rate is how much our work is actually worth - otherwise those companies couldn't afford having people on site.

This is a catch-22: high wages make rents increase (supply and demand) and high rents demand high salaries; it's a spiral heading toward a bubble.

Housing supply is the variable that can act as a release valve. If cities don't want to expand vertically, that's fine, but they'll inevitably end up in a Bay Area-type rent spiral. I think the real issue is that homeowners vote so much more frequently than renters (who would love to have more housing and lower rents), even though they're technically outnumbered.

Indeed, but eventually a ceiling is reached above which it would be impossible to make a profit having on-site employees. I don't envy the locals though. I spent 2.5 months in Zurich, during which I saved so much that in pre-corona times it would have been enough for the minimal allowed down payment on an apartment. The Swiss don't have a Switzerland of Switzerland where they could pull off the same trick.

That is going to happen. Big corporations are finally waking up to first-class remote work, and that's the gateway to lower US wages. I say this as a remote worker, and a huge fan of remote work. But I do think the high US salaries are going to mean-revert a bit.

Sounds to me like tech workers in the US should join a union, and quick!

Now, admittedly I'm a skeptic of unions by default (in my view they're mostly just another layer of bureaucracy/waste), but isn't unionization of tech workers in the US the quickest way to make themselves even less desirable? Why hire a US union member when you can hire an Eastern European who both works for peanuts and doesn't have union hang-ups? (Admittedly outsourcing has its own problems, especially when there are language/cultural barriers)

The EU has very strong employee rights, so that will come to the same point as a union in the US. The lower pay is an advantage, but the time zone difference of 8-12 hours can be a disadvantage.

There are no language barriers in IT between the US and Europe; by the time they graduate college, the average European has gone through more than a decade of English courses. Cultural differences are minor; management style is very similar (in the East) and so is the communication culture. So yes, lots of American companies already outsource to Eastern Europe. There are lots of "Silicon Valley"s across EE thriving on Western outsourcing from companies that aren't quite desperate or cheap enough to go to India.

There are some phenomenal outsourced shops in EE. Those that do pay 2-4x the going rate in the country. These aren't WordPress sweatshops. These are usually small teams, and people work on interesting, specialised projects. Management style is usually very pragmatic (a means a, and let's cut the crap). But the best part about these is that people do get exposure, build experience, and eventually start on their own. 10-15 years ago there was hardly any product-driven company in my country. Now the ones most worth working for do have successful software products that are sold in western markets.

There is probably more to the story here.
If you're only at about 50k, you're making $24 an hour. There's even cheaper labor than you. It's going to be a disgusting race to the bottom if we start paying everyone 20, 18, 15... dollars an hour.

At this point in my life you would have to pay me around six figures to do CSS seriously on a daily basis (and it's likely I'd still burn out and leave). The cross-browser and device testing alone is so tedious and stress-inducing that I simply won't do the job below a certain price point (or I will out of desperation, but will definitely bounce ASAP). The unsaid thing about doing heavy CSS work is that it doubles as a QA job from all the testing required.

And again, I'm someone who's good at it. You can't pay me enough to do it, not if I have choices.
If you think CSS is tedious, then you've never dealt with medical device regulations. Cross-browser testing will seem like an absolute breeze.

If a UK employer were paying £90k for CSS, absolutely everybody and their dog would be applying for a job that pays double a typical dev salary. I think we've already experienced the race to the bottom and come out the other side, with $24 being fairly average for the world and the US being on the high side. This all happened years back when companies started outsourcing development and IT to India at rock-bottom prices, but then received a lot of low-quality work (you get what you pay for). I believe India has a much different approach to software quality than many other countries. It's taken many companies years to realise that mistake, but I wouldn't call $24 a race to the bottom for skilled labour, as that will land you comfortably above average income in the UK.
Should we go lower than 24 dollars? Sincere question. If not, never show bitterness toward your fellow developers who brought honor to the profession, wherever they are. I hope never to see developers disrespected that way, anywhere in the world.
There's a whole world outside SF in the US where people do make ~$24/hr. It is well above the median salary in the US. Things may be closer to equilibrium than a lot of people think. I used to work for an outsourcing company that was roughly 50-50 people in the US and in India, and there was never any indication that the US people would be fired en masse, probably because they were somewhat more productive and somewhat higher paid, and the differences weren't huge.

I think the important bit of info is that they work for a bank, not that they write CSS. If I look at the equivalent compensation for people who work for me, the vast majority of developers in the UK and US earn >$100k. Like for like, the US employees earn more, but that's almost wholly swallowed up in other costs (basically healthcare, and our US healthcare plan isn't particularly bad). And the FAANGs and hedge funds, in my experience, have a reasonably similar dynamic.

The simple fact is that we generally don't hire for PL knowledge except in very specific circumstances (KDB, for example). If all we wanted was a coder, there are much more cost-effective options than hiring someone, regardless of location.
1. Why do you think UK software companies aren't more successful compared to American ones? Aren't they saving tons on development costs?
2. Have you considered that you are terribly underpaid?
Serious questions. Which countries would you suggest for this? Just the UK?
The only European countries where costs are as high as the US for tech salaries are basically Switzerland and Norway. In the rest of Europe you have access to a very qualified workforce for less money. But they also have more protection than in the US (fewer unpaid hours, more holidays, no cheap dismissal…).
All Europe?
I think (hope, really) that gp is just one heavy "/s". The knowledge presented in the article is so basic that it is hard to believe that a person who has done HTML for two years at six figures doesn't know these things. As for the article, I fail to see any reason for it to exist beyond cheap media presence. Or maybe I should start a blog, because I'm not even halfway there.
I've done CSS since 1999, none of this was new to me, and I've never earned 6 figures for doing HTML and CSS. I moved my career away from HTML/CSS in order to earn more.
I think we picked up a lot more of the basics back in the day because doing CSS was much less forgiving than it is now. And the whole struggle with pixel-perfect layouts between browsers just forced you to spend hours on these tiny things.
It was also waaay more limited. All those clever hacks to get equal-height columns, "sliding doors", workarounds for IE5/6/7…
Yep, for sure! It was a fun ride but I'm happy it's over, though I don't enjoy CSS the same these days. Could be age and all that, who knows.
Same here (but started in 2000), except I didn't know about the ch unit. OP likely has some great soft skills and negotiation prowess.
I've never been happy with a front-end dev that I've paid six figures to.

I once paid a senior front-end engineer far too much to do a fairly involved layout. Three months, an uncountable number of bugs, and one unusable tangle of Sass later, I pulled the plug and swore off ever hiring a "CSS Person" again.

I took the Linus+Git approach and said "I'm not writing another line of Python until I understand CSS." After a few weeks of study (I read CSS: The Definitive Guide cover-to-cover) I was able to implement the layout in, and I'm not exaggerating, two hours. No bugs, responsive, cross-browser support, etc. Flat out done.

I went back to the dev and asked why they tried to implement it with over a thousand lines of Sass using Flexbox instead of a few lines with CSS Grid. It went like this:

Me: "Hey, why did you choose Flexbox over CSS Grid for feature XYZ?"
Senior Front-End Dev (SFED): "I used a grid. Bootstrap's grid."
Me: "No, CSS Grid"
SFED: "Like the 'display: grid' thing? I don't know how that works."

I've never met a CSS Person who has read a book on CSS. Or one that can do the arithmetic on a simple flex-grow/flex-shrink/flex-basis combo, even with a cheat sheet.

I'm a back-end dev; I used to think that CSS was "garbage". After learning the ins and outs I think it's a pretty remarkable set of technologies. A true discipline. But it's hard to find someone who really understands it, because it sits at a weird level in the tech stack. Most developers feel it's beneath them or that they "have the gist of it", and most CSS specialists don't have a firm grip on it or keep up with browser developments.

If you're going to work with, hire, or exist as a "CSS Person" within 6 feet of me, I'm going to require you to read "CSS: The Definitive Guide" before I give your laptop charger back to you.

This article is good, but it's barely the bare minimum that you need to know about not knowing CSS. Six figures for a "CSS Person" who's read CSS:TDG is completely worth it. CSS and Sass are both worth mastering.

For CSS read: "CSS: The Definitive Guide" (https://www.amazon.com/CSS-Definitive-Guide-Visual-Presentat...)
For Sass read: "Pragmatic Guide to Sass 3" (https://www.amazon.com/Pragmatic-Guide-Sass-Modern-Style/dp/...)
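For anyone curious what "a few lines with CSS Grid" can look like, here is a minimal sketch; the layout and the class names are hypothetical, not the ones from the story above:

  /* Hypothetical page shell: header, sidebar, main, footer */
  .page {
    display: grid;
    grid-template-columns: 200px 1fr;    /* fixed sidebar, fluid content */
    grid-template-rows: auto 1fr auto;   /* header, content, footer */
    grid-template-areas:
      "header  header"
      "sidebar main"
      "footer  footer";
    gap: 1rem;
    min-height: 100vh;
  }
  .page > header { grid-area: header; }
  .page > nav    { grid-area: sidebar; }
  .page > main   { grid-area: main; }
  .page > footer { grid-area: footer; }

Each child is simply assigned to a named area; no floats, clearfixes, or nested wrapper divs are needed.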
I too have worked with many front-end developers that were book and spec averse who preferred to gather lore from various gurus in three paragraph blurbs.
You seem to be trivializing CSS as something static you just have to deal with. This may be true for 99% of websites, but CSS has had more innovation in the last few years than anything else, and much of it hasn't reached textbooks. For example, CSS3 functions, and now being able to mix these new techniques in undiscovered ways. CSS requires a very nimble and creative mindset. That may be why backend devs are usually very dismissive of it. By the way, I became a frontend developer after I spent many years as a backend dev. My background is in PHP and Perl.
Have you looked at Definitive CSS? It's up to date and a massive 1000-page tome of encyclopedic knowledge and guidance. Anyone who digests that entire thing is not being dismissive of CSS. Digesting that is taking CSS more seriously than most people take learning new programming languages... which is appropriate! Because with CSS, you not only need to learn the language, you are effectively learning something like a language and a framework at the same time.

Lots of folks learn the language minimally, piecemeal, on an "as needed" basis, and end up wasting a lot of time because they aren't aware of the larger features and how they can fit together. Most folks do the equivalent of learning about goto statements, variable declarations, and arithmetic operators, and figuring they have everything they need to do good programming work.

Technically, you do have everything you need. And with a decade of work under your belt using those things, you can build some amazing and robust systems. It's also easy to build total junk that nobody can understand but you. Perl, of course, is also infamous for having this quality.
> Have you looked at Definitive CSS? It's up to date ...

Not really; it doesn't talk about custom properties, inline SVG, how to style them, etc. All very useful these days if you want to allow color theming for your web app and not use hacks like custom fonts for icons... And custom properties have been around since 2014 at the very least.
> Technically, you do have everything you need. And with a decade of work under your belt using those things, you can build some amazing and good robust systems. It's also easy to build total junk that nobody can understand but you.

This is what's beautiful about the frontend. It's "magic", and underneath, in code-speak, it may be ugly, but the work speaks for itself. Arguably that is only if you own it and maintain it.

> Lots of folks learn the language minimally, piecemeal, on an "as needed basis," and end up wasting a lot of time because they aren't aware of the larger features and how they can be fit together.

The way I see it, learning about all the ingredients in a recipe will not teach you how to cook. In CSS, techniques are unearthed by a small handful of CSS artisans, and in 99% of cases it's not about learning CSS itself. It's about digging and finding the best ideas on GitHub, Stack Overflow, and CodePen. And Hacker News.
Or, you know, using a well vetted, well organized resource that has already done all the digging and collating of techniques and organized into a well structured Definitive Guide.
> I've never met a CSS Person who has read a book on CSSI'm not a CSS person, but I've read three (including CSSTDG) and I still can't do anything useful with CSS.
Ha!
Front-end development is a craft, just like other crafts. People need to treat it seriously if they expect to be taken seriously and be paid accordingly. (BTW, if you still haven't met a FE dev who's ever read a book on CSS, you have now. I own quite a few AND have read them.)

I also think a lot of the value from a front-end developer comes from their ability to merge the technical requirements with a designer's vision, and produce a product that is at the same time usable, accessible, and (bare minimum) meets the technical requirements.

Not going to touch the CSS Grid story except for two quick comments:

- they absolutely should have been keeping up with CSS Grid, where it was at, and when it was usable and when it isn't

- it is still an open question, because although the overall numbers show 90+% availability in browsers, you have to check your own stats. For example, the stuff I work on is often used in enterprise environments, so IE11 is a requirement (yes, we know our numbers; it could literally cost us six-figure sales if we don't support it). We are using CSS Grid, but we have to be very careful what we use it for, and to test those uses in IE to make sure the old Grid spec in IE supports the things we do with it. Otherwise it's back to Flex or other layout techniques. It's not a silver bullet, but I also recognize that this is becoming a more unique example these days. (A common fallback pattern is sketched below.)

Those are the kinds of things a good FE dev should know, though, IMO, and be able to articulate. Plus, FE devs also need to know HTML and semantics, accessibility, and of course JavaScript. When you can put that all together along with the earlier skills re: usability and accessibility, then you can justify the higher salary brackets.
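One common way to reconcile "we use CSS Grid" with "IE11 is a requirement" (a sketch of the general pattern, not necessarily what this commenter's team does) is to write the fallback first and layer Grid on top with @supports, which IE11 ignores along with the unprefixed grid properties:

  /* Fallback: IE11 and other non-Grid browsers use flexbox */
  .cards { display: flex; flex-wrap: wrap; }
  .cards > * { flex: 1 1 200px; margin: 0.5rem; }

  /* Browsers with real Grid support override the fallback;
     IE11 skips this whole block because it lacks @supports */
  @supports (display: grid) {
    .cards {
      display: grid;
      grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
      gap: 1rem;
    }
    .cards > * { margin: 0; }
  }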
I think this is a culture problem in front-end dev in general. Front-end devs learn frameworks, not technologies. It's very much focused on getting started as quickly as possible instead of taking the time to read and understand the tech they are working with. Even though, as your story shows, you can get things done quicker when you know what's going on.
But when was this? CSS grid has only really been viable for a few years in terms of significant browser support.
Last year. Around November and 91% unprefixed usage (https://caniuse.com/#search=display%3Agrid)
CSS Grid has only been widely supported for a couple years now. Not so hard to believe a frontend dev was still using bootstrap grid.
The inexcusable fact here is that he said that he didn't know how it works. A front-end developer not knowing Grid is not worth six figures.
I disagree, as both an engineer and a manager.CSS grid is only recently becoming a reasonable tool to use in practice. I don't think any of the top 1% of front end engineers I know would be able to use CSS grid without looking at the documentation.Based on experience, the kind of person who would be able to use it from memory either 1) is very skilled AND had a reason to use it recently, or 2) someone of medium skill who just recently read the spec but in almost all other ways is less skilled than the people I mentioned in the first paragraph.
> I don't think any of the top 1% of front end engineers I know would be able to use CSS grid without looking at the documentation.

You probably don't mean it that way, but not having to use the documentation surely isn't a requirement for competency. I resort to the docs all the time, even with topics I'm quite familiar with and experienced in.
That depends on your company, I guess. If your entry salary is, or is close to, six figures, then yeah. But if you pay that only for skilled developers _specializing_ in front-end, then it is not too much to ask that they be up to date with their field.
I agree. It's fairly common for experienced front end devs who have been bitten by browser incompatibilities in the past to wait several years to adopt the latest CSS techniques. The "old techniques" often work very well.Most likely if you had asked me "why flexbox and not CSS grid" I would have said "because flexbox is able to perfectly well support this use case, and I'd rather not introduce a dependency on a less well supported CSS feature unless I need to".I personally tend to prefer using the oldest (within reason) CSS feature that lets me accomplish a task. Or I'll pick the one that best conceptually maps to the task I'm trying to accomplish, if there's a significant difference.
I used to work like that but CSS Grid is actually amazing. Highly recommended.
Too bad you had a bad experience with your dev, but seriously, you just picked up CSS; people have been fighting it for years because of browser support. There's a reason why there are CSS reset stylesheets and frameworks out there. If you are using it in one project, great, but when you do it for a living you will end up building your own framework if you are not already using one. Sass also saves you so much time in development.
Yes, one of the issues with people who've recently learned the 2020 version of JS/CSS/frameworkX is that they assume what they've learned is the correct way to do things according to "the best, latest standard". It's a variation of Dunning-Kruger, almost.

As you said, experienced engineers have been dealing with CSS browser-support issues for years, so out of habit they try to avoid using the latest shiny new features.

Also, experienced engineers do refresh their knowledge from time to time, but you might have caught them between refreshes. Remember that these engineers are also working 40-60 hours per week producing work output in addition to periodically refreshing their skillset. And depending on the environment they've worked in previously, they may not have had the ability to use the latest features of whatever technology. Maybe they were working on a legacy app that didn't support ES6, for example. That would be less common now in the days of Babel, but there was a time a few years ago when it would have been reasonable for a working JS engineer not to be familiar with every detail of ES6, as an example.
This might be one of the most helpful book recommendations I’ve ever read.
Thanks. If you'd like some more book and video suggestions, I sent this to a friend back when she was looking to get into front-end development a few months ago:

Here are some book and video links (with Amazon affiliate tags snuck in). I've read all of these books cover-to-cover save for the RxJS one. They approach front-end as a set of technologies that should be understood and mastered rather than the "CsS hAckS to GET YoU pAiD!" style of most web tutorials. Not sure where to point you with React, but if you decide to use Angular or Vue I have some suggestions.

CSS/Sass:
"CSS: The Definitive Guide": https://www.amazon.com/CSS-Definitive-Guide-Visual-Presentat...
"Pragmatic Guide to Sass 3: Tame the Modern Style Sheet": https://www.amazon.com/Pragmatic-Guide-Sass-Modern-Style/dp/...
This Sass book has the best structure of any introductory tech book I've ever read.

TypeScript:
"Mastering TypeScript 3": https://www.amazon.com/Mastering-TypeScript-enterprise-ready...
"Programming TypeScript": https://www.amazon.com/Programming-TypeScript-Making-JavaScr...

RxJS: Reactive programming is the most significant development in UI technology in 20 years. Once you get the hang of it, managing asynchronous events (user-generated, network-generated, time-based, etc.) becomes a breeze.
"Build Reactive Websites with RxJS: Master Observables and Wrangle Events": https://www.amazon.com/Build-Reactive-Websites-RxJS-Observab...
RxMarbles - Interactive RxJS visualizations: https://rxmarbles.com/

Angular:
"Angular Development with TypeScript": https://www.amazon.com/Angular-Development-Typescript-Yakov-...
"Architecting Angular Applications with Redux, RxJS, and NgRx: Learn to build Redux style high-performing applications with Angular 6": https://www.amazon.com/Architecting-Angular-Applications-Red...

Videos:
I watched most of the Layout Land videos when I was getting a grip on the state of CSS. Jen Simmons is a developer advocate at Mozilla and has the best overviews I've seen.
Basics of CSS Grid: The Big Picture: https://www.youtube.com/watch?v=FEnRpy9Xfes
Using Flexbox + CSS Grid Together: Easy Gallery Layout: https://www.youtube.com/watch?v=dQHtT47eH0M
Thanks for sharing this list. If you happen to have any recommendations for pure JavaScript, I'd love to hear it. Otherwise, I'll probably take a look at the TypeScript recommendations and work my way through the material.
Interesting; I am really, really bad at CSS but knew all of this. Long ago, someone paid me 7 euros an hour to do some of this (plus HTML/JS/PHP; I never designed anything, I just edited an existing site to specifications, but CSS was still my least favorite part). Aren't these just things you run into and pick up in the first webdev projects you do?!
Six figures for HTML and CSS, sorry, but I don't believe you.
I don't think the author's recommendation to include img { display: block; } in your reset is generally a good idea. It will break some common uses of images, for example to hold bits of math inline with the text when you are not using MathJax. If you want a displayed image, you should wrap it in a <figure> element.
If you wrap an image in a figure but leave the image inline, you'll probably get a few pixels of extra space at the bottom of the image due to line-height and vertical-align. People find that really confusing. With how most people use images, I agree that img { display: block; } is what you want >99% of the time, and would rather people reverse it for the rare occasion when they want something else. (BTW, doesn't MathJax emit SVG? Viz., not <img> anyway.)
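For reference, a small sketch of the two options being debated; the rules are alternatives, shown together only for illustration:

  /* The reset under discussion: images become blocks by default */
  img { display: block; }

  /* If you keep images inline instead, the few pixels of space below
     them come from the line box reserving room for text descenders;
     either of these removes that gap without changing the display type */
  img { vertical-align: bottom; }
  figure { line-height: 0; }  /* on the wrapper, if it holds only the image */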
I’m confused by your reply. Did your eye skip over the word “not” in my comment?
Sorry, you are correct. Drop the final parenthetical, then, but I stand by the rest.
Yes, your other point was something I hadn’t thought about. I guess his advice would be good for some people, but I’m not sure about the 99%. It’s not uncommon to use small images as text-like elements, and when you do that, if you have this reset, you will need to explicitly make them inline. Not a huge deal, but people tend to copy and paste their own CSS libraries between projects, and redefining basic default semantics of elements may create confusion and extra work down the line when things don’t behave the way that any documentation you may consult assumes.
I'm going to stick with the >99% figure. I think it is uncommon, rare even. Other than chat systems with custom emoji rendered as an img, I honestly can't think when I last saw a site with inline <img> tags. (I'd guess it to be within the last month, maybe even the last week, but it wouldn't surprise me if it wasn't.) Twenty years ago, sure, but they've gone out of fashion stylistically as well as technically. Technically, I suspect background-image and icon fonts are both used more commonly for small iconography.
It depends on what kind of sites you tend to visit, I guess. I see this several times per day, but I often visit pages with math on them. Before MathJax became standard, inline images were the only way to do it. And visit any Wikipedia page with math: they use MathML and have fallbacks, but the end result is that you are looking at inline images. I also often see things such as sparklines, also inline images. The >99% may be true of what you consume, but without data I am skeptical about the numbers.

Also, some people use CMSs that substitute inline images for missing Unicode glyphs, and other things. If it's done well, you may not even notice. You don't know if the correct figure is 99% unless you have examined the source of a sample of sites.
Something like this then:

  figure img { display: block; }
  figure figcaption img { display: inline; }
Yep, this stood out to me too. Drives me absolutely nuts when an element you expect to be inline is now suddenly block.Especially because the author's reasoning is "it can cause confusion when trying to position them or add vertical margins", after just explaining that with inline-block "you can apply vertical margins to these".
Useful article. Getting started with CSS must be harder these days than it was 15-20 years ago. Not only because of mobile-friendliness, but also because of the coexistence of flexbox/grid/traditional layouts. I'm wondering if the standard will ever be simplified, instead of having more and more features added.
CSS and HTML are constantly evolving, and so are their demands. However, for me CSS is way more straightforward now than in the '90s and '00s.

The standards addressed a lot of the pains and struggles we had with floats/clearfix etc. This was challenging, but nothing in comparison to supporting IE 6-9, FF, Safari, Opera, etc. at the time. Also, hacking in JavaScript added to the complexity.

Nowadays I can do with 10 lines of grid and flex what great CSS frameworks like Bootstrap abstracted away behind thousands of lines of code. Also, testing is way easier, since browser vendors follow standards and there is Chromium everywhere.

Earlier we did a lot of div soup; I remember fondly CSS Zen Garden as well as OneDiv.

Awesome times to be a web dev: you work on products (PWAs!), not on browser hacks.

My 2 cents.
Btw, there is an effort to have modern-day CSS zen garden: https://stylestage.dev
"div soup" – I like that.The present-day CSS frameworks use "class soup."For example, tailwind (which I like):
This sort of class soup breaks the principle of DRY. Much better to use CSS to style:

  button {
    --background: teal;
    --focus-ring: 2px 2px 5px skyblue, -2px -2px 5px skyblue;
    background: var(--background);
    box-shadow: none;
  }
  button:hover { --background: wheat; }
  button:focus-visible,
  button:focus:not(:focus-visible) { outline: none; }
  button:focus-visible { box-shadow: var(--focus-ring); }

Now you will never forget to add that hover:bg-teal-600 class to your buttons.
I avoid Tailwind-style CSS frameworks. I prefer SCSS and, if needed, mixins. Meaningful class names are much better, IMO.
"Getting started with CSS must be harder these days than it was 15-20 years ago."

Not like I'm teaching anyone these days, but I'm not sure about that. Yes, there's just more now. Grid and flexbox and transforms and on and on. But if you were totally starting from scratch, certain stuff is just way easier now, or at least not a pile of hacks upon hacks. Basic beginner stuff that comes to mind (the first and last of these now have short answers, sketched below):

• How can I center something vertically and horizontally without having to explicitly know the size of the container and object to be centered?
• What is all this float stuff, how do I just make a grid of icons?
• Wait, borders fundamentally change the size of my div? What?
• I have to do what now to have a drop shadow?
• How do I keep my branding colors consistent without having to make really sure I plug in the right hex values in every little color declaration?
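A sketch of those short answers; the class names and the brand color are hypothetical:

  /* Center a child both ways, no sizes needed */
  .parent { display: grid; place-items: center; }

  /* or, with flexbox */
  .parent { display: flex; justify-content: center; align-items: center; }

  /* Consistent branding colors via custom properties */
  :root { --brand: #0a6ebd; }
  .button { background: var(--brand); }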
The whole float-based layout thing was a hack. Floats make perfect sense if you only ever use them as floats: box-outs to run text around.The fact you could use them to build all sorts of header/column/footer layouts was both very helpful and entirely confusing.It became the default way to use CSS, for good reason. Yet, I always felt it was a shame it wasn’t often described as the hack it was.
Just use flexbox (and grid if you need to control both dimensions at once). You can ignore all of the old ways of doing layout. Flexbox is supported back to IE11.
As an occasional CSS user, I second that. Every time I have had to use CSS for anything more complicated than diddling item appearance, it has led to a lot of time on Stack Overflow trying to find answers to questions like "why did my height change work on this element but not that element?" or "how can I center this text vertically in this space?"

I don't use CSS frequently enough not to forget these things by the next time I need it. I'll still probably forget parts of Flexbox between the times I need to use CSS, but it will be things like forgetting what spacing modes are available, which is easy to quickly look up.

Flexbox is now my turtle [1]. Well, flexbox and grid.
> Flexbox is supported back to IE11.

Nope. An incompatible and broken version of Flexbox is supported in IE11. Therefore, layouts look completely different (and broken) when rendered in it. Only people who recommend Flexbox in IE11 haven't tried using it in IE11. I might, via a library that did the heavy lifting for me, but there are just too many bugs to recommend anyone write Flexbox and expect it to just work -- it won't/doesn't.

PS - I'd link all the IE11 Flexbox bugs, but Microsoft took them offline with Microsoft Connect's retirement.
I write a fairly substantial app (60,000 DOM elements...) that uses flexbox and SVG for all the rendering, and it's fine in IE11. The layout problems are more often SVG-related. That said, if you don't need to support IE11, it's much easier not to.
Did you use AutoPrefixer/PostCSS?
15-20 years ago it was exceptionally more complicated to have rounded corners on things. The complexity has just shifted. While previously you could easily start using floats, you might run into issues down the road with them, or with browser compatibility. Now, you might struggle a bit more trying to understand Grid syntax and how to use it, but once you do, you're probably set.
I started around 2006 and there are definitely more standards — as in, more ways to achieve the same thing. And yes, you have mobile — though we did fluid and "responsive" layouts back then as well. One thing that is better is browsers' adherence to standards. I spent my first years learning all the IE6 rendering bugs and how to overcome them.
CSS is way easier now than it was 15-20 years ago (source: 24 years with frontend, 22 professionally). Alas, that's probably one of the reasons why people do not spend any effort learning it. I'd argue that 15 years ago was the golden age for CSS innovation and web standards in general. Technologies are much more powerful now, but the attitude of those using them could use some improvement.
CSS is getting better, not harder. Back then, you had to use an image to fake border-radius, or some weird hack because table layout doesn't wrap, and you reached for a different technology, Flash, because CSS couldn't animate. Not to mention the existence of preprocessors like Stylus that make writing far easier, and we no longer have IE6, which behaved by its own rules.
https://xkcd.com/927/

Backwards-compatibility support will always require adding new ways to do things and not removing stuff. I personally find flexbox frustrating at times and tend to limit my usage to non-grid spacing (footer columns, horizontal menus) and responsive vertical alignment. CSS grids are pretty cool, and maybe if you were just starting out it would be easier to pick up fresh rather than having to unlearn floating divs.

Funny thing is that if you go back 25 years (the pre-table-layout era), everything was not only mobile-compatible, but your target viewport was 800x600, which I find the painful part of mobile compatibility these days - it's that ugly zone in between a nice mobile/cell-phone layout and having the whole desktop to work with.
Guys, we need to talk. Are you capsbold serious? These are things that you learn in the first week of doing web development tutorials. I read this thread and it makes me think I'm delusional because of a heat wave. Margins collapse? :focus exists? Am I not getting a joke? I don't understand.
I think you may be underestimating the fact that many people have been writing CSS for well over a decade now, and that it has evolved considerably since they first read a web development tutorial. If you've not been paying close attention (or not been able to), there are many things that will have passed you by.It's a blessing to learn css fresh today, in as much as it's a curse to still have practices from ten years ago still lodged in your memory.
> and that it has evolved considerablyGP is pointing out the parts that haven't changed at all, and are old (over two decades), simple, and common enough to be considered basic knowledge. I had the same reaction to those and one or two others.Now if this was someone relatively new to CSS I'd understand, but the opening paragraph establishes how long this person was around. Padding-vs-margin in particular was necessary knowledge to do good layouts back in those earlier days (less so now only due to flex and grid, which aren't on here).
> GP is pointing out the parts that haven't changed at all

Doesn't read that way to me:

> These are things that you learn in a first week of doing web development tutorials.

Sure, padding vs margin isn't exactly new; however, display: inline-block, ::before and ::after, rem, ch, and :nth-child() certainly weren't in the old HTML4/XHTML and CSS2 guidebook that got me into web development!
To add, I think another post by the same poster on this HN post proves the point that they're not dismissing individual parts, but rather disputing (or, sorry, but trolling) the article as a whole:

> I think (hope, really) that gp is just one heavy "/s". The knowledge presented in the article is so basic that it is hard to believe that a person who does html for two years and six figures doesn't know these. As for the article, I fail to see any reason for it to exist beyond cheap media presence. Or maybe I should start a blog, because I'm not even halfway too.
I was disputing the article in a comment that you quoted, and this thread's other comments in ggggp, if it may help with the investigation.
I think you are spot on. The two existing sibling comments have contradictory excuses for this:

1. New developers might know this, but standards change fast, and it's hard to keep up.
2. You can't expect people new to the craft to know everything in the field.

I personally found this article sub-basic. You could probably learn more, and better, on MDN. I think CSS might be one of the few areas on HN where a sub-par article doesn't get the constructive scrutiny it deserves in the top comments.
It's also good to remember that new devs make up something like 50% of the developer population each year (I forget the exact number, maybe it's every 2 years). So while this may be basic information to someone who's been doing this for at least a few years (and I'd argue it isn't, particularly since the site isn't aimed at Front End Devs, per se, but others who have to do front end development), it is valuable information for the target audience.
> These are things that you learn in a first week of doing web development tutorials.

You mean CSS tutorials? I never read about margin collapse in web development tutorials. The web development tutorials I read (like 10 years ago) were usually about having a web server render some data queried from a database into some basic HTML, adding a form to create/edit rows in the database, then perhaps adding some basic style like colors and paddings and some jQuery to validate the form, and finally moving that page from localhost to the internet.

Let's be honest: web development covers a very large area of knowledge, and CSS is not the most interesting part (not saying CSS is not interesting!). I guess most people just skip the CSS documentation and learn it by inspecting other pages and copying rules, and by searching "how to do X in CSS" and reading tutorials about their specific question.
Yeah I thought this was going to be quirks with CSS Grid or something.The "CSS Zen Garden" from 1999 gets posted about every 3 months and makes it to the front page...
There are a couple of things that the author still doesn't get.

display: inline means that the element does not establish a box. Such an element is rather a collection of individual content boxes (e.g. glyphs). These boxes can be placed on different lines (text wrapping) and so on. As there is no element box, margins and paddings are not applied to such elements.

The img element (and input and other replaced(^) elements, for that matter) is not display: inline but rather display: inline-block. Even if you define them as display: inline, they will be treated as display: inline-block, and so e.g. margins will apply to them.

(^) the representation of replaced elements (img, input, textarea, iframe, etc.) is outside the scope of CSS - they are always treated as solid boxes.
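A quick way to observe the difference described above (the selectors are hypothetical):

  /* On a true inline element, vertical margins are ignored for layout;
     only the horizontal ones take effect */
  span.tag { margin: 20px; }

  /* On a replaced element like img, which behaves as inline-block,
     all four margins take effect */
  img.thumb { margin: 20px; }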
That is really useful.

I have been writing small HTML front-ends since 2008-09. HTML5 and CSS3 made many things easier, like gradients, rounded borders, and many other things that developers now take for granted. I clearly remember how boring and frustrating it was to slice borders and corners from a Photoshop PSD. CSS3 and HTML5 brought many good changes, but newer CSS features like Grid and Flexbox are still confusing to me. I have to look at the references every time I start new frontend work.
I felt the totally opposite about grid. I spent a little time learning it, and now EVERYTHING is easy (except for things that aren't, of course)
Yes, everything is easy now, except for the things that aren't.
Have you ever taken some deliberate time to really „play“ with those?

My recommendation: take a calculator or spreadsheet and start with some simple cases, building up your knowledge in an exploratory manner. A few hours, a session for each feature (flex, grid). It's not only fun but also really helpful!
> Have you ever taken some deliberate time to really „play“ with those?

I know just the thing!

https://flexboxfroggy.com/
Thanks, Froggy was on the front page of HN so I knew it for sure; Grid Garden is a new one for me.
Yes, I really did play with it multiple times. It's just that sometimes I need more revision.
If you're looking for world-class CSS knowledge, start with https://every-layout.dev; if you build UI for a living and haven't encountered axiomatic css and layout primitives, they're likely to change your world.
To save clicks: it's $100 and just covers page layouts.

That's a radically misleading comment. It covers much more than layout, and the free content on the site is incredibly helpful and compelling. The full version more than justifies its price. And they offer it for free to students or anyone else for whom the cost is a problem (relying on the honor system). It's far and away the best resource on CSS I've encountered in my 22-year career working with web technology for a living. No affiliation, just a grateful patron.

Not that the free thing is relevant, but from https://every-layout.dev/blog/you-pay/ : "The honour system is now closed"

> That's a radically misleading comment. It covers much more than layout

Given that the site and all third-party reviews just mention layout, how is it misleading? Can't find a TOC anywhere. Also, found a working discount code 'BRANDEMIC' for 60% off in my travels, should anyone bite the bullet. Previous HN submission / 182 comments: https://news.ycombinator.com/item?id=20196061

OK, I missed that the honor system closed recently - but they continue to give it away freely to students or those facing hardship, on request. The focus of course is on layout (by far the hardest part of CSS to get right), but it's in the context of building up your entire UI from robust, composable, accessible, standards-compliant, profoundly well-engineered primitives, based on first principles -- which principles are clearly articulated and demonstrated in the site. It also covers typography (IMO by far the best explanation I've encountered of _why_ to use a typographic scale as the foundation of your design system), as well as touching on things like web components, shadow DOM, and custom properties... and providing a deeply expert perspective on the relative merits of utility classes (à la Tailwind et al), among other approaches. I've been working in web-related roles for a living for over 20 years and have never once encountered a CSS-related resource that was this compelling and useful.

"Can't find TOC"? Um, it's in the site's primary navigation, in the left column. Literally impossible to miss.

I paid the full $100 and was glad to do so. The authors aren't popular (they've clashed on Twitter with some big-name FE people like MDalgleish), but their ideas and body of work are unparalleled, and that's what I support. If I were more selfish I might try to keep my newfound FE superpowers [axiomatic CSS and composable layout primitives are transformational] a secret, but I find myself wanting to spread the word and support https://every-layout.dev every chance I get.
The ; in your link sends me to about:blank#blocked.
For the benefit of others: https://every-layout.dev
Thanks! Wish I'd caught it in time to fix my 1st comment.
Thanks for the recommendation! I'm mainly a back end developer. I can hack around with CSS, but have never been very good at it.
Does anybody know a good book on CSS for people with a solid programming background? Every couple of years I try something and fail. Somehow the tutorials rave about the cascading on and on, and when it comes to how to strategically design a good SPA that works on different devices, they go quiet.
Not a book but https://developer.mozilla.org/en-US/docs/Web/CSS is my go-to for all things CSS, I love mdn it's a fabulous resource.Also zeal (dash on Mac which is paid or you can build zeal yourself like I did) has docsets for CSS which are good.
https://caniuse.com/ is also invaluable for checking whether a cool CSS feature is actually supported. It's very much today's quirksmode.com.

Once upon a time I would have recommended Eric Meyer's CSS Definitive Guide without hesitation (I think I still have my 1st edition copy), and it looks like he's re-released it with updates for flexbox and grid.
Can't recommend Andy Bell and Heydon Pickering's Every Layout enough. It's a wonderful resource for learning (and re-learning) contemporary best-practice CSS.
I have every book written by Heydon and am currently doing Andy's 11ty course. They're both stellar teachers and some of the most knowledgeable programmers when it comes to HTML and CSS. Every Layout is amazing. Strong recommend.
Best resource I've found to learn the foundation of HTML/CSS is Shay Howe's guides [1]. I think it doesn't cover Flexbox and more advanced layouts, but it's totally worth a read. I still go back to it as reference from time to time.
I’ve written a small CSS ebook if that interests you. [1] It teaches you how to build a webpage from scratch.I’m currently writing my second one, which will be more theoretical and cover things like this blog post actually. I’d be interested to know what kind of problems you’re facing in CSS, so feel free to drop me an email.
I'm curious about your book, as it seems small, which suits the kind of things I write as well [1]. Can I ask if it earns you anything meaningful? Sorry for coming out of nowhere, but I have lots of small tutorials (I teach at a local college) which I could make small ebooks out of. Looking for some side money.
I've been earning a bit on a regular basis since I published it 1 year and a half ago. Not enough for replacing a salary, but enough as side pocket money. I'm lucky to have a decent following, so I didn't have to market it a lot. I'm currently writing a second one which is longer and more advanced, so hopefully it will earn me a bit more.
Thanks for the reply! Do you have any practical tips you may want to share? Maybe how you grew your audience, or how you wrote your book (technologies, etc.)?
Well I grew my audience on Twitter by building open source projects. [1] I didn't really plan or aim for anything. It mostly happened.As for technologies, I just wrote the book in markdown using Sublime Text. Then I used a markdown-to-pdf library from GitHub to generate my ebook, and voilà. Nothing fancy really.
It feels like HTML and CSS could be useful for print layout, but I've rarely seen a high-quality PDF renderer.

I know Atom's markdown preview uses headless Chrome under the hood. These are hugely heavyweight tools, but the output is very high quality.

Are there any other recommended tools for going the HTML route for typesetting? I'd much rather design pages that way than use InDesign, PDF scripting, or TeX.
We use WKHTMLTOPDF (https://wkhtmltopdf.org/). In our case it's in the Rotativa package (.NET MVC). There are a few caveats; it uses an older headless WebKit engine that doesn't support everything. We use the QT Web Browser (http://www.qtweb.net) to test/debug when needed. We do design pages to be specific "paper" sizes and lean on SVG a lot for anything fancy. We have some JS that breaks long tables into multiple tables, etc. But the resulting output looks flawless.
I have been using Python with Jinja2 templates and WeasyPrint [1] for PDF creation for a side project of mine. The library does have its CSS limitations, but the PDFs it produces look good enough for me.
Yes, I was going to recommend WeasyPrint as well.
> but I've rarely seen a high quality PDF renderer

What exactly do you mean by this? That HTML-to-PDF converters mess up the layout, or similar?

You might be aware of this, but it's still worth pointing out that you can specify a different stylesheet for the print version of an HTML page than the default one. I've used this in the past for websites where the customer wanted to use a product description page directly for generating PDFs, without having to do the work twice. We also used some Java-based HTML-to-PDF renderer, which was not perfect but had full support for CSS 2.1 and some support for CSS 3 (and this was several years ago). I think the library was Flying Saucer (based on iText).
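A minimal sketch of that print-stylesheet technique; the selectors are illustrative:

  /* Applied only when printing or generating the PDF */
  @media print {
    nav, footer, .no-print { display: none; }
    body { font: 11pt/1.4 serif; color: #000; background: #fff; }
    a[href]::after { content: " (" attr(href) ")"; } /* show link targets */
  }

The same effect can be had with a separate file via <link rel="stylesheet" media="print" href="print.css">.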
Although not exactly what you want: my day job is basically implementing systems that take SVGs produced in design tools, pipe them through to a customer UI for designing print books, posters, etc., then render high-res images server-side with headless Chrome and finally send them off to the printer. SVG does a great job, but it still requires a lot of coding for laying out the text, line breaks, pages, etc.
Modern CSS technologies such as grid are convenient for laying out webpages. But there can be no “high quality PDF renderer” from HTML/CSS. The result will always look as crude as a web page. In order to produce a high quality document, you need a sophisticated line breaking algorithm such as that deployed by TeX or some other software.
It is possible to use CSS to have justified text with hyphenated line breaks. Font technology like OpenType will manage ligatures etc. I feel like the basics of laying out glyphs are all there.

What doesn't feel so great is page filling, and knowing how to break sections so that one doesn't end up with a new section at the bottom of a page with only a few lines of text.
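A sketch of the CSS being referred to (browser and PDF-renderer support for the fragmentation properties varies):

  p {
    text-align: justify;
    hyphens: auto;          /* needs a lang attribute on the document */
    -webkit-hyphens: auto;  /* Safari has historically required the prefix */
    orphans: 3;             /* min lines left at the bottom of a page */
    widows: 3;              /* min lines carried to the top of a page */
  }
  h2 { break-after: avoid; }  /* keep headings with the text that follows */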
I use wkhtmltopdf without much issue. Sure, it has some limitations and is built around an older html rendering engine, but it works well for a free tool.I use it mainly via the wicked_pdf ruby gem and I'm sure there are wrappers available for other languages too.
A true CSS+HTML-powered print renderer is PrinceXML. https://www.princexml.com/samples/Having used it for over a decade, it's a wonderful tool.
That product's license tiers are a complete non-starter, unprofessional, and they "call for pricing" on the only potentially usable license (but don't provide a copy of the CSO License for review before calling).

> A Prince Server License can be installed on a single server to produce PDF documents for company-internal use, or documents that are made available to everyone free of charge.

They never define what "company-internal use" means, even in the actual terms [0]. This was clearly written by a non-lawyer, and any company worth its salt wouldn't touch this (the "FAQ" isn't a legally binding document, and even that is incredibly vague/unhelpful).

> You may not use Prince to generate documents that are part of a Commercial Service, for example invoices and monthly statements.

So "company-internal use" doesn't include order/remittance/invoice/receipt documents to your company's suppliers/vendors? That's just bizarre; the whole licensing is. The "Service License" feels like a honey trap.
> Using pixels (px) is tempting because they're simple to understand: declare font-size: 24px and the text will be 24px. But that won't provide a great user experience, particularly for users who resize content at the browser level or zoom into content.

Pixels (px) in CSS are relative as well, and scale with zoom. What is the benefit of em/rem?
> What is the benefit of em/rem?

They (em, specifically) scale with the element's font size, and if you set an element's font-size itself in em, that will be relative to the parent element's font size. For example:

  body { font-size: 18px }
  .something { border: 0.1em solid black; }
  .something > .inner { font-size: 1.5em }

If you change the font-size on body, everything on that page will re-scale seamlessly. This is most useful for things like buttons/icons that are supposed to flow properly with text. So now even if you use them once in big text and once in a small footnote, they'll still fit perfectly within the line.

The biggest limitation of this approach is that you can easily end up with computed values that are weird fractions of pixels, so you have to be a bit smart and pick an easily divisible number for your base font-size if you're aiming for something pixel-perfect.
The challenge with em units is that if you modify a parent element's size it changes the children as well. In your example, if you put 'font-size: 2.0em' on the .something class then the .inner would end up changing to 3.0em. That's rarely what you actually want to happen (especially in a web app). If you just want to control the overall font scaling of everything on the page using rem is preferable, because then they'd ignore the parent and scale based on the root element size.
The key is to use rem for font-size and em or unitless for all other properties: padding, border-width, margin, line-height. That way the size of the element is independent of the context.
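A sketch of that convention on a hypothetical component:

  .card {
    font-size: 1.125rem;  /* relative to the root, not the parent */
    padding: 1em;         /* scales with .card's own font size */
    border: 0.1em solid;
    margin-bottom: 1.5em;
    line-height: 1.4;     /* unitless: inherited as a ratio, not a length */
  }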
Ok, I can kinda see the value of that - e.g. if you put a "tag button" inside some text, it will have the right size in relation to that text.
There's no date on the article, and this was the entry that made me wonder if it's 5+ years old, back when text zoom was more prevalent. When full zoom became the default, em/rem became less necessary, and at least to me appear to be falling out of favor as extra unnecessary complexity.(Quick edit: Mentioned but not really given much focus is that em/rem can be used on, say, image width and margin, as well - making it adjustable in a text-zoom world. That's mainly what I have in mind here.)(Even aside from affecting non-text, in the newly popular component-oriented design, you typically don't want relative font sizing to leak between components.)
Yeah I came here to ask this. Using relative units for layout seems like a terrible idea. If you want to make your text a little bigger, your entire layout changes too. That's fine if it's something like text padding or paragraph margins, but stuff like border width, or dimensions of boxes? That seems weird.Browsers have been able to zoom pixel widths for years.
"Relative" and "Absolute" have their own definitions and by that definition "px" is indeed an absolute unit.
Not anymore, as px units are now based on a reference px: a size determined by screen density and viewing distance. That's why something rendered at 600px on an iPhone takes almost all the width, even though the screen has more than double that in pixels. And when zooming, the reference px is zoomed. So the old reason for avoiding px in favour of rem doesn't apply.
That’s useful.

Some time ago, I wrote up a series about CSS. It’s still quite relevant (but CSS has come quite a ways since then).

The thing I’ve found that is often misunderstood is specificity. It’s a fairly non-trivial concept: https://littlegreenviper.com/miscellany/stylist/introduction...
Cached URL since the server is apparently over capacity: https://webcache.googleusercontent.com/search?q=cache:jcSTVF...
Now on http://archive.is/AiECQ
Thanks, clearly many people want to have a look.
An element that has display: inline-block is inline outside, block inside.

That is, the element is inline for the containing block, but its children feel like they are in a block.

An inline-block element can be vertically aligned with respect to the baseline: it acts a bit like a character in a paragraph. That's why you can vertically center things using vertical-align: middle on an inline-block element.

At least, this is my intuition of inline-block; this comment is far from being normative.
To be even more pedantic: It is inline outside and flow-root inside. From MDN:> The element generates a block element box that establishes a new block formatting context, defining where the formatting root lies.
Ah, but then it does not work as a mnemonic anymore!

I'm kidding. Thanks, TIL that we now have display-inside and display-outside (and flow[-root]). This makes perfect sense. Finally, even, I'd say. Having a property that defines both the behaviors of an element outside and inside, without being able to define these behaviors separately, was both strange and limiting. I'm happy I even used the terms that have been chosen to speak about these notions in CSS by chance.

How do you track all these useful additions to CSS conveniently, though? I can't rely on random comments on HN to learn about them by chance (no offense intended to your valuable comment btw, being random is perfectly fine). I also can't possibly systematically learn about each and every addition either. Are there resources addressing this?
I recommend taking the State of CSS[1] survey; it is a nice way of keeping up with the latest features. Other than that, I use PostCSS with the postcss-preset-env[2] plugin. There you can enable various future features of the language and have it compile down to a more supported version. If you really want to dive into the future of CSS you can read the issue board for the CSSWG[3] (beware, it can suck hours out of your days). Smashing Magazine[4] and A List Apart[5] also sometimes have articles about new(ish) or upcoming features.

But the CSS community could really benefit from the spec maintainers having more active bloggers among them (e.g. like Rachel Andrew[6]) and providing a regular update of the language like 2ality does for JavaScript[7].
Thank you very much :-)
|
{}
|
Partial fraction decomposition
1. Mar 20, 2009
Nyasha
1. The problem statement, all variables and given/known data
Find the partial fraction decomposition of :
$$\frac{x^2}{(1-x^4)^2}$$
3. The attempt at a solution
$$\frac{x^2}{(1-x^4)^2}=\frac{A}{(1-x^4)}+\frac {B}{(1-x^4)^2}$$
$$x^2=A(1-x^4)+B$$
when x=1
$$1=A(1-1^4)+B$$
Hence B=1 and A=0
$$\frac{x^2}{(1-x^4)^2}=\frac{0}{(1-x^4)}+ \frac{1}{(1-x^4)^2}$$
How come, according to the answers at the back of the book, I am wrong? Where have I made a mistake?
2. Mar 20, 2009
jdougherty
When x = 1, you have
$1 = A(1-1^{4})+B = A(0)+B = B$
So B = 1 works, but what values of A satisfy that equation? What values don't? If there are multiple values A can have, then you need to determine which is the correct one.
3. Mar 20, 2009
Dick
Well, 0/(1-x^4)^2+1/(1-x^4)^2 clearly does not equal x^2/(1-x^4)^2, now does it? You should probably factor the denominator completely and then look up what partial fractions look like if you have a quadratic in the denominator.
Last edited: Mar 20, 2009
4. Mar 20, 2009
Nyasha
Isn't the denominator already in factored form? Should I expand it?
5. Mar 20, 2009
gabbagabbahey
The denominator is a reducible quartic...I'll give you a hint to factoring it: $(1-x^a)(1+x^a)=$____?
6. Mar 20, 2009
Nyasha
$$(1-x^4)(1-x^4)=(1-x^2)^2 (1-x^2)^2$$
7. Mar 20, 2009
gabbagabbahey
No, I meant $1-x^4$ is a reducible quartic....factor that
8. Mar 20, 2009
Nyasha
I am getting very confused. Show me an example which has nothing to do with this question; maybe then I will understand what you are trying to say.
9. Mar 20, 2009
gabbagabbahey
I'm not sure how much simpler I can make it....do you really not know how to factor $1-x^4$? It is something you should have been taught in high school....
10. Mar 21, 2009
Nyasha
"Reducible quartic" send me a little bit off track
$$1-x^4=(1-x^2)(1+x^2)=(1+x)(1-x)(1+x^2)$$
Will this mean l will be left with :
$$\frac{x^2}{(1-x^4)^2}=\frac{A}{(1+x)}+\frac {B}{(1-x)}+\frac{Cx+D}{1+x^2}$$
11. Mar 21, 2009
gabbagabbahey
Yes!
Careful:
$$\frac{x^2}{(1-x^4)^2}=\frac{x^2}{(1+x)^2(1-x)^2(1+x^2)^2}$$
12. Mar 21, 2009
Nyasha
Uhhmmm, which means
$$\frac{x^2}{(1-x^4)^2}=\frac {x^2}{(1+x)^2(1-x)^2(1+x^2)^2}=\frac{A}{(1+x)}+\frac{B}{(1-x)}+\frac{C}{1-x}+\frac{D}{1-x}+\frac{Ex+F}{1+x^2)}^2$$
13. Mar 21, 2009
gabbagabbahey
Your LaTeX is a little sloppy; surely you mean:
$$\frac{x^2}{(1-x^4)^2}=\frac {x^2}{(1+x)^2(1-x)^2(1+x^2)^2}=\frac{A}{(1+x)}+\frac{B}{(1+x)^2}+\frac{C}{(1-x)}+\frac{D}{(1-x)^2}+\frac{Ex+F}{(1+x^2)}+\frac{Gx+H}{(1+x^2)^2}$$
right?
14. Mar 21, 2009
Nyasha
Yes, that's what I mean. Is it correct?
15. Mar 21, 2009
gabbagabbahey
Yes, now determine the constants....
16. Mar 21, 2009
Nyasha
$$x^2=A(1-x)^2(1+x)+B(1-x)^2+C(1-x)(1+x)^2+D(1+x^2)^2+(Ex+F)(1-x)^2+(Gx+H)(1-x)^2$$
when x=1
$$1= A(1-1)^2(1+1)+ B(1-1)^2+C(1-1)(1+1)^2+D(1+1)^2+(Ex+F)(1-1)^2+(Gx+H)(1-1)^2$$
$$4D=1$$
$$D=\frac{1}{4}$$
when x=-1
$$1=A(1+1)^2(1-1)+B(1+1)^2+C(1+1)(1-1)^2+\frac{1}{4}(1+1)^2+(-E+F)(1+1)^2+(-G+H)(1+1)^2$$
when x=0
$$0=A+B+C+\frac{1}{4}+F+H$$
It's got so many unknowns and only three equations; how do I get around this last hurdle?
Last edited: Mar 21, 2009
17. Mar 21, 2009
djeitnstine
Compare coefficients.
18. Mar 21, 2009
gabbagabbahey
That doesn't look right at all, be careful with your multiplication!
19. Mar 21, 2009
gabbagabbahey
There is an easier way than solving the system of equation this method results in.
Try plugging in x=i and x=-i for starters
20. Mar 21, 2009
Nyasha
Does this look correct:
$$x^2=A(1+x)(1-x)^2(1+x^2)^2+B(1-x)^2(1+x^2)^2+C(1-x)(1+x^2)^2(1+x)^2+D(1+x^2)^2(1+x)^2+(Ex+F)(1+x)^2(1+x^2)(1-x)^2+(Gx+H)(1-x)^2(1+x)^2$$
Last edited: Mar 21, 2009
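A quick way to check any candidate decomposition is to compare it against a computer algebra system's full partial fraction expansion. Here is a minimal sketch in Python, assuming sympy is installed:

    # Symbolic check of the partial fraction setup, assuming sympy is available.
    from sympy import symbols, apart, together, simplify

    x = symbols('x')
    expr = x**2 / (1 - x**4)**2

    # apart() computes the full decomposition over the rationals, which should
    # match the A..H ansatz above once the constants are determined.
    decomp = apart(expr)
    print(decomp)

    # Sanity check: recombining the decomposition should give back the original.
    print(simplify(together(decomp) - expr))  # prints 0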
|
{}
|
path: root/interfaces.tex
Diffstat (limited to 'interfaces.tex')
-rw-r--r--  interfaces.tex  |  8
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/interfaces.tex b/interfaces.tex
index 41bf72d..512bde2 100644
--- a/interfaces.tex
+++ b/interfaces.tex
@@ -24,7 +24,7 @@ The DAB+ programme is encoded to \filename{prog2.dabp}. The extension
 other AAC encoded audio, it makes sense to use a special extension. The
 command is:
 \begin{lstlisting}
-dabplus-enc -i prog2.wav -b 88 -a o prog2.dabp
+dabplus-enc -i prog2.wav -b 88 -o prog2.dabp
 \end{lstlisting}
 These resulting files can then be used with ODR-DabMux to create an ETI
 file.
@@ -188,7 +188,7 @@ buffer is empty. Then the multiplexer will enter prebuffering, and wait again
 until the buffer is full enough. This will create an audible interruption,
 whose length corresponds to the prebuffering.
-Or the sound card clock is a bit slow, and the buffer will be filled up faster
+Or the sound card clock is a bit fast, and the buffer will be filled up faster
 than data is consumed by the multiplexer. At some point, the buffer will hit
 the maximum size, and one superframe will be discarded. This also creates an
 audible glitch.
@@ -204,7 +204,7 @@ insures that the encoder outputs the AAC bitstream at the nominal rate, aligned
 to the NTP-synchronised system time, and not to the sound card clock. The
 sound card clock error is compensated for inside the encoder.
-Complete examples of such a setup are given in the Scenarios.
+Complete examples of such a setup are given in the scenarios.
 \subsubsection{Authentication Support}
 In order to be able to use the Internet as contribution network, some form of
@@ -240,3 +240,5 @@ input but instead of using files, FIFOs, also called ``named pipes'' are
 created first using \texttt{mkfifo}. This setup is deprecated in favour of the
 ZeroMQ interface.
+
+% vim: spl=en spell tw=80 et
|
{}
|
# Publications / Banach Center Publications / All volumes
## A naive-topological study of the contiguity relations for hypergeometric functions
### Volume 69 / 2005
Banach Center Publications 69 (2005), 257-268 MSC: Primary 33C05. DOI: 10.4064/bc69-0-20
#### Abstract
When the parameters are real, the hypergeometric equation defines a Schwarz triangle. We study a combinatorial-topological property of the Schwarz triangle when the three angles are not necessarily less than $\pi$.
|
{}
|
# Learning Goals¶
This Notebook is designed to walk the user (you) through creating or altering the association (asn) file used by the Cosmic Origins Spectrograph (COS) pipeline to determine which data to process:
1. Examining an association file
2. Editing an existing association file
- 2.1. Removing an exposure
3. Creating an entirely new association file
- 3.1. Simplest method
# 0. Introduction¶
The Cosmic Origins Spectrograph (COS) is an ultraviolet spectrograph on-board the Hubble Space Telescope (HST) with capabilities in the near ultraviolet (NUV) and far ultraviolet (FUV).
This tutorial aims to prepare you to alter the association file used by the CalCOS pipeline. Association files are fits files containing a binary table extension, which lists science and calibration exposures which the pipeline will process together.
We'll demonstrate working with asn files in two ways: first, by editing an existing asn file to add or remove an exposure; second, by creating an entirely new asn file.
## We will import the following packages:¶
• numpy to handle array functions
• astropy.io fits and astropy.table Table for accessing FITS files
• glob, os, and shutil for working with system files
• glob helps us search for filenames
• os and shutil for moving files and deleting folders, respectively
• astroquery.mast Mast and Observations for finding and downloading data from the MAST archive
• datetime for updating fits headers with today's date
• pathlib Path for managing system paths
These python packages are installed standard with the STScI conda distribution. For more information, see our Notebook tutorial on setting up an environment.
In [1]:
# Import for: array manipulation
import numpy as np
# Import for: reading fits files
from astropy.io import fits
from astropy.table import Table
# Import for: system files
import glob
import os
import shutil
from astroquery.mast import Observations
# Import for: changing modification date in a fits header
import datetime
#Import for: working with system paths
from pathlib import Path
## We will also define a few directories we will need:¶
In [2]:
# These will be important directories for the Notebook
outputdir = Path('./output/')
plotsdir = Path('./output/plots/')
datadir = Path('./data/')  # base data directory (path assumed from the download output below)
# Make the directories if they don't already exist
datadir.mkdir(exist_ok=True), outputdir.mkdir(exist_ok=True), plotsdir.mkdir(exist_ok=True)
Out[2]:
(None, None, None)
## And we will need to download the data we wish to filter and analyze¶
We choose the exposures with the association obs_ids: ldif01010 and ldif02010 because we happen to know that some of the exposures in these groups failed, which gives us a real-world use case for editing an association file. Both ldif01010 and ldif02010 are far-ultraviolet (FUV) datasets on the quasi-stellar object (QSO) PDS 456.
In [3]:
pl = Observations.get_product_list(Observations.query_criteria(obs_id='ldif0*10')) # search for the correct obs_ids and get the product list
fpl = Observations.filter_products(pl,
                                   productSubGroupDescription=['RAWTAG_A', 'RAWTAG_B', 'ASN']) # filter to rawtag and asn files in the product list
Observations.download_products(fpl, download_dir=str(datadir)) # download the filtered products (lands under data/mastDownload/...)
for gfile in glob.glob("**/ldif*/*.fits", recursive=True): # Move all fits files in this set to the base data directory
    shutil.move(gfile, str(datadir / Path(gfile).name)) # destination assumed: the flat data directory searched below
Downloading URL https://mast.stsci.edu/api/v0.1/Download/file?uri=mast:HST/product/ldif01010_asn.fits to data/mastDownload/HST/ldif01010/ldif01010_asn.fits ... [Done]
# 1. Examining an association file¶
Above, we downloaded two association files and their rawtag data files. We will begin by searching for the association files and reading one of them (LDIF01010). We could just as easily pick ldif02010.
In [4]:
asnfiles = glob.glob("**/*ldif*asn*", recursive=True) # There will be two (ldif01010_asn.fits and ldif02010_asn.fits)
asnfile = asnfiles[0] # We want to work primarily with ldif01010_asn.fits
asn_contents = Table.read(asnfile) # Gets the contents of the asn file
asn_contents # Display these contents
Out[4]:
Table length=5
MEMNAME     MEMTYPE   MEMPRSNT
bytes14     bytes14   bool
LDIF02NMQ   EXP-FP    1
LDIF02NSQ   EXP-FP    1
LDIF02NUQ   EXP-FP    1
LDIF02NWQ   EXP-FP    1
LDIF02010   PROD-FP   1
We see that the association file has five rows: four exposures denoted with the MEMTYPE = EXP-FP, and a product with MEMTYPE = PROD-FP.
In the cell below, we examine a bit about each of the exposures as a diagnostic:
In [5]:
for memname, memtype in zip(asn_contents['MEMNAME'], asn_contents["MEMTYPE"]): # Cycles through each file in asn table
    memname = memname.lower() # Find file names in lower case letters
    if memtype == 'EXP-FP': # We only want to look at the exposure files
        rt_a = (glob.glob(f"**/*{memname}*rawtag_a*", recursive=True))[0] # Find the actual filepath of the memname for rawtag_a and rawtag_b
        rt_b = (glob.glob(f"**/*{memname}*rawtag_b*", recursive=True))[0]
        # Now print all these diagnostics (EXPTIME is in the extension-1 header;
        # CENWAVE and FPPOS are in the primary header):
        print(f"Association {(fits.getheader(rt_a))['ASN_ID']} has {memtype} exposure {memname.upper()} with \
exposure time {fits.getheader(rt_a, ext=1)['EXPTIME']} seconds at cenwave {fits.getheader(rt_a)['CENWAVE']} Å and FP-POS {fits.getheader(rt_a)['FPPOS']}.")
Association LDIF02010 has EXP-FP exposure LDIF02NMQ with exposure time 1653.0 seconds at cenwave 1280 Å and FP-POS 1.
Association LDIF02010 has EXP-FP exposure LDIF02NSQ with exposure time 1334.0 seconds at cenwave 1280 Å and FP-POS 2.
Association LDIF02010 has EXP-FP exposure LDIF02NUQ with exposure time 1334.0 seconds at cenwave 1280 Å and FP-POS 3.
Association LDIF02010 has EXP-FP exposure LDIF02NWQ with exposure time 2783.0 seconds at cenwave 1280 Å and FP-POS 4.
We notice that something seems amiss with exposure LDIF01TYQ: This file has an exposure time of 0.0 seconds - something has gone wrong. In this case, there was a guide star acquisition failure as described on the data preview page.
In the next section, we will correct this lack of data by replacing the bad exposure with an exposure from the other association group.
# 2. Editing an existing association file¶
## 2.1. Removing an exposure¶
We know that at least one of our exposures - ldif01tyq - is not suited for combination into the final product. It has an exposure time of 0.0 seconds, in this case from a guide star acquisition failure. This is a generalizable issue, as you may often know an exposure is "bad" for many reasons: perhaps it was taken with the shutter closed, or with anomalously high background noise, or for any number of other reasons we may wish to exclude an exposure from our data. To do this, we will need to alter our existing association file before we re-run CalCOS.
We again see the contents of our main association file below. Note that True/False and 1/0 are essentially interchangeable in the MEMPRSNT column.
In [6]:
Table.read(asnfiles[0])
Out[6]:
Table length=5
MEMNAME     MEMTYPE   MEMPRSNT
bytes14     bytes14   bool
LDIF02NMQ   EXP-FP    1
LDIF02NSQ   EXP-FP    1
LDIF02NUQ   EXP-FP    1
LDIF02NWQ   EXP-FP    1
LDIF02010   PROD-FP   1
We can set the MEMPRSNT value to False or 0 for our bad exposure:
In [7]:
with fits.open(asnfile, mode='update') as hdulist: # We need to change values with the asnfile opened and in 'update' mode
    tbdata = hdulist[1].data # This is where the table data is read into
    for expfile in tbdata: # Check if each file is one of the bad ones
        if expfile['MEMNAME'] in ['LDIF01TYQ']:
            expfile['MEMPRSNT'] = False # If so, set MEMPRSNT to False AKA 0
Table.read(asnfile) # Re-read the table to confirm the change (produces the output below)
Out[7]:
Table length=5
MEMNAME     MEMTYPE   MEMPRSNT
bytes14     bytes14   bool
LDIF02NMQ   EXP-FP    1
LDIF02NSQ   EXP-FP    1
LDIF02NUQ   EXP-FP    1
LDIF02NWQ   EXP-FP    1
LDIF02010   PROD-FP   1
We removed the failed exposure taken with FP-POS = 1. Usually we want to combine one of each of the four fixed-pattern noise positions (FP-POS), so let's add the FP-POS = 1 exposure from the other association group.
In the cell below, we determine which exposure from LDIF02010 was taken with FP-POS = 1.
• It does this by looping through the files listed in LDIF02010's association file, and then reading in that file's header to check if its FPPOS value equals 1.
• It also prints some diagnostic information about all of the exposure files.
In [8]:
asn_contents_2 = Table.read(asnfiles[1]) # Reads the contents of the 2nd asn file
for memname, memtype in zip(asn_contents_2['MEMNAME'], asn_contents_2["MEMTYPE"]): # Loops through each file in asn table for LDIF02010
    memname = memname.lower() # Convert file names to lower case letters, as in actual filenames
    if memtype == 'EXP-FP': # We only want to look at the exposure files
        rt_a = (glob.glob(f"**/*{memname}*rawtag_a*", recursive=True))[0] # Search for the actual filepath of the memname for rawtag_a
        rt_b = (glob.glob(f"**/*{memname}*rawtag_b*", recursive=True))[0] # Search for the actual filepath of the memname for rawtag_b
        fppos = fits.getheader(rt_a)['FPPOS'] # FP-POS of this exposure (FPPOS and CENWAVE live in the primary header)
        # Now print all these diagnostics (EXPTIME is in the extension-1 header):
        print(f"Association {(fits.getheader(rt_a))['ASN_ID']} has {memtype} exposure {memname.upper()} with \
exptime {fits.getheader(rt_a, ext=1)['EXPTIME']} seconds at cenwave {fits.getheader(rt_a)['CENWAVE']} Å and FP-POS {fppos}.")
        if fppos == 1: # Check for the FP-POS = 1 exposure we want
            print(f"^^^ The one above this has the FP-POS we are looking for ({memname.upper()})^^^\n")
            asn2_fppos1_name = memname.upper() # Save the right file basename in a variable
Association LDIF01010 has EXP-FP exposure LDIF01TYQ with exptime 0.0 seconds at cenwave 1280 Å and FP-POS 1.
^^^ The one above this has the FP-POS we are looking for (LDIF01TYQ)^^^
Association LDIF01010 has EXP-FP exposure LDIF01U0Q with exptime 1404.0 seconds at cenwave 1280 Å and FP-POS 2.
Association LDIF01010 has EXP-FP exposure LDIF01U2Q with exptime 1404.0 seconds at cenwave 1280 Å and FP-POS 3.
Association LDIF01010 has EXP-FP exposure LDIF01U4Q with exptime 2923.0 seconds at cenwave 1280 Å and FP-POS 4.
It's a slightly different procedure to add a new exposure to the list rather than remove one.
Here we want to read the table in the fits association file into an astropy Table. We can then add a row into the right spot, filling it with the new file's MEMNAME, MEMTYPE, and MEMPRSNT. Finally, we have to save this table into the existing fits association file.
In [9]:
asn_orig_table = Table.read(asnfile) # Read in original data from the file
asn_orig_table.insert_row(len(asn_orig_table) - 1, [asn2_fppos1_name, 'EXP-FP', 1]) # Add a row with the right name after all the original EXP-FP's
new_table = fits.BinTableHDU(asn_orig_table) # Turn this into a fits Binary Table HDU
with fits.open(asnfile, mode='update') as hdulist: # We need to change data with the asnfile opened and in 'update' mode
    hdulist[1].data = new_table.data # Change the orig file's data to the new table data we made
Now, we can see there is a new row with our exposure from the other asn file group: LDIF02NMQ.
In [10]:
Table.read(asnfile)
Out[10]:
Table length=6
MEMNAME     MEMTYPE   MEMPRSNT
bytes14     bytes14   bool
LDIF02NMQ   EXP-FP    1
LDIF02NSQ   EXP-FP    1
LDIF02NUQ   EXP-FP    1
LDIF02NWQ   EXP-FP    1
LDIF01TYQ   EXP-FP    1
LDIF02010   PROD-FP   1
Excellent. In the next section we will create a new association file from scratch.
# 3. Creating an entirely new association file¶
For the sake of demonstration, we will generate a new association file with four exposure members: even-numbered FP-POS (2,4) from the first original association (LDIF01010), and odd-numbered FP-POS (1,3) from the second original association (LDIF02010).
From section 2, we see that this corresponds to :
Original asn   Name        FP-POS
LDIF02010      LDIF02NMQ   1
LDIF01010      LDIF01U0Q   2
LDIF02010      LDIF02NUQ   3
LDIF01010      LDIF01U4Q   4
## 3.1. Simplest method¶
Below, we manually build up an association file from the three necessary columns:
1. MEMNAME
2. MEMTYPE
3. MEMPRSNT
In [11]:
# Adding the exposure file details to the association table
new_asn_memnames = ['LDIF02NMQ','LDIF01U0Q','LDIF02NUQ','LDIF01U4Q'] # MEMNAME
types = ['EXP-FP', 'EXP-FP', 'EXP-FP', 'EXP-FP'] # MEMTYPE
included = [True, True, True, True] # MEMPRSNT
# Adding the ASN details to the end of the association table
new_asn_memnames.append('ldifcombo'.upper()) # MEMNAME column
types.append('PROD-FP') # MEMTYPE column
included.append(True) # MEMPRSNT column
# Putting together the fits table
# 40 is the number of characters allowed in this field with the MEMNAME format = 40A.
# If your rootname is longer than 40, you will need to increase this
c1 = fits.Column(name='MEMNAME', array=np.array(new_asn_memnames), format='40A')
c2 = fits.Column(name='MEMTYPE', array=np.array(types), format='14A')
c3 = fits.Column(name='MEMPRSNT', format='L', array=included)
asn_table = fits.BinTableHDU.from_columns([c1, c2, c3])
# Writing the fits table
asn_table.writeto(outputdir / 'ldifcombo_asn.fits', overwrite=True)
print('Saved: '+ 'ldifcombo_asn.fits'+ f" in the output directory: {outputdir}")
Saved: ldifcombo_asn.fits in the output directory: output
Examining the file we have created:
We see that the data looks correct - exactly the table we want!
In [12]:
Table.read(outputdir / 'ldifcombo_asn.fits')
Out[12]:
Table length=5
MEMNAME     MEMTYPE   MEMPRSNT
bytes40     bytes14   bool
LDIF02NMQ   EXP-FP    True
LDIF01U0Q   EXP-FP    True
LDIF02NUQ   EXP-FP    True
LDIF01U4Q   EXP-FP    True
LDIFCOMBO   PROD-FP   True
However, the 0th and 1st fits headers no longer contain useful information about the data:
In [13]:
fits.getheader(outputdir / 'ldifcombo_asn.fits', ext=0)
Out[13]:
SIMPLE = T / conforms to FITS standard
BITPIX = 8 / array data type
NAXIS = 0 / number of array dimensions
EXTEND = T
In [14]:
fits.getheader(outputdir / 'ldifcombo_asn.fits', ext=1)
Out[14]:
XTENSION= 'BINTABLE' / binary table extension
BITPIX = 8 / array data type
NAXIS = 2 / number of array dimensions
NAXIS1 = 55 / length of dimension 1
NAXIS2 = 5 / length of dimension 2
PCOUNT = 0 / number of group parameters
GCOUNT = 1 / number of groups
TFIELDS = 3 / number of table fields
TTYPE1 = 'MEMNAME '
TFORM1 = '40A '
TTYPE2 = 'MEMTYPE '
TFORM2 = '14A '
TTYPE3 = 'MEMPRSNT'
TFORM3 = 'L '
We can instead build up a new file with our old file's fits header, and alter it to reflect our changes.
We first build a new association file, a piecewise combination of our original file's headers and our new table:
In [15]:
with fits.open(asnfile, mode='readonly') as hdulist: # Open up the old asn file
    hdulist.info() # Shows the first hdu is empty except for the header we want
    hdu0 = hdulist[0] # We want to directly copy over the old 0th header/data-unit AKA "hdu":
    # essentially a section of data and its associated metadata, called a "header"
    # see https://fits.gsfc.nasa.gov/fits_primer.html for info on fits structures
    d0 = hdulist[0].data # gather the data from the header/data unit to allow the readout
    h1 = hdulist[1].header # gather the header from the 1st header/data unit to copy to our new file
hdu1 = fits.BinTableHDU.from_columns([c1, c2, c3], header=h1) # Put together new 1st hdu from old header and new data
new_HDUlist = fits.HDUList([hdu0, hdu1]) # New HDUList from old HDU 0 and new combined HDU 1
new_HDUlist.writeto(outputdir / 'ldifcombo_2_asn.fits', overwrite=True) # Write this out to a new file
new_asnfile = outputdir / 'ldifcombo_2_asn.fits' # Path to this new file
print('\nSaved: ' + 'ldifcombo_2_asn.fits' + f" in the output directory: {outputdir}")
Filename: data/ldif02010_asn.fits
No. Name Ver Type Cards Dimensions Format
0 PRIMARY 1 PrimaryHDU 43 ()
1 ASN 1 BinTableHDU 24 6R x 3C [14A, 14A, L]
Saved: ldifcombo_2_asn.fits in the output directory: output
Now we edit the relevant values in our fits headers that are different from the original.
Note: It is possible that a generic fits file may have different values you may wish to change. It is highly recommended to examine your fits headers.
In [16]:
date = datetime.date.today() # Find today's date
# Below, make a dict of what header values we want to change, corresponding to [new value, extension the value lives in, 2nd extension if applies]
keys_to_change = {'DATE':[f'{date.year}-{date.month}-{date.day}',0], 'FILENAME':['ldifcombo_2_asn.fits',0],
'ROOTNAME':['ldifcombo_2',0,1], 'ASN_ID':['ldifcombo_2',0], 'ASN_TAB':['ldifcombo_2_asn.fits',0], 'ASN_PROD':['False',0],
'EXTVER':[2,1], 'EXPNAME':['ldifcombo_2',1]}
# Actually change the values below (verbosely):
for keyval in keys_to_change.items():
    print(f"Editing {keyval[0]} in Extension {keyval[1][1]}")
    fits.setval(filename=new_asnfile, keyword=keyval[0], value=keyval[1][0], ext=keyval[1][1])
    # Below is necessary as some keys are repeated in both headers ('ROOTNAME')
    if len(keyval[1]) > 2:
        print(f"Editing {keyval[0]} in Extension {keyval[1][2]}")
        fits.setval(filename=new_asnfile, keyword=keyval[0], value=keyval[1][0], ext=keyval[1][2])
Editing DATE in Extension 0
Editing FILENAME in Extension 0
Editing ROOTNAME in Extension 0
Editing ROOTNAME in Extension 1
Editing ASN_ID in Extension 0
Editing ASN_TAB in Extension 0
Editing ASN_PROD in Extension 0
Editing EXTVER in Extension 1
Editing EXPNAME in Extension 1
And now we have created our new association file. The file is now ready to be used in the CalCOS pipeline!
If you're interested in testing your file by running it through the CalCOS pipeline, you may wish to run the test_asn.py file included in this subdirectory of the GitHub repository. i.e. from the command line:
\$ python test_asn.py
Note that you must first...
1. Have created the file by running this Notebook
2. Alter line 21 of test_asn.py to set the lref directory to wherever you have your cache of CRDS reference files (see our Setup Notebook).
If the test runs successfully, it will create a plot in the subdirectory ./output/plots/ .
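For reference, the lref setting that test_asn.py needs usually looks something like the following. This is a minimal sketch, and the cache path below is a placeholder rather than a value taken from this repository:

    # Point CalCOS at your local CRDS reference files via the 'lref' variable.
    # The path below is a placeholder - substitute your own crds_cache location.
    import os
    os.environ['lref'] = '/path/to/crds_cache/references/hst/cos/'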
## Congratulations! You finished this Notebook!¶
There are more COS data walkthrough Notebooks on different topics. You can find them here.
If you use astropy, matplotlib, astroquery, or numpy for published research, please cite the authors. Follow these links for more information about citations:
|
{}
|
#### Volume 6, issue 1 (2006)
A rational splitting of a based mapping space
### Katsuhiko Kuribayashi and Toshihiro Yamaguchi
Algebraic & Geometric Topology 6 (2006) 309–327
arXiv: 0902.4876
##### Abstract
Let $\mathcal{F}_*(X,Y)$ be the space of base-point-preserving maps from a connected finite CW complex $X$ to a connected space $Y$. Consider a CW complex of the form $X \cup_\alpha e^{k+1}$ and a space $Y$ whose connectivity exceeds the dimension of the adjunction space. Using a Quillen–Sullivan mixed type model for a based mapping space, we prove that, if the bracket length of the attaching map $\alpha : S^k \to X$ is greater than the Whitehead length $\mathrm{WL}(Y)$ of $Y$, then $\mathcal{F}_*(X \cup_\alpha e^{k+1}, Y)$ has the rational homotopy type of the product space $\mathcal{F}_*(X,Y) \times \Omega^{k+1} Y$. This result yields that if the bracket lengths of all the attaching maps constructing a finite CW complex $X$ are greater than $\mathrm{WL}(Y)$ and the connectivity of $Y$ is greater than or equal to $\dim X$, then the mapping space $\mathcal{F}_*(X,Y)$ can be decomposed rationally as the product of iterated loop spaces.
##### Keywords
mapping space, $d_1$–depth, bracket length, Whitehead length
Primary: 55P62
Secondary: 54C35
|
{}
|
Journal article Open Access
# Stakeholders' Satisfaction from a New Management Information System to Confront the Phenomenon of Student Absences
### JSON-LD (schema.org) Export
{
"description": "<p><em>This paper proposes an Educational Management Information System (EMIS) that organizes all the administrative procedures with respect to monitoring school absences so that the related products have the same legal effect as the traditional procedures. The proposed EMIS covers the obligation of the Administration for complete, immediate and accurate information addressed to parents, reduces the operation cost of updating parents, removes workload from teachers, produces related documents with accuracy, provides the Administration with proper tools able to monitor student attendance and the tasks that the involved personnel has to accomplish. Based on the above-mentioned EMIS, we measure the satisfaction of all stakeholders, i.e., parents, teachers and executives using three types of questionnaires. Our results indicate that the proposed EMIS (a) satisfies the Administration's obligation for keeping the parents informed with thorough, immediate and accurate information, (b) reduces the operational cost of parental updates, and (c) reduces the workload of teachers. Future work is also proposed.</em></p>",
"creator": [
{
"affiliation": "Public Vocational Training Institute, Greece",
"@type": "Person",
}
],
"headline": "Stakeholders' Satisfaction from a New Management Information System to Confront the Phenomenon of Student Absences",
"datePublished": "2018-04-27",
"url": "https://zenodo.org/record/3598402",
"keywords": [
"school",
"secondary education",
"student absences",
"attendance monitoring",
],
"@context": "https://schema.org/",
"identifier": "https://doi.org/10.5281/zenodo.3598402",
"@id": "https://doi.org/10.5281/zenodo.3598402",
"@type": "ScholarlyArticle",
"name": "Stakeholders' Satisfaction from a New Management Information System to Confront the Phenomenon of Student Absences"
}
|
{}
|
# new users : use latex?
I often edit and comment new users' questions with
Use latex (click on edit to see what I typed)
I wonder if it is possible to detect when a new user is posting a question without using LaTeX, and show them a message about using LaTeX, clicking on edit, and simple examples $\scriptstyle\sum, \int,\prod_{n=1}^\infty, \sin^2(x^a), \sqrt{2}, \implies \ldots$
• I will just point out that a message which contains a link to Mathjax tutorial is already shown when a user posts their first question. See here: Show “how to ask” advice before a new user asks a question and An auto-comment on first questions that motivates sharing doubts and thoughts. – Martin Sleziak Jun 2 '17 at 22:43
• I think your post-specific improvements are likely to be more convincing evidence to a new user of the benefit of learning to use $\LaTeX$ than any generic auto-post. – hardmath Jun 5 '17 at 17:31
• It is unfortunate that MathJax is getting called LaTeX. If someone masters MathJax (which is what is used here; LaTeX is not) and then mistakenly thinks they have learned LaTeX, they will experience an unpleasant shock if they encounter actual LaTeX and find out they don't know it. – Michael Hardy Jun 8 '17 at 17:01
|
{}
|
# Terra Preta Data bases, Web Sites, Mail List and Blogs
33 replies to this topic
### #1 erich
erich
Understanding
• Members
• 484 posts
Posted 27 February 2007 - 01:42 PM
Terra Preta Web Site at: Terra Preta | Intentional use of charcoal in soil
It has been immensely gratifying to see all the major players join the mail list: Cornell folks, T. Beer of Kingsford Charcoal, M-Roots guys (fungus), chemical engineers, Dr. Danny Day of G.I.T., Dr. Antel of U. of H., several Virginia Tech folks, and many others whose backgrounds I don't know. Registration has averaged one or two per day over the last few weeks.
Welcome to the Website for the Terra Preta Discussion List: terrapreta@bioenergylists.org
This new addition ("Terra Preta") to the suite of bioenergy lists is going to strive to be the primary world web location for technical discussions on a new possible important use for biomass (that is described below). We have chosen the term "Terra Preta" (hereafter TP to save typing) as most of your search hits on these words will return valuable information. The term fits well-enough with the five other biomass-oriented discussion lists (bioenergy, stoves, gasification, bioconversion, anaerobic digestion) – all of which have the world-wide audience that we hope the Portuguese words for “Black Earth” will also connote. However, the topics to be discussed here are also known as "biochar" and "agrichar". Still other names will certainly appear and perhaps cause us to rename this site -which we are starting this day so that there is a convenient single site for dialog of the type found on the other "bioenergylists".
I have agreed to be the primary Terra Preta list coordinator at the request of Tom Miles and Ron Larson. Tom has been in this biomass discussion-list business since 1994 (mostly paying for everything out of his own pocket – Thanks Tom). I feel comfortable taking on this task because of the quality of Tom’s work and because the quality of the many list discussions have been apparent to me for many years. Ron and I have communicated a bit on Terra Preta topics for several months. I also agreed to take on this task because of words of encouragement that came out of Ron’s six-seven years as the first Coordinator of the successful “Stoves” list for which Tom has recently been the primary Coordinator. A recent TP paper by Ron from “Solar Today” magazine has been up on the “gasification” website for a few months –and now is as well on the TP web site. Ron reports that the TP topic was under discussion on “stoves” about 4 years ago – because of Ron’s own continuing interest in charcoal-making. One (two?) of my own recent internet letters is also up now on our own TP site– courtesy of Ron and Tom. The third Coordinator is >>>>>>
What does “TP” denote? By this term, I mean the intentional placement of charcoal in soil. Surprisingly, it is now becoming apparent that doubling and tripling of soil productivity can result. Surprising also that TP’s invention and proof of productivity improvements dates back several thousand years in Brazil. We hope that soil scientists around the world will contribute to this list with soil answers needed by those others of us interested in a very different aspect of TP soils. That different aspect is that the sequestered charcoal is taking CO2 out of the atmosphere – apparently at a lower cost than any other means of doing so. Hopefully, a large percentage of submissions to this new list will concentrate on TP’s climate benefits (and costs).
Lastly, on this list especially we expect to see a lot on production of charcoal – the third main aspect of TP needing your input. We expect some reading this to be skeptical that a fuel that worldwide is more popular than wood should be dumped in the ground at a time when we are hoping biomass can slowly replace our dwindling fossil fuels. Making this case, or disproving it, is the purpose of this discussion list. Let us hear from you. My job, which Tom and Ron say is easy, is to keep us on topic. They have committed to supply me a whole slew of TP questions that they feel are not yet well enough answered. Let us all hear your questions – and answers.
Erich Knight, Terra Preta List Coordinator
Ron Larson, Associate Coordinator
Tom Miles, Bioenergylists Administrator tmiles at trmiles.com
Erich J. Knight
### #2 erich
erich
Understanding
• Members
• 484 posts
Posted 27 February 2007 - 09:14 PM
The best TP blog , where , I think, I first saw the Glomalin discovery;
transect points: Hypography Science Forum Upgrades Terra Preta Discussion
### #3 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 28 February 2007 - 02:03 AM
Apart from hypography the next best TP discussion on the web
View topic - charcoal agriculture - Biochar - Amazonian Dark Earth - Permaculture discussion forum
The Permaculture site is a practical gardening/farming site specialising in useful plants. Very good if you want to learn about growing things, buy a chook, propagate something, get a rural job, etc.
### #4 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 01 March 2007 - 06:55 AM
The politics of Terra preta
Muck and Mystery: Slow Train
I have seen groupthink in action; not a pretty sight. Some say it was the reason for the CIA Bay of Pigs invasion under Kennedy.
### #5 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 03 March 2007 - 09:36 AM
Some Q & A on The Horizons TV programme on TP
BBC - Science & Nature - Horizon - The Secret of El Dorado
The transcript
BBC - Science & Nature - Horizon - The Secret of El Dorado
a programme summary
BBC - Science & Nature - Horizon - The Secret of El Dorado
### #6 erich
erich
Understanding
• Members
• 484 posts
Posted 04 March 2007 - 11:49 AM
Hi All,
The most comprehensive Soil science data base of peer reviewed papers concerning soil Charcoal, by Dr. Danny Day :
_Welcome To Carbon Negative_ (Welcome To Carbon Negative)
Dr. Day added this post to the TP mail list after I posted about his data base;
"For clarification: this site is restricted under fair use quidelines and no
transfer of ownership or abuse of copyright materials will be allowed. (view
only). It is limited, password protected and free for registered
non-commercial use. Data cannot be referenced.
We really want to say thanks for work shared by registered users and those
who contribute tirelessly and generously to the advance and understanding of
this work.
Danny"
Erich J. Knight
Shenandoah Gardens
E-mail: shengar at aol.com
(540) 289-9750
### #7 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 24 April 2007 - 03:02 AM
Here is a set of recent abstracts related to charcoal in soil, which you may, or may not have seen
http://www.soil.ncsu...er_J/BlackC.doc
### #8 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 05 May 2007 - 11:55 AM
Kelpie Wilson's article on IAI plus discussion
The CO2 sings 'Bury me, buuuu-reee me, bury me, across the world' | Gristmill: The environmental news blog | Grist
### #9 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 08 May 2007 - 12:17 AM
Saving The Planet While Saving The Farm, How soil carbonization could save the planet while it saves the family farm | Terra Preta
If you are seriously interested in TP you should look at the archives at
Saving The Planet While Saving The Farm, How soil carbonization could save the planet while it saves the family farm | Terra Preta
### #10 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 08 May 2007 - 04:27 AM
Wiley InterScience: Journal: Abstract
user account | Terra Preta
Long term effects of manure, charcoal, mineral fertilization on crop production ... Science, and Technology of Charcoal Production · Biocarbons (Charcoal) ...
Can anyone else?
Please check the IAI reports on that new thread for the latest news.
### #11 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 10 May 2007 - 06:29 PM
Good Blog
The reason TP has elicited such interest on the agricultural/horticultural side of its benefits is this one statistic:
One gram of charcoal cooked to 650 °C has a surface area of about 400 m² (for soil microbes & fungus to live on). Now for conversion fun:
One tonne of charcoal therefore has a surface area of about 4×10⁸ m², roughly 99,000 acres or about 154 square miles. Rockingham Co. VA., where I live, is only 851 sq miles.
Now at a middle-of-the-road application rate of 2 lbs/sq ft (which equals 1000 sq ft/ton), or about 43 tons/acre, that yields roughly 6,700 sq miles of charcoal surface area per treated acre. VA is 39,594 sq miles.
What this suggests to me is the potential to sequester virgin-forest amounts of carbon in the soil alone, without counting the forest on top.
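For what it's worth, here is a quick back-of-the-envelope check of those figures in Python; a minimal sketch assuming a metric tonne, the quoted 400 m²/g, and the 2 lb/sq ft application rate:

    # Back-of-the-envelope check of the charcoal surface-area figures above.
    # Assumes a metric tonne and the quoted 400 m^2 of surface per gram.
    M2_PER_ACRE = 4046.86
    ACRES_PER_SQMI = 640

    area_m2_per_tonne = 1_000_000 * 400              # grams per tonne * m^2 per gram
    acres_per_tonne = area_m2_per_tonne / M2_PER_ACRE
    sqmi_per_tonne = acres_per_tonne / ACRES_PER_SQMI

    tons_per_acre = 43.56                            # 2 lb/sq ft over a 43,560 sq ft acre
    sqmi_per_acre = tons_per_acre * sqmi_per_tonne

    print(f"{acres_per_tonne:,.0f} acres (~{sqmi_per_tonne:.0f} sq mi) of surface per tonne")
    print(f"~{sqmi_per_acre:,.0f} sq mi of char surface per treated acre")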
DesMoinesRegister.com Blogs » Blog Archive » Amount of biomass & carbon cycle response
Another good blog
In the Bolivian Amazon region, the land is a savannah, interspersed with “islands” of forest. When these forests were investigated, it became clear that they were places where people had lived in large numbers, for the soil was bursting with shards of pots, including huge vats that had been used to cook meals for hundreds of people. These archaeological sites were hundreds, even thousands, of years old. Moreover, there were stripes connecting them that could be seen from the air. These had been roads in ancient times.
Next the scientists explored the inland regions of the Brazilian Amazon, where they found large areas where the soil was remarkably different from the usual yellow dirt. As much as ten percent of the land is actually rich, dark soil called “terra preta.” As the photo shows, this soil is often two feet deep, and occasionally even two meters. It is full of pottery shards dating back possibly even 9,000 years, plus food scraps and other plants that had been used as mulch. This rich soil had been created intentionally by the inhabitants, and it remained rich throughout the whole period since then.
. . .
There’s an even more astonishing discovery, too. In the areas where the owners are mining the ancient terra preta soil and selling to their neighbors, the old terra preta regenerates itself! A farmer digs into the soil but leaves 20 cm of it, which he allows to rest for about 20 years, with new vegetation falling on it. At the end of this time, the dark soil is the same as it was before the mining took place. Apparently there are some kinds of micro-organisms in the soil that allow the soil to grow.
Metta Spencer's weblog: Hooray for the Ancient Amazon Farmers
If I am duplicating posts, let me know; I am being a bit swamped with info of late.
yet another blog
The difference between terra preta and ordinary soils is immense. A hectare of meter-deep terra preta can contain 250 tonnes of carbon, as opposed to 100 tonnes in unimproved soils from similar parent material, according to Bruno Glaser, of the University of Bayreuth, Germany. To understand what this means, the difference in the carbon between these soils matches all of the vegetation on top of them. Furthermore, there is no clear limit to just how much biochar can be added to the soil.
Claims for biochar's capacity to capture carbon sound almost audacious. Johannes Lehmann, soil scientist and author of Amazonian Dark Earths: Origin, Properties, Management, believes that a strategy combining biochar with biofuels could ultimately offset 9.5 billion tons of carbon per year, an amount equal to the total current fossil fuel emissions!
WorldChanging: Tools, Models and Ideas for Building a Bright Green Future: Terra Preta: Black is the New Green
### #12 Philip Small
Philip Small
Thinking
• Members
• 98 posts
Posted 11 May 2007 - 12:37 PM
Excerpted from Climate Feedback (blog): Solutions in the Soil
... various reports on the conference on biochar/agrichar/terra preta nova/what-you-will that just ended down in Australia. If you're not up to speed on this, the general idea is that people could help solve a great many problems by enriching soils with reduced carbon in charcoal-like form. This gets rid of the carbon for a long time (charcoal is very refractory) and improves the soil in various not yet fully understood ways. ... There's what seems to be a thriving discussion board on the subject at Hypography. And we have an article on the subject in Nature this week ...
...The conference was opened by Tim "Weather Maker" Flannery, which is a pretty big name for a new field to manage to attract, I'd have thought.
...One interesting aspect is the idea of tying this issue to the issue of crappy stoves that drive indoor air pollution and waste a lot of energy.
... calculations for carbon sequestration by photosynthesis suggest that converting all US cropland to Conservation Reserve Programs — in which farmers are paid to plant their land with native grasses — or to no-tillage would sequester 3.6% of US emissions per year during the first few decades after conversion; that is, just a third of what one of the above biochar approaches can theoretically achieve. ...
added: The author of the post is Oliver Morton, Chief News and Features Editor at Nature. The post is mirrored at his blog on Heliophage.
Heliophage ..." is loosely associated with my book Eating the Sun: How Plants Power the Planet, which will be published in July 2007 by 4th Estate and HarperCollins. It tracks events associated with the book, news that might be of interest to people interested in the subject matter of the book."...
Considering how well terra preta fits into the heliophagic theme, Oliver Morton will have more to say on the subject.
### #13 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 25 May 2007 - 03:51 PM
Long Live the List!
By Erin
There are 3 main renewable energy discussion lists and others that cover a variety of topics: Biomass Cooking Stoves for developing areas of the world, Biomass Gasifiers (mainly for heat and power) and Terra preta -using charcoal to ...
Code Knitter - Code Knitter
Terra Preta Postings
By arclein(arclein)
This is a list of posts dealing with terra preta in particular and is meant to help you navigate through the development of my thinking. There are other posts apropos to the subject, but this should get you through it.
Global Warming - Global Warming
Linking corn culture and pine beetles
By arclein(arclein)
In our earlier posts, we have extensively developed the thesis that the adoption of terra preta corn culture globally will not only sequester all the excess carbon but also manufacture high quality soil in a previously unanticipated ...
Global Warming - Global Warming
Arclein on global warming and renewables
By arclein
... have a free distribution to get the audience created. A principal theme in my Blog has been the coming terra preta revolution in sustainable agriculture. If you are not familiar with this check titles in my blog for corn and biochar.
SustainabilityForum.Com - Your... - SustainabilityForum.Com - Your Global Sustainability Community!
Excited geology
By Oliver
It's interesting stuff which I point you to in part because how microbes do their stuff is something it's important to understand, in part because this sort of thinking has relevance to the Terra Preta stuff I was extolling a while back ...
Heliophage - Heliophage
NSCSS.org :: View topic - Soil concept named top green idea in 2006
Terra Preta - The Black Earth I've saved the best for last. Terra preta is new to Western science, but it is an old technology from the Amazon that ...
Terra Preta soils - can the NT reach this level of improved soil ...
By Peter H(Peter H)
Amazonian Dark Earth, or "terra preta do indio", has mystified science for the last hundred years. Three times richer in nitrogen and phosphorous, and twenty times the carbon of normal soils, terra preta is the legacy of ancient ...
AboveCapricorn - AboveCapricorn
Rural Network
Alfred Harris is a structural biologist with research and commercial interests in biocarbons (charcoal) and their ability to reduce fertiliser requirements ...
Clean Tech: EcoGeek Karl Schroeder on Investments in Environment ...
By The Green Skeptic(The Green Skeptic)
Karl Schroeder: Agrichar is a modern version of "Terra Preta" which was used centuries ago in the Amazon basin to allow the nutrient-poor soils there to produce lavish crops. It's basically a burn-and-bury process that sequesters carbon ...
The Green Skeptic™ - The Green Skeptic™
Sequestering Carbon in YOUR Soil - Are There In It For Me?
Sequestering carbon in soil is not a new concept. It happens naturally, but can it be enhanced on farm and can it actually make some dollars for me, on my farm?
The NSW experience is worth examining in some detail, as a similar system could be useful in the tropics as well. There are a number of links with details on the scheme
The Carbon Farmers - Features - The Lab - Australian Broadcasting Corporation's Gateway to Science
Trees versus crops. Carbon levels in forest soils are usually much higher than those under agriculture. Pic: Brian Murphy
the following is from
The Carbon Farmers - Features - The Lab - Australian Broadcasting Corporation's Gateway to Science
How soil loses carbon
Professor Alex McBratney from the University of Sydney has been studying soil carbon decline in the Namoi Valley, north western NSW.
The soils in this area have taken a beating, due largely to intensive cotton farming over the past 30 years. Once pastureland, the conversion to cotton crops has seen soil carbon levels decline from 1.5 to 0.8 per cent, he says. How does this happen? In healthy soils, carbon exists as long, sticky string-like molecules.
These strings twist around individual soil particles and literally bind them together. Soil micro-organisms tend not to bother consuming these large, unpalatable molecules, preferring fresh or rotted plant matter – the stems, roots and other plant parts which over time become incorporated into the soil.
But if the soil loses this plant content (because the stubble is burnt or removed), the soil microorganisms have no choice but to make a meal of the carbon molecules. Once the carbon is gone, the structure of the soil breaks down making it difficult to retain water and nutrients.
### #14 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 21 October 2007 - 12:53 AM
Friday Hope Blogging
By Phila(Phila)
While still under the radar of most policymakers, gasification and terra preta are starting to appear on the scene. In the US this year, Senator Ken Salazar (D-CO) is promoting legislation that would give subsidies of up to \$10000 for ...
Bouphonia - Bouphonia
mechabolic on worldchanging.com
By jim
jer faludi, a budding gasification and terra preta geek, wrote a great summary article on why gasification and terra preta are newly interesting. it was on the worldchanging front page earlier this week, ...
Tribe.net: Burning Man - Burning Man - tribe.net
### #15 Michaelangelica
Michaelangelica
Creating
• Members
• 7,797 posts
Posted 24 October 2007 - 03:01 AM
This is an interesting article, not so much for the article itself but for the questions asked about it:
WorldChanging: Tools, Models and Ideas for Building a Bright Green Future: A Carbon-Negative Fuel
and
Why is the "nontechnological" capture of atmospheric carbon through photosynthesis, and the biological process (which is fastest in temperate perennial grasslands) of formation of stable soil organic matter (humus, glomalin, etc. etc.) so invisible to us?
Various strands of modern alternative agriculture (e.g. holistic planned grazing, Keyline systems, pasture cropping, permaculture, also including organic farming and no-till farming) have been shown to be able to build soil organic matter rapidly while maintaining production--not from external inputs, just from enhancing the biological processes on and in the soil. Soil organic matter is 58% carbon. All it would take to run atmospheric CO2 down below 300 ppm is a net average increase of 1.6% in the organic matter of the world's crop and pastureland soils. Many alternative ag people do this in a year or two in favorable environments with favorable soils.
Non-use is not what is required to do this, but intensive management, working WITH rather than against the basic eco processes of water cycling, mineral cycling, solar energy flow, and community dynamics. Most of our industrial or "robbery" agriculture works AGAINST these processes, making soil our #1 export, far surpassing even empty shipping containers. And releasing huge quantities of CO2 to the atmosphere as the soil organic matter is oxidized through tillage, ammonia fertilizer, and exposure of soil.
Soil carbon is the perfect opportunity to fix climate change. The carbon we capture from the air doesn't need to be tilled in or spread--the plants do it for us. And it's not a hazardous waste disposal problem, like it is for the carbon capture schemes.
Taking carbon from the atmosphere takes ENERGY. It's combustion in reverse. Technology can't do that. Why do we forget that photosynthesis is the reverse of combustion/respiration?
Some hypotheses for why we ignore the photosynthetic/soil carbon opportunity:
1. There are no pipes, valves, stainless steel tanks, or gauges involved. (We love technology.)
2. We feel good when we add things (e.g. biochar) to the soil, when we do work to achieve a result, even if it may not be necessary.
3. Biological processes such as the decay of shed grass roots into humus aren't sufficiently technological or visible to interest us.
4. When we think of photosynthesis, our attention is captured by the large, obvious plants such as trees (which are hokum as a carbon sink, because they rot or burn too quickly).
Terra preta is worth pursuing, perhaps particularly for tropical soils which metabolize organic matter quickly. But why can't we see the direct photosynthetic soil carbon opportunity?
Posted by: Peter Donovan on October 16, 2007 10:57 AM
(Modern pyrolysis techniques/technology make it possible - MA)
What's great about this is that putting biochar in the ground is permanent - that Terra Preta carbon was put there centuries ago. On the other hand, if you just use organic material, you are just churning atmospheric carbon, not permanently sequestering it. When organic matter decomposes, it produces gaseous carbon - methane and CO2, which is held in the soil by inertia mostly. When somebody tills the soil it just 'burps' back out. But if it's in the form of activated charcoal, it stays put. That is a net reduction in atmospheric carbon.
Posted by: Clark on October 16, 2007 12:47 PM
Here's a video of Chicken John Rinaldi's gasifier-powered pickup truck at Burning Man:
Chicken John For Mayor - INNOVATIONS
Chicken John is a friend of Jim Mason's and a co-conspirator in the "power generation as art" movement. He's running for Mayor of San Francisco this November, and part of his platform is to build a large gasifier to generate electricity for the MUNI light rail system.
Posted by: Jeremy on October 16, 2007 1:15 PM
"Terra preta is worth pursuing, perhaps particularly for tropical soils which metabolize organic matter quickly. But why can't we see the direct photosynthetic soil carbon opportunity?"
Peter, can I add that it could be of use as a by product of weed control. We have major problems with ground fuel loads in summer, specifically from Broom (Cytisus scoparius) and Gorse (Ulex europeaus). Currently these are sprayed, but only where they are near towns.
Being able to take organic matter which is not currently being utilised, i.e. weeds, and turn it into a soil ameliorant, which can then be used to build up our nutrient-poor soils down here in Australia, "sounds" great.
However, I totally agree with your comments regarding soil conservation through responsible land management. No till farming, appropriate grazing rotation, and basic biological farming techniques are an effective way to build up soils, and increase soil carbon content in the process.
As well, my dream of taking woody weeds and turning them into agrichar might run into some costing problems at the moment.
Posted by: Luke Bunyip on October 16, 2007 1:20 PM
The Ashden award winners you mention utilize anaerobic digestion to convert waste food and feces into gas. The bacteria in the digesters consume what little energy is left in the feces, or the loads of energy in the waste food, and exhale (exhaust? whatever bacteria do...) methane. It is not a partial combustion process like the gasifiers discussed throughout the majority of this article.
While the results of the bill could be admirable, the Colorado Senator is also pandering to those constituents who work for and contract from the National Renewable Energy Laboratory, in Golden, Colorado.
Posted by: Paul on October 16, 2007 1:22 PM
By the way, this just in:
The Gasification Technologies Council is having their annual conference in San Francisco, today and tomorrow. If you're interested in this stuff and live around the bay area, pop on over to it!
Posted by: Jeremy Faludi on October 16, 2007 1:32 PM
Just a note: Peter Donovan's numbers are *way*, *way* off.
To remove 100 ppm CO2 from the atmosphere, it would require a build-up of ~200 Gt-C (billions of metric tons of carbon) in the soil. Right now, it is estimated that there are about 1200 billion tons in all of the soils in the terrestrial biosphere, not just the agricultural lands. So this is a 16-17% increase in the soil carbon levels of all of the ecosystems, not just agricultural lands. If you had to do this on existing croplands (about 15 million square kilometers at last count), it would require about a 100% increase in soil organic matter on all of the planet's farmlands.
While I like the spirit of his comments, the numbers are just wrong.
Posted by: anonymous on October 16, 2007 6:05 PM
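A quick back-of-envelope check of the conversion behind these figures, using standard atmospheric constants (a rough sketch; published values vary slightly):

# Rough check: how many Gt of carbon correspond to 1 ppm of atmospheric CO2?
M_ATMOSPHERE = 5.15e18  # kg, total mass of the atmosphere
M_AIR = 0.02897         # kg/mol, mean molar mass of air
M_CARBON = 0.012        # kg/mol, molar mass of carbon
mol_air = M_ATMOSPHERE / M_AIR                   # ~1.78e20 mol of air
gt_c_per_ppm = mol_air * 1e-6 * M_CARBON / 1e12  # kg -> Gt
print(gt_c_per_ppm)  # ~2.13 Gt-C per ppm, so 100 ppm is ~213 Gt-C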
Anonymous: Let me clarify. By a net increase of 1.6% I mean from say .5% to 2.1% organic matter in the top foot (more than a doubling in this case). Or to take another example, from 4% organic matter to 5.6% organic matter, or from zero to 1.6%. This would, averaged on the world's cropland and pastureland soils (figures from the World Resources Institute) take atmospheric ppm down by about ppm. There can be no doubt that this would require a transformation of most existing agriculture.
Before 1830, it is estimated that many tallgrass prairie soils in the midwest contained 5-10% organic matter. Now, after years of corn and soybeans, quite a few are in the .5% range.
Many forms of soil organic matter are exceptionally stable. Rattan Lal of Ohio State estimates that the average residence time of carbon is 35 years in trees and 100+ years in soil organic matter, barring sudden destruction by inversion tillage, which resembles fire plus earthquake for the underground bacterial and fungal communities.
When you consider the benefits of turning atmospheric carbon into soil organic matter--which alleviates drought and flooding, chelates heavy metals, salt, and other contaminants, improves water quality, food quality, all using abundant free solar energy--it's quite a deal.
For details on the numbers, see the papers of Dr. Rattan Lal, and Allan Yeomans's PRIORITY ONE (Priority One: Together We Can Beat Global Warming).
Posted by: Peter Donovan on October 16, 2007 6:47 PM
Sounds nice, but there's a problem somewhere; I suspect it's that the author hasn't included the carbon cost of running the gasifier.
Basically agrichar / terra preta is a very stable form of carbon created from a less stable form, vege waste. The reason some chemicals are more stable than others is that the bond energies/mol are greater in the more stable compounds, i.e. there is a net energy cost to producing agrichar from organic waste. This energy will have to be supplied by conventional means.
Agrichar may well be a good way of locking up atmospheric carbon but it can't have a net liberation of energy associated with it as well.
For anyone who's interested in testing this argument, try thinking about entropy.
Posted by: Steve on October 17, 2007 12:50 AM
Steve, you have your thermodynamics backwards. If a compound goes from less stable to more stable, you get energy out; in this case, from ligno-cellulose to char. Bond energy is the energy released when the bond is made, not what is required to form the bond. Certainly outside energy is required to begin the charring process, but overall (taking into account the energy gained from the methane/hydrogen released) it should not cost energy to produce the char.
Your comment on entropy is irrelevant. While the entropy of the universe is increasing, that of a system does not need to; this requires a flux of energy through the system: high quality energy in (light), low quality out (heat). It is perfectly reasonable to use some of the energy stored in plant waste to produce biochar and store some of the rest as fuel.
Wiser science gurus correct me if I am wrong.
Cheers
Posted by: Andrew on October 17, 2007 7:18 AM
"I can't promise that using gasification for energy and using the resulting char as terra preta fertilizer will be a carbon negative fuel, because I haven't seen a credible lifecycle analysis of it. (If anyone has, please post it to the comments.)"
Dr. Johannes Lehmann at Cornell University can probably get you the information you're looking for. Terra preta is a specialty of his.
Everything I've read cites 20-50% sequestration of carbon over and above the CO2 that is produced by pyrolysis and the energy it takes to pyrolize the biomass.
The actual amount depends upon the biomass used and the temperature at which it is pyrolyzed. It is also confirmed to stay in soils and not break down for hundreds to thousands of years (depending upon the climate)-- longer than compost, or naturally accrued soil biomass. Of course, you can't be transporting the stuff around over large distances without "offsetting" the amount of CO2 that is actually sequestered. It would have to be both produced and used locally for maximum effect.
Again, if you want specific life cycle info, Dr. Lehmann can probably help. Here's his webpage at Cornell:
Biochar home
I also wholeheartedly agree that we need to look at building soil carbon naturally and organically. I see both this and Terra Preta as tools in the toolbox that should both be used.
Posted by: Ed on October 17, 2007 2:44 PM
Peter,
Yes, but that's a HUGE relative increase in soil carbon. Prairie soils might be able to go that high, but not all of the world's agricultural land. There's absolutely no way.
I agree with your overall point, but as a scientist who works in this field, I get a little irked when people don't present their numbers carefully or clearly.
Posted by: anonymous on October 17, 2007 6:19 PM
This is pseudo-science of the worst order. Global warming is being accelerated by the amount of carbon dioxide gas in our atmosphere, and carbon sequestration is a popular short-hand phrase for carbon dioxide sequestration. Burying charcoal in surface soil does not remove CO2 from our environment, CO2 is released when charcoal is made.
Basic high school physics - the principal products of combustion are CO2 and water vapor. There is no free lunch in physics, you don't get to have your cake and eat it too.
Woodgas is very interesting technology, but it is definitely a niche item. Brazil is already way ahead of the rest of the world with ethanol usage, which does not release fossilized CO2 that has already been sequestered.
Posted by: Sean McLaughlin on October 18, 2007 5:59 AM
Charcoal is carbon. It comes from a renewable resource (biomass), that pulled CO2 from the atmosphere.
The charcoal is stable in soil for thousands of years. The NET result of adding charcoal to soil is increasing soil carbon. The carbon that is being added to soil, came from the atmosphere. Seems simple.
Plus, added carbon in soil has been shown to increase actual biomass and production capability of the soil itself- thus increasing the rate of biomass production, creating a positive feedback loop.
I don't know if it is good way to get energy- but it is certainly the best way I have heard to improve agriculture, and reduce CO2 in the atmosphere. That's good enough for me.
Posted by: Tim on October 18, 2007 8:00 AM
Sustainable farms and gardens already exist that do not use terra preta. Since the Carthusian monasteries of the European Middle Ages, herb and subsistence growers have known how to achieve fertility using a mix of dynamic accumulators such as sorrel, dock, plantain, dandelion, stinging nettle, chicory, chamomile, astilbe, and comfrey, along with seasonally rotating cover crops such as clover, orchard grass (coltsfoot), rye and buckwheat, and biomass producers such as turnips and oilseed radish. Has anyone demonstrated on actual farms that adding terra preta carbon increases humus production or yield over what dynamic accumulators, cover crops, and biomass crops are already known to produce? If not, why such wild enthusiasm for the untried and the unproven?
Theoretical farming has a long way to go to equal the results of theoretical physics. I would like to see terra preta farms objectively evaluated in practice alongside sustainable farms based on other principles (e.g., the natural farms of Masanobu Fukuoka and Kawaguchi in Japan or the Agroforestry Trust farm of Martin Crawford in Dover, England) rather than just assuming that terra preta farms will achieve excellence in sustainability because they sound "right." Many wonderful-sounding theories bite the dust when applied in the real world. Is the carbon product that comes from the pyrolizers really better than well-cured compost? Are there any commercial products for sale that farmers and gardeners can try for comparison to see if humus and soil fertility increase?
Bob Monie
New Orleans, La
Posted by: Robert Monie on October 18, 2007 8:08 AM
Sean McLaughlin -- the point of this is that the products of combustion are different than the products of pyrolysis. OK, so I just tried to write it all out in here and it was super ugly, so you might want to look here for the reactions involved in pyrolysis. The point, though, is that it releases gas much more useful than CO2 and also leaves the "char" or oxidized carbon as a solid to be used as a soil fertilizer.
Posted by: octopod on October 18, 2007 8:50 AM
I'm struggling here a bit. Say that I've got a few acres of land, and that the case you present convinces me enough to go for it. The acres are about half woods, so there's biomass by the cubic meter - downed trees, leftovers from the last corn harvest, grass clippings, leaves... There are also a few acres of production fields, and a decent sized garden, so lots of places to play... But what do I actually *do*? Do I build a gasifier? How? Or is a gasifier (a little?) different from a burner that makes charcoal?
What we need here is a "biochar for dummies" (or at least for people who've never built anything out of sheet metal). How do people like me, with plenty of good will and opportunity, but very little knowledge or experience, actually apply this?
Posted by: Scott Deerwester on October 18, 2007 12:38 PM
Scott: to make char, you need to burn/smolder the biomass in an oxygen poor environment. If you google "how to make charcoal" you should find some sites that will give you various ideas how to do it.
I think there may be some you tube videos on the subject too.
Sean: this isn't pseudo-science. Biomass pyrolysis sequesters more atmospheric carbon than the pyrolysis process produces, including energy inputs. Plants grow, and they remove CO2 from the atmosphere. Pyrolysis locks in a significant portion of that carbon in the form of char. The carbon stays in the soil hundreds to thousands of years longer than it does by composting; plus, it reduces needed fertilizer inputs in agricultural soils, which reduces carbon emissions from fossil fuels used to make the fertilizer and apply it.
Posted by: Ed on October 18, 2007 5:08 PM
Here's a YouTube video by a guy who built a pyrolysing stove out of 5-gallon paint cans and tin food cans:
YouTube - Hybrid Stove Making Charcoal
There are other videos on the subject as well.
Posted by: Ed on October 18, 2007 5:17 PM
While it is possible, this is where "opportunity costs" start kicking in.
First off, let's say that biomass gets an average of 3-6% solar efficiency.
http://greyfalcon.net/sugarsolar
But why not go with the theoretical limit (i.e. virtually impossible) of 11%.
Then you run that through a Fischer-Tropsch gasification process, leaving you with only 32% of that energy left.
And then you distribute that fuel, leaving you with only 88% of that energy left.
And then you run that fuel inside a conventional gasoline engine at 20%, but for kicks, why not use a diesel engine at 40%.
So,
11% * .32 * .88 * .4 = 1.24%
So we're looking at a maximum limit of around 1.24% solar energy conversion into torque, with a more realistic range of 0.2-0.4%.
Kinda crappy don't you think?
_
Especially when you compare it to, say, a 50% efficient Luz2-style solar concentrator.
LUZ II - TECHNOLOGY - LUZ II DPT - LUZ II DPT
With 85% energy kept after distribution.
And 90% of the energy turned to torque by an electric engine with regenerative braking.
50% * .85 * .9 = 38.25%
_
Now ask yourself, is photosynthesis really up to the task of providing the energy?
Then ask yourself again, could it be done better, cheaper, and faster without biomass.
Frankly, oil is a biofuel.
It just had millions of years to accumulate.
Almost all of which was from wild algae in the oceans.
Trying to make that all back in real time with terrestrial crops is just asking for failure.
Not to mention that by keeping biofuels alive, we have to deal with all the dramatic emissions increases caused by current biofuels.
http://greyfalcon.net/n2ostudy.png
http://greyfalcon.net/palmoil
The risks and costs heavily outweigh the benefits of biofuels.
Certainly research could change that, but considering we have 20x more federal resources pegged to biofuels research than to solar, it's just disgusting.
Frankly, all non-R&D subsidies for biofuels should be scrapped and put towards more realistic solutions.
Posted by: David Ahlport on October 18, 2007 10:50 PM
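The arithmetic in the comment above is just a product of per-stage efficiencies; here is a minimal sketch of the same calculation (the stage values are the commenter's assumptions, not measured data):

# Multiply per-stage efficiencies to get an end-to-end solar-to-torque figure.
def chain(*stages):
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

biomass = chain(0.11, 0.32, 0.88, 0.40)  # photosynthesis limit, F-T, distribution, diesel engine
solar = chain(0.50, 0.85, 0.90)          # concentrator, distribution, electric drivetrain
print(f"biomass path: {biomass:.2%}")    # -> 1.24%
print(f"solar path: {solar:.2%}")        # -> 38.25%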
For people interested in finding out more about biochar, who is working on it and where, I direct you to the website of the International Biochar Initiative This organization began in July 2006 and has been instrumental in serving as a platform for the international exchange of information and activities in support of biochar research, development, demonstration and commercialization.
Posted by: Ellen Baum on October 19, 2007 4:40 AM
David: you can't forget a couple of important points:
1. that biochar added to agricultural soils reduces the need for fertilizer inputs. Since it stays in the soil so long, this is a huge amount of money-- and energy savings-- over the long run.
You're looking at only one portion of the process instead of looking at the potential for optimizing all the components: biofuel; improving agricultural productivity, which will reduce fossil fuel use; restoration of depleted soils, which will, when restored, begin removing carbon from the atmosphere; and the heat that is generated from the gasification process, which would have industrial uses like combined heat and power. Then there is the value of soils restored with char for cleaning up surface water.
2. We need to remove carbon from the atmosphere as fast as possible to address climate change. It has been argued that it's not unrealistic to have enough pyrolysis plants worldwide to remove 9.5 billion tons of carbon from the atmosphere annually-- more than is now emitted. Combine this with efficiency plus other renewables (wind, solar, tidal, etc) and we could make significant dents in the carbon concentration of the atmosphere within the lifetimes of today's young children.
There has been a lot of work done already on the energy returns, etc. Since you're into the numbers end, it may be helpful for you to dig through the literature on the subject. I haven't gotten into the technical end myself, but the work is there.
Posted by: Ed on October 19, 2007 12:44 PM
Posted by: Duane on October 21, 2007 7:03 AM
Hi,
photosynthesis releases 2.66 grams of oxygen for every gram of carbon fixed from the atmosphere.
When you pyrolize or partially combust (gasify) with oxygen-starved air input, you get producer gas plus char.
So for every gram of carbon in the char you have 2.66 grams of oxygen in the atmosphere (for us to breathe).
And terra preta is perfectly suited to combination with all those other sound farming systems like keylines, alley cropping, intercropping, agroforestry, Zai holes and lots more. Use every trick in the book and be happy ever after.
My own small experiments have shown that it is highly advantageous to prepare sugar water (molasses will do) and soak the char prior to digging into the soil. Soil life just loves this.
Here is one way of making char from rice husk plus cooking gas.
diazotrophicus
Continuous-Flow Rice Husk Gasifier for Small-Scale Thermal Applications (19kW) | BioEnergy Lists: Biomass Cooking Stoves
Posted by: diazotrophicus on October 21, 2007 12:12 PM
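The 2.66 figure follows directly from the stoichiometry of photosynthesis: for every mole of carbon fixed (12 g), one mole of O2 (32 g) is released:
$$\mathrm{CO_2 + H_2O \xrightarrow{h\nu} [CH_2O] + O_2}, \qquad \frac{m_{\mathrm{O_2}}}{m_{\mathrm{C}}} = \frac{32}{12} \approx 2.66$$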
Please join in the discussion here
http://hypography.co...erra-preta.html
Posted by: Michael Angel on October 24, 2007 12:56 AM
### #16 Michaelangelica
Posted 24 October 2007 - 07:26 AM
News
AGMARDT launches soil initiative
18/10/2007
Trustees agreed at their 6 June meeting to create a small working group tasked with stimulating engagement and debate on the importance of soil to our society and its potential role in mitigating climate change.
The group will initially focus on the mechanisms and practices that would enable New Zealand to grow more soil than it uses post 2012.
Storage of carbon in the soil is seen as a key contributor to achieving carbon neutrality and, amongst other topics, the potential of bio-char will be examined.
### #17 Michaelangelica
Posted 04 November 2007 - 04:30 AM
MY LIFE IS TOO SHORT
It is fascinating that blogs are at the forefront of new ideas, rather than media, government, universities, etc.
A Brave New World.
Sept 2008 Biochar conference in Newcastle UK
By Dave Lankshear(Dave Lankshear)
The ability to improve soil quality has been most dramatically demonstrated in the Terra Preta ("dark earth") soils of South America, where fertile islands of char-containing soils, dating back thousands of years, are found throughout ...
Eclipse-chat
A Carbon-Negative Fuel"Impossible!" you say. "Even wind and solar ...
By Zalmoxis(Zalmoxis)
But listen to people working on gasification and terra preta, and you'll have something new to think about. A Carbon-Negative Fuel. Free e-book on biofuels and water: "I know! Let's use all our food AND drinking water to make SUV fuel! ...
Biox Process and biodiesel
Amazonian Dark Earth, or terra preta do indio, has mystified science for the last hundred years. Three times richer in nitrogen and phosphorous, and twenty times the carbon of normal soils, terra preta is the legacy of ancient ...
In which I ramble while medicated.
By Eaho Laula(Eaho Laula)
Okay, skeptics, get cracking: I read this article on gasification/terra preta today, and either I'm too muddle-headed to see what's wrong with it, or this chap's come up with something really, honest-to-Pete viable as a cheap alternative ...
...a nameless country populated by transparent badgers...
Terra Preta for Carbon Reduction
By Philip Proefrock
The terra preta has a high level of nutrients, with three times the nitrogen and phosphorus and twenty times the carbon of normal soils. But producing fertilizer is not even the most interesting part of agrichar. ...
Green Options - http://greenoptions.com/feed
A Carbon-Negative Fuel
By saltyveruca
"Impossible!" you say. "Even wind and solar have carbon emissions from their manufacturing, and biofuels are carbon neutral at best. How can a fuel be carbon negative?" But listen to people working on gasification and terra preta, ...
Terra preta: a fuel that could be also carbon negative?
By Xavier Navarro
Terra preta is a very interesting type of soil that you can find in the Amazon, and is supposedly manmade. Although it's unknown how it was made before the Europeans arrived, there's a modern method to obtain it: burn biomass so it's ...
AutoblogGreen
Negative Carbon Output
Because terra preta locks so much carbon in the soil, it's also a form of carbon sequestration that doesn't involve bizarre heroics like pumping CO2 down old mine shafts. What's more, it may reduce other greenhouse gases as well as ...
Things Are Good: good news
Carbon-negative fuel?
By Cory Doctorow
We've mentioned terra preta before: it's a human-made soil or fertilizer. "Three times richer in nitrogen and phosphorous, and twenty times the carbon of normal soils, terra preta is the legacy of ancient Amazonians who predate Western ...
Boing Boing
Count of tillers in Charcoal treated and control Paddy Fields
Count of tillers in "Sri" Paddy - Variety Sona Masuri - Farmer P. Narasimha Reddy, Kothur Village, Midjil Mandal, Mahabubnagar District, Andhra Pradesh, India Samples - Random 10 nos. Avg Max Min Alkaline soil treated with charcoal 43 ...
ALKALINE SOILS - TERRA PRETA
A Carbon-Negative Fuel
"Impossible!" you say. "Even wind and solar have carbon emissions from their manufacturing, and biofuels are carbon neutral at best. How can a fuel be carbon negative?" But listen to people working on gasification and terra preta, ...
Digg / upcoming - Digg / All News & Videos
Renewables - Oct 16
Statt, Energy Bulletin. A carbon-negative fuel (gasification and terra preta) High hopes for renewable power from Earth's depths Lovins: Global warming and peak oil are irrelevant (efficiency)
EnergyBulletin.net Latest News - EnergyBulletin.net | Peak Oil News Clearinghouse
--
Michael the Archangel
"You can fix all the world's problems in a garden. . . .
Most people don't know that"
|
{}
|
# Difference between revisions of "The Solver"
## Project Overview
The robot should be able to take any scrambled Rubik's cube and solve it without any input from outside systems; that is, all computation must happen on the microcontroller itself. It will do this by first using a camera to identify the colors on each side of the cube, then plugging that input into a solving algorithm. Finally, using the solution generated, it will manipulate the cube using four grippers around the cube, each of which can rotate either the nearest face or the entire cube.
## Team Members
• Oscar Arias
• Jordan Aronson
• Alex Herriott
• Deko Ricketts (TA)
## Objectives
To build this, we need these things:
• 1. Build a robot which includes a Raspberry Pi along with motors that can rotate parts of the cube horizontally and vertically.
• 2. Create code to take the set of instructions and give them to the robot which will execute the necessary moves to solve the cube.
• 3. Create code to detect the colors on a cube on each of its sides
• Backlight the camera
• 4. Convert an algorithm to solve a cube into Python and produce a set of instructions based on its given colors
• 5. Connect the Raspberry Pi to the motors using circuitry
## Challenges
Challenges that we predict:
##### Mechanical
• Designing grippers able to grasp and rotate the cube. It's unlikely we will be able to grip the cube with a bare PLA surface, so we need some sort of foam surface to provide traction. We also need to make sure that when the cube is released by the gripper it is able to rotate freely.
• Ensuring the grippers rotate exactly 90 degrees so the cube can rotate cleanly and subsequent moves will be able to be performed. Given we are using stepper motors, ensuring that the position of the steppers is zeroed before we start performing moves is important.
• Trying to get the individual moves to take as little time as possible. The steppers that we are considering are relatively high torque and low power.
• Designing a convenient way for the cube to be inserted into the device and exit the device. We could possibly use some kind of platform on a screw that is able to raise and lower the cube out of the device.
• Designing circuitry to connect the Pi (the Pi can't deliver enough power or pins to drive all the necessary servos and steppers). Right now the servos will need one pin each (4 pins total) and the steppers need 4 pins each (16 pins total), for a total of 20 pins, while the Pi has 17 GPIOs. We need a different method for driving the steppers than the ULN2003 drivers that come with them, or additional circuitry to drive all the inputs. For now we will try to drive the ULN2003 boards with two 8-bit shift registers, which will use 4 pins total; with the servos, that totals 8 pins (see the sketch after this list). However, we are then limited in step speed, which may mean we have to invest in additional hardware, like the Adafruit motor HAT.
• Finding a way to power both the Pi and the actuators from a wall adapter.
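A minimal sketch of the shift-register idea (pin numbers and the half-step sequence here are placeholders to be adjusted for the final wiring; RPi.GPIO is assumed as the GPIO library):

import time
import RPi.GPIO as GPIO

DATA, CLOCK, LATCH = 17, 27, 22  # BCM pin numbers (placeholders)
# Standard 8-phase half-stepping sequence for a 4-coil stepper via ULN2003.
HALF_STEPS = [0b0001, 0b0011, 0b0010, 0b0110, 0b0100, 0b1100, 0b1000, 0b1001]

GPIO.setmode(GPIO.BCM)
for pin in (DATA, CLOCK, LATCH):
    GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

def shift_out(byte):
    """Clock one byte, MSB first, into the 74HC595, then latch its outputs."""
    for i in range(7, -1, -1):
        GPIO.output(DATA, (byte >> i) & 1)
        GPIO.output(CLOCK, 1)
        GPIO.output(CLOCK, 0)
    GPIO.output(LATCH, 1)
    GPIO.output(LATCH, 0)

def step(n, delay=0.002):
    """Advance one motor n half-steps; negative n reverses direction."""
    seq = HALF_STEPS if n >= 0 else HALF_STEPS[::-1]
    for k in range(abs(n)):
        shift_out(seq[k % 8])
        time.sleep(delay)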
##### CS
• Finding a suitable open source algorithm that we can adapt to our robot to solve the cube.
• Making sure the camera can distinguish the color patterns on each side of the cube and store each face's pattern to memory. We may or may not be able to detect the colors with the grippers in the way; that will require some experimentation (see the color-classification sketch after this list).
• Design code that can take that algorithm and translate it into what the robot can do. (The robot's current design can only act on 4 faces at any given time; to access the other two, the cube must be rotated.) We must make sure we know the position and orientation of the cube at all times so we know where to go for our next action.
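A rough sketch of the color-detection step (OpenCV assumed; the HSV hue ranges are placeholder values that will need tuning under the backlight):

import cv2

# Approximate hue bounds for the five chromatic facelet colors (to be tuned).
HUE_RANGES = {"red": (0, 8), "orange": (9, 22), "yellow": (23, 38),
              "green": (39, 85), "blue": (86, 130)}

def classify_facelet(bgr_patch):
    """Classify one facelet patch by its mean hue; white is low saturation."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    h, s, v = hsv.reshape(-1, 3).mean(axis=0)
    if s < 60 and v > 150:
        return "white"
    for name, (lo, hi) in HUE_RANGES.items():
        if lo <= h <= hi:
            return name
    return "unknown"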
## Budget
• Raspberry Pi — from lab - This is the brains of the operation. It both computes the algorithm to solve the cube and sends commands to the servos.
• Camera (Arducam) — $14.99 - This camera should be able to recognize the colors on the cube to put them into the algorithm.
• Gripper rotation servos — $27.99 - These servos rotate the grippers, turning the faces on the cube.
• Servo driver hat — $23.53 - This is our servo motor controller, capable of taking input from the Pi and turning it into servo rotation.
• Servo motors — $11.98 - These mini servos are the actuators for the individual grippers.
• 1" #5 machine screws - used for assembling the build.
• Lab power supply
• Total: $78.49 + $0 shipping (purchased through Amazon Prime)
## Design and Solutions
### Mechanical
• In our current design we have 4 end effectors on the cube that can rotate one side of the cube. These can therefore turn 4 sides of the cube from any position. The remaining two sides can be accessed by disengaging two of the grippers and rotating the entire cube to a new position.
• The grippers that grip the cube will be operated by a mini servo mounted into the base of the gripper and then those grippers will be mounted to stepper motors. By using stepper motors, we are able to get continuous rotation on the grippers and accurate positioning.
• 2/28/16: Design with parts dimensioned and adjusted now that the parts have arrived. Once the grippers and mounts are assembled we should be able to correctly dimension the edges to hold the mounts together and design a simple camera rig.
Modified StepperMount to hold servos instead of steppers. Changed mountings for microservos. Added measured dimensions from parts.
Enlarged screw mounts and servo hole. Added camera mount.
|
{}
|
# Problem 2.1
## Insertion sort on small arrays in merge sort
Although merge sort runs in $\Theta(n\lg{n})$ worst-case time and insertion sort runs in $\Theta(n^2)$ worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to coarsen the leaves of the recursion by using insertion sort within merge sort when subproblems become sufficiently small. Consider a modification to merge sort in which $n/k$ sublists of length $k$ are sorted using insertion sort and then merged using the standard merging mechanism, where $k$ is a value to be determined.
1. Show that insertion sort can sort the $n/k$ sublists, each of length $k$, in $\Theta(nk)$ worst-case time.
2. Show how to merge the sublists in $\Theta(n\lg(n/k))$ worst-case time.
3. Given that the modified algorithm runs in $\Theta(nk + n\lg(n/k))$ worst-case time, what is the largest value of $k$ as a function of $n$ for which the modified algorithm has the same running time as standard merge sort, in terms of $\Theta$-notation?
4. How should we choose $k$ in practice?
### 1. Sorting sublists
This is simple enough. We know that sorting each list takes $ak^2 + bk + c$ for some constants $a$, $b$ and $c$. We have $n/k$ of those, thus:
$$\frac{n}{k}(ak^2 + bk + c) = ank + bn + \frac{cn}{k} = \Theta(nk)$$
### 2. Merging sublists
This is a bit trickier. Merging $a$ sorted sublists of length $k$ each takes:
$$T(a) = \begin{cases} 0 & \text{if } a = 1, \\ 2T(a/2) + ak & \text{if } a = 2^p \text{ for } p > 0. \end{cases}$$
This makes sense, since merging one sublist is trivial and merging $a$ sublists means dividing them into two groups of $a/2$ lists, merging each group recursively, and then combining the results in $ak$ steps, since we have two arrays, each of length $\frac{a}{2}k$.
I don't know the master theorem yet, but it seems to me that the solution to this recurrence is $T(a) = ak\lg{a}$. Let's try to prove this via induction:
Base. Simple as ever:
$$T(1) = 1k\lg1 = k \cdot 0 = 0$$
Step. We assume that $T(a) = ak\lg{a}$ and we calculate $T(2a)$:
\begin{align} T(2a) &= 2T(a) + 2ak = 2(T(a) + ak) = 2(ak\lg{a} + ak) = \\ &= 2ak(\lg{a} + 1) = 2ak(\lg{a} + \lg{2}) = 2ak\lg(2a) \end{align}
This proves it. Now if we substitute the number of sublists $n/k$ for $a$:
$$T(n/k) = \frac{n}{k}k\lg{\frac{n}{k}} = n\lg(n/k)$$
While this is exact only when $n/k$ is a power of 2, it tells us that the overall time complexity of the merge is $\Theta(n\lg(n/k))$.
### 3. The largest value of k
The largest value is $k = \lg{n}$. If we substitute, we get:
$$\Theta(n\lg{n} + n\lg{\frac{n}{\lg{n}}}) = \Theta(n\lg{n})$$
If $k = f(n) > \lg{n}$, the complexity will be $\Theta(nf(n))$, which is asymptotically larger than merge sort's running time.
### 4. The value of k in practice
It's constant factors, so we just figure out when insertion sort beats merge sort, exactly as we did in exercise 1.2.2, and pick that number for $k$.
### Runtime comparison
I implemented this in C and in Python. I added selection sort for completeness' sake in the C version. I ran two variants, depending on whether merge() allocates its arrays on the stack or on the heap (stack won't work for huge arrays). Here are the results:
STACK ALLOCATION
================
merge-sort = 0.173352
mixed-insertion = 0.150485
mixed-selection = 0.165806
HEAP ALLOCATION
===============
merge-sort = 1.731111
mixed-insertion = 0.903480
mixed-selection = 1.017437
Here are the results I got from Python:
merge-sort = 2.6207s
mixed-sort = 1.4959s
I can safely conclude that this approach is faster.
### C runner output
merge-sort = 0.153748
merge-insertion = 0.064804
merge-selection = 0.069240
### Python runner output
merge-sort = 0.1067s
mixed-sort = 0.0561s
### C code
#include <stdlib.h>
#include <string.h>
#define INSERTION_SORT_THRESHOLD 20
#define SELECTION_SORT_THRESHOLD 15
/* Merge the sorted subarrays A[p..q] and A[q+1..r] in place. */
void merge(int A[], int p, int q, int r) {
int i, j, k;
int n1 = q - p + 1;
int n2 = r - q;
#ifdef MERGE_HEAP_ALLOCATION
int *L = calloc(n1, sizeof(int));
int *R = calloc(n2, sizeof(int));
#else
int L[n1];
int R[n2];
#endif
memcpy(L, A + p, n1 * sizeof(int));
memcpy(R, A + q + 1, n2 * sizeof(int));
for(i = 0, j = 0, k = p; k <= r; k++) {
if (i == n1) {
A[k] = R[j++];
} else if (j == n2) {
A[k] = L[i++];
} else if (L[i] <= R[j]) {
A[k] = L[i++];
} else {
A[k] = R[j++];
}
}
#ifdef MERGE_HEAP_ALLOCATION
free(L);
free(R);
#endif
}
void merge_sort(int A[], int p, int r) {
if (p < r) {
int q = (p + r) / 2;
merge_sort(A, p, q);
merge_sort(A, q + 1, r);
merge(A, p, q, r);
}
}
void insertion_sort(int A[], int p, int r) {
int i, j, key;
for (j = p + 1; j <= r; j++) {
key = A[j];
i = j - 1;
while (i >= p && A[i] > key) {
A[i + 1] = A[i];
i = i - 1;
}
A[i + 1] = key;
}
}
void selection_sort(int A[], int p, int r) {
int min, temp;
for (int i = p; i < r; i++) {
min = i;
for (int j = i + 1; j <= r; j++)
if (A[j] < A[min])
min = j;
temp = A[i];
A[i] = A[min];
A[min] = temp;
}
}
/* Merge sort that falls back to insertion sort on small subarrays. */
void mixed_sort_insertion(int A[], int p, int r) {
if (p >= r) return;
if (r - p < INSERTION_SORT_THRESHOLD) {
insertion_sort(A, p, r);
} else {
int q = (p + r) / 2;
mixed_sort_insertion(A, p, q);
mixed_sort_insertion(A, q + 1, r);
merge(A, p, q, r);
}
}
/* Merge sort that falls back to selection sort on small subarrays. */
void mixed_sort_selection(int A[], int p, int r) {
if (p >= r) return;
if (r - p < SELECTION_SORT_THRESHOLD) {
selection_sort(A, p, r);
} else {
int q = (p + r) / 2;
mixed_sort_selection(A, p, q);
mixed_sort_selection(A, q + 1, r);
merge(A, p, q, r);
}
}
### Python code
from itertools import repeat
def insertion_sort(A, p, r):
for j in range(p + 1, r + 1):
key = A[j]
i = j - 1
while i >= p and A[i] > key:
A[i + 1] = A[i]
i = i - 1
A[i + 1] = key
def merge(A, p, q, r):
n1 = q - p + 1
n2 = r - q
L = list(repeat(None, n1))
R = list(repeat(None, n2))
for i in range(n1):
L[i] = A[p + i]
for j in range(n2):
R[j] = A[q + j + 1]
i = 0
j = 0
for k in range(p, r + 1):
if i == n1:
A[k] = R[j]
j += 1
elif j == n2:
A[k] = L[i]
i += 1
elif L[i] <= R[j]:
A[k] = L[i]
i += 1
else:
A[k] = R[j]
j += 1
def merge_sort(A, p, r):
if p < r:
q = int((p + r) / 2)
merge_sort(A, p, q)
merge_sort(A, q + 1, r)
merge(A, p, q, r)
# Merge sort that falls back to insertion sort below the threshold.
def mixed_sort(A, p, r):
if p >= r: return
if r - p < 20:
insertion_sort(A, p, r)
else:
q = int((p + r) / 2)
mixed_sort(A, p, q)
mixed_sort(A, q + 1, r)
merge(A, p, q, r)
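For reference, a minimal harness along the lines of what produced the numbers above (the array size and seed are arbitrary choices here, not the exact ones used):

import random
import time

def bench(sort_fn, n=100_000, seed=42):
    random.seed(seed)
    A = [random.randint(0, n) for _ in range(n)]
    start = time.perf_counter()
    sort_fn(A, 0, len(A) - 1)
    elapsed = time.perf_counter() - start
    assert A == sorted(A)  # sanity check: the array really is sorted
    return elapsed

print(f"merge-sort = {bench(merge_sort):.4f}s")
print(f"mixed-sort = {bench(mixed_sort):.4f}s")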
|
{}
|
## Right-hand rule
‘Which way must I turn the bike wheel? Clockwise or counterclockwise?’ You can’t answer that because it depends on which side you look at it…
## Pulleys
With a rope over a pulley you lift a load of bricks. Naturally, you must lift the whole weight $w$ of the bricks. Their weight…
## Models
The finally fixed forest gravel path looks nice, smooth and flat. Perfect for a walk. You call your partner and say ‘the path is flat,…
## Falsification
How do we know that Newton’s 2nd law holds true? Honestly, we don’t. We can’t prove it. But we strongly believe it does. ‘How vague!’,…
|
{}
|
# How would you teach a machine how to count?
var digits = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"];
var numbers = [digits[0], digits[1]...digits[9], digits[1] + digits[0], digits[1] + digits[1]...];
What would you use to build a model that can predictNextNumber(19)=20 by understanding how counting works?
• myLearningAlgorithm(int x) { return x + 1; } – SmallChess Aug 28 '16 at 11:17
• Do you want to use text input (strings) or numeric input? – Pieter Aug 28 '16 at 18:49
I suggest using supervised learning and employing a linear model: linear regression.
This is a perfectly linear system (y=x+1), so linear regression will work just fine i.e. perfectly. Further, you have an infinite amount of data you can use to train the system, so it should be easy to train ;-) I jest... I think that two data points will be sufficient, again, since it is perfectly linear!
Pedagogically, the triviality of this extremely simple linear system gets a little more interesting when you try other methods: support vector machines (SVM), which should also be able to provide a perfect result; decision trees or random forests; and even naive Bayes regressors.
Though it's useful to start learning with very simple systems, I suggest quickly moving on to something more complex, i.e., don't get too stuck on the trivial linear model. Don't forget that the power of data science and machine learning lies in its statistical nature, so try to find a test case that includes some statistical variation in the input.
Hope this helps!
• The problem with treating this as a linear system is a chicken-and-egg situation. Using a y=x+1 model is telling the system how to count, not having the system learn from data. It's the difference between coding a tic-tac-toe game by either coding the winning algorithm or coding a general learning algorithm and playing sample games. – Spacedman Aug 29 '16 at 11:14
• @Spacedman, linear regression models do not require that the equation be provided to the algorithm a priori. Rather, the equation is inferred from the data. In this case the linear behavior is so simple that only two records are needed. I'm not proposing that y=x+1 be given, just that two records be given. No chicken-and-egg problem here. – AN6U5 Aug 29 '16 at 14:16
• The only thing the system "learns" by fitting a linear model is the intercept and slope. The slope is just a way of saying "add 1" or "add 2" or "add 3.14" for every 1 unit in x. The machine is not learning to count, it's just learning what steps to count in. We're still waiting for the OP to clarify what they mean. Do they want to put addition or linearity into the system a priori, or do they want a symbolic learning process that has no inherent idea about "1" and "2" being mathematical concepts? – Spacedman Aug 29 '16 at 18:51
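A minimal sketch of the linear-regression suggestion above (scikit-learn assumed available); as noted, two training points pin down a perfectly linear target:

from sklearn.linear_model import LinearRegression

X = [[3], [99]]  # inputs
y = [4, 100]     # "next number" labels
model = LinearRegression().fit(X, y)
print(model.predict([[19]]))  # -> [20.]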
Feed enough examples into a learning algorithm. In pseudocode:
L = learn_plus_one()
L.teach("12","13")
L.teach("3","4")
L.teach("99","100")
after being taught enough examples, it should figure out that 73 plus one is 74, even though it really knows nothing about addition; it's just discovered the pattern from the examples.
So what does learn_plus_one look like? Well, that could be a neural network, or any other machine learning system really. Your question is quite vague, so I can't be specific.
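One hedged way to flesh out the learn_plus_one interface (this version parses the strings into integers and fits a linear model, so Spacedman's objection above still applies; a truly symbolic learner would be a much bigger project):

from sklearn.linear_model import LinearRegression

class LearnPlusOne:
    def __init__(self):
        self.xs, self.ys = [], []
        self.model = LinearRegression()

    def teach(self, x, y):
        self.xs.append([int(x)])
        self.ys.append(int(y))
        if len(self.xs) >= 2:  # need at least two points to fit a line
            self.model.fit(self.xs, self.ys)

    def predict(self, x):
        return str(round(self.model.predict([[int(x)]])[0]))

L = LearnPlusOne()
L.teach("12", "13")
L.teach("3", "4")
L.teach("99", "100")
print(L.predict("73"))  # -> '74'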
|
{}
|
Effect of electric field on a dielectric
1. Aug 19, 2009
Rajat jaiswal
friends,
can anyone explain to me the effect of an electric field on a dielectric (non-conducting) medium?
2. Aug 19, 2009
kaymant
First, let's have a look at the effect an external field creates on an otherwise spherical atom. When the atom is placed in the external field, the positive charges tend to move along the direction of the field while the negative electron cloud moves in the opposite direction. If the field is strong enough, it will break the atom apart, with the electrons ripped off (that is ionization). But if the field is not too large, there is an effective separation of positive and negative charges on the atom, which leads to the formation of a dipole. For small fields, it is found that the induced dipole moment is $$\vec{p}=\alpha \vec{E}$$, where $$\alpha$$ is the atomic polarizability.
Coming to a dielectric, when it is placed in an external field, little dipoles are created which essentially point in the same direction. So if we consider a homogeneous dielectric, the positive charge of one dipole coincides with the negative one of the next and effectively cancels it. This leads to the cancellation of all the charges within the bulk, leaving out charges sticking to a very thin layer on the surface. The surface whose (outward) normal is opposite the field gets a negative charge, while the one having its normal in the direction of the field gets a positive charge. These charges then create a field of their own which runs in the direction opposite that of the external field. The total field is the sum of the two fields: $$\vec{E}_\mathrm{tot}=\vec{E}_\mathrm{ext}+\vec{E}_\mathrm{d}$$. Since the two fields are opposite inside the dielectric, the resultant field is less inside the dielectric.
However, these are the consequences. The answer to your question should be: An electric field polarizes a dielectric.
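As a concrete example (a standard result for a linear dielectric with susceptibility $$\chi_e$$ completely filling a parallel-plate capacitor held at fixed charge), the vector sum works out to the familiar reduction by the dielectric constant $$\kappa = 1 + \chi_e$$:
$$\vec{E}_\mathrm{d} = -\frac{\chi_e}{1+\chi_e}\,\vec{E}_\mathrm{ext}, \qquad \vec{E}_\mathrm{tot}=\vec{E}_\mathrm{ext}+\vec{E}_\mathrm{d} = \frac{\vec{E}_\mathrm{ext}}{\kappa}$$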
3. Aug 19, 2009
drizzle
...you mean:
$$\vec{E}_\mathrm{tot}=\vec{E}_\mathrm{ext}-\vec{E}_\mathrm{d}$$
4. Aug 19, 2009
kaymant
No, drizzle. You add them vectorially; the effect will be to subtract their magnitudes, since the two vectors are pointing in opposite directions.
5. Aug 19, 2009
drizzle
I'm quite comfortable with the sign of the vector being represented in the equation [it symbolizes a direction]; it could also be written this way:
$$\vec{E}_\mathrm{tot}=\vec{E}_\mathrm{ext}+(-\vec{E}_\mathrm{d})$$
|
{}
|
# Zee, Quantum Field Theory in a Nutshell, problem 1.3.1
Tags:
1. Jul 2, 2016
### Maurice7510
1. The problem statement, all variables and given/known data
I'm working through Zee for some self study and I'm trying to do all the problems, which is understandably challenging. Problem 1.3.1 is where I'm currently stuck: Verify that D(x) decays exponentially for spacelike separation.
2. Relevant equations
The propagator in question is
$$D(x) = -i \int \frac{d^3k}{2(2\pi)^3} \frac{e^{-i\boldsymbol{k}\cdot\boldsymbol{x}}}{\sqrt{\boldsymbol{k}^2+m^2}}$$
3. The attempt at a solution
Presumably, I would have to solve the integral and show that it decays exponentially (the spacelike aspect has already been taken into account for the above integral), so what I did was switch to spherical coordinates and integrate over the azimuthal angle:
$$D = \frac{-i}{8\pi^2}\int dr\,d\theta\frac{e^{-irx\cos\theta}}{\sqrt{r^2+m^2}}r\cos\theta$$
This is where I'm stuck. The square root in the denominator suggests this is a branch cut integral but I haven't been able to find a source that explains it sufficiently. If anyone could help me figure this out I'd appreciate it. Thanks.
2. Jul 3, 2016
### stevendaryl
Staff Emeritus
I feel that there is something wrong with this equation. Does $x$ mean a spatial vector, or a 4-vector? If it's a 4-vector, then there is a problem, because on the right-hand side of the equals sign, there is no mention of the time component of $x$. If $x$ is a spatial vector, then it doesn't make any sense to talk about spacelike separations.
3. Jul 3, 2016
### Maurice7510
That is curious, though I wrote exactly what he has in the book. Initially, we have
$$D(x) = -i \int \frac{d^3k}{(2\pi)^32\omega_k}[e^{-i(\omega_kt - \boldsymbol{k}\cdot\boldsymbol{x})}\theta(x^0)+e^{i(\omega_kt - \boldsymbol{k}\cdot\boldsymbol{x})}\theta(-x^0)]$$
which he reduces to the above equation in consideration of spacelike separation where $x^0= 0$. So I suppose the notation on the left is a four vector, it just happens to be $x = (0, x^1, x^2, x^3) \equiv \boldsymbol{x}$.
4. Jul 8, 2016
### Maurice7510
I now realize I made a mistake in the spherical coordinate substitution, the integral should be
$$D(x) = \frac{-i}{8\pi^2} \int dr\,d\theta \frac{e^{-irx\cos\theta}}{\sqrt{r^2+m^2}} r^2\sin\theta$$
The integral in $\theta$ is fairly straightforward at this point, and I got
$$D(x) = \frac{-i}{4\pi^2 x}\int dr \frac{r\sin(rx)}{\sqrt{r^2+m^2}}$$
which I now have no idea how to solve
5. Jul 9, 2016
### malawi_glenn
That is because that integral has no solution in terms of elementary functions.
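For reference, though, the integral does have a closed form in terms of a modified Bessel function. Using the standard representation $$K_0(mx) = \int_0^\infty \frac{\cos(rx)}{\sqrt{r^2+m^2}}\,dr$$ and differentiating under the integral sign with respect to $x$ (the sine integral being understood as an improper integral):
$$\int_0^\infty \frac{r\sin(rx)}{\sqrt{r^2+m^2}}\,dr = -\frac{d}{dx}K_0(mx) = m\,K_1(mx)$$
so $$D(x) = \frac{-i\,m}{4\pi^2 x}K_1(mx)$$, and since $$K_1(z)\sim\sqrt{\pi/(2z)}\,e^{-z}$$ for large $z$, this is exactly the exponential decay the problem asks for.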
6. Jul 26, 2016
### Fred Wright
If you assume that the magnitude of the radial momentum is much less than $m$ (i.e., the radial velocity is much less than the speed of light), then you can expand the denominator in the integrand to second order in $r$.
$\sqrt{r^2 + m^2} = m\sqrt{1 + \frac {r^2} {m^2}} \approx m(1 + \frac {r^2} {m^2}) = \frac {(r - im)(r + im)} {m}$
The integral becomes
$D(x) \approx \frac {1} {2(2\pi)^2} \frac {m} {|x|} \int_{0}^\infty dr \frac {r[(-i) \sin(r|x|)]} {(r - im)(r + im)} = \frac {1} {2(2\pi)^2} \frac {m} {|x|} \mathcal {Im} \int_{0}^\infty dr \frac {r e^{-i|x|r}} {(r - im)(r + im)}$
We have simple poles at $z_0 = \pm im$. We use the Cauchy Integral theorem and the residue theorem, closing a semi-circular contour in the lower half-plane, and noting that the integral along the arc tends to zero as the radius tends to $\infty$ and the integral along the real axis from $-\infty$ to $+\infty$ is twice the integral from 0 to $\infty$. We find,
$D(x) \approx \frac {1} {16\pi}\frac{m} {|x|}e^{-|x|m}$
|
{}
|
Article | Open | Published:
# Ambient ammonia synthesis via palladium-catalyzed electrohydrogenation of dinitrogen at low overpotential
## Abstract
Electrochemical reduction of N2 to NH3 provides an alternative to the Haber−Bosch process for sustainable, distributed production of NH3 when powered by renewable electricity. However, the development of such process has been impeded by the lack of efficient electrocatalysts for N2 reduction. Here we report efficient electroreduction of N2 to NH3 on palladium nanoparticles in phosphate buffer solution under ambient conditions, which exhibits high activity and selectivity with an NH3 yield rate of ~4.5 μg mg−1Pd h−1 and a Faradaic efficiency of 8.2% at 0.1 V vs. the reversible hydrogen electrode (corresponding to a low overpotential of 56 mV), outperforming other catalysts including gold and platinum. Density functional theory calculations suggest that the unique activity of palladium originates from its balanced hydrogen evolution activity and the Grotthuss-like hydride transfer mechanism on α-palladium hydride that lowers the free energy barrier of N2 hydrogenation to *N2H, the rate-limiting step for NH3 electrosynthesis.
## Introduction
Due to the limited supply of fossil fuels, there is a critical demand to use renewable energy to drive the chemical processes that have heavily relied on the consumption of fossil fuels1. One such energy-intensive chemical process is the Haber−Bosch process2, 3, which produces NH3 from N2 and H2 using iron-based catalysts under high temperature (350−550 °C) and high pressure (150−350 atm). NH3 is one of the most highly produced inorganic chemicals in the world, because of its vast need in fertilizer production, pharmaceutical production, and many other industrial processes4, 5. In 2015, around 146 million tons of NH3 was produced globally through the Haber−Bosch process6, which consumes 3−5% of the annual natural gas production worldwide, approximating to 1−2% of the global annual energy supply2, 5. This industrial process is also responsible for >1% of the global CO2 emissions2. Therefore, it is highly desirable to develop an alternative, efficient process for NH3 synthesis using renewable energy7,8,9,10,11, which can simultaneously reduce the CO2 emissions.
One alternative approach to the Haber−Bosch process is to use electrical energy to drive the NH3 synthesis reaction under ambient conditions12,13,14,15, which can reduce the need for high temperature and pressure, and thereby lower the energy demand. When powered by electricity from renewable energy sources such as solar and wind, electrochemical synthesis of NH3 from N2 and H2O can facilitate sustainable, distributed production of NH3, as well as the storage of renewable energy in NH3 as a carbon-neutral liquid fuel16,17,18, owing to its high energy density (4.32 kWh L−1), high hydrogen content (17.6 wt%), and facile liquefaction (boiling point: −33.3 °C at 1 atm). However, the development of the process has been impeded by the lack of efficient electrocatalysts for N2 reduction reaction (N2RR). Although electrochemical synthesis of NH3 has been demonstrated on various materials including Ru, Pt, Au, Fe, and Ni19,20,21,22,23,24,25,26,27,28,29,30,31, most of them showed low activity and selectivity (typically, Faradaic efficiency <1%) for NH3 production26,27,28,29,30,31. Therefore, major improvement in N2RR catalysts is essential for the development of low-temperature NH3 electrolyzers, which necessitates a better understanding and control of the catalytic materials and the reaction kinetics.
There are two major challenges associated with electrochemical NH3 synthesis in aqueous media32, 33. From the thermodynamic point of view, the splitting of the strong N≡N bond requires a reduction potential where the hydrogen evolution reaction (HER) readily occurs, leading to an extremely low Faradaic efficiency for N2RR under ambient conditions26,27,28,29,30,31. Therefore, it would be optimal to find a catalytic system that can promote N2RR at low overpotentials and suppress the competing HER. From the kinetic perspective, it is suggested that the rate-determining step for N2RR is the formation of *N2H through a proton-coupled electron transfer process (H+ + e + * + N2 → *N2H)33, 34, where * signifies an active site on the catalyst surface. It involves a proton from the electrolyte, an electron transferred from the electrode, and a N2 molecule in the solution. The strong solvent reorganization required for the endergonic charge transfer steps has a low probability of occurrence, leading to the sluggish kinetics. Instead, if the atomic *H species can be formed on a catalyst surface and directly react with N2, it may largely accelerate the kinetics to form *N2H: *H + N2 → *N2H. Indeed, there have been several investigations of metal hydride complexes for N2 reduction in a homogenous medium35, 36. Similarly, NH3 synthesis has been achieved at a low temperature by LiH-mediated N2 transfer and hydrogenation on the transition metals37. Therefore, it is imperative to examine such a hydrogenation pathway for the electrochemical N2RR in aqueous electrolyte systems under ambient conditions.
Here we report an ambient electrochemical reduction of N2 to NH3 on carbon black-supported Pd nanoparticles (Pd/C), which can form Pd hydrides under certain potentials and promote surface hydrogenation reactions. Operating in a N2-saturated phosphate buffer solution (PBS) electrolyte, the Pd/C catalyst enables NH3 production with a yield rate of around 4.5 μg mg−1Pd h−1 and a high Faradaic efficiency of 8.2% at 0.1 V vs. the reversible hydrogen electrode (RHE), which corresponds to a low overpotential of 56 mV. This catalytic performance is enabled by an effective suppression of the HER activity in the neutral PBS electrolyte and the Grotthuss-like hydride transfer mechanism on α-PdH for N2 hydrogenation. All potentials reported in this study are with respect to the RHE scale.
## Results
### Synthesis and characterization of Pd/C catalyst
The Pd/C catalyst was prepared using polyol reduction method (see Methods section for experimental details). Figure 1a shows a representative transmission electron microscopy (TEM) image of the obtained Pd/C catalyst, which suggests that the Pd nanoparticles are homogeneously dispersed on the carbon black. The nanoparticle sizes have a narrow distribution between 4 and 9 nm, with an average size of around 6 nm (see the inset of Fig. 1a). A high-resolution TEM image in Fig. 1b shows the atomic lattice fringes of the particles with lattice plane spacings determined to be 0.225 nm, corresponding to the (111) lattice spacing of Pd. An X-ray diffraction (XRD) pattern of the Pd/C catalyst is shown in Fig. 1c, in which the peaks with 2θ values of 40.1o, 46.6o, 68.1o, 82.1o, and 86.6o can be indexed to the diffraction from the (111), (200), (220), (311), and (222) lattice planes of Pd, respectively (PDF#65-2867). X-ray photoelectron spectroscopy (XPS) was used to examine the elemental composition of the Pd/C catalyst. As shown in Fig. 1d, only Pd, C, and O were observed in the survey spectrum, where the binding energies of 335.4 and 340.9 eV correspond to the 3d5/2 and 3d3/2 levels of metallic Pd0. Of note, no N species were observed within the detection limit of the XPS (~0.1 atomic percent), as shown in Supplementary Fig. 1.
### Electroreduction of N2 to NH3 on the Pd/C catalyst
The electrochemical measurements were performed using a gas-tight two-compartment electrochemical cell separated by a piece of Nafion 115 membrane (Supplementary Fig. 2). A piece of Pt gauze and Ag/AgCl electrode (filled with saturated KCl solution) were used as counter electrode and reference electrode, respectively. The working electrode was prepared by dispersing the Pd/C catalyst on a carbon paper or a glassy carbon substrate, as specified below. N2 gas was delivered into the cathodic compartment by N2 gas bubbling. The N2RR activities of the electrodes were evaluated using controlled potential electrolysis with N2-saturated electrolyte for 3 h. All potentials were iR-compensated and converted to the RHE scale via calibration (Supplementary Fig. 3). The gas-phase product (H2) was quantified by periodic gas chromatography of the headspace. The produced NH3 in the solution phase was quantified at the end of each electrolysis using the calibration curves established by the indophenol blue method38 (Supplementary Fig. 4). Another possible solution-phase product, N2H4, was also determined using a spectrophotometric method developed by Watt and Chrisp39 (Supplementary Fig. 5), whereas no N2H4 was detected in our studies within the detection limit of the method.
To boost the selectivity for N2RR, we need to find an electrolyte that can effectively suppress the competing HER. We compared the HER activities of the Pd/C catalyst in three Ar-saturated electrolytes: 0.05 M H2SO4 (pH = 1.2), 0.1 M PBS (pH = 7.2), and 0.1 M NaOH (pH = 12.9). Linear sweep voltammograms (LSV) of the Pd/C catalyst in the three electrolytes show that the current densities measured in H2SO4 and NaOH are both several times higher than those in PBS in a wide potential range (see Fig. 2a and Supplementary Fig. 6a), indicating an effective suppression of the HER activity in the neutral PBS electrolyte. The less favorable kinetics of HER in PBS is because of its higher barrier for mass- and charge-transfer40, as evidenced by the electrochemical impedance spectra in Supplementary Fig. 6b. Similarly, the controlled potential electrolysis in N2-saturated PBS at −0.05 V vs. RHE produces a current density of about 0.3 mA cm−2 (Fig. 2b), which is much lower than that in H2SO4 (~8 mA cm−2) and NaOH (~3.5 mA cm−2). However, the NH3 yield rate in PBS reaches 4.9 μg mg−1Pd h−1 (Fig. 2c), which is around twice that in H2SO4 (2.5 μg mg−1Pd h−1) and in NaOH (2.1 μg mg−1Pd h−1). More strikingly, a Faradaic efficiency of 2.4% is achieved in PBS, whereas both H2SO4 and NaOH electrolytes give rise to a Faradaic efficiency lower than 0.1%. These results clearly indicate that PBS is a promising electrolyte for electrochemical N2RR due to its effective suppression of the HER activity. Therefore, all the following N2RR experiments were performed with 0.1 M PBS electrolyte.
Subsequently, the activities of the Pd/C catalyst for N2RR were systematically investigated in N2-saturated PBS at various potentials with separately prepared electrodes. As shown in Fig. 2d, the total current density increases from ~0.05 to more than 1.2 mA cm−2, as the potential shifts from 0.1 to −0.2 V. Interestingly, the NH3 yield rate remains similar within this potential range, fluctuating around 4.5 μg mg−1Pd h−1 (Fig. 2e). Strikingly, the Faradaic efficiency for NH3 production reaches a maximum value of 8.2% at 0.1 V, which corresponds to a low overpotential of 56 mV, given the equilibrium potential of 0.156 V for N2 reduction to NH3 under our experimental conditions (see Methods section for the calculations). The Faradaic efficiency decreases gradually at more negative potentials, which is mainly caused by the rapid rising of the HER activity (Supplementary Fig. 7). To the best of our knowledge, the Pd/C catalyst achieves an NH3 yield rate and Faradaic efficiency that are comparable to the recently reported catalysts for N2RR under ambient conditions (Supplementary Table 1), but uses an overpotential lower by at least 300 mV, making it one of the most active and selective electrocatalysts for ambient NH3 synthesis.
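For readers reproducing these figures, the bookkeeping behind the yield rate and Faradaic efficiency is straightforward. A minimal sketch, assuming the standard 3-electrons-per-NH3 stoichiometry of N2 + 6H+ + 6e− → 2NH3 (the inputs are placeholders, not the paper's raw data):

F = 96485.0    # C/mol, Faraday constant
M_NH3 = 17.03  # g/mol, molar mass of NH3

def faradaic_efficiency(m_nh3_ug, charge_c):
    """Fraction of the total passed charge that went into NH3 (3 e- per NH3)."""
    mol_nh3 = m_nh3_ug * 1e-6 / M_NH3
    return 3 * F * mol_nh3 / charge_c

def yield_rate(m_nh3_ug, m_pd_mg, hours):
    """NH3 yield rate in ug per mg of Pd per hour."""
    return m_nh3_ug / (m_pd_mg * hours)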
In addition, we have carefully examined the N source of the produced NH3. First, control experiments with Ar-saturated electrolyte or without Pd catalyst were performed. As shown in Fig. 2f, no apparent NH3 was detected using the indophenol blue method when the bubbled N2 gas was replaced by Ar or when a carbon paper electrode without Pd was used, indicating that the NH3 was produced by N2 reduction in the presence of Pd catalyst. Furthermore, 15N isotopic labeling experiment was performed as an alternative method to verify the N source of the produced NH3 in 0.1 M PBS electrolyte. A triplet coupling for 14NH4+ and a doublet coupling for 15NH4+ in the 1H nuclear magnetic resonance (1H NMR) spectra are used to distinguish them25. As shown in Supplementary Fig. 8, only 15NH4+ was observed in the electrolyte when 15N2 was supplied as the feeding gas, and no NH4+ was detected when Ar was supplied, which are consistent with the control experiments and confirm that the NH3 was produced by Pd-catalyzed electroreduction of N2.
The stability of the Pd/C catalyst for electrochemical N2RR was assessed by consecutive recycling electrolysis at −0.05 V vs. RHE. After five consecutive cycles, only a slight decline in the total current density was observed, as shown in Supplementary Fig. 9. However, the NH3 yield rate and Faradaic efficiency decreased to 2.4 μg mg−1Pd h−1 and 1.2% after five cycles, indicating a loss of the N2RR activity by 50% after the 15 h operation. The decrease in the N2RR activity is due to the loss of active Pd surface area caused by the aggregation of Pd nanoparticles on the carbon support, as evidenced by the TEM images of the Pd/C catalyst after the recycling test (Supplementary Fig. 9). Further improvement in the dispersion of Pd nanoparticles on the support and interaction between them will be beneficial for the long-term durability of the catalyst.
## Discussion
The Pd/C catalyst exhibits high activity and selectivity for the N2RR at a low overpotential of 56 mV. To explore the underlying mechanism and to see whether it is exclusive to Pd, we prepared Au/C and Pt/C catalysts with identical metal loading using the same method, and compared their N2RR catalytic performance with that of the Pd/C catalyst (Fig. 3). The structural and compositional characterizations of the Au/C and Pt/C catalysts, including TEM, XRD, and XPS (Supplementary Fig. 10), confirm the successful synthesis of the metal nanoparticles with similar sizes as that of the Pd/C catalyst. At −0.05 V vs. RHE, the Au/C catalyst exhibits a current density (<0.04 mA cm−2) much lower than that of the Pd/C catalyst, whereas the Pt/C catalyst shows a slightly higher current density (see inset of Fig. 3). Both the Au/C and Pt/C catalysts produce NH3 at a rate of about 0.3 μg mg−1metal h−1, which is lower than that of the Pd/C catalyst by more than one order of magnitude. The Au/C catalyst achieves a Faradaic efficiency of 1.2% for N2RR, due to its low activity for the HER. In contrast, the Faradaic efficiency for N2RR on the Pt/C catalyst is only 0.2%, much lower than that of the Au/C catalyst, because Pt is intrinsically the most active catalyst for HER (Supplementary Fig. 11). In comparison, such a significant difference in both N2RR activity and selectivity clearly indicates that Pd is a unique catalyst for N2RR. Actually, Pd can readily absorb H atoms in its lattice, forming Pd hydrides under operating conditions41. The cathodic current observed at 0.10 and 0.05 V is similar to the data in a previous study42, and the unaccounted current at the two potentials (see Supplementary Fig. 7b) may be similarly attributed to the dynamic hydrogen adsorption and absorption on Pd42, in addition to the capacitance of the carbon support. It has been reported that Pd catalyzes the electroreduction of CO2 to formate with high activity and selectivity at low overpotentials43, 44, and both experimental and computational studies have confirmed that it is attributed to a hydrogenation mechanism through in situ formed PdHx phase45. Interestingly, a recent study of N2RR on commercial Pd/C catalysts in acidic and alkaline electrolytes showed Faradaic efficiencies lower than 0.1% at −0.2 V vs. RHE31, which are consistent with our results under similar conditions (Fig. 2c), and highlight the critical roles of the HER suppression in PBS and the hydrogenation via hydride transfer pathway at low overpotentials (vide infra).
To understand the unique activity of N2RR on Pd/C, we performed density functional theory (DFT) calculations for the energetics of HER and N2RR steps on the (211) surfaces of Au, Pt, and Pd hydride (see Supplementary Note 1 for the computational details). As in previous studies, the undercoordinated step atoms are assumed to be the catalytic sites for activating the N≡N bond46. For Pd, we include two subsurface *H (*Hsub) at the octahedral sites underneath the Pd edge atoms to simulate the α-phase Pd hydride (α-PdH), which is the stable phase under operating potentials45. According to the differential adsorption free energies of *H on the (211) surfaces (Supplementary Fig. 12), we adopted models with 2/3 monolayer of *H for α-PdH and Pt, on which the step/terrace sites are occupied by *H and the bottom-of-the-edge sites are free, while a clean surface was used for Au (geometric structures shown in Fig. 4). The inset of Fig. 4 shows that the HER on Pt is facile, with the formation free energy of *H close to zero47. In contrast, the process is limited by *H desorption on α-PdH and by *H adsorption on Au. Interestingly, the free energy cost of creating a *H vacancy (*H-v) necessary for N2 collision at step sites is much lower on α-PdH (0.18 eV) than on Pt (0.41 eV). Furthermore, the hydrogenation of N2 by a surface *H to form *N2H on α-PdH (1.18 eV) is thermodynamically more favorable than that on Pt (1.37 eV) and Au (2.21 eV). On α-PdH, the surface hydrogenation is accompanied by the transfer of *Hsub to the surface site, analogous to the Grotthuss-like proton-hopping mechanism in a water network48. The reaction free energy of N2 hydrogenation on Pd without the hydride transfer is less favorable by 0.3 eV (see Supplementary Fig. 13 for a direct comparison). The nature of this rate-determining step, i.e., surface chemical hydrogenation instead of proton-coupled electron transfer, supports the observed weak potential dependence of the NH3 yield rate at low overpotentials. DFT-calculated free energies of N2 hydrogenation vs. hydrogen evolution across the metal surfaces (Au, Pt, and α-PdH) rationalize the activity and selectivity trends in Fig. 3.
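For readers who want to reproduce this kind of comparison, the free-energy bookkeeping reduces to simple arithmetic once the DFT energies and thermal corrections are in hand. A minimal sketch follows; every number in it is a hypothetical placeholder, not a value from this work.

```python
# Free-energy bookkeeping for a surface hydrogenation step, G = E + ZPE - T*S.
# All energies are HYPOTHETICAL placeholders (eV); real values come from DFT
# total energies plus vibrational corrections (see Supplementary Note 1).

def gibbs(e_elec, zpe, ts):
    """Return G = E_elec + ZPE - T*S, all terms in eV."""
    return e_elec + zpe - ts

# Initial state: slab with adsorbed *H, plus gas-phase N2 (illustrative)
g_initial = gibbs(e_elec=-310.00, zpe=0.25, ts=0.60)
# Final state: slab with *N2H after hydride transfer (illustrative)
g_final = gibbs(e_elec=-309.40, zpe=0.45, ts=0.08)

dG = g_final - g_initial
print(f"dG(N2 + *H -> *N2H) = {dG:+.2f} eV")  # positive: uphill, cf. 1.18 eV on a-PdH
```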
In summary, we have discovered an efficient electrohydrogenation of N2 to NH3 on a Pd/C catalyst at an overpotential as low as 56 mV for electrochemical NH3 synthesis under ambient conditions. The Pd/C catalyst exhibits high activity and selectivity for N2RR in a PBS electrolyte, achieving an NH3 yield rate of about 4.5 μg mg−1Pd h−1 and a Faradaic efficiency of 8.2% at 0.1 V vs. RHE. Comparative experiments indicate an effective suppression of the HER in the neutral PBS electrolyte and a significantly higher N2RR activity on Pd than on other catalysts, including Au and Pt. The DFT calculations suggest that the in situ formed α-PdH allows the activation of N2 via a Grotthuss-like hydride transfer pathway that is thermodynamically more favorable than direct surface hydrogenation or proton-coupled electron transfer steps. Our findings open up an avenue to develop efficient electrocatalysts not only for the electroreduction of N2 to NH3, but also for other challenging electrocatalytic reactions for renewable energy conversion.
## Methods
### Synthesis of carbon black-supported metal catalysts
The Pd/C catalyst was synthesized using a polyol reduction method. First, the Pd precursor (K2PdCl4) solution was prepared by dissolving PdCl2 in water in the presence of KCl. Typically, 70 mg of carbon black was dispersed in 100 mL of ethylene glycol, followed by sonication for 30 min. Then, 5 mL of the K2PdCl4 solution (Pd 6 mg mL−1) was added into the mixture. After stirring for another 30 min, the mixture was heated to 130 °C and kept at this temperature for 2 h. The catalyst slurry was filtered and washed with water. The resulting Pd/C catalyst was dried at 60 °C overnight, with a Pd loading of 30 wt%. Comparative samples of Au/C and Pt/C catalysts with a metal loading of 30 wt% were prepared using the same procedure, except with different metal precursors.
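As a sanity check, the 30 wt% loading follows directly from the amounts quoted above; the snippet below is a trivial restatement of that arithmetic.

```python
# Pd loading implied by the recipe: 5 mL of 6 mg/mL Pd onto 70 mg carbon black.
m_pd = 5.0 * 6.0        # mg of Pd
m_carbon = 70.0         # mg of carbon black
loading = m_pd / (m_pd + m_carbon)
print(f"Pd loading = {100 * loading:.0f} wt%")  # -> 30 wt%
```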
### Physical characterizations
TEM images were acquired using an FEI Tecnai F30 transmission electron microscope, with a field emission gun operated at 200 kV. The XRD pattern was collected using a PANalytical Empyrean diffractometer, with a 1.8 kW copper X-ray tube. XPS was performed using a Physical Electronics 5400 ESCA photoelectron spectrometer. Gas products were analyzed by a gas chromatograph (SRI Multiple Gas Analyzer #5) equipped with molecular sieve 5A and HayeSep D columns, with Ar as the carrier gas.
### Preparations of the working electrodes
First, 8 mg of the carbon black-supported metal catalyst was dispersed in a diluted Nafion alcohol solution containing 1.5 mL ethanol and 60 μL Nafion, forming a homogeneous suspension after sonication for 1 h. Two types of substrates were used to prepare electrodes in this study: one was carbon paper and the other was glassy carbon. The carbon-paper electrodes were prepared by drop-casting the suspension on a piece of carbon paper (1 × 1 cm2), with a total mass loading of 1 mg (of which 30 wt% is Pd); these were used for all controlled potential electrolyses. The glassy-carbon electrodes were prepared by drop-casting the suspension on a round glassy carbon disk (diameter = 3 mm), with a total mass loading of 0.435 mg cm−2 (of which 30 wt% is Pd); these were used for all linear sweep voltammetry measurements, except Supplementary Fig. 6a.
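The per-electrode catalyst mass on the glassy-carbon disks can be back-calculated from the stated areal loading; the small sketch below is my arithmetic, not a figure quoted in the text.

```python
import math

# Catalyst mass implied by 0.435 mg/cm^2 on a 3 mm diameter glassy carbon disk,
# of which 30 wt% is Pd.
radius_cm = 0.3 / 2
area_cm2 = math.pi * radius_cm ** 2   # ~0.071 cm^2
m_catalyst_mg = 0.435 * area_cm2      # total catalyst per electrode
m_pd_mg = 0.30 * m_catalyst_mg        # Pd per electrode
print(f"area = {area_cm2:.3f} cm^2, catalyst = {1e3*m_catalyst_mg:.0f} ug, Pd = {1e3*m_pd_mg:.1f} ug")
```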
### Electrochemical measurements
Prior to N2RR tests, Nafion 115 membranes were heat-treated sequentially in 5% H2O2, 0.5 M H2SO4, and water, for 1 h in each. After being rinsed thoroughly in water, the membranes were immersed in deionized water for future use. Electrochemical measurements were performed using a CH Instruments 760E potentiostat, with a gas-tight two-compartment electrochemical cell separated by a piece of Nafion 115 membrane at room temperature (Supplementary Fig. 2). A piece of Pt gauze and Ag/AgCl/sat. KCl were used as the counter electrode and reference electrode, respectively. The linear sweep voltammetry was scanned at a rate of 5 mV s−1. The N2RR activity of an electrode was evaluated using controlled potential electrolysis in an electrolyte for 3 h at room temperature (~293 K). Prior to each electrolysis, the electrolyte was presaturated with N2 by N2 gas bubbling for 30 min. During each electrolysis, the electrolyte was continuously bubbled with N2 at a flow rate of 10 sccm and was agitated with a stirring bar at a rate of ~800 rpm. No in-line acid trap was used to capture NH3 that might escape from the electrolyte, as no apparent NH3 was detected in the acid trap under our experimental conditions. The applied potentials were iR-compensated, and the reported current densities were normalized to geometric surface areas.
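The iR compensation mentioned above is a one-line correction: the potential the electrode actually experiences is the applied value minus the ohmic drop. A hedged sketch with an illustrative resistance (R_u is not a value reported here):

```python
# iR correction: E_corrected = E_applied - i * R_u.
# R_u is an illustrative uncompensated resistance, not a reported value.
i = -1.0e-3         # A, cell current (negative = cathodic)
r_u = 20.0          # ohm, uncompensated solution resistance (illustrative)
e_applied = -0.05   # V vs. RHE

e_corrected = e_applied - i * r_u
print(f"E_corrected = {e_corrected:.3f} V vs. RHE")
```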
### Calibration of the reference electrodes
All potentials in this study were converted to the RHE scale via calibration (Supplementary Fig. 3). The calibration was performed using Pt gauze as both the working and counter electrodes in the H2-saturated electrolyte. Cyclic voltammograms were acquired at a scan rate of 1 mV s−1. The two potentials at which the current equaled zero were averaged and used as the thermodynamic potential for the hydrogen electrode reactions.
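Numerically, the calibration boils down to locating the two zero-current crossings of the H2 voltammogram and averaging them. A minimal sketch with illustrative (not measured) data points:

```python
import numpy as np

# Average the zero-current potentials of the forward and reverse sweeps.
# The arrays below are illustrative placeholders, not measured data.
e_fwd = np.array([-0.63, -0.61, -0.59])   # V vs. Ag/AgCl
i_fwd = np.array([-0.8, 0.0, 0.7])        # mA
e_rev = np.array([-0.64, -0.62, -0.60])
i_rev = np.array([-0.7, 0.0, 0.8])

e0 = 0.5 * (np.interp(0.0, i_fwd, e_fwd) + np.interp(0.0, i_rev, e_rev))
# Any measured potential then converts as E(RHE) = E(Ag/AgCl) - e0.
print(f"0 V vs. RHE sits at {e0:.3f} V vs. Ag/AgCl")
```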
### Ammonia quantification
The produced NH3 was quantitatively determined using the indophenol blue method38. Typically, 2 mL of the sample solution was first pipetted from the post-electrolysis electrolyte. Afterwards, 2 mL of a 1 M NaOH solution containing salicylic acid (5 wt%) and sodium citrate (5 wt%) was added, and 1 mL of NaClO solution (0.05 M) and 0.2 mL of sodium nitroferricyanide solution (1 wt%) were added subsequently. After 2 h, the absorption spectra of the resulting solution were acquired with an ultraviolet-visible (UV-vis) spectrophotometer (BioTek Synergy H1 Hybrid Multi-Mode Reader). The formed indophenol blue was measured by absorbance at λ = 653 nm. In order to quantify the produced NH3, the calibration curves were built using standard NH4Cl solutions in the presence of 0.05 M H2SO4, 0.1 M PBS, and 0.1 M NaOH, respectively (Supplementary Fig. 4), to take into account the possible influence of different pH values38. The measurements with the background solutions (no NH3) were performed for all experiments, and the background peak was subtracted from the measured peaks of N2RR experiments to calculate the NH3 concentrations and the Faradaic efficiencies.
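In practice the quantification is a linear calibration fit followed by inversion. A minimal sketch with hypothetical standards (the real calibration curves are in Supplementary Fig. 4):

```python
import numpy as np

# Fit A(653 nm) vs. NH3 concentration for the standards, then invert the fit
# for a background-subtracted sample absorbance. Data are hypothetical.
conc = np.array([0.0, 0.2, 0.5, 1.0, 2.0])         # ug/mL NH3 standards
absorb = np.array([0.00, 0.09, 0.22, 0.44, 0.87])  # measured A(653 nm)

slope, intercept = np.polyfit(conc, absorb, 1)

a_sample = 0.31                                    # already background-subtracted
c_sample = (a_sample - intercept) / slope
print(f"c(NH3) ~ {c_sample:.2f} ug/mL")
```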
### Hydrazine quantification
The yellow color developed upon the addition of p-dimethylaminobenzaldehyde (PDAB) to solutions of N2H4 in dilute hydrochloric acid solution was used as the basis for the spectrophotometric method to quantify the N2H4 concentration39. Typically, 5 mL of the electrolyte solution was taken out and then mixed with 5 mL of the coloring solution (4 g of PDAB dissolved in 20 mL of concentrated hydrochloric acid and 200 mL of ethanol). After 15 min, the absorption spectra of the resulting solution were acquired using a UV-vis spectrophotometer (BioTek Synergy H1 Hybrid Multi-Mode Reader). The solutions of N2H4 with known concentrations in 0.1 M PBS were used as calibration standards, and the absorbance at λ = 458 nm was used to plot the calibration curves (Supplementary Fig. 5).
### Calculation of the equilibrium potential
The standard potential for the half reaction of N2 reduction to NH4OH was calculated according to the standard molar Gibbs energy of formation at 298.15 K49.
$\mathrm{N_2(g)} + 2\,\mathrm{H_2O} + 6\,\mathrm{H^+} + 6\,e^- \rightarrow 2\,\mathrm{NH_4OH(aq)}, \quad \Delta G^\circ = -33.8\ \mathrm{kJ\,mol^{-1}}$
(1)
E° = −ΔG°/nF = 0.058 V, where n = 6 is the number of electrons transferred in the reaction and F is the Faraday constant.
The equilibrium potential under our experimental conditions is calculated using the Nernst equation, assuming 1 atm of N2 and a 0.01 mM concentration of NH4OH in the solution.
$E = E^\circ - \frac{RT}{6F}\ln\frac{[\mathrm{NH_4OH}]^2}{[\mathrm{H^+}]^6} + 0.059\ \mathrm{V} \times \mathrm{pH} = 0.156\ \mathrm{V\ vs.\ RHE}$
(2)
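These two results are easy to verify numerically. In the sketch below, note that on the RHE scale the [H+]6 term in Eq. (2) cancels against the 0.059 V × pH shift, leaving only the NH4OH activity term:

```python
import math

# Numerical check of Eqs. (1)-(2).
F = 96485.0      # C/mol
R = 8.314        # J/(mol K)
T = 298.15       # K
n = 6
dG0 = -33.8e3    # J/mol

E0 = -dG0 / (n * F)   # standard potential, ~0.058 V
c_nh4oh = 1e-5        # mol/L (0.01 mM)

# On the RHE scale only the NH4OH activity term survives.
E = E0 - (R * T / (n * F)) * math.log(c_nh4oh ** 2)
print(f"E0 = {E0:.3f} V, E = {E:.3f} V vs. RHE")  # -> 0.058 V, 0.156 V
```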
### Calculation of the Faradaic efficiency and the yield rate
The Faradaic efficiency was estimated from the charge consumed for NH3 production and the total charge passed through the electrode:
$\text{Faradaic efficiency} = \frac{3F \times c_{\mathrm{NH_3}} \times V}{Q}$
(3)
The yield rate of NH3 can be calculated as follows:
$\text{Yield rate} = \frac{17\, c_{\mathrm{NH_3}} \times V}{t \times m}$
(4)
where F is the Faraday constant (96,485 C mol−1), $c_{\mathrm{NH_3}}$ is the measured NH3 concentration, V is the volume of the electrolyte, Q is the total charge passed through the electrode, t is the electrolysis time (3 h), and m is the metal mass of the catalyst (typically 0.3 mg). The reported NH3 yield rate, Faradaic efficiency, and error bars were determined based on the measurements of three separately prepared samples under the same conditions.
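A worked instance of Eqs. (3)-(4), with illustrative inputs chosen to land near the reported performance (they are not the paper's raw data):

```python
# Worked example of Eqs. (3)-(4). Inputs are illustrative, not raw data.
F = 96485.0       # C/mol, Faraday constant
c_nh3 = 7.9e-6    # mol/L, measured NH3 concentration
V = 0.030         # L, electrolyte volume
Q = 0.85          # C, total charge passed
t = 3.0           # h, electrolysis time
m = 0.3e-3        # g, Pd mass

fe = 3 * F * c_nh3 * V / Q                                  # Eq. (3)
rate = 17.0 * c_nh3 * V / (t * m)                           # Eq. (4), g NH3 / (g Pd h)
print(f"Faradaic efficiency = {100 * fe:.1f}%")             # ~8.1%
print(f"Yield rate = {1e3 * rate:.1f} ug NH3 / (mg Pd h)")  # ~4.5
```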
### 15N isotope labeling experiment
The isotopic labeling experiment was carried out using 15N2 as the feeding gas (Sigma-Aldrich, 98 atom % 15N) with 0.1 M PBS electrolyte. After electrolysis at −0.05 V vs. RHE for 10 h, 10 mL of the electrolyte was taken out and acidified to pH ~3 by adding 0.5 M H2SO4, and then concentrated to 2 mL by heating at 70 °C. Afterwards, 0.9 mL of the resulting solution was taken out and mixed with 0.1 mL D2O containing 100 ppm dimethyl sulphoxide (Sigma-Aldrich, 99.99%) as an internal standard for 1H nuclear magnetic resonance measurements (1H NMR, Bruker Avance III 400 MHz).
### Computational studies
DFT calculations were performed using the plane-wave-based PWSCF (Quantum ESPRESSO) program and the Atomic Simulation Environment (ASE). The ultrasoft Vanderbilt pseudopotential method with the Perdew–Burke–Ernzerhof (PBE) exchange-correlation functional was adopted. More calculation details and relevant references are provided in Supplementary Note 1, Supplementary Table 2, and the Supplementary References.
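For orientation, a (211)-slab calculation of the kind described can be scripted through ASE's Quantum ESPRESSO interface. The sketch below assumes pw.x and a suitable pseudopotential file are installed; the slab size, cutoffs, and k-points are placeholders, not the settings used in this work.

```python
# Minimal ASE + Quantum ESPRESSO sketch of a Pd(211) slab relaxation.
# All numerical settings are placeholders, not this paper's inputs.
from ase.build import fcc211
from ase.calculators.espresso import Espresso
from ase.optimize import BFGS

slab = fcc211('Pd', size=(3, 3, 4), vacuum=10.0)
slab.calc = Espresso(
    pseudopotentials={'Pd': 'Pd.pbe-n-rrkjus_psl.1.0.0.UPF'},  # assumed file name
    input_data={'system': {'ecutwfc': 40, 'ecutrho': 320}},
    kpts=(4, 4, 1),
)
BFGS(slab, trajectory='pd211.traj').run(fmax=0.05)
print('E =', slab.get_potential_energy(), 'eV')
```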
### Data availability
The data that support the findings of this study are available within the paper and its Supplementary Information file or are available from the corresponding authors upon reasonable request.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Chu, S. & Majumdar, A. Opportunities and challenges for a sustainable energy future. Nature 488, 294–303 (2012).
2. Smil, V. Enriching the Earth: Fritz Haber, Carl Bosch, and the Transformation of World Food Production (MIT Press, Cambridge, 2004).
3. Jennings, J. R. Catalytic Ammonia Synthesis: Fundamentals and Practice (Springer Science & Business Media, New York, 2013).
4. Gruber, N. & Galloway, J. N. An Earth-system perspective of the global nitrogen cycle. Nature 451, 293–296 (2008).
5. Erisman, J. W., Sutton, M. A., Galloway, J., Klimont, Z. & Winiwarter, W. How a century of ammonia synthesis changed the world. Nat. Geosci. 1, 636–639 (2008).
6. Apodaca, L. E. In Mineral Commodity Summaries 2016: U.S. Geological Survey (ed. Kimball, S. M.) 118–119 (Government Publishing Office, Washington, DC, 2016).
7. Kitano, M. et al. Ammonia synthesis using a stable electride as an electron donor and reversible hydrogen store. Nat. Chem. 4, 934–940 (2012).
8. Service, R. F. New recipe produces ammonia from air, water, and sunlight. Science 345, 610 (2014).
9. van der Ham, C. J. M., Koper, M. T. M. & Hetterscheid, D. G. H. Challenges in reduction of dinitrogen by proton and electron transfer. Chem. Soc. Rev. 43, 5183–5191 (2014).
10. Ali, M. et al. Nanostructured photoelectrochemical solar cell for nitrogen reduction using plasmon-enhanced black silicon. Nat. Commun. 7, 11335 (2016).
11. Brown, K. A. et al. Light-driven dinitrogen reduction catalyzed by a CdS:nitrogenase MoFe protein biohybrid. Science 352, 448–450 (2016).
12. Rosca, V., Duca, M., de Groot, M. T. & Koper, M. T. M. Nitrogen cycle electrocatalysis. Chem. Rev. 109, 2209–2244 (2009).
13. Renner, J. N., Greenlee, L. F., Ayres, K. E. & Herring, A. M. Electrochemical synthesis of ammonia: a low pressure, low temperature approach. Electrochem. Soc. Interface 24, 51–57 (2015).
14. Kyriakou, V., Garagounis, I., Vasileiou, E., Vourros, A. & Stoukides, M. Progress in the electrochemical synthesis of ammonia. Catal. Today 286, 2–13 (2017).
15. Shipman, M. A. & Symes, M. D. Recent progress towards the electrosynthesis of ammonia from sustainable resources. Catal. Today 286, 57–68 (2017).
16. Zamfirescu, C. & Dincer, I. Using ammonia as a sustainable fuel. J. Power Sources 185, 459–465 (2008).
17. Klerke, A., Christensen, C. H., Nørskov, J. K. & Vegge, T. Ammonia for hydrogen storage: challenges and opportunities. J. Mater. Chem. 18, 2304–2310 (2008).
18. Soloveichik, G. Liquid fuel cells. Beilstein J. Nanotechnol. 5, 1399–1418 (2014).
19. Licht, S. et al. Ammonia synthesis by N2 and steam electrolysis in molten hydroxide suspensions of nanoscale Fe2O3. Science 345, 637–640 (2014).
20. Bao, D. et al. Electrochemical reduction of N2 under ambient conditions for artificial N2 fixation and renewable energy storage using N2/NH3 cycle. Adv. Mater. 29, 1604799 (2017).
21. Shi, M. M. et al. Au sub-nanoclusters on TiO2 toward highly efficient and selective electrocatalyst for N2 conversion to NH3 at ambient conditions. Adv. Mater. 29, 1606550 (2017).
22. Li, S. J. et al. Amorphizing of Au nanoparticles by CeOx–RGO hybrid support towards highly efficient electrocatalyst for N2 reduction under ambient conditions. Adv. Mater. 29, 1700001 (2017).
23. Ma, J. L., Bao, D., Shi, M. M., Yan, J. M. & Zhang, X. B. Reversible nitrogen fixation based on a rechargeable lithium-nitrogen battery for energy storage. Chem 2, 525–532 (2017).
24. McEnaney, J. M. et al. Ammonia synthesis from N2 and H2O using a lithium cycling electrification strategy at atmospheric pressure. Energy Environ. Sci. 10, 1621–1630 (2017).
25. Chen, G. F. et al. Ammonia electrosynthesis with high selectivity under ambient conditions via a Li+ incorporation strategy. J. Am. Chem. Soc. 139, 9771–9774 (2017).
26. Kordali, V., Kyriacou, G. & Lambrou, C. Electrochemical synthesis of ammonia at atmospheric pressure and low temperature in a solid polymer electrolyte cell. Chem. Commun. https://doi.org/10.1039/B004885M (2000).
27. Giddey, S., Badwal, S. P. S. & Kulkarni, A. Review of electrochemical ammonia production technologies and materials. Int. J. Hydrog. Energy 38, 14576–14594 (2013).
28. Lan, R., Irvine, J. T. S. & Tao, S. Synthesis of ammonia directly from air and water at ambient temperature and pressure. Sci. Rep. 3, 1145 (2013).
29. Kim, K. et al. Electrochemical reduction of nitrogen to ammonia in 2-propanol under ambient temperature and pressure. J. Electrochem. Soc. 163, F610–F612 (2016).
30. Chen, S. et al. Electrocatalytic synthesis of ammonia at room temperature and atmospheric pressure from water and nitrogen on a carbon-nanotube-based electrocatalyst. Angew. Chem. Int. Ed. 56, 2699–2703 (2017).
31. Nash, J. et al. Electrochemical nitrogen reduction reaction on noble metal catalysts in proton and hydroxide exchange membrane electrolyzers. J. Electrochem. Soc. 164, F1712–F1716 (2017).
32. Montoya, J. H., Tsai, C., Vojvodic, A. & Nørskov, J. K. The challenge of electrochemical ammonia synthesis: a new perspective on the role of nitrogen scaling relations. ChemSusChem 8, 2180–2186 (2015).
33. Singh, A. R. et al. Electrochemical ammonia synthesis—the selectivity challenge. ACS Catal. 7, 706–709 (2017).
34. Zhu, D., Zhang, L., Ruther, R. E. & Hamers, R. J. Photo-illuminated diamond as a solid-state source of solvated electrons in water for nitrogen reduction. Nat. Mater. 12, 836–841 (2013).
35. Akagi, F., Matsuo, T. & Kawaguchi, H. Dinitrogen cleavage by a diniobium tetrahydride complex: formation of a nitride and its conversion into imide species. Angew. Chem. Int. Ed. 46, 8778–8781 (2007).
36. Shima, T. et al. Dinitrogen cleavage and hydrogenation by a trinuclear titanium polyhydride complex. Science 340, 1549–1552 (2013).
37. Wang, P. et al. Breaking scaling relations to achieve low-temperature ammonia synthesis through LiH-mediated nitrogen transfer and hydrogenation. Nat. Chem. 9, 64–70 (2017).
38. Searle, P. L. The Berthelot or indophenol reaction and its use in the analytical chemistry of nitrogen. A review. Analyst 109, 549–568 (1984).
39. Watt, G. W. & Chrisp, J. D. A spectrophotometric method for the determination of hydrazine. Anal. Chem. 24, 2006–2008 (1952).
40. Strmcnik, D., Lopes, P. P., Genorio, B., Stamenkovic, V. R. & Markovic, N. M. Design principles for hydrogen evolution reaction catalyst materials. Nano Energy 29, 29–36 (2016).
41. Wickman, B. et al. Depth probing of the hydride formation process in thin Pd films by combined electrochemistry and fiber optics-based in situ UV/vis spectroscopy. Phys. Chem. Chem. Phys. 17, 18953–18960 (2015).
42. Hara, M., Linke, U. & Wandlowski, T. Preparation and electrochemical characterization of palladium single crystal electrodes in 0.1 M H2SO4 and HClO4: Part I. Low-index phases. Electrochim. Acta 52, 5733–5748 (2007).
43. Min, X. & Kanan, M. W. Pd-catalyzed electrohydrogenation of carbon dioxide to formate: high mass activity at low overpotential and identification of the deactivation pathway. J. Am. Chem. Soc. 137, 4701–4708 (2015).
44. Klinkova, A. et al. Rational design of efficient palladium catalysts for electroreduction of carbon dioxide to formate. ACS Catal. 6, 8115–8120 (2016).
45. Gao, D. F. et al. Switchable CO2 electroreduction via engineering active phases of Pd nanoparticles. Nano Res. 10, 2181–2191 (2017).
46. Honkala, K. et al. Ammonia synthesis from first-principles calculations. Science 307, 555–558 (2005).
47. Hinnemann, B. et al. Biomimetic hydrogen evolution: MoS2 nanoparticles as catalyst for hydrogen evolution. J. Am. Chem. Soc. 127, 5308–5309 (2005).
48. Agmon, N. The Grotthuss mechanism. Chem. Phys. Lett. 244, 456–462 (1995).
49. Rumble, J. CRC Handbook of Chemistry and Physics 98th edn (CRC Press, Boca Raton, FL, 2017).
## Acknowledgements
This work is supported by the Startup Fund from the University of Central Florida (UCF). X.F. is a member of the Energy Conversion and Propulsion Cluster at UCF. J.W. gratefully acknowledges the Preeminent Postdoctoral Program (P3) award from UCF. L.Y. and H.X. acknowledge the financial support from the American Chemical Society Petroleum Research Fund (ACS PRF 55581-DNI5) and the NSF CBET Catalysis Program (CBET-1604984).
## Author information
### Affiliations
1. Department of Physics, University of Central Florida, Orlando, FL 32816, USA
   - Jun Wang
   - Xiaofeng Feng
2. Department of Chemical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA
   - Liang Yu
   - Hongliang Xin
- Lin Hu
- Gang Chen
### Contributions
J.W. and X.F. conceived and designed the experiments. J.W. synthesized the materials and carried out the experimental work. L.Y. and H.X. performed the computational work. L.H. assisted in the experimental work. G.C. contributed to NH3 quantification. X.F. and H.X. co-wrote the manuscript. All authors discussed the results and commented on the manuscript.
### Corresponding authors
Correspondence to Hongliang Xin or Xiaofeng Feng.
## Electronic supplementary material
### DOI
https://doi.org/10.1038/s41467-018-04213-9
|
{}
|
Shear modulus, also called the modulus of rigidity, is a numerical constant that describes the elastic properties of a solid under the application of transverse internal forces, such as arise in torsion, as in twisting a metal pipe about its lengthwise axis. It is defined as the ratio of shear stress to shear strain and is a measure of the rigidity of a material. It is denoted by G, or less commonly by S or μ. The SI unit of shear modulus is the pascal (Pa), but values are usually expressed in gigapascals (GPa).

To see where the definition comes from, consider a block whose bottom surface is fixed. A force F applied tangentially to the top surface (of area A) displaces the top layer by an extent x relative to the fixed layer, where L is the perpendicular distance (on a plane perpendicular to the force) from the fixed layer to the layer that gets displaced. The shear stress is $\sigma = \frac{F}{A}$ and the shear strain is $\theta = \frac{x}{L}$ (as $\theta$ is very small, $\tan \theta \approx \theta$), so the shear modulus is

$G = \frac{\sigma}{\theta} = \frac{FL}{Ax}$

A large shear modulus means a rigid solid: a large force is needed to deform it. A small shear modulus value indicates a solid that is soft or flexible: little force is needed to deform it. One definition of a fluid is a substance with a shear modulus of zero, since any tangential force deforms its surface. Note that the shear modulus applies to small deformations, for which the material returns to its original state once the shearing force is removed, and it can be determined experimentally from the slope of a stress-strain curve measured on a sample of the material.

Shear modulus is one of several elastic constants, and they do not all measure the same thing. Young's modulus is the ratio of tensile stress to tensile strain and measures a solid's stiffness, or linear resistance to deformation. The bulk modulus is the ratio of volumetric stress to volumetric strain; it describes how resistant a substance is to compression and is defined as the ratio between a pressure increase and the resulting decrease in a material's volume (compressibility is its reciprocal). The shear modulus is the ratio of shear stress to shear strain. Correspondingly, there are three basic types of stress: tensile stress, which tends to stretch or lengthen the material and acts normal to the stressed area; compressive stress, which tends to compress or shorten the material and acts normal to the stressed area; and shearing stress, which tends to shear the material and acts in plane with the stressed area, at right angles to compressive or tensile stress.

Some materials are isotropic with respect to shear, meaning the deformation in response to a force is the same regardless of orientation. Other materials are anisotropic and respond differently to stress or strain depending on orientation; anisotropic materials are much more susceptible to shear along one axis than another. For example, consider the behavior of a block of wood and how it might respond to a force applied parallel to the wood grain compared to its response to a force applied perpendicular to the grain. Likewise, how readily a crystal such as diamond shears depends on the orientation of the force with respect to the crystal lattice. Shear modulus also changes with temperature and pressure: in metals, it typically decreases with increasing temperature, and there tends to be a region of temperature and pressure over which the change in shear modulus is linear.

Typical values at room temperature follow the rigidity trend: diamond, a hard and stiff substance, has an extremely high shear modulus; transition metals and alloys have high values; alkaline earth and basic metals have intermediate values; and soft, flexible materials have low values. The shear modulus of wood (plywood) is about $6.2 \times 10^{8}$ Pa, while that of steel is about $7.2 \times 10^{10}$ Pa, which implies that steel is far more rigid than wood, roughly 116 times more.

Shear modulus and the damping ratio are also two of the most important parameters in any dynamic analysis involving soils. Seed et al. (1986) proposed a relation for the small-strain shear modulus of normally consolidated soils, and studies now provide results from tests on a wide variety of gravels. Since shear modulus and damping are strain dependent, curves must be developed to define their variation with shear strain.

[Figure 5.2: Influence of mean effective confining pressure (kPa) on modulus reduction curves for (a) non-plastic (PI = 0) soil and (b) plastic (PI = 50) soil. Reprinted from Ishibashi (1992).]

Example 1. Find the shear modulus of a sample under a shear stress of $4 \times 10^{4}$ N/m² experiencing a shear strain of $5 \times 10^{-2}$. Here G = τ/γ = $(4 \times 10^{4})/(5 \times 10^{-2}) = 8 \times 10^{5}$ N/m².

Example 2. A thin square plate of dimensions 80 cm × 80 cm × 0.5 cm is fixed vertically on one of its smaller surfaces and is subjected to a shearing force of $2.8 \times 10^{4}$ N at the top; the top face is displaced through 0.16 mm with respect to the bottom surface. With F = $2.8 \times 10^{4}$ N, x = $0.16 \times 10^{-3}$ m, L = 0.8 m, and A = $0.8 \times 0.5 \times 10^{-2}$ m²,

$G = \frac{FL}{Ax} = \frac{2.8 \times 10^{4} \times 0.8}{0.8 \times 0.5 \times 10^{-2} \times 0.16 \times 10^{-3}} = 3.5 \times 10^{10}\ \mathrm{Pa}$

Example 3. A cube of steel 4 cm on an edge is subjected to a shearing force of 3 kN while one face is clamped. Estimate the shearing strain on the cube: γ = τ/G = $(3 \times 10^{3} / 0.04^{2}) / (7.2 \times 10^{10}) \approx 2.6 \times 10^{-5}$.

Notation from related structural-design formulas: E = modulus of elasticity or Young's modulus; fb = bending stress; fc = compressive stress; fmax = maximum stress; ft = tensile stress; fv = shear stress; Fb = allowable bending stress; Fconnector = shear force capacity per connector; h = height of a rectangle; I = …
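A quick numerical check of the three worked examples, as a minimal Python sketch (the variable names are mine):

```python
# Example 2: G = FL/(Ax) for the 80 cm x 80 cm x 0.5 cm plate.
F = 2.8e4            # N, shearing force
L = 0.80             # m, height of the plate
A = 0.80 * 0.5e-2    # m^2, cross-section (80 cm x 0.5 cm)
x = 0.16e-3          # m, displacement of the top face
print(f"Example 2: G = {F * L / (A * x):.2e} Pa")    # ~3.5e10 Pa

# Example 1: G = stress/strain.
print(f"Example 1: G = {4e4 / 5e-2:.1e} N/m^2")      # 8e5 N/m^2

# Example 3: strain = stress/G for the clamped steel cube.
stress = 3e3 / 0.04 ** 2                             # Pa
print(f"Example 3: strain = {stress / 7.2e10:.1e}")  # ~2.6e-5
```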
|
{}
|
The late 2010s saw several advances to high-frequency cutting technology. One was the invention of the HF machete, used frequently by normal soldiers of Desperad… The HF blade itself consists of a straight, double-edged blade similar to that of a longsword, terminating in a metal hilt designed around the aesthetics of South East Asian swords. It was used predominantly by the cyborg ninjas Gray Fox, Olga Gurlukovich, and Raiden, though it was also employed by Solidus Snake and the Arsenal Tengu, and it saw high usage among soldiers during the 2010s, as several cybernetic bodyguards were seen using HF blades to cut down enemy soldiers in 2018.

All this talk of "blades" raises a common question: what is the difference between a sword and a blade?

A blade is any flat length of metal with at least one sharpened side. A sword is a long, flat weapon, usually sharp on one or both sides, with a hand grip at the bottom and a point at the end (spada). So a sword has a blade on at least one side (usually both sides), but not all blades are swords: knives, saws, and axes all have blades too, since they are all flat pieces of metal with a sharp edge. As nouns, the difference between sword and blade is that a sword is (weaponry) a long-bladed weapon having a handle and sometimes a hilt, designed to stab, cut, or slash, while a blade is the sharp cutting edge of a knife, chisel, or other tool, such as a razor blade. (Similarly, a shuriken is a dart or throwing blade, sometimes with multiple points, used as a weapon by ninja or samurai.) One dictionary defines a sword as "a weapon consisting typically of a long, often straight or slightly curved, pointed blade, having one or two cutting edges, that is designed …". However, in English fiction and history, swords are often referred to as "blades"; the two words are used to mean the same thing, even though technically a sword is only one of many types of blades. You could say samurai sword or blade, although most people say samurai sword.

Beyond weaponry, "blade" has many other senses:

- The flat functional end of a propeller, oar, hockey stick, screwdriver, or skate.
- (botany) The thin, flat part of a plant leaf (the lamina), attached to a stem (petiole).
- (architecture, in the plural) The principal rafters of a roof.
- (sailing) The rudder, daggerboard, or centerboard of a vessel.
- (archaeology) A piece of prepared, sharp-edged stone, often flint, at least twice as long as it is wide; a long flake of ground-edge or knapped vitreous stone.
- (ultimate frisbee) A throw characterized by a tight parabolic trajectory due to a steep lateral attitude.
- (tarot) A suit in the minor arcana.
- (weaving) One of the end bars by which the lay of a hand loom is suspended.
- The mechanically adjustable blade of a bulldozer or surface-grading machine, nominally perpendicular to the forward motion of the vehicle.
- An airfoil, as in windmills and wind turbines.
- A flat bone, especially the shoulder blade; also a cut of beef from near the shoulder blade (part of the chuck).
- The four large shell plates on the sides, and the five large ones of the middle, of the carapace of the sea turtle, which yield the best tortoise shell.
- (slang, chiefly US) A homosexual, usually male.

Text is available under the Creative Commons Attribution/Share-Alike License; additional terms may apply. See Wiktionary Terms of Use for details.

Real swords differ mainly in edge geometry, dimensions, and steel. A sword blade has three parts worth naming: the blade material (sword blades are usually made out of carbon or high-carbon steel); the blade profile (since the blade of a sword is quite long, it has to be tapered, made narrower along the length of the blade); and the blade tip (the blade of a sword normally ends in a pointed tip, meant to pierce or slash through flesh and armor). A double-edged sword can cut upwards faster and easier, especially after a downward cut, and this is one big advantage; a single-edged blade will often make that edge a bit better at cutting, though this isn't to say a double-edged blade cannot cut well. Longswords, for instance, are double-edged, usually straight-bladed swords designed for optimized balance, reach, and versatility. Among Chinese arms, the defining feature is similar: the blade (dao) has a curved edge, with quite some variance extending to glaives (guandao), versus the sword (jian) with a straight blade. Among Japanese swords, the main difference between the katana and the tachi is that the tachi has a longer blade and handle, while the wakizashi is a relatively short blade, between half and two-thirds the length of a katana (a standard katana blade is about 60 cm, or 23 inches); if you want a samurai sword that is sharp, fully functional, and readily usable for tameshigiri (target test cutting), the sharp shinken blade is the piece for you, noting that a razor-sharp edge lasts less time than a regular sharp one. A rapier blade ranges between 42 and 45 inches, and the weapon weighs between 2.5 and 3.5 pounds, owing most of its weight to the pommel.

Dimensions matter more than you might expect: fractions of an inch can drastically alter the dynamic properties of a sword blade, and a difference of an eighth or a sixteenth of an inch here or there can result in a far different sword. This is important to keep in mind especially when commissioning a smith to recreate an existing historical piece. Steel selection is made based on the purpose or use of the sword. There are many types of steel found in swords, but the major difference is between decorative and functional pieces: a decorative sword is normally made with a stainless steel blade, because its purpose is display and stainless steel offers corrosion resistance and low maintenance, while functional blades use carbon steel. Steels in the range 1045-1095 are used for knife blades, although 1050 is more commonly seen in swords; 1045 steel has less carbon (0.45%) where 1095 has more (0.95%), and inversely 1095 has less manganese and 1045 has more, so in essence a 1095 blade has more wear resistance but is also less tough. Visually, there are subtle differences between the steel types that an experienced sword collector will recognize, but in many cases the true steel type is revealed only in use: 1045 carbon steel blades, for example, bend or chip fairly easily. Custom makers advertise their specifications accordingly, for example a 35" unsharpened 5160 marquenched spring steel demi-fullered blade. Beyond plain steel, lots of tough metallic alloys and compounds come to mind for an ideal blade: diamond, tungsten, titanium, and Damascus steel are just some of them, each with its own strengths and weaknesses.

So which sword reigns supreme, Western or Eastern? Medieval history suggests that there has been no superior civilization between the West and the East, yet each of these powerful cultures obviously preferred one type of sword over the other. Why would that be the case if they weren't absolutely sure that their swords were better than the swords of their enemies? It all depends on which era you were living in.

Fictional swords borrow from all of this. Blade's sword is a specially designed, hand-crafted titanium blade with acid etching; it is a fine weapon, used as his primary weapon, and one of its most notable features is a trigger on the handle. Hidden within the hilt is a security device consisting of a quartet of small retra… If someone unwittingly picked up the sword … The two-handed flame-bladed sword is called by the German Flammenschwert (literally "flame sword"), also flambard or flammard; these swords are very similar to the two-handed Zweihänder, the only difference being the blade. In Demon Slayer, a white Nichirin blade symbolizes mist, and the demon slayer who wields one is Muichiro Tokito, the Mist Pillar. In Dark Souls, Quelaag's Furysword is great for NG and that's about it; however, in NG+ … a lot of bosses are weak to fire, but as a new player the health drain from the Chaos Blade is a pain to deal with. One game sword's damage even scales with the coins in your purse, $\mathrm{DMG} = 130 + \frac{1}{4} \times 10^{\log_{10000}(\mathrm{Coins}) + 1}$, or in simplified form $130 + 2.5 \times \sqrt{\mathrm{Coins}}$; the pre-0.7.5 equation was $\mathrm{DMG} = 130 + \frac{4}{9} \times \sqrt{100\ldots}$ And like all Pokémon games, Pokémon Sword and Shield differ in a number of ways, including version-exclusive Pokémon; among sword Pokémon, Doublade is a Pokémon composed of two swords. Each sword has a silver blade and a light gray hilt; the sheathes are varying shades of brown with lines forming a triquetra pattern across the front. Attached to each pommel is a long, dark purple sash with a pink swirl pattern on the tip, and the tip of each sash splits into four tassels. Embedded in each hilt is a single pink gem with a dark center that appears to be an eye.
|
{}
|
# Seminars & Colloquia
Millions of Korean workers follow night or nonstandard shifts. A similar number of Koreans travel overseas each year. This leads to misalignment of the daily (circadian) clock in individuals: a clock that controls sleep, performance, and nearly every physiological process in our body. A mathematical model of this clock can be used to optimize schedules to maximize productivity and minimize jetlag. We simulate this model in a smartphone app, ENTRAIN (www.entrain.org), which has been installed on phones over 200,000 times in over 100 countries. I will discuss techniques we have been developing and clinically testing to determine sleep stage and circadian time from wearable data, for example, as collected by our app or used in many commercially available sleep trackers. This project has led us to develop new techniques to: 1) estimate phase from noisy data with gaps, 2) rapidly simulate population densities from high dimensional models and 3) determine how mathematical models of sleep and circadian physiology can be used with machine learning techniques to improve predictions.
Host: 김재경 교수 English 2019-03-19 10:06:51
(This is a reading seminar for graduate students.)
Algebraic K-theory originated with the Grothendieck-Riemann-Roch theorem, a generalization of the Riemann-Roch theorem to higher dimensional varieties. For this, we shall discuss the definitions of $K_0$-theory of a variety, its connection with intersection theory, the $\lambda$-operation, the $\gamma$-filtration, Chern classes and Adams operations.
Host: 박진현 Contact: 박진현 (2734) Korean 2019-03-12 21:51:19
I report on work with M. Gubinelli and T. Oh on the renormalized nonlinear wave equation in 2d with monomial nonlinearity and in 3d with quadratic nonlinearity. Martin Hairer has developed an efficient machinery to handle elliptic and parabolic problems with additive white noise, and many local existence questions are by now well understood. In contrast, not much is known for hyperbolic equations. We study the simplest nontrivial examples and prove local existence and weak universality, i.e. the nonlinear wave equations with additive white noise occur as scaling limits of wave equations with more regular noise.
Host: 권순식 English 2019-02-25 15:40:28
It remains to bound the terms in the expansion of the logarithm of the transmission coefficient.
I report on joint work with Mihaela Ifrim, Xian Liao and Daniel Tataru.
Host: 권순식 To be announced 2019-03-04 09:47:21
A result due to Gyárfás, Hubenko, and Solymosi, answering a question of Erdős, asserts that if a graph $G$ does not contain $K_{2,2}$ as an induced subgraph yet has at least $c\binom{n}{2}$ edges, then $G$ has a complete subgraph on at least $\frac{c^2}{10}n$ vertices. In this paper we suggest a “higher-dimensional” analogue of the notion of an induced $K_{2,2}$, which allows us to extend their result to $k$-uniform hypergraphs. Our result also has interesting consequences in topological combinatorics and abstract convexity, where it can be used to answer questions by Bukh, Kalai, and several others.
Host: Sang-il Oum English 2019-03-04 13:50:46
The Gross-Pitaevskii equation is essentially the defocusing nonlinear Schrödinger equation with a nonzero boundary condition at infinity. This makes the phase space very nonlinear and we will need a good metric and smooth structure on it.
Host: 권순식 To be announced 2019-03-04 09:45:39
Host: 정연승 Korean 2019-02-25 15:37:33
|
{}
|
# Tulane has a ratio of 3 girls to 4 boys in class. If there are 12 girls in class how many total students are there?
##### 1 Answer
Jan 11, 2017
#### Answer:
There are $16$ boys, and therefore $12 + 16 = 28$ students in total.
#### Explanation:
Since there are $12$ girls, we can divide $12$ by the girls' part of the ratio, $3$:
$\frac{12}{3} = 4$
So we know that the scale factor is $4$. Using the $3 : 4$ ratio, with $3$ representing girls and $4$ representing boys, you can multiply the boys' part, $4$, by this factor:
$4 \cdot 4 = 16$
This gives $16$ boys, so the total number of students is $12 + 16 = 28$.
|
{}
|
# Is vote counted if a post is deleted?
On your user profile, there is a vote count.
If you give a vote on a post and the post is deleted, does it decrease your count? (This affects certain badges.)
The reason I ask this is because I thought the votes are 'locked' with posts. When posts are deleted by the author, the posts are merely hidden so your votes are still there - is my guess correct?
Also, what happens if the posts is deleted by a moderator?
There are two different vote counts on your profile: public (on summary tab) and private (on votes tab). Votes on deleted posts:
• are included in the public count
• are counted toward the badges.
It makes no difference who deleted the post. See "Votes cast" should include votes on deleted contributions
The private count behaves differently because it shows you itemized votes with links to the posts. If the posts on which you voted are deleted, the system will hide them from you and, as a result, will usually show you a smaller total of votes than on the public count. It is not obvious what the correct behavior should be, since access to deleted posts varies by reputation, etc. There is a current feature request Don't hide (un)deletion votes cast on deleted posts with Shog9's comment from yesterday:
we actually had this on the list to implement at one time, and somehow it morphed into [something else] instead. I blame high levels of gamma radiation.
I actually like the fact that by taking the difference of two counts, I know how many posts I voted on got deleted.
• thanks and lol about 'I know how many posts I voted on got deleted'... – Lost1 Jan 8 '14 at 19:19
Good question, and one I hadn't really considered. rm-rf on Mathematica says that your vote count is not affected, but it does give you back that vote for the day.
• and what is rm-rf? – Lost1 Jan 8 '14 at 19:15
• @Lost1: I've added a link to their profile – robjohn Jan 8 '14 at 19:18
• @Lost1 Type it into the command line of your Linux system to find out. – Post No Bulls Jan 8 '14 at 19:18
• @Lost1: or don't :-) – robjohn Jan 8 '14 at 19:19
• @Lost1 Essentially: $\textbf{R}\text{e}\textbf{m}\text{ove -}\textbf{r}\text{ecursive }\textbf{f}\text{older}$ – apnorton Jan 9 '14 at 0:18
|
{}
|
# American Institute of Mathematical Sciences
2013, 10(1): 199-219. doi: 10.3934/mbe.2013.10.199
## Genome characterization through dichotomic classes: An analysis of the whole chromosome 1 of A. thaliana
1 Dipartimento di Scienze Statistiche, Università di Bologna, Via delle Belle Arti 41, 40126, Bologna, Italy, Italy 2 CNR-IMM, UOS di Bologna, Via Gobetti 101, 40129 Bologna, Italy 3 Dipartimento di Scienze Statistiche, Università di Bologna, Via delle Belle Arti 41, 40126 Bologna, Italy
Received May 2012 Revised September 2012 Published December 2012
In this article we show how dichotomic classes, binary variables naturally derived from a new mathematical model of the genetic code, can be used in order to characterize different parts of the genome. In particular, we analyze and compare different parts of whole chromosome 1 of Arabidopsis thaliana: genes, exons, introns, coding sequences (CDS), intergenes, untranslated regions (UTR) and regulatory sequences. In order to accomplish the task we encode each sequence in the 3 possible reading frames according to the definitions of the dichotomic classes (parity, Rumer and hidden). Then, we perform a statistical analysis on the binary sequences. Interestingly, the results show that coding and non-coding sequences have different patterns and proportions of dichotomic classes. This suggests that the frame is important only for coding sequences and that dichotomic classes can be useful to recognize them. Moreover, such patterns seem to be more enhanced in CDS than in exons. Also, we derive an independence test in order to assess whether the percentages observed could be considered as an expression of independent random processes. The results confirm that only genes, exons and CDS seem to possess a dependence structure that distinguishes them from i.i.d sequences. Such informational content is independent from the global proportion of nucleotides of a sequence. The present work confirms that the recent mathematical model of the genetic code is a new paradigm for understanding the management and the organization of genetic information and is an innovative tool for investigating informational aspects of error detection/correction mechanisms acting at the level of DNA replication.
Citation: Enrico Properzi, Simone Giannerini, Diego Luis Gonzalez, Rodolfo Rosa. Genome characterization through dichotomic classes: An analysis of the whole chromosome 1 of A. thaliana. Mathematical Biosciences & Engineering, 2013, 10 (1) : 199-219. doi: 10.3934/mbe.2013.10.199
|
{}
|
# Mean Value Theorem
The Mean Value Theorem is considered to be among the crucial tools in Calculus. According to the theorem, if f(x) is defined and continuous on the interval [a,b] and differentiable on (a,b), then there is at least one value c in the interval (a,b), with a<c<b, such that
$f’\left ( c \right )=\frac{f\left ( b \right )-f\left ( a \right )}{b-a}$
Rolle’s Theorem is the special case where f(a) = f(b); here we have f’(c) = 0. Put differently, there is a point in the interval (a,b) at which the tangent is horizontal. The Mean Value Theorem can also be stated in terms of slopes.
$\frac{f\left ( b \right )-f\left ( a \right )}{b-a}$
This value is the slope of the line that passes through (a,f(a)) and (b,f(b)). The Mean Value Theorem therefore states that there is a point c at which the tangent line is parallel to the line passing through (a,f(a)) and (b,f(b)).
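As a quick sanity check (my own illustrative example, not part of the theorem's statement), the following SymPy snippet finds the guaranteed point $c$ for $f(x)=x^3$ on $[0,2]$:

```python
import sympy as sp

# Mean Value Theorem check for f(x) = x**3 on [0, 2].
x, c = sp.symbols('x c', real=True)
f = x**3
a, b = 0, 2

slope = (f.subs(x, b) - f.subs(x, a)) / (b - a)        # (f(b)-f(a))/(b-a) = 4
candidates = sp.solve(sp.Eq(sp.diff(f, x).subs(x, c), slope), c)
print([s for s in candidates if a < s < b])            # [2*sqrt(3)/3], ~1.155
```

Here $f'(c) = 3c^2 = 4$ gives $c = 2/\sqrt{3} \approx 1.155$, which indeed lies in (0,2).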
Example:
Let f(x) = 1/x, a = -1 and b = 1. Then
$\frac{f(b)-f(a)}{b-a}=\frac{1-(-1)}{1-(-1)}=\frac{2}{2}=1$
while for any c ϵ (-1, 1), not equal to zero, we have
$f'(c) = -\frac{1}{c^2} \neq 1$
Therefore the equation f’(c) = (f(b) – f(a))/(b – a) has no solution in c. This does not contradict the Mean Value Theorem, because f(x) is not continuous on [-1,1].
|
{}
|
# What happens if I slowly lower a dangling object into a black hole?
I could've sworn I've seen this question before, but I couldn't find it.
Suppose I have an object on the end of a really long string. I can slowly lower it near the event horizon of a black hole, then pull it back out. But if I lower it just below the event horizon, I can't pull it back out.
This is weird, because nothing singular appears to happen at the event horizon. By the equivalence principle, the object can't detect anything different happening. And if you actually calculate the force needed to hold the object in place, it's perfectly finite at $r = 2GM$, so the person pulling from far away doesn't detect anything different either. So what makes the just-above-event-horizon and just-below-event-horizon scenarios different? What happens if you try to pull the object out?
My suspicion is that, for the second case, your pull will never be transmitted to the object: even if the tension in the rope propagates at the speed of light, it can't catch up to the mass. So the object never feels your pull at all, and you just keep pulling slack rope.
• The normalized surface gravity may be finite, but the speed of sound in your fishing line goes to zero, reducing the max. tension that it can withstand also to zero. If your line has infinite elasticity, then you would, indeed, pull it ever longer. If it doesn't, it will break somewhere. – CuriousOne Apr 27 '16 at 6:24
• Related: physics.stackexchange.com/q/104474/2451 and links therein. – Qmechanic Apr 27 '16 at 9:11
• "so the person pulling from far away doesn't detect anything different either" - but it also true that the proper acceleration of the dangling object hovering above the horizon is unbounded as $r \rightarrow 2GM$. – Alfred Centauri Apr 27 '16 at 11:21
• "But if I lower it just below the event horizon" - you can't observe the object crossing the horizon. – Alfred Centauri Apr 27 '16 at 11:23
|
{}
|
# An introduction to biclustering
Tue 25 June 2013 by Kemal Eren
## Introduction
This is the first in my series of biclustering posts. It serves as an introduction to the concept. The later posts will cover the individual algorithms that I am implementing for scikit-learn.
Before talking about biclustering, it is necessary to cover the basics of clustering. Clustering is a fundamental problem in data mining: given a set of objects, group them according to some measure of similarity. This deceptively simple concept has given rise to a wide variety of problems and the algorithms to solve them. scikit-learn provides a number of clustering methods, but these represent only a small fraction of the diversity of the field. For a more detailed overview of clustering, there are several surveys available [1], [2], [3].
In many clustering problems, each of the $$n$$ samples to be clustered is represented by a $$p$$ -dimensional feature vector. The entire dataset may then be represented by a matrix of shape $$n \times p$$ . After applying some clustering algorithm, each sample belongs to one cluster.
To demonstrate, consider a clustering problem with 20 samples and 20 dimensions. Here is a scatterplot of the first two dimensions, with cluster membership designated by color:
By rearranging the rows of the data matrix, the samples belonging to each cluster can be made contiguous. In the original data (left) the clusters are not visible, but in the rearranged data (right), the correct partition is more obvious:
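A concrete version of this toy setup is easy to reproduce; the following sketch (the blob centers, seed, and sizes are my own choices, not the data behind the figures) clusters a $$20 \times 20$$ matrix with scikit-learn and reorders its rows:

```python
import numpy as np
from sklearn.cluster import KMeans

# 20 samples in 20 dimensions, drawn around 3 well-separated centers.
rng = np.random.default_rng(0)
centers = rng.normal(scale=5.0, size=(3, 20))
X = np.vstack([c + rng.normal(size=(7, 20)) for c in centers])[:20]

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
X_rearranged = X[np.argsort(labels)]   # samples of each cluster made contiguous
print(labels)
```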
This view of clustering — partitioning the rows of the data matrix — leads to the definition of biclustering.
Biclustering: a data mining method that simultaneously clusters both rows and columns of a matrix.
Clustering rows and columns together may seem a strange and unintuitive thing to do. To see why it can be useful, let us consider a simple example.
## Motivating example: throwing a party
Bob is planning a housewarming party for his new three-room house. Each room has a separate sound system, so he wants to play different music in each room. As a conscientious host, Bob wants everyone to enjoy the music. Therefore, he needs to distribute albums and guests to each room in order to ensure that each guest hears their favorite songs.
Our host has invited fifty guests, and he owns thirty albums. He sends out a survey to each guest asking if they like or dislike each album. After receiving their responses, he collects the data into a $$50 \times 30$$ binary matrix $$\boldsymbol M$$ , where $$M_{ij}=1$$ if guest $$i$$ likes album $$j$$ .
In addition to ensuring everyone is happy with the music, Bob wants to distribute people and albums evenly among the rooms of his house. All the guests will not fit in one room, and there should be enough albums in each room to avoid repetitions. Therefore, Bob decides to bicluster his data to maximize the following objective function:
\begin{equation*} s(\boldsymbol M, \boldsymbol r, \boldsymbol c) = b(\boldsymbol r, \boldsymbol c) \cdot \sum_{i, j, k} M_{ij} r_{ki} c_{kj} \end{equation*}
where $$r_{ki}$$ is an indicator variable for membership of guest $$i$$ in cluster $$k$$ , $$c_{kj}$$ is an indicator variable for album membership, and $$b \in [0, 1]$$ penalizes unbalanced solutions, i.e., those with biclusters of different sizes. As the difference in sizes between the largest and the smallest bicluster grows, $$b$$ decays as:
\begin{equation*} b(\boldsymbol r, \boldsymbol c) = \exp \left ( \frac{ - \left ( \max \left ( \mathcal S \right ) - \min \left ( \mathcal S \right ) \right )}{ \epsilon } \right ) \end{equation*}
where
\begin{equation*} \mathcal S = \left \{ \left (\sum_{i} r_{ki} \right ) \cdot \left (\sum_{j} c_{kj} \right ) | k = 1..3 \right \} \end{equation*}
is the set of bicluster sizes, and $$\epsilon > 0$$ is a parameter that sets the aggressiveness of the penalty.
Bob uses the following algorithm to find his solution: starting with a random assignment of rows and columns to clusters, he reassigns rows and columns to improve the objective function until convergence. In a simulated annealing fashion, he allows suboptimal reassignments to avoid local minima. However, this algorithm is not guaranteed to find the optimum. Had Bob wanted the best solution, the naive approach would require trying every possible clustering, resulting in $$k^{n+p} = 3^{80}$$ candidate solutions. This suggests that Bob’s problem is in a nonpolynomial complexity class. In fact, most formulations of biclustering problems are NP-complete [5].
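A minimal sketch of this objective in code (the function and the choice $$\epsilon = 100$$ are mine, not taken from any library) could look like:

```python
import numpy as np

def score(M, rows, cols, k=3, eps=100.0):
    """Bob's objective s(M, r, c): likes captured inside the biclusters,
    discounted by the balance penalty b(r, c)."""
    captured = 0
    sizes = []
    for room in range(k):
        r = rows == room                  # guests assigned to this room
        c = cols == room                  # albums assigned to this room
        captured += int(M[np.ix_(r, c)].sum())
        sizes.append(int(r.sum()) * int(c.sum()))
    balance = np.exp(-(max(sizes) - min(sizes)) / eps)
    return balance * captured

# Bob's local search would call score() after each tentative reassignment.
rng = np.random.default_rng(0)
M = rng.integers(0, 2, size=(50, 30))
print(score(M, rng.integers(0, 3, size=50), rng.integers(0, 3, size=30)))
```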
After biclustering, the rows and columns of the data matrix may be reordered to show the assignment of guests and albums to rooms. Here are the original data set (left) and the clusters that Bob found (right):
Although not everyone will enjoy every album, clearly this solution ensures that most guests will enjoy most albums.
## Conclusion
This article introduced biclustering through a contrived example, but these methods are not limited to throwing great parties. Any data that can be represented as a matrix is amenable to biclustering. It is a popular technique for analyzing gene expression data from microarray experiments. It has also been applied to recommendation systems, market research, databases, financial data, and agricultural data, as well as many other problems.
These diverse problems require more sophisticated algorithms than Bob’s party planning algorithm. His algorithm only works for binary data and assigns each row and each column to exactly one cluster. However, many other types of bicluster structures have been proposed. Those interested in a more detailed overview of the field may be interested in the surveys [4], [5], [6].
In the following weeks I will introduce some popular biclustering algorithms as I implement them. The next post will cover spectral biclustering.
## References
[1] Jain, A. K., Murty, M. N., & Flynn, P. J. (1999). Data clustering: a review. ACM Computing Surveys (CSUR), 31(3), 264-323.
[2] Berkhin, P. (2006). A survey of clustering data mining techniques. In Grouping Multidimensional Data (pp. 25-71). Springer Berlin Heidelberg.
[3] Jain, A. K. (2010). Data clustering: 50 years beyond K-means. Pattern Recognition Letters, 31(8), 651-666.
[4] Tanay, A., Sharan, R., & Shamir, R. (2005). Biclustering algorithms: A survey. Handbook of Computational Molecular Biology, 9, 26-1.
[5] Madeira, S. C., & Oliveira, A. L. (2004). Biclustering algorithms for biological data analysis: a survey. Computational Biology and Bioinformatics, IEEE/ACM Transactions on, 1(1), 24-45.
[6] Busygin, S., Prokopyev, O., & Pardalos, P. M. (2008). Biclustering in data mining. Computers & Operations Research, 35(9), 2964-2987.
|
{}
|
## Wednesday, 4 September 2013
### Linear Regression
The linear regression model is a fairly simple model, nevertheless it is used in many applications.
What follows is a fairly mathematical derivation of the linear regression parameters.
Let us denote the observations as $y[n]$ and $x[n]$ for $n=0,1,..,N-1$ for the model
$y[n] = ax[n]+b$
We assume that we can measure both $x[n]$ and $y[n]$ - the unknowns are $a$ and $b$, the gradient and intercept respectively. We can use the least squares error criterion to find estimates for $a$ and $b$, from data $x[n]$ and $y[n]$.
Let $e[n] = y[n] - ax[n]-b$ be the error. We can represent the samples $e[n]$, $x[n]$ and $y[n]$ as length $N$ column vectors $\underline{e}$,$\underline{x}$ and $\underline{y}$ respectively.
Thus the error sum of squares (which can be regarded as a cost function $C$ to minimise over) will be
$\underline{e}^T\underline{e}=(\underline{y}-a\underline{x}-b\underline{1})^T(\underline{y}-a\underline{x}-b\underline{1})$.
where $\underline{1}$ is a column vector with all elements set to 1.
Expanding this cost function results in:-
$C=\underline{y}^T\underline{y}-2a\underline{x}^T\underline{y}-2b\underline{y}^T\underline{1}+a^2(\underline{x}^T\underline{x})+2ab(\underline{x}^T\underline{1})+b^2(\underline{1}^T\underline{1})$
Differentiating $C$ w.r.t. $a$ and $b$:-
$\frac{\partial C}{\partial a}=2a(\underline{x}^T\underline{x})+2b(\underline{x}^T\underline{1})-2\underline{x}^T\underline{y}$
$\frac{\partial C}{\partial b}=2a(\underline{x}^T\underline{1})+2b(\underline{1}^T\underline{1})-2\underline{y}^T\underline{1}$
In order to estimate $\hat{a}$ and $\hat{b}$ (the least squares estimates of $a$ and $b$ respectively) we need to set the above two partial derivatives to 0, and solve for $a$ and $b$. We need to solve the following matrix equation
$\left[\begin{array}{cc}\underline{x}^T\underline{x}&\underline{x}^T\underline{1}\\ \underline{x}^T\underline{1}&\underline{1}^T\underline{1}\end{array}\right]\left[\begin{array}{c}\hat{a}\\ \hat{b}\end{array}\right] = \left[\begin{array}{c}\underline{x}^T\underline{y} \\ \underline{y}^T\underline{1} \end{array}\right]$
Inverting the matrix and pre-multiplying both sides of the above matrix equation by the inverse results in:
$\left[\begin{array}{c}\hat{a} \\ \hat{b} \end{array}\right] = \frac{1}{N(\underline{x}^T\underline{x})-(\underline{x}^T\underline{1})^2}\left[\begin{array}{cc}\underline{1}^T\underline{1}&-\underline{x}^T\underline{1}\\ -\underline{x}^T\underline{1}&\underline{x}^T\underline{x}\end{array}\right]\left[\begin{array}{c}\underline{x}^T\underline{y} \\ \underline{y}^T\underline{1} \end{array}\right]$
If we revert to a more explicit notation in terms of the samples, we have
$\hat{a}=\frac{1}{N(\sum_{n=0}^{N-1}(x[n])^2)-(\sum_{n=0}^{N-1}x[n])^2}[N(\sum_{n=0}^{N-1}x[n]y[n])-(\sum_{n=0}^{N-1}y[n])(\sum_{n=0}^{N-1}x[n])]$
$\hat{b}=\frac{1}{N(\sum_{n=0}^{N-1}(x[n])^2)-(\sum_{n=0}^{N-1}x[n])^2}[(\sum_{n=0}^{N-1}(x[n])^2)(\sum_{n=0}^{N-1}y[n])-(\sum_{n=0}^{N-1}x[n])(\sum_{n=0}^{N-1}x[n]y[n])]$
where the denominator of the fraction in the above two equations is the matrix determinant.
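These closed-form expressions translate directly into code; here is a small NumPy sketch (the names are mine) implementing them:

```python
import numpy as np

def linear_regression(x, y):
    """Least squares estimates of the gradient a and intercept b."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    N = len(x)
    det = N * np.sum(x**2) - np.sum(x)**2            # the matrix determinant
    a_hat = (N * np.sum(x*y) - np.sum(y) * np.sum(x)) / det
    b_hat = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x*y)) / det
    return a_hat, b_hat

print(linear_regression([1, 2, 3, 4], [1.5, 6, 7, 8]))   # (2.05, 0.5)
```

On the data of the worked example further down, this returns the same $\hat{a}=2.05$ and $\hat{b}=0.5$.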
Coefficient of Determination
Once we have calculated $\hat{a}$ and $\hat{b}$, we can see how well the linear model accounts for the data. This is achieved by finding the Coefficient of Determination ($r^2$). This is the ratio of the residual sum of squares ($rss$) to the total sum of squares ($tss$),
$\Large r^2=\frac{rss}{tss}$
and has a maximum value of 1, which is attained when the data fit the linear model exactly. The larger $r^2$ is, the better the linear model fits the data. We can think of $r^2$ as the proportion of variation in the data explained by the linear model, or the "goodness of fit" of the linear model.
The total sum of squares is given by
$tss=\sum_{n=0}^{N-1}(y[n]-\bar{y})^2$
where
$\bar{y}=\frac{1}{N}\sum_{n=0}^{N-1}y[n]$ is the mean of $y$.
The residual sum of squares ($rss$) is the total sum of squares ($tss$) minus the sum of squared errors ($sse$)
where
$sse=\sum_{n=0}^{N-1}(y[n]-\hat{a}x[n]-\hat{b})^2$
If the data perfectly fits the linear model, $sse$ will equal 0, so that $tss$ will equal $rss$ - the coefficient of determination $r^2$ will be $1$.
Summarising, we have
$\Large r^2=\frac{\sum_{n=0}^{N-1}(y[n]-\bar{y})^2-\sum_{n=0}^{N-1}(y[n]-\hat{a}x[n]-\hat{b})^2}{\sum_{n=0}^{N-1}(y[n]-\bar{y})^2}$
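Continuing the sketch above (again, the names are mine), the coefficient of determination can be computed from the fitted estimates:

```python
import numpy as np

def r_squared(x, y, a_hat, b_hat):
    """Coefficient of determination of the fitted line y = a*x + b."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    tss = np.sum((y - y.mean())**2)                  # total sum of squares
    sse = np.sum((y - a_hat*x - b_hat)**2)           # sum of squared errors
    return (tss - sse) / tss                         # rss / tss

print(r_squared([1, 2, 3, 4], [1.5, 6, 7, 8], 2.05, 0.5))   # ~0.8511
```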
A worked example
Now, let us consider the following example:-
$x=[1,2,3,4]$, $y=[1.5,6,7,8]$
The matrix determinant will be $(4\times (1 + 2^2+3^2+4^2))-(1+2+3+4)^2=20$.
The least squares estimate of $a$ is
$\hat{a}=[4\times((1\times 1.5) + (2\times 6) + (3\times 7) + (4\times 8)) \\ -( (1.5+6+7+8)\times(1+2+3+4))]/20 \\ =2.05$
The least squares estimate of $b$ is
$\hat{b}=[(1+2^2+3^2+4^2)\times(1.5+6+7+8)-((1+2+3+4)\times((1\times 1.5) \\+ (2\times 6) + (3\times 7) +(4\times 8)))]/20 \\ = (675 - 665)/20 \\ =0.5$
The $tss$ is
$tss=(1.5-5.625)^2+(6-5.625)^2+(7-5.625)^2+(8-5.625)^2=24.688$
where the average of $y$ is $5.625$.
The $sse$ is
$(1.5 - (2.05\times 1) - 0.5)^2+(6 - (2.05\times 2) - 0.5)^2\\+(7 - (2.05\times 3) - 0.5)^2+(8 - (2.05\times 4) - 0.5)^2=3.675$
So that the $rss$ is $24.688-3.675=21.013$.
The coefficient of determination is
$r^2=21.013/24.688=0.8511$
This result is quite high, and is fairly indicative of a linear model.
There is a Linear Regression Calculator in this blog, which can be found here
|
{}
|
uniform matroids and MDS codes
It is known that uniform (resp. paving) matroids correspond to MDS (resp. “almost MDS”) codes. This post explains this connection.
An MDS code is an $[n,k,d]$ linear error-correcting block code $C$ which meets the Singleton bound, $d+k=n+1$. A uniform matroid is a matroid for which all circuits are of size $\geq r(M)+1$, where $r(M)$ is the rank of $M$. Recall, a circuit in a matroid M=(E,J) is a minimal dependent subset of E — that is, a dependent set whose proper subsets are all independent (i.e., all in J).
Consider a linear code $C$ whose check matrix is an $(n-k)\times n$ matrix $H=(\vec{h}_1,\dots , \vec{h}_n)$. The vector matroid M=M[H] is a matroid for which the smallest sized dependency relation among the columns of H is determined by the check relations $c_1\vec{h}_1 + \dots + c_n \vec{h}_n = H\vec{c}=\vec{0}$, where $\vec{c}=(c_1,\dots,c_n)$ is a minimum-weight codeword in C (of weight equal to the minimum distance d). Such a minimum dependency relation of H corresponds to a circuit of M=M[H].
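As an aside, newer releases of Sage ship a built-in matroid library, so (assuming a reasonably recent version) the vector matroid of a check matrix can be constructed directly, without the external matroid_class.sage file discussed in the comments below:

sage: A = matrix(GF(2), [[1,0,0,1,1,0],[0,1,0,1,0,1],[0,0,1,0,1,1]])
sage: M = Matroid(A)
sage: M.rank()
3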
3 thoughts on “uniform matroids and MDS codes”
1. I’m surprised SAGE doesn’t have any matroid routines. I tried yours, but after loading it and running
A = matrix(GF(2), [[1,0,0,1,1,0],[0,1,0,1,0,1],[0,0,1,0,1,1]])
M = vector_matroid(A)
print M
I get
Traceback (click to the left of this block for traceback)
TypeError: ‘list’ object is not callable
Or
M.rank()
Traceback (click to the left of this block for traceback)
TypeError: ‘list’ object is not callable
This on sage 4.4.3
• wdjoyner says:
I get
sage: attach "/Users/davidjoyner/sagefiles/matroid_class.sage"
sage: A = matrix(GF(2), [[1,0,0,1,1,0],[0,1,0,1,0,1],[0,0,1,0,1,1]])
sage: M = vector_matroid(A)
sage: print M
Matroid on base set [0, 1, 2, 3, 4, 5] of rank 3
sage: M.rank()
3
This is using 4.5.1 and the matroid code at
http://boxen.math.washington.edu/home/wdj/sagefiles/matroid_class.sage
2. Thank you, I’ll wait a couple of days to be at home and try it; meanwhile I’m stuck with 4.4.3 on sagenb.com
|
{}
|
First order differential equations on $\mathbb{R}$ of the form $y'(x) = f(x,y(x))$; the equivalent integral equation; existence of approximate solutions of the equation up to error $\epsilon$ by the Cauchy-Euler method; existence and uniqueness of solutions when $f$ is Lipschitz continuous in the second variable; necessary conditions for $f(x,y)$ to be Lipschitz continuous in $y$; Picard's method of solution; higher order differential equations; vector valued ordinary differential equations; reformulation of higher order differential equations as first order vector valued differential equations; the linear vector valued first order differential equation $Y'(x) = A\,Y(x) + C(x)$; the homogeneous case $C = 0$; characteristic values and characteristic vectors of square matrices; solution when $A$ is independent of $x$; linear independence of solutions associated to characteristic values; general solution of the inhomogeneous equation; Peano's approximation method for existence of solutions.
|
{}
|
# ⓘ Plesiohedron. In geometry, a plesiohedron is a special kind of space-filling polyhedron, defined as the Voronoi cell of a symmetric Delone set. Three-dimensiona ..
## ⓘ Plesiohedron
In geometry, a plesiohedron is a special kind of space-filling polyhedron, defined as the Voronoi cell of a symmetric Delone set. Three-dimensional Euclidean space can be completely filled by copies of any one of these shapes, with no overlaps. The resulting honeycomb will have symmetries that take any copy of the plesiohedron to any other copy.
The plesiohedra include such well-known shapes as the cube, hexagonal prism, rhombic dodecahedron, and truncated octahedron. The largest number of faces that a plesiohedron can have is 38.
## 1. Definition
A set $S$ of points in Euclidean space is a Delone set if there exists a number $\varepsilon > 0$ such that every two points of $S$ are at least at distance $\varepsilon$ apart from each other and such that every point of space is within distance $1/\varepsilon$ of at least one point in $S$. So $S$ fills space, but its points never come too close to each other. For this to be true, $S$ must be infinite. Additionally, the set $S$ is symmetric in the sense needed to define a plesiohedron if, for every two points $p$ and $q$ of $S$, there exists a rigid motion of space that takes $S$ to $S$ and $p$ to $q$. That is, the symmetries of $S$ act transitively on $S$.
The Voronoi diagram of any set $S$ of points partitions space into regions called Voronoi cells that are nearer to one given point of $S$ than to any other. When $S$ is a Delone set, the Voronoi cell of each point $p$ in $S$ is a convex polyhedron. The faces of this polyhedron lie on the planes that perpendicularly bisect the line segments from $p$ to other nearby points of $S$.
When $S$ is symmetric as well as being Delone, the Voronoi cells must all be congruent to each other, for the symmetries of $S$ must also be symmetries of the Voronoi diagram. In this case, the Voronoi diagram forms a honeycomb in which there is only a single prototile shape, the shape of these Voronoi cells. This shape is called a plesiohedron. The tiling generated in this way is isohedral, meaning that it not only has a single prototile ("monohedral") but also that any copy of this tile can be taken to any other copy by a symmetry of the tiling.
As with any space-filling polyhedron, the Dehn invariant of a plesiohedron is necessarily zero.
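(As an illustration, one can experiment with this construction numerically. The SciPy sketch below, with a lattice patch and box size of my own choosing, builds part of the body-centred cubic lattice, a symmetric Delone set, and counts the faces of the Voronoi cell of the origin, recovering the 14 faces of the truncated octahedron.)

```python
import numpy as np
from scipy.spatial import Voronoi

# A patch of the body-centred cubic (BCC) lattice around the origin.
pts = np.array([(i + d, j + d, k + d)
                for i in range(-2, 3)
                for j in range(-2, 3)
                for k in range(-2, 3)
                for d in (0.0, 0.5)])
origin = int(np.argmin(np.linalg.norm(pts, axis=1)))   # index of (0, 0, 0)

vor = Voronoi(pts)
# Each Voronoi ridge is a face shared by the cells of two input points, so
# the ridges touching the origin are exactly the faces of its Voronoi cell.
faces = sum(1 for p, q in vor.ridge_points if origin in (p, q))
print(faces)   # 14 -> the truncated octahedron
```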
## 2. Examples
The plesiohedra include the five parallelohedra. These are polyhedra that can tile space in such a way that every tile is symmetric to every other tile by a translational symmetry, without rotation. Equivalently, they are the Voronoi cells of lattices, as these are the translationally-symmetric Delone sets. Plesiohedra are a special case of the stereohedra, the prototiles of isohedral tilings more generally. For this reason (and because Voronoi diagrams are also known as Dirichlet tessellations) they have also been called "Dirichlet stereohedra".
There are only finitely many combinatorial types of plesiohedron. Notable individual plesiohedra include:
• The triangular prism, the prototile of the triangular prismatic honeycomb. More generally, each of the 11 types of Laves tiling of the plane by congruent convex polygons and each of the subtypes of these tilings with different symmetry groups can be realized as the Voronoi cells of a symmetric Delone set in the plane. It follows that the prisms over each of these shapes are plesiohedra. As well as the triangular prisms, these include prisms over certain quadrilaterals, pentagons, and hexagons.
• The trapezo-rhombic dodecahedron, the prototile of the trapezo-rhombic dodecahedral honeycomb and the plesiohedron generated by the hexagonal close-packing
• The triakis truncated tetrahedron, the prototile of the triakis truncated tetrahedral honeycomb and the plesiohedron generated by the diamond lattice
• The five parallelohedra: the cube or more generally the parallelepiped, hexagonal prism, rhombic dodecahedron, elongated dodecahedron, and truncated octahedron.
• The 17-sided Voronoi cells of the Laves graph
• The gyrobifastigium is a stereohedron but not a plesiohedron, because the points at the centers of the cells of its face-to-face tiling where they are forced to go by symmetry have differently-shaped Voronoi cells. However, a flattened version of the gyrobifastigium, with faces made of isosceles right triangles and silver rectangles, is a plesiohedron.
Many other plesiohedra are known. Two different ones with the largest known number of faces, 38, were discovered by crystallographer Peter Engel. For many years the maximum number of faces of a plesiohedron was an open problem, but analysis of the possible symmetries of three-dimensional space has shown that this number is at most 38.
The Voronoi cells of points uniformly spaced on a helix fill space, are all congruent to each other, and can be made to have arbitrarily large numbers of faces. However, the points on a helix are not a Delone set and their Voronoi cells are not bounded polyhedra.
A modern survey is given by Schmitt.
|
{}
|
Chapter 11. Magnetic Forces and Fields
# 11.2 Magnetic Fields and Lines
### Learning Objectives
By the end of this section, you will be able to:
• Define the magnetic field based on a moving charge experiencing a force
• Apply the right-hand rule to determine the direction of a magnetic force based on the motion of a charge in a magnetic field
• Sketch magnetic field lines to understand which way the magnetic field points and how strong it is in a region of space
We have outlined the properties of magnets, described how they behave, and listed some of the applications of magnetic properties. Even though there are no such things as isolated magnetic charges, we can still define the attraction and repulsion of magnets as based on a field. In this section, we define the magnetic field, determine its direction based on the right-hand rule, and discuss how to draw magnetic field lines.
### Defining the Magnetic Field
A magnetic field is defined by the force that a charged particle experiences moving in this field, after we account for the gravitational and any additional electric forces possible on the charge. The magnitude of this force is proportional to the amount of charge q, the speed of the charged particle v, and the magnitude of the applied magnetic field. The direction of this force is perpendicular to both the direction of the moving charged particle and the direction of the applied magnetic field. Based on these observations, we define the magnetic field strength B based on the magnetic force $\stackrel{\to }{\textbf{F}}$ on a charge q moving at velocity $\stackrel{\to }{\textbf{v}}$ as the cross product of the velocity and magnetic field, that is,
$\stackrel{\to }{\textbf{F}}=q\stackrel{\to }{\textbf{v}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\stackrel{\to }{\textbf{B}}.$
In fact, this is how we define the magnetic field $\stackrel{\to }{\textbf{B}}$—in terms of the force on a charged particle moving in a magnetic field. The magnitude of the force is determined from the definition of the cross product as it relates to the magnitudes of each of the vectors. In other words, the magnitude of the force satisfies
$F=qvB\phantom{\rule{0.1em}{0ex}}\text{sin}\phantom{\rule{0.1em}{0ex}}\theta$
where θ is the angle between the velocity and the magnetic field.
The SI unit for magnetic field strength B is called the tesla (T) after the eccentric but brilliant inventor Nikola Tesla (1856–1943), where
$1\phantom{\rule{0.2em}{0ex}}\text{T}=\frac{1\phantom{\rule{0.2em}{0ex}}\text{N}}{\text{A}·\text{m}}.$
A smaller unit, called the gauss (G), where $1\phantom{\rule{0.2em}{0ex}}\text{G}={10}^{-4}\text{T},$ is sometimes used. The strongest permanent magnets have fields near 2 T; superconducting electromagnets may attain 10 T or more. Earth’s magnetic field on its surface is only about $5\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-5}\text{T},$ or 0.5 G.
### Problem-Solving Strategy: Direction of the Magnetic Field by the Right-Hand Rule
The direction of the magnetic force $\stackrel{\to }{\textbf{F}}$ is perpendicular to the plane formed by $\stackrel{\to }{\textbf{v}}$ and $\stackrel{\to }{\textbf{B}},$ as determined by the right-hand rule-1 (or RHR-1), which is illustrated in Figure 11.4.
1. Orient your right hand so that your fingers curl in the plane defined by the velocity and magnetic field vectors.
2. Using your right hand, sweep from the velocity toward the magnetic field with your fingers through the smallest angle possible.
3. The magnetic force is directed where your thumb is pointing.
4. If the charge was negative, reverse the direction found by these steps.
There is no magnetic force on static charges. However, there is a magnetic force on charges moving at an angle to a magnetic field. When charges are stationary, their electric fields do not affect magnets. However, when charges move, they produce magnetic fields that exert forces on other magnets. When there is relative motion, a connection between electric and magnetic forces emerges—each affects the other.
### Example
#### An Alpha-Particle Moving in a Magnetic Field
An alpha-particle $\left(q=3.2\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-19}\text{C}\right)$ moves through a uniform magnetic field whose magnitude is 1.5 T. The field is directly parallel to the positive z-axis of the rectangular coordinate system of Figure 11.5. What is the magnetic force on the alpha-particle when it is moving (a) in the positive x-direction with a speed of $5.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\text{m/s?}$ (b) in the negative y-direction with a speed of $5.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\text{m/s?}$ (c) in the positive z-direction with a speed of $5.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\text{m/s?}$ (d) with a velocity $\stackrel{\to }{\textbf{v}}=\left(2.0\hat{\textbf{i}}-3.0\hat{\textbf{j}}+1.0\hat{\textbf{k}}\right)\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\text{m/s?}$
#### Strategy
We are given the charge, its velocity, and the magnetic field strength and direction. We can thus use the equation $\stackrel{\to }{\textbf{F}}=q\stackrel{\to }{\textbf{v}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\stackrel{\to }{\textbf{B}}$ or $F=qvB\phantom{\rule{0.1em}{0ex}}\text{sin}\phantom{\rule{0.1em}{0ex}}\theta$ to calculate the force. The direction of the force is determined by RHR-1.
#### Solution
1. First, to determine the direction, start with your fingers pointing in the positive x-direction. Sweep your fingers upward in the direction of magnetic field. Your thumb should point in the negative y-direction. This should match the mathematical answer. To calculate the force, we use the given charge, velocity, and magnetic field and the definition of the magnetic force in cross-product form to calculate:
$\stackrel{\to }{\textbf{F}}=q\stackrel{\to }{\textbf{v}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\stackrel{\to }{\textbf{B}}=\left(3.2\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-19}\text{C}\right)\left(5.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\text{m/s}\phantom{\rule{0.2em}{0ex}}\hat{\textbf{i}}\right)\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\left(1.5\phantom{\rule{0.2em}{0ex}}\text{T}\phantom{\rule{0.2em}{0ex}}\hat{\textbf{k}}\right)=-2.4\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-14}\text{N}\phantom{\rule{0.2em}{0ex}}\hat{\textbf{j}}.$
2. First, to determine the directionality, start with your fingers pointing in the negative y-direction. Sweep your fingers upward in the direction of the magnetic field as in the previous problem. Your thumb should point in the negative x-direction. This should match the mathematical answer. To calculate the force, we use the given charge, velocity, and magnetic field and the definition of the magnetic force in cross-product form to calculate:
$\stackrel{\to }{\textbf{F}}=q\stackrel{\to }{\textbf{v}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\stackrel{\to }{\textbf{B}}=\left(3.2\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-19}\text{C}\right)\left(-5.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\text{m/s}\phantom{\rule{0.2em}{0ex}}\hat{\textbf{j}}\right)\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\left(1.5\phantom{\rule{0.2em}{0ex}}\text{T}\phantom{\rule{0.2em}{0ex}}\hat{\textbf{k}}\right)=-2.4\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-14}\text{N}\phantom{\rule{0.2em}{0ex}}\hat{\textbf{i}}.$
An alternative approach is to use Equation 11.2 to find the magnitude of the force. This applies for both parts (a) and (b). Since the velocity is perpendicular to the magnetic field, the angle between them is 90 degrees. Therefore, the magnitude of the force is:
$F=qvB\phantom{\rule{0.1em}{0ex}}\text{sin}\phantom{\rule{0.1em}{0ex}}\theta =\left(3.2\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-19}\text{C}\right)\left(5.0\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\text{m}\text{/}\text{s}\right)\left(1.5\phantom{\rule{0.2em}{0ex}}\text{T}\right)\text{sin}\left(90\text{°}\right)=2.4\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-14}\text{N.}$
3. Since the velocity and magnetic field are parallel to each other, there is no orientation of your hand that will result in a force direction. Therefore, the force on this moving charge is zero. This is confirmed by the cross product. When you cross two vectors pointing in the same direction, the result is equal to zero.
4. First, to determine the direction, your fingers could point in any orientation; however, you must sweep your fingers upward in the direction of the magnetic field. As you rotate your hand, notice that the thumb can point in any x– or y-direction possible, but not in the z-direction. This should match the mathematical answer. To calculate the force, we use the given charge, velocity, and magnetic field and the definition of the magnetic force in cross-product form to calculate:
$\begin{array}{cc}\hfill \stackrel{\to }{\textbf{F}}& =q\stackrel{\to }{\textbf{v}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\stackrel{\to }{\textbf{B}}=\left(3.2\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-19}\text{C}\right)\left(\left(2.0\hat{\textbf{i}}-3.0\hat{\textbf{j}}+1.0\hat{\textbf{k}}\right)\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\text{m/s}\right)\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\left(1.5\phantom{\rule{0.2em}{0ex}}\text{T}\phantom{\rule{0.2em}{0ex}}\hat{\textbf{k}}\right)\hfill \\ & =\left(-14.4\hat{\textbf{i}}-9.6\hat{\textbf{j}}\right)\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-15}\text{N.}\hfill \end{array}$
This solution can be rewritten in terms of a magnitude and angle in the xy-plane:
$\begin{array}{ccc}\hfill |\stackrel{\to }{\textbf{F}}|& =\hfill & \sqrt{{F}_{x}^{2}+{F}_{y}^{2}}=\sqrt{{\left(-14.4\right)}^{2}+{\left(-9.6\right)}^{2}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-15}\text{N}=1.7\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-14}\text{N}\hfill \\ \hfill \theta & =\hfill & {\text{tan}}^{-1}\left(\frac{{F}_{y}}{{F}_{x}}\right)={\text{tan}}^{-1}\left(\frac{-9.6\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-15}\text{N}}{-14.4\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-15}\text{N}}\right)=34\text{°}\text{.}\hfill \end{array}$
The magnitude of the force can also be calculated using Equation 11.2. The velocity in this question, however, has three components. The z-component of the velocity can be neglected, because it is parallel to the magnetic field and therefore generates no force. The magnitude of the velocity is calculated from the x– and y-components. The angle between the velocity in the xy-plane and the magnetic field along the z-axis is 90 degrees. Therefore, the force is calculated to be:
$\begin{array}{ccc}\hfill |\stackrel{\to }{\textbf{v}}|& =\hfill & \sqrt{{\left(2\right)}^{2}+{\left(-3\right)}^{2}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\frac{\text{m}}{\text{s}}=3.6\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\frac{\text{m}}{\text{s}}\hfill \\ \hfill F& =\hfill & qvB\phantom{\rule{0.1em}{0ex}}\text{sin}\phantom{\rule{0.1em}{0ex}}\theta =\left(3.2\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-19}\text{C}\right)\left(3.6\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{4}\text{m/s}\right)\left(1.5\phantom{\rule{0.2em}{0ex}}\text{T}\right)\text{sin}\left(90\text{°}\right)=1.7\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-14}\text{N.}\hfill \end{array}$
This is the same magnitude of force calculated by unit vectors.
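These results are easy to double-check numerically. The short script below (a minimal NumPy sketch of our own, not part of the original solution) evaluates F = qv × B for part (d) and reproduces the components, magnitude, and angle found above:

```python
import numpy as np

q = 3.2e-19                               # charge of the alpha-particle, C
v = np.array([2.0, -3.0, 1.0]) * 1e4      # velocity, m/s
B = np.array([0.0, 0.0, 1.5])             # field along +z, T

F = q * np.cross(v, B)                    # F = q v x B
print(F)                                  # [-1.44e-14 -9.60e-15  0.00e+00] N

print(np.linalg.norm(F))                  # ~1.7e-14 N, as in the text
print(np.degrees(np.arctan(F[1] / F[0]))) # ~34 degrees in the xy-plane
```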
#### Significance
The cross product in this formula results in a third vector that must be perpendicular to the other two. Other physical quantities, such as angular momentum, also have three vectors that are related by the cross product. Note that typical force values in magnetic force problems are much larger than the gravitational force. Therefore, for an isolated charge, the magnetic force is the dominant force governing the charge’s motion.
Repeat the previous problem with the magnetic field in the x-direction rather than in the z-direction. Check your answers with RHR-1.
Show Solution
a. 0 N; b. $2.4\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-14}\hat{\textbf{k}}\text{N};$ c. $2.4\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-14}\hat{\textbf{j}}\phantom{\rule{0.2em}{0ex}}\text{N};$ d. $\left(4.8\hat{\textbf{j}}+14.4\hat{\textbf{k}}\right)\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-15}\text{N}$
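The same numerical check covers this variation; the sketch below (again our own, assuming NumPy) reproduces all four answers with the field along the x-axis:

```python
import numpy as np

q = 3.2e-19
B = np.array([1.5, 0.0, 0.0])             # field now along +x, T
cases = {
    "a": np.array([5.0e4, 0.0, 0.0]),     # parallel to B -> zero force
    "b": np.array([0.0, -5.0e4, 0.0]),
    "c": np.array([0.0, 0.0, 5.0e4]),
    "d": np.array([2.0, -3.0, 1.0]) * 1e4,
}
for label, v in cases.items():
    print(label, q * np.cross(v, B))      # force components in newtons
```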
### Representing Magnetic Fields
The representation of magnetic fields by magnetic field lines is very useful in visualizing the strength and direction of the magnetic field. As shown in Figure 11.6, each of these lines forms a closed loop, even if not shown by the constraints of the space available for the figure. The field lines emerge from the north pole (N), loop around to the south pole (S), and continue through the bar magnet back to the north pole.
Magnetic field lines have several hard-and-fast rules:
1. The direction of the magnetic field is tangent to the field line at any point in space. A small compass will point in the direction of the field line.
2. The strength of the field is proportional to the closeness of the lines. It is exactly proportional to the number of lines per unit area perpendicular to the lines (called the areal density).
3. Magnetic field lines can never cross, meaning that the field is unique at any point in space.
4. Magnetic field lines are continuous, forming closed loops without a beginning or end. They are directed from the north pole to the south pole.
The last property is related to the fact that the north and south poles cannot be separated. It is a distinct difference from electric field lines, which generally begin on positive charges and end on negative charges or at infinity. If isolated magnetic charges (referred to as magnetic monopoles) existed, then magnetic field lines would begin and end on them.
### Summary
• Charges moving across a magnetic field experience a force determined by $\stackrel{\to }{\textbf{F}}=q\stackrel{\to }{\textbf{v}}\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}\stackrel{\to }{\textbf{B}}.$ The force is perpendicular to the plane formed by $\stackrel{\to }{\textbf{v}}$ and $\stackrel{\to }{\textbf{B}}.$
• The direction of the force on a moving charge is given by right-hand rule 1 (RHR-1): point your fingers in the direction of the velocity and sweep them, within the plane formed by the velocity and magnetic field vectors, toward the magnetic field. Your thumb then points in the direction of the magnetic force on a positive charge.
• Magnetic fields can be pictorially represented by magnetic field lines, which have the following properties:
1. The field is tangent to the magnetic field line.
2. Field strength is proportional to the line density.
3. Field lines cannot cross.
4. Field lines form continuous, closed loops.
• Magnetic poles always occur in pairs of north and south—it is not possible to isolate north and south poles.
### Conceptual Questions
Discuss the similarities and differences between the electrical force on a charge and the magnetic force on a charge.
Show Solution
Both are field dependent. Electrical force is dependent on charge, whereas magnetic force is dependent on current or rate of charge flow.
(a) Is it possible for the magnetic force on a charge moving in a magnetic field to be zero? (b) Is it possible for the electric force on a charge moving in an electric field to be zero? (c) Is it possible for the resultant of the electric and magnetic forces on a charge moving simultaneously through both fields to be zero?
### Problems
What is the direction of the magnetic force on a positive charge that moves as shown in each of the six cases?
Show Solution
a. left; b. into the page; c. up the page; d. no force; e. right; f. down
Repeat previous exercise for a negative charge.
What is the direction of the velocity of a negative charge that experiences the magnetic force shown in each of the three cases, assuming it moves perpendicular to B?
Show Solution
a. right; b. into the page; c. down
Repeat previous exercise for a positive charge.
What is the direction of the magnetic field that produces the magnetic force on a positive charge as shown in each of the three cases, assuming $\stackrel{\to }{\textbf{B}}$ is perpendicular to $\stackrel{\to }{\textbf{v}}$?
Show Solution
a. into the page; b. left; c. out of the page
Repeat previous exercise for a negative charge.
(a) Aircraft sometimes acquire small static charges. Suppose a supersonic jet has a 0.500-μC charge and flies due west at a speed of 660. m/s over Earth’s south magnetic pole, where the $8.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-5}-\text{T}$ magnetic field points straight up. What are the direction and the magnitude of the magnetic force on the plane? (b) Discuss whether the value obtained in part (a) implies this is a significant or negligible effect.
Show Solution
a. $2.64\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-8}\phantom{\rule{0.2em}{0ex}}\text{N};$ north b. The force is very small, so this implies that the effect of static charges on airplanes is negligible.
(a) A cosmic ray proton moving toward Earth at $5.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{7}\text{m/s}$ experiences a magnetic force of $1.70\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-16}\phantom{\rule{0.2em}{0ex}}\text{N}.$ What is the strength of the magnetic field if there is a 45º angle between it and the proton’s velocity? (b) Is the value obtained in part a. consistent with the known strength of Earth’s magnetic field on its surface? Discuss.
An electron moving at $4.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{3}\text{m/s}$ in a 1.25-T magnetic field experiences a magnetic force of $1.40\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-16}\phantom{\rule{0.2em}{0ex}}\text{N}.$ What angle does the velocity of the electron make with the magnetic field? There are two answers.
Show Solution
$10.1\text{°};169.9\text{°}$
(a) A physicist performing a sensitive measurement wants to limit the magnetic force on a moving charge in her equipment to less than $1.00\phantom{\rule{0.2em}{0ex}}×\phantom{\rule{0.2em}{0ex}}{10}^{-12}\phantom{\rule{0.2em}{0ex}}\text{N}.$ What is the greatest the charge can be if it moves at a maximum speed of 30.0 m/s in Earth’s field? (b) Discuss whether it would be difficult to limit the charge to less than the value found in (a) by comparing it with typical static electricity and noting that static is often absent.
### Glossary
gauss
G, unit of the magnetic field strength; $1\phantom{\rule{0.2em}{0ex}}\text{G}={10}^{-4}\text{T}$
magnetic field lines
continuous curves that show the direction of a magnetic field; these lines point in the same direction as a compass points, toward the magnetic south pole of a bar magnet
magnetic force
force applied to a charged particle moving through a magnetic field
right-hand rule-1
using your right hand to determine the direction of either the magnetic force, velocity of a charged particle, or magnetic field
tesla
SI unit for magnetic field: 1 T = 1 N/A-m
|
{}
|
# Changing Row Keys into Normal Rows
I have a dataset with column and row keys:
dsA={<|"keyA" -> <|"key1" -> "a", "key2" -> "b"|>,
"keyB" -> <|"key1" -> "c",
"key2" -> "d"|>|>, <|"keyC" -> <|"key1" -> "a", "key2" -> "b"|>,
"keyB" -> <|"key1" -> "d", "key2" -> "f"|>|>}//Dataset
Because the row keys get in the way of a number of operations (such as joining with other datasets that might have a different number of rows), I want to change the dataset to the following form:
I asked this as an addendum to a different question, but so far no luck. I can create datasets of the values and the keys:
dsValues = dsA[Values]
dsKeys = dsA[Keys]
But I'm not sure how to put the dsKeys column in front of the dsValues table.
Map[KeyValueMap[<|"RowKeys" -> #, #2|>&]] @ dsA
Also
dsA[All, KeyValueMap[<|"RowKeys" -> #, #2|>&]]
same result
Like this:
dsA[Flatten, KeyValueMap[Prepend[#2, "keys" -> #] &]]
?
Thanks to @kglr and @Kuba for their correct answers--I post this longer response to highlight an issue with brackets in datasets that I wasn't aware of. Apparently, you can make datasets in a couple of different ways--with a leading { bracket or without. Let's say we have two datasets (slightly modified versions of the one from my original question):
dsBracket = {<|
"keyA" -> <|"key1" -> "a", "key2" -> "b"|>,
"keyB" -> <|"key1" -> "c", "key2" -> "d"|>,
"keyC" -> <|"key1" -> "a", "key2" -> "b"|>,
"keyD" -> <|"key1" -> "d", "key2" -> "f"|>|>} // Dataset
and
dsNoBracket = <|
"keyA" -> <|"key1" -> "a", "key2" -> "b"|>,
"keyB" -> <|"key1" -> "c", "key2" -> "d"|>,
"keyC" -> <|"key1" -> "a", "key2" -> "b"|>,
"keyD" -> <|"key1" -> "d", "key2" -> "f"|>|> // Dataset
Note that the only difference is the {} on dsBracket.
I didn't realize that my dataset was lacking that bracket because I imported data and manipulated it all within the dataset environment--I'd prefer to ignore the mess! The datasets come out of SemanticImport with the {; the dataset first loses the curly brace when I SelectFirst the result of a GroupBy.
@Kuba's method works for dsBracket, but produces a failure for the dsNoBracket. With @kglr's answer, I realized something weird was happening. Here's what happens with and without the bracket (and my workaround for the issue):
• you can use dsBracket[All,KeyValueMap[<|"RowKeys" -> #, #2|>&]] and dsNoBracket[KeyValueMap[<|"RowKeys" -> #, #2|>&]] for the two cases. – kglr Aug 6 '19 at 21:46
• Well, you get what you asked for :) If the dataset structure is different then you need to adapt the solution. If you take a look at my and @kglr 's results with the original dsA you can apply Normal to see that our results differ. I added Flatten on purpose to get rid of nested lists. At the end the question is, what is your input and desired output exactly? – Kuba Aug 7 '19 at 7:52
|
{}
|
sfcostaper applies cosine tapering to the edges of the input data.
Cosine tapering amounts to multiplying the edges by
$$\sin^2\left(\frac{\pi\,x}{2}\right) = \frac{1-\cos{(\pi\,x)}}{2}$$,
where $x$ is the relative distance from the start of the taper.
sfcostaper works in N dimensions, the size of the taper in #th dimension in samples is specified by nw#=. In the following example from gee/ajt/igrad1 the images are tapered at both edges by specifying nw1=50 nw2=50.
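For intuition, here is a small stand-alone Python sketch of the same weighting (our own illustration, not the sfcostaper source): it builds the $\sin^2$ ramp over nw samples and multiplies it into both edges of a 1-D trace.

```python
import numpy as np

def costaper(data, nw):
    """Multiply both edges of a 1-D array by a sin^2 (cosine) ramp of nw samples."""
    x = np.arange(1, nw + 1) / nw           # relative distance from the edge
    ramp = np.sin(0.5 * np.pi * x) ** 2     # equals (1 - cos(pi x)) / 2
    out = np.asarray(data, dtype=float).copy()
    out[:nw] *= ramp                        # leading edge: ramp rises 0 -> 1
    out[-nw:] *= ramp[::-1]                 # trailing edge: ramp falls 1 -> 0
    return out

tapered = costaper(np.ones(200), nw=50)     # analogous to nw1=50 on the first axis
```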
|
{}
|
## Saturday, February 13, 2010
### Dirichlet: 205th birthday
Johann Peter Gustav Lejeune Dirichlet was born on February 13th, 1805, i.e. 205 years ago, to a father who was a postmaster in Düren, one of the most Western towns in Germany.
At high school, he learned from Georg Ohm. Dirichlet began his research in mathematics with some progress in proving Fermat's Last Theorem for n=5 (partially) and, later, n=14 (fully). He did a lot in number theory, the Riemann zeta function stuff, probability theory, Fourier expansions, and so on. He gave us the modern definition of a function, which is still being used.
He married Rebecca Henriette Mendelssohn Bartholdy, a woman from a highly achieved family of composers and Jewish converts to Christianity.
There are dozens of concepts named after Dirichlet. Many things he co-fathered are usually not named after him: that includes the Voronoi diagrams and the Pigeonhole principle claiming that at least two out of M pigeons have to live together somewhere if you have N boxes for them and N is smaller than M. ;-) Like other things, this leads to some controversial conclusions when applied to infinite sets (and the numbers are cardinal numbers).
But string theorists surely know him for the Dirichlet branes that were discovered more than 130 years after his death, mostly by Joe Polchinski. They are named after the Dirichlet boundary conditions (x=0 or another constant at the boundary value of sigma).
Note the striking difference between the complexity of Dirichlet's given names - Johann Peter Gustav Lejeune - and the simplicity of the given name of Polchinski, who superseded him - Joe. In some respects, civilization has made a lot of progress. ;-) More seriously, Dirichlet studied lots of related issues and I am confident that he would be a good string theorist today if he hadn't died in 1859.
Also, exactly 100 years ago, William Shockley, the main father of the transistor, was born in London (before he was raised in Palo Alto, CA). He did lots of things in condensed matter physics but also became a eugenicist. In the 1980s, he won a lawsuit against some people who called him a "Hitlerite". He received a whopping compensation of USD 1 from them. ;-)
Hat tip: Sarah Kavassalis
|
{}
|
# FS-CXS
- 2019-08-29 [PUBDB-2019-03150] Journal Article: et al., "Observation of a Chirality-Induced Exchange-Bias Effect", Physical Review Applied 12(2), 024047 (2019) [10.1103/PhysRevApplied.12.024047]. Chiral magnetism that manifests in the existence of skyrmions or chiral domain walls offers an alternative way for creating anisotropies in magnetic materials that might have large potential for application in future spintronic devices. Here we show experimental evidence for an alternative type of in-plane exchange-bias effect present at room temperature that is created from a chiral 90° domain wall at the interface of a ferrimagnetic-ferromagnetic $\mathrm{Dy-Co/Ni-Fe}$ bilayer system [...]
- 2019-08-29 [PUBDB-2019-03147] Journal Article: et al., "Direct Visualization of Spatial Inhomogeneity of Spin Stripes Order in $\mathrm{La_{1.72}Sr_{0.28}NiO_{4}}$", Condensed Matter 4(3), 77 (2019) [10.3390/condmat4030077]. In several strongly correlated electron systems, the short range ordering of defects, charge and local lattice distortions are found to show complex inhomogeneous spatial distributions. There is growing evidence that such inhomogeneity plays a fundamental role in the unique functionality of quantum complex materials [...]
- 2019-08-25 [PUBDB-2019-03101] Journal Article: et al., "Anomalous Dynamics of Concentrated Silica-PNIPAm Nanogels", The Journal of Physical Chemistry Letters 10, 5231-5236 (2019) [10.1021/acs.jpclett.9b01690]. We present the structure and dynamics of highly concentrated core-shell nanoparticles composed of a silica core and a poly(N-isopropylacrylamide) (PNIPAm) shell suspended in water. With X-ray photon correlation spectroscopy, we are able to follow dynamical changes over the volume phase transition of PNIPAm at LCST = 32 °C [...]
- 2019-07-26 [PUBDB-2019-02852] Journal Article: et al., "Local orientational order in self-assembled nanoparticle films: the role of ligand composition and salt", Journal of Applied Crystallography 52(4), 777-782 (2019) [10.1107/S1600576719007568]. An X-ray cross-correlation study of the local orientational order in self-assembled films made from PEGylated gold nanoparticles is presented. The local structure of this model system is dominated by four- and sixfold order [...]
- 2019-05-17 [PUBDB-2019-02307] Journal Article: et al., "Compact hard X-ray split-and-delay line for studying ultrafast dynamics at free-electron laser sources", Journal of Synchrotron Radiation 26(4), 1052-1057 (2019) [10.1107/S1600577519004570]. The device is capable of splitting a single X-ray pulse into two fractions to introduce time delays from −5 to 815 ps with femtosecond resolution [...]
- 2019-04-17 [PUBDB-2019-02010] Journal Article: et al., "Monitoring Nanocrystal Self-Assembly in Real Time Using In Situ Small-Angle X-Ray Scattering", Small 15(20), 1900438 (2019) [10.1002/smll.201900438]. Self-assembled nanocrystal superlattices have attracted large scientific attention due to their potential technological applications. However, the nucleation and growth mechanisms of superlattice assemblies remain largely unresolved due to experimental difficulties to monitor intermediate states [...]
- 2019-04-09 [PUBDB-2019-01910] Preprint/Report: et al., "Scientific Opportunities with an X-ray Free-Electron Laser Oscillator" [arXiv:1903.09317]. An X-ray free-electron laser oscillator (XFELO) is a new type of hard X-ray source that would produce fully coherent pulses with meV bandwidth and stable intensity. The XFELO complements existing sources based on self-amplified spontaneous emission (SASE) from high-gain X-ray free-electron lasers (XFEL) that produce ultra-short pulses with broad-band chaotic spectra [...]
- 2019-03-22 [PUBDB-2019-01681] Journal Article: et al., "Spatial and temporal pre-alignment of an X-ray split-and-delay unit by laser light interferometry", Review of Scientific Instruments 90(4), 045106 (2019) [10.1063/1.5089496]. We present a novel experimental setup for performing a precise pre-alignment of a hard X-ray split-and-delay unit based on low coherence light interferometry and high-precision penta-prisms. A split-and-delay unit is a sophisticated perfect crystal-optics device that splits an incoming X-ray pulse into two sub-pulses and generates a controlled time-delay between them [...]
- 2019-03-18 [PUBDB-2019-01592] Journal Article: et al., "In situ small-angle X-ray scattering environment for studying nanocrystal self-assembly upon controlled solvent evaporation", Review of Scientific Instruments 90(3), 036103 (2019) [10.1063/1.5082685]. We present a sample environment for the investigation of nanoparticle self-assembly from a colloidal solution via controlled solvent evaporation using in situ small-angle X-ray scattering. Nanoparticles form ordered superlattices in the evaporative assembly along the X-ray transparent windows of a three-dimensional sample cell [...]
- 2019-01-04 [PUBDB-2019-00039] Dissertation / PhD Thesis: Gruebel, G., "Structure and dynamics of glass-forming fluids", 134 pp. (2018) [10.3204/PUBDB-2019-00039] = Dissertation, University of Hamburg, 2018. Colloidal dispersions are ubiquitous in our daily life and find numerous applications in industry and science. They are in particular used as a model system to study phase transitions in soft matter systems [...]
|
{}
|
# The Count of Monte Cristo
## Chapter 70
The Ball.
It was in the warmest days of July, when in due course of time the Saturday arrived upon which the ball was to take place at M. de Morcerf's. It was ten o'clock at night; the branches of the great trees in the garden of the count's house stood out boldly against the azure canopy of heaven, which was studded with golden stars, but where the last fleeting clouds of a vanishing storm yet lingered. From the apartments on the ground-floor might be heard the sound of music, with the whirl of the waltz and galop, while brilliant streams of light shone through the openings of the Venetian blinds. At this moment the garden was only occupied by about ten servants, who had just received orders from their mistress to prepare the supper, the serenity of the weather continuing to increase. Until now, it had been undecided whether the supper should take place in the dining-room, or under a long tent erected on the lawn, but the beautiful blue sky, studded with stars, had settled the question in favor of the lawn. The gardens were illuminated with colored lanterns, according to the Italian custom, and, as is usual in countries where the luxuries of the table -- the rarest of all luxuries in their complete form -- are well understood, the supper-table was loaded with wax-lights and flowers.
At the time the Countess of Morcerf returned to the rooms, after giving her orders, many guests were arriving, more attracted by the charming hospitality of the countess than by the distinguished position of the count; for, owing to the good taste of Mercedes, one was sure of finding some devices at her entertainment worthy of describing, or even copying in case of need. Madame Danglars, in whom the events we have related had caused deep anxiety, had hesitated about going to Madame de Morcerf's, when during the morning her carriage happened to meet that of Villefort. The latter made a sign, and when the carriages had drawn close together, said, -- "You are going to Madame de Morcerf's, are you not?"
"No," replied Madame Danglars, "I am too ill."
"You are wrong," replied Villefort, significantly; "it is important that you should be seen there."
"Do you think so?" asked the baroness.
"I do."
"In that case I will go." And the two carriages passed on towards their different destinations. Madame Danglars therefore came, not only beautiful in person, but radiant with splendor; she entered by one door at the time when Mercedes appeared at the door. The countess took Albert to meet Madame Danglars. He approached, paid her some well merited compliments on her toilet, and offered his arm to conduct her to a seat. Albert looked around him. "You are looking for my daughter?" said the baroness, smiling.
"I confess it," replied Albert. "Could you have been so cruel as not to bring her?"
"Calm yourself. She has met Mademoiselle de Villefort, and has taken her arm; see, they are following us, both in white dresses, one with a bouquet of camellias, the other with one of myosotis. But tell me" --
"Well, what do you wish to know?"
"Will not the Count of Monte Cristo be here to-night?"
"Seventeen!" replied Albert.
"What do you mean?"
"I only mean that the count seems the rage," replied the viscount, smiling, "and that you are the seventeenth person that has asked me the same question. The count is in fashion; I congratulate him upon it."
"And have you replied to every one as you have to me?"
"Ah, to be sure, I have not answered you; be satisfied, we shall have this lion;' we are among the privileged ones."
"Were you at the opera yesterday?"
"No."
"He was there."
"Ah, indeed? And did the eccentric person commit any new originality?"
"Can he be seen without doing so? Elssler was dancing in the Diable Boiteux;' the Greek princess was in ecstasies. After the cachucha he placed a magnificent ring on the stem of a bouquet, and threw it to the charming danseuse, who, in the third act, to do honor to the gift, reappeared with it on her finger. And the Greek princess, -- will she be here?"
"No, you will be deprived of that pleasure; her position in the count's establishment is not sufficiently understood."
"Wait; leave me here, and go and speak to Madame de Villefort, who is trying to attract your attention."
Albert bowed to Madame Danglars, and advanced towards Madame de Villefort, whose lips opened as he approached. "I wager anything," said Albert, interrupting her, "that I know what you were about to say."
"Well, what is it?"
"If I guess rightly, will you confess it?"
"Yes."
"On my honor."
"You were going to ask me if the Count of Monte Cristo had arrived, or was expected."
"Not at all. It is not of him that I am now thinking. I was going to ask you if you had received any news of Monsieur Franz."
"Yes, -- yesterday."
"What did he tell you?"
"That he was leaving at the same time as his letter."
"Well, now then, the count?"
"The count will come, of that you may be satisfied."
"You know that he has another name besides Monte Cristo?"
"No, I did not know it."
"Monte Cristo is the name of an island, and he has a family name."
"I never heard it."
"Well, then, I am better informed than you; his name is Zaccone."
"It is possible."
"He is a Maltese."
"That is also possible.
"The son of a shipowner."
"Really, you should relate all this aloud, you would have the greatest success."
"He served in India, discovered a mine in Thessaly, and comes to Paris to establish a mineral water-cure at Auteuil."
"Well, I'm sure," said Morcerf, "this is indeed news! Am I allowed to repeat it?"
"Yes, but cautiously, tell one thing at a time, and do not say I told you."
"Why so?"
"Because it is a secret just discovered."
"By whom?"
"The police."
"Then the news originated" --
"At the prefect's last night. Paris, you can understand, is astonished at the sight of such unusual splendor, and the police have made inquiries."
"Well, well! Nothing more is wanting than to arrest the count as a vagabond, on the pretext of his being too rich."
"Indeed, that doubtless would have happened if his credentials had not been so favorable."
"Poor count! And is he aware of the danger he has been in?"
"I think not."
"Then it will be but charitable to inform him. When he arrives, I will not fail to do so."
Just then, a handsome young man, with bright eyes, black hair, and glossy mustache, respectfully bowed to Madame de Villefort. Albert extended his hand. "Madame," said Albert, "allow me to present to you M. Maximilian Morrel, captain of Spahis, one of our best, and, above all, of our bravest officers."
"I have already had the pleasure of meeting this gentleman at Auteuil, at the house of the Count of Monte Cristo," replied Madame de Villefort, turning away with marked coldness of manner. This answer, and especially the tone in which it was uttered, chilled the heart of poor Morrel. But a recompense was in store for him; turning around, he saw near the door a beautiful fair face, whose large blue eyes were, without any marked expression, fixed upon him, while the bouquet of myosotis was gently raised to her lips.
The salutation was so well understood that Morrel, with the same expression in his eyes, placed his handkerchief to his mouth; and these two living statues, whose hearts beat so violently under their marble aspect, separated from each other by the whole length of the room, forgot themselves for a moment, or rather forgot the world in their mutual contemplation. They might have remained much longer lost in one another, without any one noticing their abstraction. The Count of Monte Cristo had just entered.
We have already said that there was something in the count which attracted universal attention wherever he appeared. It was not the coat, unexceptional in its cut, though simple and unornamented; it was not the plain white waistcoat; it was not the trousers, that displayed the foot so perfectly formed -- it was none of these things that attracted the attention, -- it was his pale complexion, his waving black hair, his calm and serene expression, his dark and melancholy eye, his mouth, chiselled with such marvellous delicacy, which so easily expressed such high disdain, -- these were what fixed the attention of all upon him. Many men might have been handsomer, but certainly there could be none whose appearance was more significant, if the expression may be used. Everything about the count seemed to have its meaning, for the constant habit of thought which he had acquired had given an ease and vigor to the expression of his face, and even to the most trifling gesture, scarcely to be understood. Yet the Parisian world is so strange, that even all this might not have won attention had there not been connected with it a mysterious story gilded by an immense fortune.
Meanwhile he advanced through the assemblage of guests under a battery of curious glances towards Madame de Morcerf, who, standing before a mantle-piece ornamented with flowers, had seen his entrance in a looking-glass placed opposite the door, and was prepared to receive him. She turned towards him with a serene smile just at the moment he was bowing to her. No doubt she fancied the count would speak to her, while on his side the count thought she was about to address him; but both remained silent, and after a mere bow, Monte Cristo directed his steps to Albert, who received him cordially. "Have you seen my mother?" asked Albert.
"I have just had the pleasure," replied the count; "but I have not seen your father."
"See, he is down there, talking politics with that little group of great geniuses."
"Indeed?" said Monte Cristo; "and so those gentlemen down there are men of great talent. I should not have guessed it. And for what kind of talent are they celebrated? You know there are different sorts."
"That tall, harsh-looking man is very learned, he discovered, in the neighborhood of Rome, a kind of lizard with a vertebra more than lizards usually have, and he immediately laid his discovery before the Institute. The thing was discussed for a long time, but finally decided in his favor. I can assure you the vertebra made a great noise in the learned world, and the gentleman, who was only a knight of the Legion of Honor, was made an officer."
"Come," said Monte Cristo, "this cross seems to me to be wisely awarded. I suppose, had he found another additional vertebra, they would have made him a commander."
"Very likely," said Albert.
"And who can that person be who has taken it into his head to wrap himself up in a blue coat embroidered with green?"
"Oh, that coat is not his own idea; it is the Republic's, which deputed David* to devise a uniform for the Academicians."
* Louis David, a famous French painter.
"Indeed?" said Monte Cristo; "so this gentleman is an Academician?"
"Within the last week he has been made one of the learned assembly."
"And what is his especial talent?"
"His talent? I believe he thrusts pins through the heads of rabbits, he makes fowls eat madder, and punches the spinal marrow out of dogs with whalebone."
"And he is made a member of the Academy of Sciences for this?"
"But what has the French Academy to do with all this?"
"I was going to tell you. It seems" --
"That his experiments have very considerably advanced the cause of science, doubtless?"
"No; that his style of writing is very good."
"This must be very flattering to the feelings of the rabbits into whose heads he has thrust pins, to the fowls whose bones he has dyed red, and to the dogs whose spinal marrow he has punched out?"
Albert laughed.
"And the other one?" demanded the count.
"That one?"
"Yes, the third."
"The one in the dark blue coat?"
"Yes."
"He is a colleague of the count, and one of the most active opponents to the idea of providing the Chamber of Peers with a uniform. He was very successful upon that question. He stood badly with the Liberal papers, but his noble opposition to the wishes of the court is now getting him into favor with the journalists. They talk of making him an ambassador."
"And what are his claims to the peerage?"
"He has composed two or three comic operas, written four or five articles in the Siecle, and voted five or six years on the ministerial side."
"Bravo, Viscount," said Monte Cristo, smiling; "you are a delightful cicerone. And now you will do me a favor, will you not?"
"What is it?"
"Do not introduce me to any of these gentlemen; and should they wish it, you will warn me." Just then the count felt his arm pressed. He turned round; it was Danglars.
"Ah, is it you, baron?" said he.
"Why do you call me baron?" said Danglars; "you know that I care nothing for my title. I am not like you, viscount; you like your title, do you not?"
"Certainly," replied Albert, "seeing that without my title I should be nothing; while you, sacrificing the baron, would still remain the millionaire."
"Which seems to me the finest title under the royalty of July," replied Danglars.
"Unfortunately," said Monte Cristo, "one's title to a millionaire does not last for life, like that of baron, peer of France, or Academician; for example, the millionaires Franck & Poulmann, of Frankfort, who have just become bankrupts."
"Indeed?" said Danglars, becoming pale.
"Yes; I received the news this evening by a courier. I had about a million in their hands, but, warned in time, I withdrew it a month ago."
"Ah, mon Dieu," exclaimed Danglars, "they have drawn on me for 200,000 francs!"
"Well, you can throw out the draft; their signature is worth five per cent."
"Yes, but it is too late," said Danglars, "I have honored their bills."
"Then," said Monte Cristo, "here are 200,000 francs gone after" --
"Hush, do not mention these things," said Danglars; then, approaching Monte Cristo, he added, "especially before young M. Cavalcanti;" after which he smiled, and turned towards the young man in question. Albert had left the count to speak to his mother, Danglars to converse with young Cavalcanti; Monte Cristo was for an instant alone. Meanwhile the heat became excessive. The footmen were hastening through the rooms with waiters loaded with ices. Monte Cristo wiped the perspiration from his forehead, but drew back when the waiter was presented to him; he took no refreshment. Madame de Morcerf did not lose sight of Monte Cristo; she saw that he took nothing, and even noticed his gesture of refusal.
"Albert," she asked, "did you notice that?"
"What, mother?"
"That the count has never been willing to partake of food under the roof of M. de Morcerf."
"Yes; but then he breakfasted with me -- indeed, he made his first appearance in the world on that occasion."
"But your house is not M. de Morcerf's," murmured Mercedes; "and since he has been here I have watched him."
"Well?"
"Well, he has taken nothing yet."
"The count is very temperate." Mercedes smiled sadly. "Approach him," said she, "and when the next waiter passes, insist upon his taking something."
"But why, mother?"
"Just to please me, Albert," said Mercedes. Albert kissed his mother's hand, and drew near the count. Another salver passed, loaded like the preceding ones; she saw Albert attempt to persuade the count, but he obstinately refused. Albert rejoined his mother; she was very pale.
"Well," said she, "you see he refuses?"
"Yes; but why need this annoy you?"
"You know, Albert, women are singular creatures. I should like to have seen the count take something in my house, if only an ice. Perhaps he cannot reconcile himself to the French style of living, and might prefer something else."
"Oh, no; I have seen him eat of everything in Italy; no doubt he does not feel inclined this evening."
"And besides," said the countess, "accustomed as he is to burning climates, possibly he does not feel the heat as we do."
"I do not think that, for he has complained of feeling almost suffocated, and asked why the Venetian blinds were not opened as well as the windows."
"In a word," said Mercedes, "it was a way of assuring me that his abstinence was intended." And she left the room. A minute afterwards the blinds were thrown open, and through the jessamine and clematis that overhung the window one could see the garden ornamented with lanterns, and the supper laid under the tent. Dancers, players, talkers, all uttered an exclamation of joy -- every one inhaled with delight the breeze that floated in. At the same time Mercedes reappeared, paler than before, but with that imperturbable expression of countenance which she sometimes wore. She went straight to the group of which her husband formed the centre. "Do not detain those gentlemen here, count," she said; "they would prefer, I should think, to breathe in the garden rather than suffocate here, since they are not playing."
"Ah," said a gallant old general, who, in 1809, had sung "Partant pour la Syrie," -- "we will not go alone to the garden."
"Then," said Mercedes, "I will lead the way." Turning towards Monte Cristo, she added, "count, will you oblige me with your arm?" The count almost staggered at these simple words; then he fixed his eyes on Mercedes. It was only a momentary glance, but it seemed to the countess to have lasted for a century, so much was expressed in that one look. He offered his arm to the countess; she took it, or rather just touched it with her little hand, and they together descended the steps, lined with rhododendrons and camellias. Behind them, by another outlet, a group of about twenty persons rushed into the garden with loud exclamations of delight.
|
{}
|
# This topic has 3 expert replies and 0 member replies
AbeNeedsAnswers Master | Next Rank: 500 Posts
Joined
02 Jul 2017
Posted:
192 messages
Followed by:
2 members
1
#### OG All the cells in a particular plant
Sun Sep 17, 2017 3:14 pm
All of the cells in a particular plant start out with the same complement of genes. How then can these cells differentiate and form structures as different as roots, stems, leaves, and fruits? The answer is that only a small subset of the genes in a particular kind of cell are expressed, or turned on, at a given time. This is accomplished by a complex system of chemical messengers that in plants include hormones and other regulatory molecules. Five major hormones have been identified: auxin, abscisic acid, cytokinin, ethylene, and gibberellin. Studies of plants have now identified a new class of regulatory molecules called oligosaccharins.
Unlike the oligosaccharins, the five well-known plant hormones are pleiotropic rather than specific; that is, each has more than one effect on the growth and development of plants. The five have so many simultaneous effects that they are not very useful in artificially controlling the growth of crops. Auxin, for instance, stimulates the rate of cell elongation, causes shoots to grow up and roots to grow down, and inhibits the growth of lateral shoots. Auxin also causes the plant to develop a vascular system, to form lateral roots, and to produce ethylene.
The pleiotropy of the five well-studied plant hormones is somewhat analogous to that of certain hormones in animals. For example, hormones from the hypothalamus in the brain stimulate the anterior lobe of the pituitary gland to synthesize and release many different hormones, one of which stimulates the release of hormones from the adrenal cortex. These hormones have specific effects on target organs all over the body. One hormone stimulates the thyroid gland, for example, another the ovarian follicle cells, and so forth. In other words, there is a hierarchy of hormones. Such a hierarchy may also exist in plants. Oligosaccharins are fragments of the cell wall released by enzymes: different enzymes release different oligosaccharins. There are indications that pleiotropic plant hormones may actually function by activating the enzymes that release these other, more specific chemical messengers from the cell wall.
128. According to the passage, the five well-known plant hormones are not useful in controlling the growth of crops because
(A) it is not known exactly what functions the hormones perform
(B) each hormone has various effects on plants
(C) none of the hormones can function without the others
(D) each hormone has different effects on different kinds of plants
(E) each hormone works on only a small subset of a cell's genes at any particular time
129. The passage suggests that the place of hypothalamic hormones in the hormonal hierarchies of animals is similar to the place of which of the following in plants?
(A) plant cell walls
(B) the complement of genes in each plant cell
(C) a subset of a plant cell's gene complement
(D) the five major hormones
(E) the oligosaccharins
130. The passage suggests that which of the following is a function likely to be performed by an oligosaccharin?
(A) to stimulate a particular plant cell to become part of a plant's root system
(B) to stimulate the walls of a particular cell to produce other oligosaccharins
(C) to activate enzymes that release specific chemical messengers from plant cell walls
(D) to duplicate the gene complement in a particular plant cell
(E) to produce multiple effects on a particular subsystem of plant cells
131. The author mentions specific effects that auxin has on plant development in order to illustrate the
(A) point that some of the effects of plant hormones can be harmful
(B) way in which hormones are produced by plants
(C) hierarchical nature of the functioning of plant hormones
(D) differences among the best-known plant hormones
(E) concept of pleiotropy as it is exhibited by plant hormones
132. According to the passage, which of the following best describes a function performed by oligosaccharins?
(A) regulating the daily functioning of a plant's cells
(B) interacting with one another to produce different chemicals
(C) releasing specific chemical messengers from a plant's cell walls
(D) producing the hormones that cause plant cells to differentiate to perform different functions
(E) influencing the development of a plant's cells by controlling the expression of the cells' genes
133. The passage suggests that, unlike the pleiotropic hormones, oligosaccharins could be used effectively to
(A) trace the passage of chemicals through the walls of cells
(B) pinpoint functions of other plant hormones
(C) artificially control specific aspects of the development of crops
(D) alter the complement of genes in the cells of plants
(E) alter the effects of the five major hormones on plant development
Q128:B
Q129:D
Q130:A
Q131:E
Q132:E
Q133:C
### GMAT/MBA Expert
DavidG@VeritasPrep Legendary Member
Joined
14 Jan 2015
Posted:
2667 messages
Followed by:
120 members
1153
GMAT Score:
770
Fri Oct 13, 2017 10:23 am
Quote:
128. According to the passage, the five well-known plant hormones are not useful in controlling the growth of crops because
(A) it is not known exactly what functions the hormones perform
(B) each hormone has various effects on plants
(C) none of the hormones can function without the others
(D) each hormone has different effects on different kinds of plants
(E) each hormone works on only a small subset of a cell's genes at any particular time
In paragraph 2, we get this sentence about plant hormones: The five have so many simultaneous effects that they are not very useful in artificially controlling the growth of crops.
_________________
Veritas Prep | GMAT Instructor
Veritas Prep Reviews
### GMAT/MBA Expert

DavidG@VeritasPrep Legendary Member
Joined
14 Jan 2015
Posted:
2667 messages
Followed by:
120 members
1153
GMAT Score:
770
Fri Oct 13, 2017 10:25 am
Quote:
129. The passage suggests that the place of hypothalamic hormones in the hormonal hierarchies of animals is similar to the place of which of the following in plants?
(A) plant cell walls
(B) the complement of genes in each plant cell
(C) a subset of a plant cell's gene complement
(D) the five major hormones
(E) the oligosaccharins
The first line of paragraph 3: The pleiotropy of the five well-studied plant hormones is somewhat analogous to that of certain hormones in animals. We then get a discussion of hypothalamic hormones in animals. The answer is D
_________________
Veritas Prep | GMAT Instructor
Veritas Prep Reviews
### GMAT/MBA Expert
DavidG@VeritasPrep Legendary Member
Joined
14 Jan 2015
Posted:
2667 messages
Followed by:
120 members
1153
GMAT Score:
770
Fri Oct 13, 2017 10:33 am
Quote:
130. The passage suggests that which of the following is a function likely to be performed by an oligosaccharin?
(A) to stimulate a particular plant cell to become part of a plant's root system
(B) to stimulate the walls of a particular cell to produce other oligosaccharins
(C) to activate enzymes that release specific chemical messengers from plant cell walls
(D) to duplicate the gene complement in a particular plant cell
(E) to produce multiple effects on a particular subsystem of plant cells
The second and third sentences of the first paragraph: How then can these cells differentiate and form structures as different as roots, stems, leaves, and fruits? The answer is that only a small subset of the genes in a particular kind of cell are expressed, or turned on, at a given time.
The point of the passage is to explain how a plant's regulatory system produces these discrete structures. We learn in the final paragraph that the system works as follows: hormones --> enzymes --> oligosaccharins --> specific effect on target structures. In other words, the oligosaccharins are the answer to the question of what allows cells to differentiate into structures such as roots. The answer is A
_________________
Veritas Prep | GMAT Instructor
Veritas Prep Reviews
|
{}
|
# Strongly-Continuous linear functionals on $\mathcal{B}(H)$
Suppose $H$ is a complex Hilbert space and $$w: \mathcal{B}(H) \longrightarrow \mathbb{C}$$ is a bounded linear functional on $\mathcal{B}(H)$ such that $w$ is continuous even if $\mathcal{B}(H)$ is given the strong operator topology. I am supposed to prove that there exists $c>0$ and $h_1, \dots, h_n \in H$ such that for all $x \in \mathcal{B}(H)$, one has $$|w(x)| \leq c \sqrt{\sum_{i = 1}^n \|x(h_i)\|^2}.$$ I have no idea how to begin. To me, the strong operator topology is given by the family of seminorms $$\{\|x\|_h = \|x(h)\| : h \in H\}.$$
Hint: What do the open sets of the strong operator topology look like (specifically, what is a basis for this topology)? What does this say about $\{x : |w(x)| < 1\}$? – Nate Eldredge May 5 '12 at 5:09
Thanks, this was very helpful. – Jeff May 5 '12 at 6:24
@NateEldredge You could post your comment as answer. – Davide Giraudo May 5 '12 at 9:37
Hint: What do the open sets of the strong operator topology look like (specifically, what is a basis for this topology)? What does this say about $\{x:|w(x)|<1\}$?
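To spell out the hint (our own write-up of the standard argument, not part of the original exchange): since $w$ is continuous for the strong operator topology, the set $\{x : |w(x)| < 1\}$ is SOT-open and contains $0$, so it contains a basic neighborhood of $0$ determined by finitely many of the seminorms above. That is, there exist $\varepsilon > 0$ and $h_1, \dots, h_n \in H$ such that $$\max_{1 \leq i \leq n} \|x(h_i)\| < \varepsilon \implies |w(x)| < 1.$$ Now take any $x$ and set $s(x) = \sqrt{\sum_{i=1}^n \|x(h_i)\|^2}$. If $s(x) > 0$, the operator $y = \frac{\varepsilon}{2 s(x)} x$ satisfies $\|y(h_i)\| \leq \varepsilon/2 < \varepsilon$ for every $i$, hence $|w(y)| < 1$ and by linearity $$|w(x)| \leq \frac{2}{\varepsilon} \sqrt{\sum_{i=1}^n \|x(h_i)\|^2},$$ which is the desired bound with $c = 2/\varepsilon$. If $s(x) = 0$, then $tx$ lies in the neighborhood for every $t > 0$, so $|w(x)| < 1/t$ for all $t$, forcing $w(x) = 0$.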
|
{}
|
Seeking way to tile/pyramid 4 million images using gdal for geoserver WMS?
by PCL Last Updated July 11, 2019 21:22 PM
My scenario consist of 4 million geolocated images each measuring 1536x1536 pixels in size.
What is the most efficient way to pyramid these images?
My end goal is to serve these images using geoserver via WMS requests.
I have tried creating a virtual dataset of all images and putting them into gdal_retile.py. I have also tried merging each tile with a transparent "null" image that covers the globe then pyramid the result. Both methods seem like they will take forever and make me think there is a better way.
I also have the requirement of adding additional images in the future.
Is there an easy way to do this without having to regenerate everything again?
I am new to geoserver and gdal.
If you are in a windows environment you can create a batch file:
@echo off
rem For every .tif under the folder (and all subfolders), build read-only external overviews.
rem "delims=" keeps file names containing spaces intact; %%A is quoted for the same reason.
for /f "delims=" %%A in ('dir "c:\your_Path\*.tif" /b/s') do (
    echo %%A
    "c:\path\to\GDAL\bin\gdaladdo" -ro "%%A" 2 4 8 16 32
)
This is a batch file that says "for this folder and every subfolder, every file with the '.tif' extension run gdaladdo". You will need to change c:\path\to\GDAL\bin to match your GDAL install location, you will also need to change .tif to the actual extension of the files.
Change the c:\your_path\ to where your folder of 5k is and run the batch file. You can run one per CPU thread in a separate cmd window. There are other options that can be added but let's just keep it simple.
important if any of the paths contain spaces you will need to quote them: "c:\path with spaces\files\" or DOS will spit the dummy, if the file names themselves contain spaces you will need to do something in python.
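If you would rather drive the same workflow from Python, the standard GDAL bindings expose equivalent calls; the sketch below (our own, with placeholder paths, and assuming GDAL 2.1+ for gdal.BuildVRT) builds external overviews per tile and a single mosaic VRT over all of them:

```python
import glob
from osgeo import gdal

# Placeholder source folder; adjust the pattern to the real extension.
tiles = glob.glob(r"c:\your_Path\**\*.tif", recursive=True)

# Equivalent of the gdaladdo -ro loop: each dataset is opened read-only,
# so the overviews are written to an external .ovr file next to the tile.
for path in tiles:
    ds = gdal.Open(path)
    ds.BuildOverviews("AVERAGE", [2, 4, 8, 16, 32])
    ds = None  # close and flush

# One virtual mosaic over all tiles, usable as input for further tiling tools.
vrt = gdal.BuildVRT("mosaic.vrt", tiles)
vrt = None
```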
Michael Stimson
May 21, 2014 05:54 AM
|
{}
|
No robots beyond this line
Online communities are hot. Globally recognized examples are easy to give: websites like Facebook and LinkedIn are very popular, manufacturers have online fora where their customers support each other, newspapers let you leave comments on their articles, and you can share everything with tools like Delicious, Digg and Reddit. This development on the Internet enables possibilities which were unknown before. Of course, this also holds for rogues. Spam is a commonly known phenomenon and a global annoyance. Besides spamming unwanted messages by mail, spamming of comment boxes and fora is an issue web programmers have to deal with too. Spamming is often automated, and this is a feature which can be used to counter it. The goal is to identify whether a messenger is a human or a robot.
For this purpose the captcha was invented. Besides sounding nice enough to be a buzzword, captcha is actually short for Completely Automated Public Turing test to tell Computers and Humans Apart, although this is a bit contrived. This means that a captcha is a challenge-response mechanism, but it doesn't need to be an image of distorted text to be copied into a text box, which is the most common form of captcha. Creative new captchas can be found, like a resistor image whose value has to be read.
Technology
Wikipedia mentions a couple of features a captcha must have to qualify as one. A captcha is a challenge-response test between a system and a user which
1. current software is unable to solve accurately,
2. most humans can solve, and
3. does not rely on the type of CAPTCHA being new to the attacker.
The first point marks a captcha as temporary. With increasing processing power and increasing insight into artificial intelligence, challenges we now consider to be captchas might not qualify in the (near) future. Philosophically, this means that captchas are a temporary phenomenon, because mankind will eventually be able to build robots which are at least as intelligent as humans. But for now they'll do.
The second point emphasises the differences we see between humans and robots. This is actually quite interesting, because mankind admits the current limitations of its own intelligence: we are unable to write software that solves 'puzzles' which are easy for humans.
That bridges to the last bullet on the list. Of course you could just add a simple checkbox labeled "Do not check this box if you are human". No attacker would think of a spam protection this weak, and because of that it might just work: the robot stumbling across your comment submission form does not expect such a protection and therefore cannot bypass it. This does not qualify as a captcha, though, because it relies on novelty; once the attacker looks into the protection, it is solved in an instant. A captcha can certainly be an innovative challenge, but it should not rely on being unknown. Security through obscurity is not security at all.
Processing power and algorithms
As time passes, technology advances, resulting in more processing power (in both processor quantity and quality) and further mathematical developments. These are the engine propelling artificial intelligence. On the other hand, with this field of computer science developing, the struggle to make captchas that tell humans and computers apart becomes harder and harder. Who knows when software becomes advanced enough not only to solve puzzles or identify puzzle types, but to be truly intelligent and thereby able to solve a puzzle without knowing its rules in advance? 20 years? 30? 5?!
Usefulness
Captcha’s can be useful too. The Recaptcha program for instance helps digitizing books by showing snippets scanned from books which they are unable to parse with their OCR software. This way the snippets are ‘decyphered’ by hundreds of people insuring accuracy and helping the system in which it is implemented to be bot-free.
Other captchas can be useful to the website's theme, such as a math class forum's captcha challenging users with simple math like $3 + 5 =$ or $4 * 3 =$. Another example of such a situated captcha is Adafruit's. Adafruit is a website and webshop for the Arduino, a do-it-yourself programmable microcontroller board. You'll need to 'read' the resistor's value in order to post a comment.
Adafruit's resistor captcha makes you slide the four sliders to match the color code on the resistor above.
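As an illustration, here is a toy sketch in Python of the math-forum style captcha mentioned above; it is purely illustrative and not a hardened spam defense.
import operator
import random

def make_challenge():
    # Pick a simple arithmetic question and compute its expected answer.
    ops = {"+": operator.add, "*": operator.mul}
    sym = random.choice(list(ops))
    a, b = random.randint(1, 9), random.randint(1, 9)
    return "%d %s %d = ?" % (a, sym, b), ops[sym](a, b)

question, answer = make_challenge()
print(question)  # e.g. "3 + 5 = ?"
# Compare the user's reply against `answer` before accepting the post.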
|
{}
|
Everything that happens in the world can be described in some way. Our descriptions range from informal and causal to precise and scientific, yet ultimately they all share one underlying characteristic: they carry an abstract idea known as “information” about what is being described. In building complex systems, whether out of people or machines, information sharing is central for building cooperative solutions. However, in any system, the rate at which information can be shared is limited. For example, on Twitter, you’re limited to 140 characters per message. With 802.11g you’re limited to 54 Mbps in ideal conditions. In mobile devices, the constraints go even further: transmitting data on the network requires some of our limited bandwidth and some of our limited energy from the battery.
Obviously this means that we want to transmit our information as efficiently as possible, or, in other words, we want to transmit a representation of the information that consumes the smallest amount of resources, such that the recipient can convert this representation back into a useful or meaningful form without losing any of the information. Luckily, the problem has been studied pretty extensively over the past 60-70 years and the solution is well known.
First, it’s important to realize that compression only matters if we don’t know exactly what we’re sending or receiving beforehand. If I knew exactly what was going to be broadcast on the news, I wouldn’t need to watch it to find out what happened around the world today, so nothing would need to be transmitted in the first place. This means that in some sense, information is a property of things we don’t know or can’t predict fully, and it represents the portion that is unknown. In order to quantify it, we’re going to need some math.
Let’s say I want to tell you what color my car is, in a world where there are only four colors: red, blue, yellow, and green. I could send you the color as an English word with one byte per letter, which would require 3, 4, 5, or 6 bytes, or we could be cleverer about it. Using a pre-arranged scheme for all situations where colors need to be shared, we agree on the convention that the binary values 00, 01, 10, and 11 map to our four possible colors. Suddenly, I can use only two bits (0.25 bytes, far more efficient) to tell you what color my car is, a huge improvement. Generalizing, this suggests that for any set $\chi$ of abstract symbols (colors, names, numbers, whatever), by assigning each a unique binary value, we can transmit a description of some value from the set using $\log_2(|\chi|)$ bits on average, if we have a pre-shared mapping. As long as we use the mapping multiple times it amortizes the initial cost of sharing the mapping, so we’re going to ignore it from here out. It’s also worthwhile to keep this limit in mind as a max threshold for “reasonable;” we could easily create an encoding that is worse than this, which means that we’ve failed quite spectacularly at our job.
But, if there are additional constraints on which symbols appear, we should probably be able to do better. Consider the extreme situation where 95% of cars produced are red, 3% blue, and only 1% each for yellow and green. If I needed to transmit color descriptions for my factory’s production of 10,000 vehicles, using the earlier scheme I’d need exactly 20,000 bits to do so by stringing together all of the colors in a single sequence. But, given that by the law of large numbers, I can expect roughly 9,500 cars to be red, so what if I use a different code, where red is assigned the bit string 0, blue is assigned 10, yellow is assigned 110, and green 111? Even though the representation for two of the colors is a bit longer in this scheme, the total average encoding length for a lot of 10,000 cars decreases to 10,700 bits (1*9500 + 2*300 + 3*100 + 3*100), almost an improvement of 50%! This suggests that the probabilities for each symbol should impact the compression mapping, because if some symbols are more common than others, we can make them shorter in exchange for making less common symbols longer and expect the average length of a message made from many symbols to decrease.
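A quick sketch in Python reproduces this back-of-the-envelope count; the counts and code lengths are taken directly from the example above.
# Expected production counts and the lengths of the codes 0, 10, 110, 111.
counts = {"red": 9500, "blue": 300, "yellow": 100, "green": 100}
code_len = {"red": 1, "blue": 2, "yellow": 3, "green": 3}

fixed = 2 * sum(counts.values())                         # 2 bits/car -> 20000
variable = sum(counts[c] * code_len[c] for c in counts)  # -> 10700
print(fixed, variable)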
So, with that in mind, the next logical question is, how well can we do by adapting our compression scheme to the probability distribution for our set of symbols? And how do we find the mapping that achieves this best amount of compression? Consider a sequence of $n$ independent, identically distributed symbols taken from some source with known probability mass function $p(X=x)$, with $S$ total symbols for which the PMF is nonzero. If $n_i$ is the number of times that the $i$th symbol in the alphabet appears in the sequence, then by the law of large numbers we know that for large $n$ it converges almost surely to a specific value: $\Pr(n_i=np_i)\xrightarrow{n\to \infty}1$.
In order to obtain an estimate of the best possible compression rate, we will use the threshold for reasonable compression identified earlier: it should, on average, take no more than approximately $\log_2(|\chi|)$ bits to represent a value from a set $\chi$, so by finding the number of possible sequences, we can bound how many bits it would take to describe them. A further consequence of the law of large numbers is that because $\Pr(n_i=np_i)\xrightarrow{n\to \infty}1$ we also have $\Pr(n_i\neq np_i)\xrightarrow{n\to \infty}0$. This means that we can expect the set of possible sequences to contain only the possible permutations of a sequence containing $n_i$ realizations of each symbol. The probability of a specific sequence $X^n=x_1 x_2 \ldots x_{n-1} x_n$ can be expanded using the independence of each position and simplified by grouping like symbols in the resulting product:
$P(x^n)=\prod_{k=1}^{n}p(x_k)=\prod_{i=1}^{S} p_i^{n_i}=\prod_{i=1}^{S} p_i^{np_i}$
We still need to find the size of the set $\chi$ in order to find out how many bits we need. However, the probability we found above doesn’t depend on the specific permutation, so it is the same for every element of the set and thus the distribution of sequences within the set is uniform. For a uniform distribution over a set of size $|\chi|$, the probability of a specific element is $\frac{1}{|\chi|}$, so we can substitute the above probability for any element and expand in order to find out how many bits we need for a string of length $n$:
$B(n)=-\log_2(\prod_{i=1}^Sp_i^{np_i})=-n\sum_{i=1}^Sp_i\log_2(p_i)$
Frequently, we’re concerned with the number of bits required per symbol in the source sequence, so we divide $B(n)$ by $n$ to find $H(X)$, a quantity known as the entropy of the source $X$, which has PMF $P(X=x_i)=p_i$:
$H(X) = -\sum_{i=1}^Sp_i\log_2(p_i)$
The entropy, $H(X)$, is important because it establishes the lower bound on the number of bits that is required, on average, to accurately represent a symbol taken from the corresponding source $X$ when encoding a large number of symbols. $H(X)$ is non-negative, but it is not restricted to integers only; however, achieving less than one bit per symbol requires multiple neighboring symbols to be combined and encoded in groups, similarly to the method used above to obtain the expected bit rate. Unfortunately, that process cannot be used in practice for compression, because it requires enumerating an exponential number of strings (as a function of a variable tending towards infinity) in order to assign each sequence to a bit representation. Luckily, two very common, practical methods exist: Huffman coding, which is optimal among symbol-by-symbol prefix codes, and arithmetic coding, which can approach the entropy bound arbitrarily closely.
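As a concrete illustration of the first of these, here is a compact (if not especially efficient) Huffman construction in Python, applied to the car-color distribution from earlier:
import heapq

def huffman(probs):
    # Heap entries: (probability, tiebreak, {symbol: code-so-far}).
    # The unique tiebreak keeps tuple comparison away from the dicts.
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)   # take the two least probable
        p2, _, c2 = heapq.heappop(heap)   # subtrees...
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))  # ...merge and repeat
        tie += 1
    return heap[0][2]

print(huffman({"red": 0.95, "blue": 0.03, "yellow": 0.01, "green": 0.01}))
# -> code lengths 1, 2, 3, 3: the same 1.07 bits/symbol as the hand-built code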
For the car example mentioned earlier, the entropy works out to about 0.35 bits, which means there is significant room for improvement over the symbol-wise mapping I suggested, which only achieved a rate of 1.07 bits per symbol, but it would require grouping multiple car colors into a compound symbol, which quickly becomes tedious when working by hand. It is kind of amazing that using only ~3,500 bits, we could communicate the car colors that naively required 246,400 bits (=30,800 bytes) by encoding each letter of the English word with a single byte.
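The ~0.35 bits/symbol figure is easy to verify numerically:
from math import log2

p = [0.95, 0.03, 0.01, 0.01]
H = -sum(pi * log2(pi) for pi in p)
print(H)  # ~0.3549 bits per symbol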
$H(X)$ also has other applications, including gambling, investing, lossy compression, communications systems, and signal processing, where it is generally used to establish the bounds for best- or worst-case performance. If you’re interested in a more rigorous definition of entropy and a more formal derivation of the bounds on lossless compression, plus some applications, I’d recommend reading Claude Shannon’s original paper on the subject, which effectively created the field of information theory.
The Wiener filter is well known as the optimal solution to the problem of estimating a random process when it is corrupted by another additive process, using only a linear combination of values of the measured process. Mathematically, this means that the Wiener filter constructs an estimator of some original signal $x(t)$ given $z(t)=x(t)+n(t)$ with the property that the mean squared error $E[(\hat{x}(t)-x(t))^2]$ is minimized among all such linear estimators, assuming only that both $x(t)$ and $n(t)$ are stationary and have known statistics (mean, variance, power spectral density, etc.). When more information about the structure of $x(t)$ is known, different estimators may be easier to implement (such as a Kalman filter for signals with a recursive structure).
Such a filter is very powerful—it is optimal, after all—when the necessary statistics are available and the input signals meet the requirements, but in practice, signals of interest are never stationary (rarely even wide sense stationary, although it is a useful approximation), and their statistics change frequently. Rather than going through the derivation of the filter, which is relatively straightforward and available on Wikipedia (linked above), I’d like to talk about how to adapt it to situations that do not meet the filter’s criteria and still obtain high quality results, and then provide a demonstration on one such signal.
The first problem to deal with is the assumption that a signal is stationary. True to form for engineering, the solution is to look at only a brief portion of the signal and approximate it as stationary. This has the unfortunate consequence of preventing us from defining the filter once and reusing it; instead, as the measured signal is sliced into approximately stationary segments, we must estimate the relevant statistics and construct an appropriate filter for each segment. If we do the filtering in the frequency domain, then for segments of length N we are able to do the entire operation with two length N FFTs (one forward and one reverse) and $O(N)$ arithmetic operations (mostly multiplication and division). This is comparable to other frequency domain filters and much faster than the $O(N^2)$ number of operations required for a time domain filter.
This approach creates a tradeoff. Because the signal is not stationary, we want to use short time slices to minimize changes. However, the filter operates by adjusting the amplitude of each bin after a transformation to the frequency domain, so we want as many bins as possible to afford the filter high resolution. Adjusting the sampling rate does not change the frequency resolution for a given amount of time, because the total time duration of any given buffer is $T = N/f_s$. So, for a fixed time duration, the length of the buffer scales proportionally with the sampling rate, and the bin spacing $f_s/N = 1/T$ in an FFT remains constant. The tradeoff, then, exists between how long each time slice will be and how much change in signal parameters we wish to tolerate. A longer time slice weakens the stationary approximation, but it also produces better frequency resolution. Both of these affect the quality of the resulting filtered signal.
The second problem is the assumption that the statistics are known beforehand. If we're trying to do general signal identification, or simply "de-noising" of arbitrary incoming data (say, for example, cleaning up voice recorded from a cell phone's microphone in a windy area, or reducing the effects of thermal noise in a data acquisition circuit), then we don't know what the signal will look like beforehand. The solution here is a little bit more subtle. The normal formulation of the Wiener filter, in the Laplace domain, is
$G(s)= \frac{S_{z,x}(s)}{S_{z}(s)}$
$\hat{X}(s)=G(s) Z(s)$
In this case we assume that the cross-power spectral density, $S_{z,x}(s)$, between the measured process $z(t)$ and the true process $x(t)$ is known, and we assume that the power spectral density, $S_{z}(s)$, of the measured process $z(t)$ is known. In practice, we will estimate $S_{z}(s)$ from measured data, but as the statistics of $x(t)$ are unknown, we don’t know what $S_{z,x}(s)$ is (and can’t measure it directly). But, we do know the statistics of the noise. And, by (reasonable) assumption, the noise and the signal of interest are independent. Therefore, we can calculate several related spectra and make some substitutions into the definition of the original filter.
$S_z(s)=S_x(s)+S_n(s)$
$S_{z,x}(s)=S_x(s)$
If we substitute these into the filter definition to eliminate $S_x(s)$, then we are able to construct an approximation of the filter based on the (known) noise PSD and an estimate of the signal PSD (if the signal PSD were known, it would be exact, but as our PSD estimate contains errors, the filter definition will also contain errors).
$G(s)=\frac{S_z(s)-S_n(s)}{S_z(s)}$
You may ask: if we don't know the signal PSD, how can we know the noise PSD? Realistically, we can't. But, because the noise is stationary, we can construct an experiment to measure it once and then use it later. Simply identify a time when it is known that there is no signal present (e.g. ask the user to be silent for a few seconds), measure the noise, and store it as the noise PSD for future use. Adaptive methods can be used to further refine this approach (but are a topic for another day). It is also worth noting that the noise does not need to be Gaussian, nor does it have any other restrictions on its PSD. It only needs to be stationary, additive, and independent of the signal being estimated. You can exploit this to remove other types of interference as well.
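Putting the pieces together, here is a minimal per-frame sketch in Python/numpy of the approximate filter $G(s)=(S_z(s)-S_n(s))/S_z(s)$, assuming noise_psd was measured during a known-silent stretch; the MATLAB implementation below applies the same operation across a whole recording.
import numpy as np

def wiener_frame(frame, noise_psd):
    spectrum = np.fft.fft(frame)
    signal_psd = np.abs(spectrum) ** 2 / len(frame)  # estimate of S_z
    gain = (signal_psd - noise_psd) / signal_psd     # G = (S_z - S_n) / S_z
    gain = np.maximum(gain, 0.0)                     # clamp negative gains
    return np.real(np.fft.ifft(spectrum * gain))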
One last thing before the demo. Using the PSD to construct the filter like this is subject to a number of caveats. The first is that the variance of each bin in a single PSD estimate is not zero. This is an important result whose consequences merit more detailed study, but the short of it is that the variance of each bin is essentially the same as the variance of each sample from which the PSD was constructed. A remedy for this is to use a more sophisticated method for estimating the PSD by combining multiple more-or-less independent estimates, generally using a windowing function. This reduces the variance and therefore improves the quality of the resulting filter. This, however, has consequences related to the trade-off between time slice length and stationary approximation. Because you must average PSDs computed from (some) different samples in order to reduce the variance, you are effectively using a longer time slice.
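A minimal sketch of such an averaged, windowed PSD estimate (a Welch-style estimate; the 50% overlap and Hann window match the choices used in the demo below):
import numpy as np

def averaged_psd(x, seg_len, overlap=0.5):
    window = np.hanning(seg_len)
    step = int(seg_len * (1 - overlap))
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, step)]
    psds = [np.abs(np.fft.fft(window * s)) ** 2 / seg_len for s in segs]
    return np.mean(psds, axis=0)  # averaging reduces the per-bin variance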
Based on the assigned final project in ECE 4110 at Cornell University, which was to use a Wiener filter to de-noise a recording of Einstein explaining the mass-energy equivalence with added Gaussian white noise of unknown power, I’ve put together a short clip comparing the measured (corrupted) signal, the result after filtering with a single un-windowed PSD estimate to construct the filter, and the result after filtering using two PSD estimates with 50% overlap (and thus an effective length of 1.5x the no-overlap condition) combined with a Hann window to construct the filter. There is a clear improvement in noise rejection using the overlapping PSD estimates, but some of the short vocal transitions are also much more subdued, illustrating the tradeoff very well.
Be warned, the first segment (unfiltered) is quite loud as the noise adds a lot of output power.
Here is the complete MATLAB code used to implement the non-overlapping filter
% Assumes einsteindistort.wav has been loaded with [d, r] = wavread('einsteindistort.wav');
% Anything that can divide the total number of samples evenly
sampleSize = 512;
% Delete old variables
% clf;
clear input;
clear inputSpectrum;
clear inputPSD;
clear noisePSD;
clear sampleNoise;
clear output;
clear outputSpectrum;
clear weinerCoefficients;
% These regions indicate where I have decided there is a large amount of
% silence, so we can extract the noise parameters here.
noiseRegions = [1 10000;
81000 94000;
149000 160000;
240000 257500;
347500 360000;
485000 499000;
632000 645000;
835000 855000;
917500 937500;
1010000 1025000;
1150000 1165000];
% Now iterate over the noise regions and create noise start offsets for
% each one to extract all the possible noise PSDs
noiseStarts = zeros(0, 1); % grown dynamically in the loop below
z = 1;
for k = 1:length(noiseRegions(:,1))
for t = noiseRegions(k,1):sampleSize:noiseRegions(k,2)-sampleSize
noiseStarts(z) = t;
z = z + 1;
end
end
% In an effort to improve the PSD estimate of the noise, average the FFT of
% silent noisy sections in multiple parts of the recording.
noisePSD = zeros(sampleSize, 1);
for n = 1:length(noiseStarts)
sampleNoise = d(noiseStarts(n):noiseStarts(n)+sampleSize-1);
noisePSD = noisePSD + (2/length(noiseStarts)) * abs(fft(sampleNoise)).^2 / sampleSize;
end
% Force the PSD to be flat like white noise, for comparison
% noisePSD = ones(size(noisePSD))*mean(noisePSD);
% Now, break the signal into segments and try to denoise it with a
% noncausal weiner filter.
output = zeros(1, length(d));
for k = 1:length(d)/sampleSize
input = d(1+sampleSize*(k-1):sampleSize*k);
inputSpectrum = fft(input);
inputPSD = abs(inputSpectrum).^2/length(input);
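% Spectral-subtraction Wiener gain G = (S_z - S_n)/S_z; negative values can
% occur when the instantaneous PSD estimate dips below the noise floor, so
% they are clamped to zero below.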
weinerCoefficients = (inputPSD - noisePSD) ./ inputPSD;
weinerCoefficients(weinerCoefficients < 0) = 0;
outputSpectrum = inputSpectrum .* weinerCoefficients;
% Sometimes for small outputs ifft includes an imaginary value
output(1+sampleSize*(k-1):sampleSize*k) = real(ifft(outputSpectrum, 'symmetric'));
end
% Renormalize and write to a file
output = output/max(abs(output));
wavwrite(output, r, 'clean.wav');
To convert this implementation to use 50% overlapping filters, replace the filtering loop (below "Now, break the signal into segments…") with this snippet:
output = zeros(1, length(d));
windowFunc = hann(sampleSize);
k=1;
while sampleSize*(k-1)/2 + sampleSize < length(d)
input = d(1+sampleSize*(k-1)/2:sampleSize*(k-1)/2 + sampleSize);
inputSpectrum = fft(input .* windowFunc);
inputPSD = abs(inputSpectrum).^2/length(input);
weinerCoefficients = (inputPSD - noisePSD) ./ inputPSD;
weinerCoefficients(weinerCoefficients < 0) = 0;
outputSpectrum = inputSpectrum .* weinerCoefficients;
% Sometimes for small outputs ifft includes an imaginary value
output(1+sampleSize*(k-1)/2:sampleSize*(k-1)/2 + sampleSize) = output(1+sampleSize*(k-1)/2:sampleSize*(k-1)/2 + sampleSize) + ifft(outputSpectrum, 'symmetric')';
k = k + 1;
end
The corrupted source file used for the project can be downloaded here for educational use.
This can be adapted to work with pretty much any signal simply by modifying the noiseRegions matrix, which is used to denote the limits of "no signal" areas to use for constructing a noise estimate.
|
{}
|
# Linear Algebra Complex Numbers
The solutions to the equation $z^2-2z+2=0$ are $(a+i)$ and $(b-i)$ where $a$ and $b$ are integers. What is $a+b$?
I simplified and got $(z+1)(z+1) = -1$ and now I'm not sure where to go from there. I did this but I'm not sure.
$(a+i)^2 = a^2 - 1$
$(b-i)^2 = b^2 + 1$
$a+b=(a+i)+(b-i)=(a^2-1)+(b^2+1)=a^2+b^2$
• We know that if $a+i$ is a root, then $a-i$ is a root. Hence $a=b$. Then, $(a-i)+(a+i) = 2$. Or $(a-i)(a+i) = 2$. Why? As for your algebra, $(a+bi)^2 = a^2 + 2abi + b^2i^2 = (a^2-b^2)+2abi \neq a + bi$. Just try $a=1$, $b$ arbitrary. I recommend reviewing the textbook. – Christopher K Jan 31 '14 at 1:30
If you're going to solve this equation by factoring, etc., you do not want to factor and set equal to a non-zero number. That is, doing something like: $$(z+1)(z+1) = -1$$ does not point you in the right direction, because you can only find roots by factoring when you have the equation set equal to zero.
Also, please see Chris K's comments about the other parts of your algebra... in general, $(x+y)^2 \ne x^2 + y^2$. That is, $(a+i)^2 \ne a^2 + i^2$.
Re: The Problem Statement. Because this is a quadratic, we know by the Fundamental Theorem of Algebra that it will have exactly $2$ solutions (counted with multiplicity), and because its coefficients are real, complex solutions occur in conjugate pairs.
We are given that the two solutions are $a+i$ and $b-i$. Since these are conjugate pairs, then $a=b$.
So, our two solutions are $a+i$ and $a-i$. Having one variable makes things simpler.
We also know that \begin{align} (z - (a+i))(z-(a-i)) &=z^2 + \left(-(a+i) - (a-i)\right)z + (a+i)(a-i) \\ &= z^2-2z+2 \end{align}
Equating coefficients, we find that $\left(-(a+i) - (a-i)\right) = -2$ and $(a+i)(a-i) = 2$.
Can you take it from there?
Since this is a polynomial with real coefficients, $z$ is a root if and only if $\overline z$, the complex conjugate of $z$, is. Since $\overline{a + i} = a - i$, we see that $a = b$ necessarily.
Then using that $a + i$ and $a - i$ are roots, we can factor the polynomial as
\begin{align*} z^2 - 2z + 2 &= \Big(z - (a + i)\Big)\Big(z - (a - i)\Big) \\ &= (z - a - i)(z - a + i) \\ &= (z - a)^2 - (i)^2 \\ &= z^2 - 2az + a^2 + 1 \end{align*}
Comparing coefficients, we see that $-2 = -2a$ so that $a = 1$.
Fast way: By expanding $(z - \lambda)(z - \mu)$, we see that the product of the roots is the constant coefficient, and the sum of the roots is negative of the $z$-coefficient. Hence
$$a + b = \Big(a + i\Big) + \Big(b - i\Big) = 2$$
• What happens if it was a-i and b-i? – Karim B Jan 31 '14 at 4:12
• @KarimB As a result of the first line of my answer, it can't be that. – user61527 Jan 31 '14 at 4:16
Hint $\,\ a\!+\!b = (a\!+\!i)+(b\!-\!i)\, =\, \color{#c00}{\rm sum\,\ of\,\ roots}.\,$ By Vieta, this equals the negative of the coeff of $x$, explicitly $(x-\color{#c00}r)(x-\color{#c00}s)\, =\, x^2-(\color{#c00}{r\!+\!s})x+rs.$
|
{}
|