https://www.math.ias.edu/seminars/abstract?event=16424
|
# Arithmetic Cohomology and Special Values of Zeta-Functions (after Geisser)
Topic: Arithmetic Cohomology and Special Values of Zeta-Functions (after Geisser)
Seminar: Motivic Cohomology
Speaker: Stephen Lichtenbaum (Brown University)
Date/Time/Room: Thursday, February 22, 11:00am - 12:00pm / S-101
Geisser gives conjectured formulas for special values of zeta-functions of varieties over finite fields in terms of Euler characteristics of arithmetic cohomology (an improved version of Weil-etale cohomology). He then proves these formulas under the assumptions that a number of well-regarded conjectures are true.
|
2020-07-15 13:10:48
|
|
https://hst-docs.stsci.edu/wfc3ihb/chapter-6-uvis-imaging-with-wfc3/6-12-uvis-observing-strategies
|
# 6.12 UVIS Observing Strategies
## 6.12.1 Dithering Strategies
For imaging programs, STScI generally recommends that observers employ dithering patterns. Dithering refers to the procedure of moving the telescope by pre-determined amounts between individual exposures on a target. The resulting images are subsequently combined via post-observation processing techniques using software such as Drizzlepac.
Use of dithering can provide improved sampling of the point spread function (PSF) and better correction of undesirable artifacts in the images (e.g., hot pixels, cosmic rays, the UVIS channel’s inter-chip gap, and the UVIS “droplets”). Cosmic ray removal is more effective if more than 2 images are obtained, using CR-SPLIT exposures and/or dithers, especially for exposure times greater than 1000s. A sequence of offsets of a few pixels plus a fractional pixel in each coordinate is generally used to simultaneously remove hot pixels and cosmic rays and to sample the PSF. A larger offset along the image Y axis is needed to fill in the interchip gap in full-frame images (the WFC3-UVIS-MOS-DITH-LINE pattern uses a conservative step size of 2.4 arcsec). To ensure the best accuracy consider dithering to compensate for droplets (Section 6.11.5).
Larger offsets, up to sizes approaching the detector’s field of view, can also be used to create mosaics. However, as a result of geometric distortion (Appendix B), some objects shift by an integer number of rows (or columns), while others shift by an integer plus some fraction of a pixel. The PSF is not resampled in that dimension in the former case, but is resampled in the latter case. Where the exposures overlap, the PSF is thus better sampled for some objects than for others. If PSF sampling is important, a combination of mosaic steps and small dither steps should therefore be used. Note that, in practice, mosaic steps must be contained within a diameter ~130 arcsec or less (depending on the availability of guide stars in the region) to use the same guide stars for all exposures. The rms pointing repeatability is significantly less accurate if different guide stars are used for some exposures (see Appendix B of the DrizzlePac Handbook).
The set of Pattern Parameters in the observing proposal provides a convenient means for specifying the desired dither pattern of offsets. The pre-defined mosaic and dither patterns that have been implemented in APT to meet many of the needs outlined above are described in detail in the Phase II Proposal Instructions. The WFC3 patterns in effect in APT at the time of publication of this Handbook are summarized in Appendix C. Observers can define their own patterns to tailor them to the amount of allocated observing time and the desired science goals of the program. Alternatively, they can use POS TARGs to implement dither steps (Section 6.4.3). Observers should note that thermally driven drift of the image on the detector, typically 0.1 to 0.2 pixels per coordinate within one orbit (WFC3 ISR 2012-14), will limit the accuracy of execution of dither patterns.
Dither strategies for WFC3 are further discussed in WFC3 ISR 2010-09, which provides a decision tree for selecting patterns and combining them with subpatterns.
## 6.12.2 Parallel Observations
While the design of WFC3 precludes the simultaneous use of both the UVIS and IR channel, it is possible to use one or more of the other HST instruments in parallel with WFC3. Since each instrument covers a different location in the HST focal plane (see Figure 2.2), parallel observations typically sample an area of sky several arc minutes away from the WFC3 target. For extended targets such as nearby galaxies, parallel observations may be able to sample adjacent regions of the primary target. In other cases, the parallel observations may look at essentially random areas of sky.
For processing and scheduling purposes, HST parallel observations are divided into two groups: coordinated and pure.
A coordinated parallel is an observation directly related to (i.e., coordinated with) a specific primary observation, such as in the extended galaxy example above. A pure parallel is an observation typically unrelated to the primary observation, for example, parallel imaging scheduled during long spectroscopic observations. The primary restriction on parallel observations, both coordinated and pure, is that they must not interfere with the primary observations: they may not cause the primary observations to be shortened; and they must not cause the stored-command capacity and data-volume limits to be exceeded. The proposal software (APT) enforces these rules and notifies the observer when a specified parallel is not permitted.
In order to prolong the life of the HST transmitters, the number of parallels acquired during each proposal cycle is limited. Proposers must provide clear and strong justification in order to be granted parallel observing time. Please refer to the HST Call for Proposals for current policies and procedures concerning parallels.
## 6.12.3 Spatial Scans
Spatial scanning of stellar images upon the UVIS detector creates the potential for astrometry of unprecedented precision. Two representative scientific examples are parallax measurement of Cepheid variable stars (program 12679, Riess P.I.) and the astrometric wobble of a stellar binary (program 12909, Debes P.I.). Results from non-proprietary data of program 12679 (Riess, priv. comm.) indicate that differential astrometry only a few times less precise than the limit set by diffraction and Poisson statistics is attainable. For HST, a 2.4-m telescope, operating at 600 nm, the diffraction limit is Θ ~ λ/D = 51 mas. In the theoretical limit, astrometric precision in one dimension is approximately equal to the FWHM Θ divided by the signal-to-noise ratio, $\small{\sqrt{N}}$, where N is the number of photo-electrons recorded. If we adopt N equal to the full well of the UVIS CCD, ~64,000 e, times a trail of length 4000 pixels, i.e., N = 256 million e, then the theoretical astrometric limit is ~3 microarcsec per exposure. A more conservative estimate of ~13 microarcsec can be derived as follows: the nominal, state-of-the-art astrometric precision of a staring-mode exposure is ~0.01 pixel, so the astrometric precision of a 1000-pixel-long scan could be ~$\small{\sqrt{1000}}$, or ~30, times smaller, which, for the 40 mas WFC3 UVIS pixels, is 13 microarcsec. In 2012 the TAC recommended programs 13101 and 12909, which anticipated a per-exposure precision of 30 to 40 microarcsec. Some data analysis tools for spatial scans are currently being developed by the WFC3 team to aid users in reducing spatial scan data, but for the most part users should expect to develop their own analysis software to reduce images and obtain useful astrometric results (see Riess et al. 2014). Please contact the WFC3 help desk for more information on the analysis tools being developed.
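The conservative ~13 microarcsec estimate can be reproduced in a few lines (a plain-Python sketch; the staring-mode precision, scan length, and pixel scale are the values quoted above):

```python
import math

# Conservative astrometric estimate for a scanned image, using the
# handbook's numbers: ~0.01 pixel staring-mode precision, a 1000-pixel
# scan, and the 40 mas WFC3/UVIS pixel scale.
staring_precision_pix = 0.01   # staring-mode centroid precision (pixels)
scan_length_pix = 1000         # length of the scanned trail (pixels)
pixel_scale_mas = 40.0         # UVIS pixel scale (milliarcsec)

# Averaging ~1000 statistically independent rows along the trail
# improves the centroid by a factor of sqrt(1000) ~ 30.
precision_mas = staring_precision_pix * pixel_scale_mas / math.sqrt(scan_length_pix)
precision_uas = 1000.0 * precision_mas
print(f"~{precision_uas:.0f} microarcsec per scan")
```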
Scans can be made at any angle, although users typically orient the scans approximately but not exactly parallel to rows or to columns of the detector. For example, in order to sample pixel phase, program 12679 prescribed an angle of 90.05 degrees; the extra 0.05 degrees corresponds to a shift of ~1 pixel every 1000 pixels along the trail. In the interest of observing efficiency, this program performed forward and reverse scans alternately. Observers are cautioned that there will be a spatial offset between forward and reverse scans in the scan direction. Forward scans are centered in the frame as predicted in the APT display; reverse scans are offset by an amount that is greater for greater scan rates.
Boustrophedonic (from the Greek, literally, “as an ox turns in plowing”) scans are also possible. In boustrophedonic scans, a.k.a. serpentine scans, the user specifies a set of constant-speed scan lines separated by a specified angular distance, like rows in a farmer’s field. An example is illustrated in Figure 6.22. The advantage is that more scan lines are possible per exposure, which may be more efficient. The trajectory of such scans has been modeled (WFC3 ISR 2017-06).
Spatial scanning could in principle permit more precise photometry than staring mode, by collecting more photons and averaging over more detector pixels. However, actual photometric precision may not approach the theoretical limits due to at least two factors: 1) flat field errors (typically ~0.5%) and 2) shutter-timing non-repeatability (~0.004 s r.m.s. for the UVIS shutter). WFC3 ISR 2017-21 shows that the photometric repeatability of spatial scans is much better than that of staring mode, typically around 0.1-0.5% between visits. Additional preliminary results, to be published in a future ISR, confirm this finding. See Figure 6.23 for a comparison between photometry of staring mode observations of GRW70 versus spatial scans. These data are obtained for calibration programs that monitor the temporal stability of UVIS photometry.
Two examples of utilizing spatial scans for precision time-series photometry are program 14621 (P.I. Wang) and program 15129 (P.I. Burke). These programs take advantage of the orbit-to-orbit and visit-to-visit stability of the UVIS detector.
Attempts to obtain precise photometric time-series within a single exposure, by using the trailed image of a star to record its flux versus time, have not been successful, because the positional feedback loop of the FGS control introduces lateral and longitudinal displacements from an idealized, constant-velocity scan, which results in photometric “flicker” of a few per cent (Figure 6.24). Although differential photometry of two or more stars would mitigate the FGS-induced “flicker”, the two flat-field and shutter factors would remain.
For those preparing a phase II program description, we recommend WFC3 ISR 2012-08 and WFC3 ISR 2017-21. Also, IR imaging with spatial scanning is discussed in Section 7.10.4, and slitless spectroscopy with spatial scanning is discussed in Section 8.6. See Figure 8.9 for a diagram provided in APT to assist observers planning spatial scan observations.
Note: starting in Cycle 24, the Exposure Time Calculator (ETC) supports spatial scanning for UVIS and IR imaging and IR spectroscopy. (See WFC3 STAN issue 22.)
## 6.12.4 PSF Subtraction
UVIS imaging has been shown to be highly effective in detecting faint point sources near bright point sources (WFC3 ISR 2011-03). For a variety of narrow, medium, and wide filters, when a high signal-to-noise drizzled image of a star was scaled down by 10 magnitudes and shifted and added to the original image, the simulated faint companion could usually be seen for separations greater than 1.0 arcsec. Based on the annular signal-to-noise of the deep stellar image, 5 sigma detections of companions fainter by two magnitudes could be made at a separation of 0.1 arcsec. Theoretically, companions several magnitudes fainter could be detected at that separation in deeper images, but, in practice, variations in the PSF (point spread function) due to telescope breathing limit the detectability within about 0.3 arcsec of a bright star.
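The magnitude offsets quoted above map to flux ratios through the standard relation $f_{\mathrm{bright}}/f_{\mathrm{faint}} = 10^{0.4\,\Delta m}$; a quick check in plain Python:

```python
def flux_ratio(delta_mag):
    """Flux ratio f_bright / f_faint for a magnitude difference delta_mag."""
    return 10 ** (0.4 * delta_mag)

# A companion 10 magnitudes fainter is 10^4 times fainter in flux;
# one 2 magnitudes fainter is about 6.3 times fainter.
print(flux_ratio(10), flux_ratio(2))
```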
If observers want to use stellar images to subtract the PSF from a target comprised of a point source and an extended source to detect or measure the extended source, they should keep several points in mind:
• UVIS pixels undersample the PSF (Section 6.6), so the stellar and target exposures should be dithered to produce good sampling of the PSF.
• Position drift and reacquisition errors can broaden the PSF (WFC3 ISR 2009-32, WFC3 ISR 2012-14).
• If a single guide star is used for a visit, roll angle drift causes a rotation of the target around that star, which in turn introduces a small translational drift of the target on the detector. In recent years, as gyroscopes have failed and been replaced, the typical roll angle drift rate has increased from 1.5 mas/sec to ~17 mas/sec, producing a translation of up to 60 mas (1.5 UVIS pixel) in 1000 sec.
• The characteristics of the PSF depend on the focus, which generally changes measurably during an orbit; its range in a particular orbit will not be known in advance (WFC3 ISR 2012-14, WFC3 ISR 2013-11).
• The characteristics of the PSF vary with location on the detector (e.g., see ACS ISR 2003-06, WFC3 ISR 2013-11); PSFs near the A amplifier on UVIS1 are noticeably elongated by astigmatism (WFC3 ISR 2013-11, WFC3 ISR 2013-13).
• More than one exposure time may be needed to produce an image that is unsaturated in the core and has good signal-to-noise to the desired radius.
• For exposures shorter than about 10 seconds, the UVIS PSF will be affected by vibration of the shutter (WFC3 ISR 2009-20). In some cases, use of the APT exposure-level option BLADE=A may be justified (Section 6.11.4).
While Tiny Tim modeling is available for the WFC3 UVIS detector, it has not been optimized to reproduce observed PSFs. See Section 6.6.4 for a discussion of on-going work to provide PSF models to observers.
## 6.12.5 The Drift and Shift (DASH) Observing Strategy
The term DASH (for “drift-and-shift”, Momcheva et al., 2016) has been adopted to describe the observing strategy of taking a series of WFC3/IR exposures of many targets within one orbit while the telescope is being guided under gyroscope control, thus avoiding the overhead cost of acquiring a new pair of guide stars for every slew between targets of greater than about 2 arcmin. A WFC3/IR sample sequence comprised of short exposure times is selected to avoid image smearing within each time step, and the differential samples in one exposure are later aligned and combined to compensate for the greater drift due to gyroscope control. The technique was designed to allow users to carry out shallow large-scale mosaic observations with the WFC3/IR camera, but has since been adapted to efficiently observe a collection of bright targets within a field ~1 deg across with WFC3/IR subarray apertures (which have shorter time steps for a given sample sequence) and WFC3/UVIS subarray apertures. More detailed discussions of HST guiding and considerations for program design are given in Section 7.10.6. A brief discussion of WFC3/UVIS observations is given here.
Given that WFC3/UVIS is read out as a traditional CCD, it is necessary to specify short exposure times in place of the non-destructive reads made in an IR sample sequence in a DASH program. Subarray apertures are used to increase the number of exposures that can be fit into the on-board buffer before a time-consuming serial buffer dump must be made. Note, however, that specifying small subarray apertures is risky, especially towards the end of an orbit, when the high drift rates could cause a target to fall outside the aperture. A mitigating strategy is to specify increasing subarray sizes as the orbit progresses, to allow for the uncertainty in predicting the drift. As discussed in Section 7.10.6, DASH visits should be limited to one orbit, and the exposures in an orbit should be grouped into a non-interruptible sequence container in APT. The first exposure should be taken using guide stars to start the observations with accurate positioning.
Examples of programs that use WFC3/UVIS (and WFC3/IR) subarrays to observe many targets in one orbit are GO programs 14648 and 15146 (P.I. Riess).
|
2020-10-29 08:16:19
|
|
http://mathhelpforum.com/statistics/50504-combinations-permutations-help.html
|
# Math Help - Combinations & Permutations Help
1. ## Combinations & Permutations Help
Hi, I just can't seem to figure this question out so here it is, any help would be greatly appreciated.
Using a standard deck of cards, how many bridge hands (13 cards) contain 5 spades, 2 hearts, 3 diamonds and 3 clubs?
2. Hello, gordomac!
Using a standard deck of cards, how many bridge hands (13 cards)
contain 5 spades, 2 hearts, 3 diamonds and 3 clubs?
${13\choose5}{13\choose2}{13\choose3}{13\choose3} \;=\;\frac{13!}{5!\,8!}\cdot\frac{13!}{2!\,11!}\cdot\frac{13!}{3!\,10!}\cdot\frac{13!}{3!\,10!}$
. . $= \;1287\cdot 78 \cdot 286 \cdot 286 \;=\;8,211,173,256$
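The count is easy to verify in plain Python with `math.comb`:

```python
from math import comb

# Choose 5 of 13 spades, 2 of 13 hearts, 3 of 13 diamonds, 3 of 13 clubs.
hands = comb(13, 5) * comb(13, 2) * comb(13, 3) * comb(13, 3)
print(hands)  # 8211173256
```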
|
2015-05-26 13:55:03
|
|
https://math.stackexchange.com/questions/2941683/binomial-distribution-in-case-of-twin-children
|
# Binomial distribution in case of twin children
I am working on this problem of binomial distribution from this book (exercise 3.9)-
The probability of a twin birth is approximately 1/90, and we can assume that an elementary school will have approximately 60 children entering kindergarten (three classes of 20 each). Explain how our "statistically impossible" event can be thought of as the probability of 5 or more successes from a binomial (60, 1/90).
Given solution: n is taken as 60 and x ranges from 5 to 60, with p = 1/90.
My approach 1
As far as I understand, here the question is about twins, which occur in pairs. So x = 1 means having 1 twin child which makes no sense. Similarly no odd value of x makes sense. All x should be even, so x should be 10, 12, 14,...., 60. But then the sum of probability distribution would not be equal to 1.
Approach 2
Since twins occur in pairs, and total children are 60, the value of n in binomial distribution should be taken as 30. Then x can range from 5 to 30 (so x = 1 will mean 1 twin pair) and this probability distribution will sum to 1.
Please tell me which approach is correct: the book's answer, or one of mine?
• Seems like the exercise is asking you to calculate the probability "5 or more have a twin", and the assumption is they'd be entering together. So n=60, p=1/90. The author is explicit in giving you the distribution – David Peterson Oct 4 '18 at 5:02
$$P(N \text{ births} \mid M \text{ children}) = \frac{P(M \text{ children} \mid N \text{ births})\, P(N \text{ births})}{P(M \text{ children})}, \quad \text{where } P(M \text{ children}) = \sum_{N} P(M \text{ children} \mid N \text{ births})\, P(N \text{ births}).$$
In which case $P(M \text{ children})$ is 1 if M = 60 is given, and $P(M \text{ children} \mid N \text{ births})$ can be computed from the probability of twin births.
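For the event itself, the book's reading (n = 60, p = 1/90, X ≥ 5) can be evaluated directly; a short sketch in plain Python:

```python
from math import comb

n, p = 60, 1 / 90
# P(X >= 5) for X ~ Binomial(60, 1/90): the chance that 5 or more of the
# 60 entering children are twins, computed as 1 minus the lower tail.
p_ge_5 = 1 - sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(5))
print(p_ge_5)
```

The result is small (well under 1%), which is what makes the observed event look "statistically impossible" under the binomial model.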
|
2019-07-24 00:04:00
|
|
https://byjus.com/physics/differences-between-enthalpy-and-entropy/
|
# Difference Between Enthalpy and Entropy
The major difference between enthalpy and entropy is that, although both describe a thermodynamic system, enthalpy is its total heat content whereas entropy is its degree of disorder.
In a closed system, $T\,\Delta S=\Delta H$: the absolute temperature multiplied by the change in entropy gives the change in enthalpy.
## Difference Between Enthalpy and Entropy
Enthalpy is the measure of the total heat present in a thermodynamic system at constant pressure.
It is represented as $\Delta H=\Delta E+P\Delta V$, where E is the internal energy, P the pressure, and V the volume.
Entropy is the measure of disorder in a thermodynamic system. It is represented as $\Delta S=\Delta Q/T$, where Q is the heat content and T is the absolute temperature.
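Both defining relations can be exercised numerically (a minimal sketch; the values of E, P, V, Q, and T below are assumed for illustration, not from the article):

```python
# Enthalpy change from dH = dE + P*dV, entropy change from dS = Q/T.
delta_E = 150.0      # change in internal energy (J) -- assumed value
P = 101_325.0        # constant pressure (Pa), 1 atm
delta_V = 2.0e-4     # change in volume (m^3) -- assumed value

delta_H = delta_E + P * delta_V   # J

Q = 500.0            # heat added reversibly (J) -- assumed value
T = 298.15           # absolute temperature (K)
delta_S = Q / T      # J/K

print(f"dH = {delta_H:.2f} J, dS = {delta_S:.3f} J/K")
```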
| Enthalpy | Entropy |
|---|---|
| Enthalpy is a kind of energy | Entropy is a property |
| It is the sum of internal energy and flow energy | It is a measure of the randomness of molecules |
| It is denoted by the symbol H | It is denoted by the symbol S |
| The term was coined by Heike Kamerlingh Onnes | The term was coined by Rudolf Clausius |
| Its unit is $\mathrm{J\,mol^{-1}}$ | Its unit is $\mathrm{J\,K^{-1}}$ |
| It applies at standard conditions | It has no such restriction |
| The system favours minimum enthalpy | The system favours maximum entropy |
These were some of the differences between enthalpy and entropy.
|
2019-12-12 00:15:26
|
|
https://cs.stackexchange.com/tags/heaps/new
|
# Tag Info
Accepted
### Find the smallest difference between two numbers in a DS in O(1) time
Use an AVL tree with each node having three additional entries $\min,\; \max$, and $\text{closest_pair} = (i,j)$, representing the minimum and maximum values of the tree rooted at that node. At the ...
1 vote
### Find the smallest difference between two numbers in a DS in O(1) time
I'm assuming that the sets of functions in big-Oh notation refer to the desired worst-case time complexity of the operations. A possible data structure consists of an AVL tree $T$ plus a priority ...
1 vote
### Find the smallest difference between two numbers in a DS in O(1) time
I think this can be done using $(a,b)$-trees storing a bit more information in internal nodes: the minimum difference between two leaves in the subtree; the minimum and maximum values of a leaf in ...
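The augmentation described in these answers can be sketched as a recursive computation (an illustrative Python sketch over a plain binary search tree, with names of my own choosing; a real AVL or (a,b)-tree would store the three fields in each node and update them in O(1) per rotation, so the smallest difference is readable at the root in O(1)):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Node:
    key: int
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def augment(node: Optional[Node]) -> Optional[Tuple[int, int, float]]:
    """Return (min, max, smallest pairwise difference) for a BST subtree."""
    if node is None:
        return None
    lo = hi = node.key
    best = float("inf")
    left, right = augment(node.left), augment(node.right)
    if left is not None:
        llo, lhi, lbest = left
        # Best pair is either inside the left subtree, or straddles the
        # boundary between the left subtree's max and this node's key.
        best = min(best, lbest, node.key - lhi)
        lo = llo
    if right is not None:
        rlo, rhi, rbest = right
        best = min(best, rbest, rlo - node.key)
        hi = rhi
    return lo, hi, best

# BST over {3, 10, 14, 21}: adjacent gaps are 7, 4, 7, so the answer is 4.
root = Node(10, Node(3), Node(14, None, Node(21)))
print(augment(root))  # (3, 21, 4)
```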
|
2022-05-27 04:18:14
|
|
https://msl.stanford.edu/bibliography/pierson_bio-inspired_2015
|
### Bio-inspired non-cooperative multi-robot herding
@inproceedings{pierson_bio-inspired_2015,
title = {Bio-inspired non-cooperative multi-robot herding},
isbn = {978-1-4799-6923-4},
url = {http://ieeexplore.ieee.org/document/7139438/},
abstract = {This paper presents a new control strategy to control a group of dog-like robots to drive a herd of noncooperative sheep-like agents to a goal region in the environment. The sheep-like agents, which may be biological or robotic, respond to the presence of the dog-like robots with a repelling potential field common in biological models of the behavior of herding animals. Our key insight in designing control laws for the dog-like robots is to enforce geometrical relationships that allow for the combined dynamics of the dogs and sheep to be mapped to a simple unicycle robot model. We prove convergence of a single sheep to a desired goal region using two or more dogs, and we propose a control strategy for the case of any number of sheep driven by two or more dogs. Simulations in Matlab and hardware experiments with Pololu m3pi robots demonstrate the effectiveness of our control strategy.},
language = {en},
urldate = {2020-09-15},
booktitle = {2015 {IEEE} {International} {Conference} on {Robotics} and {Automation} ({ICRA})},
publisher = {IEEE},
author = {Pierson, Alyssa and Schwager, Mac},
month = may,
year = {2015},
keywords = {cooperative\_planning},
pages = {1843--1849},
month_numeric = {5}
}
|
2022-08-14 10:18:34
|
|
https://privacytools.seas.harvard.edu/publications/year/2012
|
# Publications by Year: 2012
2012
Yevgeniy Dodis, Adriana López-Alt, Ilya Mironov, and Salil Vadhan. 2012. “Differential Privacy with Imperfect Randomness.” In Proceedings of the 32nd International Cryptology Conference (CRYPTO 12), Lecture Notes on Computer Science, 7417: Pp. 497–516. Santa Barbara, CA: Springer-Verlag. Springer LinkAbstract
In this work we revisit the question of basing cryptography on imperfect randomness. Bosley and Dodis (TCC’07) showed that if a source of randomness R is “good enough” to generate a secret key capable of encrypting k bits, then one can deterministically extract nearly k almost uniform bits from R, suggesting that traditional privacy notions (namely, indistinguishability of encryption) requires an “extractable” source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific “non-extractable” sources of randomness, such as the γ-Santha-Vazirani (SV) source, where each next bit has fresh entropy, but is allowed to have a small bias γ < 1 (possibly depending on prior bits). We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC’06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary “low sensitivity” functions that works even with randomness coming from a γ-Santha-Vazirani source, for any γ < 1. This provides a somewhat surprising “separation” between traditional privacy and differential privacy with respect to imperfect randomness. Interestingly, the design of our mechanism is quite different from the traditional “additive-noise” mechanisms (e.g., Laplace mechanism) successfully utilized to achieve differential privacy with perfect randomness. Indeed, we show that any (accurate and private) “SV-robust” mechanism for our problem requires a demanding property called consistent sampling, which is strictly stronger than differential privacy, and cannot be satisfied by any additive-noise mechanism.
Justin Thaler, Jonathan Ullman, and Salil P. Vadhan. 2012. “Faster Algorithms for Privately Releasing Marginals.” In Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012, Lecture Notes in Computer Science, vol. 7391. Warwick, UK: Springer.
We study the problem of releasing k-way marginals of a database D ∈ ({0,1}^d)^n, while preserving differential privacy. The answer to a k-way marginal query is the fraction of D’s records x ∈ {0,1}^d with a given value in each of a given set of up to k columns. Marginal queries enable a rich class of statistical analyses of a dataset, and designing efficient algorithms for privately releasing marginal queries has been identified as an important open problem in private data analysis (cf. Barak et al., PODS ’07). We give an algorithm that runs in time d^{O(√k)} and releases a private summary capable of answering any k-way marginal query with at most ±.01 error on every query as long as n ≥ d^{O(√k)}. To our knowledge, ours is the first algorithm capable of privately releasing marginal queries with non-trivial worst-case accuracy guarantees in time substantially smaller than the number of k-way marginal queries, which is d^{Θ(k)} (for k ≪ d).
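For intuition, the standard additive-noise baseline that such release algorithms improve on can be sketched by answering a single marginal with the Laplace mechanism. This is a minimal illustration, not the paper's d^{O(√k)} algorithm; the database encoding and parameter names are invented for the example:

```python
import math
import random

def marginal(records, cols, vals):
    """Fraction of records matching the given values on the given columns."""
    hits = sum(all(r[c] == v for c, v in zip(cols, vals)) for r in records)
    return hits / len(records)

def laplace(scale):
    """Sample a Laplace(0, scale) variate by inverse-CDF sampling."""
    u = min(max(random.random(), 1e-12), 1 - 1e-12)  # guard against log(0)
    if u < 0.5:
        return scale * math.log(2 * u)
    return -scale * math.log(2 * (1 - u))

def private_marginal(records, cols, vals, eps):
    """A marginal query has sensitivity 1/n, so adding Laplace(1/(n*eps))
    noise makes this single answer eps-differentially private."""
    n = len(records)
    return marginal(records, cols, vals) + laplace(1.0 / (n * eps))

db = [(1, 0, 1), (1, 1, 1), (0, 1, 0), (1, 1, 0)]
print(private_marginal(db, cols=(0, 1), vals=(1, 1), eps=1.0))
```

Answering all d^{Θ(k)} marginals independently this way forces the noise (or the required n) to grow with the number of queries, which is exactly the gap that releasing one compact private summary addresses.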
Anupam Gupta, Aaron Roth, and Jonathan Ullman. 2012. “Iterative Constructions and Private Data Release.” In Theory of Cryptography - 9th Theory of Cryptography Conference, TCC 2012, Lecture Notes in Computer Science, 7194: pp. 339–356. Taormina, Sicily, Italy: Springer.
In this paper we study the problem of approximately releasing the cut function of a graph while preserving differential privacy, and give new algorithms (and new analyses of existing algorithms) in both the interactive and non-interactive settings. Our algorithms in the interactive setting are achieved by revisiting the problem of releasing differentially private, approximate answers to a large number of queries on a database. We show that several algorithms for this problem fall into the same basic framework, and are based on the existence of objects which we call iterative database construction (IDC) algorithms. We give a new generic framework in which new (efficient) IDC algorithms give rise to new (efficient) interactive private query release mechanisms. Our modular analysis simplifies and tightens the analysis of previous algorithms, leading to improved bounds. We then give a new IDC algorithm (and therefore a new private, interactive query release mechanism) based on the Frieze/Kannan low-rank matrix decomposition. This new release mechanism gives an improvement on prior work in a range of parameters where the size of the database is comparable to the size of the data universe (such as releasing all cut queries on dense graphs). We also give a non-interactive algorithm for efficiently releasing private synthetic data for graph cuts with error O(|V|^{1.5}). Our algorithm is based on randomized response and a non-private implementation of the SDP-based, constant-factor approximation algorithm for cut-norm due to Alon and Naor. Finally, we give a reduction based on the IDC framework showing that an efficient, private algorithm for computing sufficiently accurate rank-1 matrix approximations would lead to an improved efficient algorithm for releasing private synthetic data for graph cuts. We leave finding such an algorithm as our main open problem.
Cynthia Dwork, Moni Naor, and Salil Vadhan. 2012. “The Privacy of the Analyst and the Power of the State.” In Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS 12), pp. 400–409. New Brunswick, NJ: IEEE.
We initiate the study of "privacy for the analyst" in differentially private data analysis. That is, not only will we be concerned with ensuring differential privacy for the data (i.e. individuals or customers), which are the usual concern of differential privacy, but we also consider (differential) privacy for the set of queries posed by each data analyst. The goal is to achieve privacy with respect to other analysts, or users of the system. This problem arises only in the context of stateful privacy mechanisms, in which the responses to queries depend on other queries posed (a recent wave of results in the area utilized cleverly coordinated noise and state in order to allow answering privately hugely many queries). We argue that the problem is real by proving an exponential gap between the number of queries that can be answered (with non-trivial error) by stateless and stateful differentially private mechanisms. We then give a stateful algorithm for differentially private data analysis that also ensures differential privacy for the analyst and can answer exponentially many queries.
Michael Kearns, Mallesh Pai, Aaron Roth, and Jonathan Ullman. 2012. “Private Equilibrium Release, Large Games, and No-Regret Learning.” arXiv preprint.
We give mechanisms in which each of n players in a game is given their component of an (approximate) equilibrium in a way that guarantees differential privacy---that is, the revelation of the equilibrium components does not reveal too much information about the utilities of the other players. More precisely, we show how to compute an approximate correlated equilibrium (CE) under the constraint of differential privacy (DP), provided n is large and any player's action affects any other's payoff by at most a small amount. Our results draw interesting connections between noisy generalizations of classical convergence results for no-regret learning, and the noisy mechanisms developed for differential privacy. Our results imply the ability to truthfully implement good social-welfare solutions in many games, such as games with small Price of Anarchy, even if the mechanism does not have the ability to enforce outcomes. We give two different mechanisms for DP computation of approximate CE. The first is computationally efficient, but has a suboptimal dependence on the number of actions in the game; the second is computationally inefficient, but allows for games with exponentially many actions. We also give a matching lower bound, showing that our results are tight up to logarithmic factors.
https://www.gerad.ca/fr/papers/G-91-37
Groupe d’études et de recherche en analyse des décisions
# The Congested Facility Location Problem
## Martin Desrochers, Patrice Marcotte, and M Stan
Consider a network facility location problem where congestion arises at facilities, and is represented by delay functions that approximate the queuing process. We strive to minimize the sum of customers' transportation and waiting times, and facilities' fixed and variable costs. The problem is solved using a column generation technique within a Branch-and-Bound scheme. Numerical results are reported and a bilevel (user-optimized) formulation considered, among other extensions.
, 20 pages
http://openstudy.com/updates/4f162374e4b0d30b2d574235
## anonymous 4 years ago Let $$f$$ be a function satisfying $$f(x+y) = f(x) + f(y) \ \forall x, y$$ and if $$f(x) = x^2 g(x)$$ where $$g(x)$$ is a continuous function, then find $$f'(x)$$.
1. anonymous
i am going to make a guess, that $f'(x)=c$ some constant. not sure what that has to do with $f(x)=x^2g(x)$ but if i recall correctly the only function satisfying the first condition is $f(x)=cx$
2. anonymous
$f'(x)=2xg(x)+x^2g'(x)$
3. anonymous
@zed that is true for any f, right?
4. anonymous
i don't think there are any functions that satisfy $f(x+y)=f(x)+f(y)$ other than constant multiples
5. anonymous
Here, these are the options: 1. g'(x) 2. g(0) 3. g(0) + g'(x) 4. 0
6. anonymous
multiple choice?
7. Zarkon
looks like zero to me
8. anonymous
yeah, but i am clueless
9. anonymous
really? why. i am fairly certain $f'(x)=c$ a constant
10. Zarkon
us the definition of the derivative
11. Zarkon
*use
12. Zarkon
$\frac{f(x+h)-f(x)}{h}=\frac{f(x)+f(h)-f(x)}{h}=\frac{f(h)}{h}$
13. Zarkon
$f(h)=h^2g(h)$
14. Zarkon
$\frac{h^2g(h)}{h}=hg(h)$ take limit as h goes to zero...use squeeze theorem
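Assembling the steps sketched in the thread (using only that $g$ is continuous at $0$):

```latex
\[
f'(x)
  = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}
  = \lim_{h\to 0}\frac{f(h)}{h}
  = \lim_{h\to 0}\frac{h^2 g(h)}{h}
  = \lim_{h\to 0} h\,g(h)
  = 0\cdot g(0) = 0.
\]
```

So the derivative is identically $0$, matching option 4.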
https://tex.stackexchange.com/questions/436497/b%C5%8Dten-wakiten-japanese-emphasis-symbol-in-platex/436561
# Bōten/Wakiten (Japanese emphasis symbol) in [p]LaTeX
This may be a rather obscure matter, but I recently attempted to find an easy way of emphasizing Japanese text using bōten/wakiten (furigana dots) in a document I am writing using [p]LaTeX. So far I have not found any answers on this matter, probably due to the obscurity of this issue and other limitations. Is there any clean way to do this (I could include visually similar marks as furigana, but this fails in the case of expressions that already contain furigana for instance)?
• Could you provide a small example document with some signs that you want to emphasize? Also an example of the desired output (e.g., a screenshot of your pdf with freehand drawing in a Paint program) would help. Jun 15 '18 at 10:53
The pxrubrica package, which originally aimed to provide JLReq-compliant ruby, is the best way to use bōten and kenten. Here is a very basic usage and its output (the optional argument s changes the emphasis mark to the secondary one):
\documentclass{jsarticle}
\usepackage{pxrubrica}
\begin{document}
\kenten{圏点}と\kenten[s]{傍点}。
\end{document}
The package provides several interfaces to customize the kenten features; please refer to the Japanese documentation if you are interested. The package also has English documentation, but it only covers ruby.
https://tex.stackexchange.com/questions/567286/pdfstringdef-turns-accented-characters-into-octal-escape-sequence
# \pdfstringdef turns accented characters into octal escape sequence
The Question: How to make a glowing text?
@'Symbol 1', interesting: I encapsulated your script into a new command (named '\glow') so I could highlight text just like with '\hl{}'. However, the result of '\glow{This is a \hl{test}}' was not satisfactory, because the yellow box was redrawn multiple times on top of the glowing effect, hiding it.
So I tried to modify your solution: print the original string first (say with '\hl{}'), use \pdfstringdef to remove the formatting and get a clean/plain string for the glowing text, then display it one last time on top to get clean text.
Yet \pdfstringdef turns accented characters into octal sequences, as seen here:
How to strip a string of all formatting
Here's a slightly tweaked version with transparency to "respect" highlighting and other background effects. Yet the problem remains for accented characters and even parentheses and the like:
\documentclass{article}
\usepackage{tikz} % Graphics
\usepackage{transparent} % Transparency
%\usepackage[outline]{contour} % Outline
\usepackage{xcolor} % Colors by name
\usepackage{soul} % \hl{}
\usepackage{hyperref} % \pdfstringdef
% =============================================================================
\makeatletter % Allow modification of system macros
% - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
% /!\ USE OF 'PDFSTRINGDEF' TO 'CLEAN' THE TEXT
\newcommand{\glow}[2]{
% Start of 'cleaning' list (add the commands to 'clean' here)
\pdfstringdefDisableCommands{\def\hl{}} % Cleaning of '\hl{#1}'
% End of 'cleaning' list
\pdfstringdef\plainstr{#2} % Extract the 'cleaned' string
% #2%i % Formatted text (background) NOK
\leavevmode % Horizontal mode (no implicit line break)
\pgfsys@beginscope % = pdfliteral{q}
\rlap{#2}%i % Formatted text (background) 1x
\pgfsetroundjoin % = pdfliteral{1 j}
\pgfsetroundcap % = pdfliteral{1 J}
\pdfliteral{1 Tr}%i % no pgf alternative
\foreach\ind in {10, ..., 1}{%i
\pgfmathsetmacro\per{(11-\ind)*5}%i
\iffalse
% Decreasing color
\color{#1!\per}%
\else
% Cumulative transparency
\color{#1}%
\transparent{0.1}% % 10% max
\fi
\iftrue
% 'Light' outline
\pgfsetlinewidth{\ind/2} % light
\else
% 'Heavy' outline
\pgfsetlinewidth{(\ind/2)+1} % heavy
\fi
% \rlap{#2}%i % x times COLOR with formatting NOK
\rlap{\plainstr}%i % x times COLOR without formatting
}%i
\pgfsys@endscope % = pdfliteral{Q}
\plainstr % Unformatted text (on top) BLACK
% #2%i % Formatted text (on top) BLACK
}
% - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
\makeatother % Disallow modification of system macros
% =============================================================================
\begin{document}
COMPILE AT LEAST TWICE !!!
This is a test.
This is another \hl{test}.
\glow{green}{This one is supposed to glow without highlight.}
\glow{cyan}{This one is also supposed to \hl{glow} preserving the highlight.}
pdfstringdefDisableCommands and pdfstringdef removes the formatting.
Then I overwrite the glowing string with a clean black string without hl{}.
\glow{orange}{But accented characters are a hell to deal with.}
\glow{red}{Bùt àccéntèd charaçters are a hell to deal wïth.}
Even parenthesis gets "destroyed" in the process...
\end{document}
Btw, I'm trying to mimic Word's text-glow function:
Any solution?
PS: I cannot add the tags 'latex', 'glow' or 'pdfstringdef' unless I have 300 "reputation".
EDIT: with the solution provided, which works in general (accented characters are rendered correctly), I now get a "feedback" effect. I should investigate. "Not that elementary, my dear Watson."
• well yes that is the purpose of \pdfstringdef: it encodes the string in a way needed inside a pdf. But please make your question self-contained with a proper minimal example. I don't want to wander through various links trying to figure out what you are trying to do. – Ulrike Fischer yesterday
• Edited, hope it helps to understand the issue. And no, I don't need octal representation when I use a UTF-8 file with \usepackage[utf8]{inputenc}, especially from something called \pdfstringdef, or I would have used something like \getoctalsequence, which is more explicit in its intentions. – Kochise yesterday
• If you just want to expand some text to a string try \text_expand:n – Phelype Oleinik yesterday
• \pdfstringdef is the wrong command. It is meant to produce a pdf string. Drop the idea to use it for your goal. – Ulrike Fischer yesterday
• Probably, yet I still need something to strip all formatting to get a raw/plain string. Many people have the same need yet no clear answer to that.\\ I'll try with 'Phelype Oleinik' hint... (gosh, the hell to line feed in a comment) – Kochise yesterday
Try with \text_purify:n (requires a rather current LaTeX!)
\documentclass{article}
\usepackage{tikz} % Graphics
\usepackage{transparent} % Transparency
%\usepackage[outline]{contour} % Outline
\usepackage{xcolor} % Colors by name
\usepackage{soul} % \hl{}
\usepackage{hyperref} % \pdfstringdef
% =============================================================================
\makeatletter % Allow modification of system macros
% - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
\ExplSyntaxOn
\cs_set_eq:NN\textpurify\text_purify:n
\ExplSyntaxOff
% /!\ USE OF 'PDFSTRINGDEF' TO 'CLEAN' THE TEXT
\newcommand{\glow}[2]{
% Start of 'cleaning' list (add the commands to 'clean' here)
%\pdfstringdefDisableCommands{\def\hl{}} % Cleaning of '\hl{#1}'
% End of 'cleaning' list
\edef\plainstr{\textpurify{#2}} % Extract the 'cleaned' string
% #2%i % Formatted text (background) NOK
\leavevmode % Horizontal mode (no implicit line break)
\pgfsys@beginscope % = pdfliteral{q}
\rlap{#2}%i % Formatted text (background) 1x
\pgfsetroundjoin % = pdfliteral{1 j}
\pgfsetroundcap % = pdfliteral{1 J}
\pdfliteral{1 Tr}%i % no pgf alternative
\foreach\ind in {10, ..., 1}{%i
\pgfmathsetmacro\per{(11-\ind)*5}%i
\iffalse
% Decreasing color
\color{#1!\per}%
\else
% Cumulative transparency
\color{#1}%
\transparent{0.1}% % 10% max
\fi
\iftrue
% 'Light' outline
\pgfsetlinewidth{\ind/2} % light
\else
% 'Heavy' outline
\pgfsetlinewidth{(\ind/2)+1} % heavy
\fi
% \rlap{#2}%i % x times COLOR with formatting NOK
\rlap{\plainstr}%i % x times COLOR without formatting
}%i
\pgfsys@endscope % = pdfliteral{Q}
\plainstr % Unformatted text (on top) BLACK
% #2%i % Formatted text (on top) BLACK
}
% - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
\makeatother % Disallow modification of system macros
% =============================================================================
\begin{document}
COMPILE AT LEAST TWICE !!!
This is a test.
This is another \hl{test}.
\glow{green}{This one is supposed to glow without highlight.}
\glow{cyan}{This one is also supposed to \hl{glow} preserving the highlight.}
pdfstringdefDisableCommands and pdfstringdef removes the formatting.
Then I overwrite the glowing string with a clean black string without hl{}.
\glow{orange}{But accented characters are a hell to deal with.}
\glow{red}{Bùt àccéntèd charaçters are a hell to deal wïth.}
Even parenthesis gets "destroyed" in the process...
\end{document}
• Seems like that's the solution, yet a tricky one though :) I tried the '\text_expand:n' path but couldn't get anything to work under pdflatex. How many LaTeX distros are out there, by the way? Seems like more than Linux distros... – Kochise yesterday
• Wow, can't even upvote your answer until I get "15 reputation". What kind of unwelcoming place is that? Seriously? – Kochise yesterday
• I have Windows, but this shouldn't matter, as long as it is current (which on Linux could mean that you need to install a vanilla TeX Live). And it is not more tricky than your pdfstringdef version; both commands are quite similar (and \pdfstringdef is internally a very tricky and long command). And don't worry too much about the voting system, it is not really important if there is an upvote or not. – Ulrike Fischer yesterday
• Ok, thanks for the explanations. I'm still rather new to the LaTeX inner workings; it needs a good refresh/update because it's "tricky" to say the least. Heard something about LaTeX3 being in the pipes though... – Kochise yesterday
• Btw the Word rendering of the similar functionality is still superior IMHO; will have to understand how to create layers with an alpha channel to get something more consistent without all this color bleeding due to transparency superposition :/ – Kochise yesterday
https://www.physicsforums.com/threads/length-contraction-experiment-results.849117/
# Length contraction experiment results
1. Dec 21, 2015
### name123
I was looking at the barn door paradox, http://math.ucr.edu/home/baez/physics/Relativity/SR/barn_pole.html
But suppose that instead of a barn, there was a piece of measuring apparatus in two halves, with a gap in between through which the pole would pass. One half of the measuring apparatus has a laser at each end (these correspond to the barn doors), and the beam of each would be broken by the pole as it passed. The other half (the receiver) has a light detector at each end, each of which detects the beam from one of the lasers. These detectors are linked by fibre-optic cable to a NOR gate situated between them, equidistant from each in the measuring equipment's rest frame, and the NOR gate is connected to a light bulb, such that if, in the rest frame of the measuring apparatus, you were to break both beams simultaneously, there would be a later point in time at which the NOR gate simultaneously received a signal from neither light detector, and would turn the light on.
At first glance I would expect an observer in the measuring equipment's rest frame to predict that the light does not come on, but an observer in the pole's rest frame to predict that it does. But I realise that would mean an experimental difference, so I was wondering what the expected results would be. Would it be something like the NOR gate not being equidistant from each detector in the pole's rest frame?
2. Dec 21, 2015
### Ibix
The NOR gate is equidistant from the barn doors. However, since it is moving in any frame except its rest frame, it is not equidistant from the places the doors occupied when the rod went through them. Thus the signals from the detectors are emitted at different times, but take different amounts of time to reach the NOR gate. They end up arriving simultaneously.
3. Dec 21, 2015
### name123
So are you saying that if there was an observer at the NOR gate (in the measuring equipment's rest frame), and an observer in the middle of the pole (in the pole's rest frame) that if they passed each other at t = t' = 0 that the observer on the pole would measure one detector to be closer to it at t' = 0 than the other detector (both detectors are at rest with respect to the NOR gate)?
4. Dec 21, 2015
### Samy_A
I wonder if one can look at this this way:
In the measuring equipment rest frame, the signals all arrive at the same location, the same x.
The Lorentz transform for the time variable is $t'=\gamma (t-vx)$ (in units where $c=1$).
So in the frame of the pole, the order in which the signals will arrive will be the same as in the other frame, as for two events at x with times $t_1$ and $t_2$, $t_1'-t_2'=\gamma (t_1-t_2)$. So the lamp will not come on in the frame of the pole either.
5. Dec 21, 2015
### Ibix
No - I explicitly said that the NOR gate is equidistant from the barn doors. This is true in all inertial frames. However, the only frame in which the rod triggers both sensors simultaneously is the rest frame of the barn. In all other frames, the sensors are triggered at different times, and the barn is moving.
According to the rod observer, then, at the time t=t'=0, the sensor at the back of the barn has already tripped. But the barn is moving. Both the NOR gate and the sensor that tripped have moved since it tripped - so the NOR gate is more than half the (contracted) length of the barn from the place where the sensor tripped. A little while later, the sensor at the front of the barn trips. But the barn and the NOR gate are moving, so at any time after it trips, the place where it tripped is between the sensor and the NOR gate.
To summarise, the NOR gate is always equidistant from the sensors. It is not equidistant from the place where one of the sensors was at one time and where the other sensor was at another time. The difference in place and time always compensate so that the NOR gate receives signals from the sensors simultaneously, whatever the frame.
6. Dec 21, 2015
### JVNY
The original post suggests that one would not expect the light to come on in the measuring equipment's frame. If the relative speed is great enough that the pole is length-contracted in the measuring equipment's frame to less than the distance between the two sensors in that frame, then the suggestion is correct. The light will not come on, because the sensors will not be tripped at the same time in the measuring apparatus' frame. Therefore the light will not come on in the pole's frame either, so there is no experimental difference.
It is true that the sensors will be tripped at the same time in the pole's frame. But this is not relevant. You have programmed the light system to turn on the light based on events that are simultaneous in the measuring apparatus' frame, not those that are simultaneous in the pole's frame.
If the length-contracted pole is nonetheless long enough (despite its contraction) that it does trip both sensors simultaneously in the apparatus' frame then it will cause the light to come on. Again, this is because the system is programmed to turn on the light based on simultaneity in that frame.
7. Dec 21, 2015
### name123
Ok, thanks, I think I get it. From http://math.ucr.edu/home/baez/physics/Relativity/SR/barn_pole.html as I understand it, adapting it to the light-sensor example where there was an observer at the NOR gate (in the measuring equipment's rest frame) and an observer in the middle of the pole (in the pole's rest frame) that passed each other at t = t' = 0: at t = 0, from the perspective of the observer at the NOR gate, the light sensors are 40m apart (20m away in either direction), but the pole is less than 40m long (less than 20m in either direction), so that neither end of the pole is breaking a light beam. But at t' = 0, where the observer in the middle of the pole is opposite the observer at the NOR gate, the light sensors are 10m away in either direction while the pole extends 40m in either direction, and so both light beams would be broken simultaneously from that perspective.
I assume by the phrase "because the system isn't programmed" you mean that (from the pole's perspective) the NOR gate doesn't fire because the light wouldn't progress along the two fibre-optic cables at the same rate, since the cable is moving in one direction (from the pole's perspective) at near the speed of light. So the light signal progresses more quickly down the 10m of fibre-optic cable from the head sensor than it does down the 10m of fibre-optic cable from the tail sensor (from the pole's perspective). And it would be the same even if it were some lever system, as the "push" or whatever couldn't be observed to propagate faster than the speed of light (which it would, from the pole's perspective, if it took the same time to travel from the head sensor to the NOR gate as from the tail sensor to the NOR gate). I assume even if the signal were to propagate very slowly, so that its propagation speed plus the velocity would still be less than the speed of light, it would still be a similar situation.
8. Dec 22, 2015
### JVNY
By "programmed" I meant that you set up the NOR gate system in a particular way. The gate is placed equidistant from the sensors. The gate turns the light on if two events occur. First, the gate receives a signal from each side at the same place at the same time. Second, at that same place the gate subsequently receives a signal from only one side.
I think that your description of what happens is correct with one exception. You specify more than 50% length contraction (80m own length pole contracted to less than 40m in the apparatus frame), so the sensors are less than 10m away in each direction from the pole observer you describe (not 10m). Also, I think like you that other signals would work the same way, but perhaps the other posters can comment on this.
Last edited: Dec 22, 2015
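The frame-invariance argued in this thread can be checked numerically with a Lorentz transform. This is a sketch with hypothetical numbers (units where c = 1, barn speed v = 0.6 relative to the pole, sensors 20 units from the gate in the barn frame), not tied to the exact figures discussed above:

```python
import math

c = 1.0                      # work in units where c = 1
v = 0.6                      # barn's speed relative to the pole (illustrative)
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
L = 20.0                     # sensor-to-NOR-gate distance in the barn frame

def to_pole_frame(t, x):
    """Lorentz-transform an event (t, x) from the barn frame to the pole frame."""
    return gamma * (t - v * x / c**2), gamma * (x - v * t)

# Barn frame: both sensors trip at t = 0, at x = -L and x = +L; each signal
# travels at c toward the NOR gate at x = 0 and arrives at t = L/c.
trip_back, trip_front = (0.0, -L), (0.0, +L)
arrive = (L / c, 0.0)        # a single event: both signals reach the gate together

t_back, x_back = to_pole_frame(*trip_back)
t_front, x_front = to_pole_frame(*trip_front)
t_arr, x_arr = to_pole_frame(*arrive)

# The trips are NOT simultaneous in the pole frame...
print(t_back, t_front)
# ...but the arrival is one event, so the gate's verdict is frame-invariant:
# each signal still covers exactly c times its elapsed time in the pole frame.
print(abs(x_arr - x_back), c * (t_arr - t_back))
```

The unequal light travel times in the pole frame exactly compensate the unequal trip times, which is the compensation Ibix describes in post #5.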
https://deepai.org/publication/quantile-fourier-transform-quantile-series-and-nonparametric-estimation-of-quantile-spectra
# Quantile Fourier Transform, Quantile Series, and Nonparametric Estimation of Quantile Spectra
A nonparametric method is proposed for estimating the quantile spectra and cross-spectra introduced in Li (2012; 2014) as bivariate functions of frequency and quantile level. The method is based on the quantile discrete Fourier transform (QDFT) defined by trigonometric quantile regression and the quantile series (QSER) defined by the inverse Fourier transform of the QDFT. A nonparametric spectral estimator is constructed from the autocovariance and cross-covariance functions of the QSER using the lag-window (LW) approach. Various quantile smoothing techniques are employed further to reduce the statistical variability of the estimator across quantiles, among which is a new technique called spline quantile regression (SQR). The performance of the proposed estimation method is evaluated through a simulation study.
## 1 Introduction
The concept of quantile spectra and cross-spectra was introduced in Li (2012, 2014) as a result of an asymptotic analysis of the quantile periodograms and cross-periodograms constructed from trigonometric quantile regression. Given $m$ stationary time series $\{y_j(t)\}$ $(j=1,\dots,m)$ with continuous marginal distribution functions $F_j(\cdot)$, let $F_{jj'}(\cdot,\cdot,\tau)$ be the bivariate distribution functions of these series and $\phi_{jj'}(\cdot,\cdot,\tau)$ be their bivariate level-crossing rates. Then, the quantile spectra and cross-spectra of these series at a quantile level $\alpha \in (0,1)$ take the form

$$S_{jj'}(\omega,\alpha) := \eta_j(\alpha)\,\eta_{j'}(\alpha) \sum_{\tau=-\infty}^{\infty} r_{jj'}(\tau,\alpha)\exp(-i\tau\omega) \qquad (0 \le \omega < 2\pi), \tag{1}$$

where

$$r_{jj'}(\tau,\alpha) := 1 - \frac{1}{2\alpha(1-\alpha)}\,\phi_{jj'}\big(F_j^{-1}(\alpha), F_{j'}^{-1}(\alpha), \tau\big) = \frac{1}{\alpha(1-\alpha)}\Big\{F_{jj'}\big(F_j^{-1}(\alpha), F_{j'}^{-1}(\alpha), \tau\big) - \alpha^2\Big\},$$
$$\eta_j(\alpha) := \sqrt{\alpha(1-\alpha)}\,\big/\,\dot F_j\big(F_j^{-1}(\alpha)\big).$$

These quantile spectra and cross-spectra are analogous to the ordinary power spectra and cross-spectra in the sense that the $\eta_j(\alpha)$ take the place of the standard deviations and the $r_{jj'}(\tau,\alpha)$ take the place of the ordinary autocorrelation and cross-autocorrelation functions (Brockwell and Davis 1991). Because the $r_{jj'}(\tau,\alpha)$ coincide with the ordinary autocorrelation and cross-autocorrelation functions of the indicator processes $\{I(y_j(t) \le F_j^{-1}(\alpha))\}$, the quantile spectra and cross-spectra in (1) are closely related to the spectral analysis methods for indicator processes (Davis and Mikosch 2009; Dette et al. 2015; Baruník and Kley 2019). Quantile-frequency analysis (QFA) is destined to explore the properties of quantile spectra and cross-spectra as bivariate functions of $\omega$ and $\alpha$ (Li 2020).
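As a concrete illustration of the link between $r_{jj}(\tau,\alpha)$ and the indicator-process autocorrelation, the following sketch (the AR(1) model and its coefficient are our own illustrative choices, not from the paper) estimates the lag-1 autocorrelation of the level-crossing indicator process at the lower quartile:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 20_000, 0.25

# A single AR(1) series y(t) = 0.8*y(t-1) + eps(t) (illustrative model)
eps = rng.standard_normal(n)
y = np.empty(n)
y[0] = eps[0]
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + eps[t]

# Indicator process I(y(t) <= q_alpha), whose ordinary autocorrelation
# is the sample analogue of r_jj(tau, alpha)
q = np.quantile(y, alpha)
ind = (y <= q).astype(float)

def autocorr(x, tau):
    x = x - x.mean()
    return float(np.dot(x[tau:], x[: len(x) - tau]) / np.dot(x, x))

r1 = autocorr(ind, 1)
print(r1)   # positive: nearby values tend to cross the quartile together
```

For a positively correlated AR(1) series the indicator autocorrelation is positive, which is what drives the low-frequency concentration of the quantile spectrum at this level.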
Estimating the quantile spectra and cross-spectra defined by (1) is not as straightforward as estimating the ordinary spectra and cross-spectra of indicator processes. In this paper, we propose a method that takes advantage of the concept of quantile discrete Fourier transform (QDFT) introduced in Li (2014).
The gist of the proposed method is as follows: First, we use the solutions of trigonometric quantile regression to construct the QDFT for each observed series on a set of quantile levels; then, we compute the inverse Fourier transform of the QDFT to produce sequences in the time domain, called quantile series (QSER), for each quantile level; and finally, we use the sample autocovariance and cross-covariance functions of the QSER, called quantile autocovariance and cross-covariance functions (QACF), to construct a nonparametric estimator of the quantile spectra and cross-spectra in (1) by following the conventional lag-window (LW) approach. Furthermore, we explore several smoothing techniques to control the statistical variability of the spectral estimator across quantiles and thereby achieve additional reduction of estimation error. These techniques include conventional smoothing splines and a new technique called spline quantile regression (SQR). The latter employs splines to represent the coefficients in the trigonometric quantile regression and imposes penalties for their roughness as functions of $\alpha$.
The remainder of this paper is organized as follows. In Section 2, we describe the QDFT, QSER, and QACF. In Section 3, we introduce the LW spectral estimator and discuss the techniques of quantile smoothing. In Section 4, we provide the result of a simulation study on the performance of the proposed estimation method. Concluding remarks are given in Section 5. In addition, we provide the details of the SQR technique in Appendix I, a summary of R functions for the proposed method in Appendix II, and additional results of the simulation study in Appendix III.
## 2 Quantile Fourier Transform and Quantile Series
Let $\{y_j(t): t = 1,\dots,n\}$ $(j = 1,\dots,m)$ be time series of length $n$, and let $\omega_v := 2\pi v/n$ $(v = 0,1,\dots,n-1)$ be the Fourier frequencies. For each $v$ such that $0 < \omega_v < \pi$, consider the following trigonometric quantile regression solution at quantile level $\alpha \in (0,1)$:

$$\{\hat\beta_{1,j}(\omega_v,\alpha), \hat\beta_{2,j}(\omega_v,\alpha), \hat\beta_{3,j}(\omega_v,\alpha)\} := \operatorname*{argmin}_{\beta_1,\beta_2,\beta_3\in\mathbb{R}}\ \sum_{t=1}^{n} \rho_\alpha\big(y_j(t) - \beta_1 - \beta_2\cos(\omega_v t) - \beta_3\sin(\omega_v t)\big), \tag{2}$$

where $\rho_\alpha(\cdot)$ is the objective function of quantile regression (Koenker 2005). In addition, for $\omega_v = \pi$ (i.e., when $n$ is even), let

$$\{\hat\beta_{1,j}(\pi,\alpha), \hat\beta_{2,j}(\pi,\alpha)\} := \operatorname*{argmin}_{\beta_1,\beta_2\in\mathbb{R}}\ \sum_{t=1}^{n}\rho_\alpha\big(y_j(t) - \beta_1 - \beta_2\cos(\pi t)\big), \qquad \hat\beta_{3,j}(\pi,\alpha) := 0, \tag{3}$$

and for $\omega_v = 0$ (i.e., $v = 0$), let

$$\hat\beta_{1,j}(0,\alpha) := \hat q_j(\alpha), \qquad \hat\beta_{2,j}(0,\alpha) := \hat\beta_{3,j}(0,\alpha) := 0,$$

where $\hat q_j(\alpha)$ denotes the sample $\alpha$-quantile of $\{y_j(t)\}$. Based on these trigonometric quantile regression solutions, we define the quantile discrete Fourier transform (QDFT) of the $j$th series at quantile level $\alpha$ as

$$Z_j(\omega_v,\alpha) := \begin{cases} n\,\hat\beta_{1,j}(0,\alpha) & v = 0,\\ n\,\hat\beta_{2,j}(\pi,\alpha) & v = n/2 \ (\text{if } n \text{ is even}),\\ (n/2)\,\{\hat\beta_{2,j}(\omega_v,\alpha) - i\,\hat\beta_{3,j}(\omega_v,\alpha)\} & \text{otherwise}, \end{cases} \tag{4}$$

where $i := \sqrt{-1}$. This definition of QDFT is motivated by the fact that the ordinary DFT can be constructed in the same way by replacing $\rho_\alpha(\cdot)$ with the objective function of least-squares regression.
It is easy to see that the sequence $\{Z_j(\omega_v,\alpha)\}$ is conjugate symmetric:

$$Z_j(\omega_v,\alpha) = Z_j^*(\omega_{n-v},\alpha) \qquad (v = 1,\dots,\lfloor (n-1)/2 \rfloor). \tag{5}$$

Therefore, in order to compute the QDFT, one only needs to solve the quantile regression problems (2)–(3) for $0 \le \omega_v \le \pi$ (i.e., for $v = 0, 1, \dots, (n-1)/2$ when $n$ is odd and $v = 0, 1, \dots, n/2$ when $n$ is even); the conjugate symmetry provides the values of the QDFT for the remaining frequencies. Linear programming algorithms such as those implemented by the rq function in the R package 'quantreg' (Koenker 2005) can be employed to compute the quantile regression solutions efficiently, including parallelization for different frequencies.
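To make the computation concrete, here is a minimal Python sketch of the trigonometric quantile regression (2) posed as a linear program (an assumption on our part: the paper uses R's quantreg solver, and `scipy.optimize.linprog` stands in for it here):

```python
import numpy as np
from scipy.optimize import linprog

def trig_qr(y, omega, alpha):
    """Solve eq. (2): quantile regression of y on [1, cos(omega*t), sin(omega*t)],
    reformulated as the standard LP  min alpha*1'u + (1-alpha)*1'v
    s.t.  X(b+ - b-) + u - v = y,  with b+, b-, u, v >= 0."""
    n = len(y)
    t = np.arange(1, n + 1)
    X = np.column_stack([np.ones(n), np.cos(omega * t), np.sin(omega * t)])
    p = X.shape[1]
    cost = np.concatenate([np.zeros(2 * p),
                           alpha * np.ones(n), (1 - alpha) * np.ones(n)])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:2 * p]          # beta_1, beta_2, beta_3

rng = np.random.default_rng(1)
n = 64
omega = 2 * np.pi * 5 / n                       # a Fourier frequency
tt = np.arange(1, n + 1)
y = 2.0 * np.cos(omega * tt) + rng.standard_normal(n)
beta = trig_qr(y, omega, alpha=0.5)
print(beta)    # beta[1] should be near the true cosine amplitude 2
```

At $\alpha = 0.5$ this is median regression, so the recovered cosine coefficient estimates the amplitude of the signal robustly.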
Now, consider the inverse Fourier transform of the QDFT for each $j$:

$$x_j(t,\alpha) := \frac{1}{n}\sum_{v=0}^{n-1} Z_j(\omega_v,\alpha)\exp(it\omega_v) \qquad (t = 1,\dots,n). \tag{6}$$

We call this sequence the quantile series (QSER) of $\{y_j(t)\}$ at quantile level $\alpha$. Note that the QSER is a real-valued time series due to the conjugate symmetry in (5). Also note that the sample mean of the QSER, $\bar x_j(\alpha) := n^{-1}\sum_{t=1}^n x_j(t,\alpha)$, coincides with $\hat q_j(\alpha)$, the sample $\alpha$-quantile of $\{y_j(t)\}$, because $n^{-1}Z_j(0,\alpha) = \hat\beta_{1,j}(0,\alpha) = \hat q_j(\alpha)$ by definition.
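The realness of the QSER and the quantile interpretation of its mean follow from elementary DFT facts, which can be checked numerically (the values below are hypothetical; $Z[0]/n$ plays the role of the sample $\alpha$-quantile, set to 3.0 here):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16

# Build a sequence with the conjugate symmetry (5): Z[0] and Z[n/2] real,
# Z[n-v] = conj(Z[v]) otherwise -- the structure the QDFT (4) has by construction.
Z = np.empty(n, dtype=complex)
Z[0] = 3.0 * n                                  # n * beta_1(0, alpha); here q_hat = 3.0
Z[n // 2] = rng.standard_normal() * n
half = rng.standard_normal(n // 2 - 1) + 1j * rng.standard_normal(n // 2 - 1)
Z[1:n // 2] = half
Z[n // 2 + 1:] = np.conj(half[::-1])

x = np.fft.ifft(Z)                              # eq. (6): the QSER
print(np.max(np.abs(x.imag)))                   # numerically zero: the QSER is real
print(x.real.mean())                            # 3.0 = Z[0]/n, the alpha-quantile
```

`np.fft.ifft` uses the same $1/n$ normalization as (6), so the sample mean of the output equals $Z[0]/n$ exactly (up to floating-point error).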
Based on the QDFT in (4), the (first kind) quantile periodogram of $\{y_j(t)\}$ at level $\alpha$ (Li 2012) can be written as

$$Q_{jj}(\omega_v,\alpha) := n^{-1}\,|Z_j(\omega_v,\alpha)|^2 \qquad (v = 0,1,\dots,n-1), \tag{7}$$

and the quantile cross-periodogram of $\{y_j(t)\}$ and $\{y_{j'}(t)\}$ $(j \ne j')$ at level $\alpha$ (Li 2014) can be written as

$$Q_{jj'}(\omega_v,\alpha) := n^{-1}\,Z_j(\omega_v,\alpha)\,Z_{j'}^*(\omega_v,\alpha) \qquad (v = 0,1,\dots,n-1). \tag{8}$$

This way of expressing the quantile periodogram and cross-periodogram (QPER) in terms of the QDFT is consistent with the conventional definition of periodogram and cross-periodogram in terms of the ordinary DFT (Brockwell and Davis 1991).
From the QSER in (6) we obtain the sample autocovariance and cross-covariance functions

$$\gamma_{jj'}(\tau,\alpha) := \frac{1}{n}\sum_{t=\tau+1}^{n}\{x_j(t,\alpha) - \bar x_j(\alpha)\}\{x_{j'}(t-\tau,\alpha) - \bar x_{j'}(\alpha)\} \qquad (\tau = 0,1,\dots,n-1). \tag{9}$$

We call these functions the quantile autocovariance and cross-covariance functions (QACF) at level $\alpha$. It is easy to show that the usual relationship between the ordinary autocovariance functions and the ordinary periodograms holds true for the QACF and the QPER:

$$Q(\omega_v,\alpha) = \sum_{|\tau| < n}\gamma(\tau,\alpha)\exp(-i\tau\omega_v), \tag{10}$$

where $Q(\omega,\alpha) := [Q_{jj'}(\omega,\alpha)]_{j,j'=1}^m$ and $\gamma(\tau,\alpha) := [\gamma_{jj'}(\tau,\alpha)]_{j,j'=1}^m$ with $\gamma_{jj'}(-\tau,\alpha) := \gamma_{j'j}(\tau,\alpha)$. This relationship provides the basis for the construction of a nonparametric estimator of quantile spectra and cross-spectra.

Note that the Fourier transform of the QPER was employed in Chen, Sun, and Li (2021) to construct spectral estimators for univariate time series, which produces the circularly aliased quantity $\gamma_{jj}(\tau,\alpha) + \gamma_{jj}(\tau-n,\alpha)$ instead of $\gamma_{jj}(\tau,\alpha)$. The alias may be negligible in applications where the QACF decays quickly in $\tau$ and where only the first few autocovariances with small $\tau$ are used in subsequent modeling and analysis. The alias-free QACF is produced by (9) through the QDFT and QSER.
## 3 Lag-Window Spectral Estimator
Let $W_M(\cdot)$ be a nonnegative even function with bandwidth parameter $M > 0$. An example of such functions is the Tukey-Hanning window (Priestley 1981)

$$W_M(\tau) := \frac{1}{2}\big(1 + \cos(\pi\tau/M)\big)\,I(|\tau| \le M). \tag{11}$$

Inspired by the conventional lag-window (LW) approach (Priestley 1981) in light of the relationship (10), we propose the following estimator,

$$\hat S^{\mathrm{LW}}(\omega,\alpha) := \sum_{|\tau| \le M} W_M(\tau)\,\gamma(\tau,\alpha)\exp(-i\tau\omega), \tag{12}$$

for estimating the quantile spectral matrix $S(\omega,\alpha) := [S_{jj'}(\omega,\alpha)]_{j,j'=1}^m$.
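A univariate sketch of (11)–(12) (our own minimal implementation, not the paper's R code) makes the mechanics explicit:

```python
import numpy as np

def tukey_hanning(tau, M):
    """Tukey-Hanning lag window of eq. (11)."""
    tau = np.asarray(tau, dtype=float)
    return np.where(np.abs(tau) <= M, 0.5 * (1.0 + np.cos(np.pi * tau / M)), 0.0)

def lw_estimate(acf, M, omegas):
    """Univariate lag-window estimate of eq. (12), from a one-sided
    sample ACF gamma(0), gamma(1), ...; symmetry gives the |tau| sum."""
    taus = np.arange(len(acf))
    w = tukey_hanning(taus, M)
    return np.array([acf[0] * w[0]
                     + 2.0 * np.sum(w[1:] * acf[1:] * np.cos(taus[1:] * om))
                     for om in omegas])

# Sanity check: white noise (acf = delta at lag 0) has a flat spectrum.
acf = np.zeros(50)
acf[0] = 1.0
S = lw_estimate(acf, M=10, omegas=np.linspace(0.0, np.pi, 8))
print(S)   # constant, equal to gamma(0) = 1 at every frequency
```

Replacing the toy ACF with the QACF of (9) at a fixed $\alpha$ gives exactly the estimator (12) at that quantile level.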
In this LW estimator, the bandwidth parameter $M$ controls the statistical variability with respect to $\omega$ for fixed $\alpha$. Further control of the statistical variability with respect to $\alpha$ can be accomplished by what we call quantile smoothing (QS). We consider three approaches to quantile smoothing distinguished by where it takes place in the process.

In the first approach, we first compute the LW estimate on the quantile-frequency grid $\{(\omega_v,\alpha_\ell)\}$, and then apply a smoother to the sequence $\{\hat S^{\mathrm{LW}}(\omega_v,\alpha_\ell): \ell = 1,\dots,L\}$ for each fixed $\omega_v$. This constitutes what we call the LWQS estimator. In the second approach, we first apply the smoother to the QDFT sequence $\{Z_j(\omega_v,\alpha_\ell): \ell = 1,\dots,L\}$ for each fixed $\omega_v$, and then use the QSER and QACF from the quantile-smoothed QDFT to construct what we call the QSLW estimator according to (12). Although any smoother can be used in these estimators, we focus on two spline-based smoothers (Wahba 1975) in our experiment: the smooth.spline function in the R package 'stats' for its simplicity of computation (R Core Team 2022), and the gamm function in the R package 'mgcv' for its capability of handling correlated data (Wood 2022).
The third approach tackles the problem at the root by employing the spline quantile regression (SQR) method described in Appendix I to produce a smoothed version of the quantile regression solution. The SQR is a penalized quantile regression method where the coefficients of the regressor are represented as spline functions of $\alpha$ and penalized for their roughness in terms of the $\ell_1$-norm of second derivatives. The SQR problem can be reformulated as a linear program (LP) and solved efficiently by a modified version of the rq.fit.fnb function in the R package 'quantreg' (Koenker 2005). Given the solutions from SQR, we first compute what we call the spline QDFT (SQDFT) according to (4) and then use the QSER and QACF from the SQDFT to construct what we call the SQRLW estimator according to (12). The smoothness of this estimator is controlled by a smoothing parameter $c$. See Appendix I for details.
To measure the accuracy of spectral estimation, we employ the Kullback-Leibler divergence

$$\mathrm{KLD}(\hat S \mid S) := \frac{1}{L\lfloor (n-1)/2\rfloor}\sum_{\ell=1}^{L}\sum_{v=1}^{\lfloor (n-1)/2\rfloor}\bigg\{\mathrm{tr}\big(\hat S(\omega_v,\alpha_\ell)\,S^{-1}(\omega_v,\alpha_\ell)\big) - \log\frac{|\hat S(\omega_v,\alpha_\ell)|}{|S(\omega_v,\alpha_\ell)|} - m\bigg\}.$$

This spectral measure is closely related to Whittle's likelihood (Priestley 1981) and has been used as a similarity metric for time series clustering and classification (Kakizawa, Shumway, and Taniguchi 1998).
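For one frequency-quantile point, the divergence summand can be computed as follows (a small sketch with made-up 2×2 spectral matrices):

```python
import numpy as np

def kld_point(S_hat, S):
    """One summand of the KLD: tr(S_hat S^{-1}) - log(|S_hat|/|S|) - m."""
    m = S.shape[0]
    A = S_hat @ np.linalg.inv(S)
    sign, logdet = np.linalg.slogdet(A)   # |A| = |S_hat| / |S|
    return float(np.real(np.trace(A)) - logdet - m)

S = np.array([[2.0, 0.5],
              [0.5, 1.0]])
print(kld_point(S, S))          # 0: the divergence vanishes when the estimate is exact
print(kld_point(2.0 * S, S))    # > 0: any mismatch is penalized
```

Averaging `kld_point` over the grid of Fourier frequencies and quantile levels yields the overall KLD used in the simulation study.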
## 4 A Simulation Study
To investigate the performance of the estimation method outlined in the previous section, we use a set of simulated data consisting of $m = 2$ series. The first series, $\{y_1(t)\}$, is a nonlinear mixture of three components $\{u_1(t)\}$, $\{u_2(t)\}$, and $\{u_3(t)\}$:

$$\begin{cases} u_4(t) := \psi_1(u_1(t))\times u_1(t) + \big(1 - \psi_1(u_1(t))\big)\times u_2(t),\\ y_1(t) := \psi_2(u_4(t))\times u_4(t) + \big(1 - \psi_2(u_4(t))\big)\times u_3(t), \end{cases} \tag{13}$$

where $\psi_1(\cdot)$ and $\psi_2(\cdot)$ are mixing functions taking values in $[0,1]$. The second series, $\{y_2(t)\}$, is a delayed copy of $\{u_3(t)\}$:

$$y_2(t) := u_3(t-10). \tag{14}$$

The three components $\{u_1(t)\}$, $\{u_2(t)\}$, and $\{u_3(t)\}$ are zero-mean unit-variance autoregressive (AR) processes, satisfying

$$u_1(t) = a_{11}\,u_1(t-1) + \epsilon_1(t),$$
$$u_2(t) = a_{21}\,u_2(t-1) + \epsilon_2(t),$$
$$u_3(t) = a_{31}\,u_3(t-1) + a_{32}\,u_3(t-2) + \epsilon_3(t),$$

where $a_{11} > 0$ and $a_{21} < 0$, and where $\{\epsilon_1(t)\}$, $\{\epsilon_2(t)\}$, and $\{\epsilon_3(t)\}$ are mutually independent Gaussian white noise. In other words, $\{u_1(t)\}$ is a low-pass series with spectral peak at frequency $0$, $\{u_2(t)\}$ is a high-pass series with spectral peak at frequency $\pi$, and $\{u_3(t)\}$ is a band-pass series with spectral peak at an intermediate frequency.
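Since the paper's exact AR coefficients are not recoverable from this copy, the sketch below uses illustrative values (0.9, -0.9, and an AR(2) with complex roots at angle π/4) to show the three spectral shapes just described:

```python
import numpy as np

def ar_spectrum(phi, omegas):
    """Power spectrum (up to a constant) of u(t) = sum_k phi[k-1]*u(t-k) + eps(t)."""
    k = np.arange(1, len(phi) + 1)
    denom = 1.0 - np.exp(-1j * np.outer(omegas, k)) @ np.asarray(phi)
    return 1.0 / np.abs(denom) ** 2

omegas = np.linspace(1e-3, np.pi - 1e-3, 2000)
low = ar_spectrum([0.9], omegas)                                   # low-pass AR(1)
high = ar_spectrum([-0.9], omegas)                                 # high-pass AR(1)
band = ar_spectrum([2 * 0.9 * np.cos(np.pi / 4), -0.81], omegas)   # band-pass AR(2)

print(omegas[np.argmax(low)])    # near 0
print(omegas[np.argmax(high)])   # near pi
print(omegas[np.argmax(band)])   # near pi/4
```

The AR(2) coefficients follow the complex-root parameterization $\phi_1 = 2r\cos\theta$, $\phi_2 = -r^2$ with $r = 0.9$ and $\theta = \pi/4$, which places the spectral peak close to $\theta$.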
Figure 1 shows the quantile spectrum and cross-spectrum of the series in (13)-(14) evaluated on a grid of frequencies $\omega$ and quantile levels $\alpha$. These spectra are the ensemble mean of quantile periodograms and cross-periodograms from 5000 Monte Carlo runs.

Figure 3 shows the series from one of the simulation runs. The corresponding quantile periodogram and cross-periodogram are shown in Figure 3. Figures 4 and 5 contain the image plot of the QSER of these series at all quantile levels and the time series plot of the QSER at three selected quantile levels (see Appendix III for the QACF plots at these quantile levels). It is interesting to observe the stretched values of the QSER, upward for the high quantile level and downward for the low one.
Figure 6 shows the LW spectral estimates obtained from the series in Figure 3. These estimates are constructed according to (12) using the Tukey-Hanning window (11) with a fixed bandwidth $M$. They can be viewed as a smoothed version of the raw quantile periodogram and cross-periodogram in Figure 3 with respect to the frequency variable. The KLD of these estimates equals 0.198.
Figure 7 shows the LWQS estimates obtained by applying quantile smoothing to the LW estimates in Figure 6 using the smooth.spline function with the smoothing parameter chosen by the generalized cross-validation (GCV) criterion. The resulting KLD equals 0.194. In this case, the KLD is reduced slightly, but the effect of quantile smoothing is barely noticeable when compared to Figure 6.
A better result is shown in Figure 8. These estimates are also obtained by applying smooth.spline to the LW estimates in Figure 6, but the smoothing parameter spar is set to 0.9 instead of being determined by GCV. This results in smoother appearances when compared to the estimates in Figures 6 and 7. The KLD is reduced significantly from 0.198 and 0.194 to 0.109.
A closer examination of the LW estimates reveals strong positive correlations across quantiles. These correlations are not handled effectively by smooth.spline with GCV. To take the correlations into account, we use the gamm function in the ‘mgcv’ package (Wood 2022). This function jointly estimates the smoothing splines and the parameters of a user-specified correlation model while retaining GCV for smoothing parameter selection.
Figure 9 shows the result of applying gamm to the LW estimates in Figure 6 assuming the correlation structure of an AR(1) process. The KLD of these estimates equals 0.130, which is a significant improvement over smooth.spline with GCV. This improvement is achieved at a higher computational cost: a 100-fold increase in computing time when compared to smooth.spline. Computation can be accelerated by parallelization for different frequencies.
Figure 10(a) and Table 3 provide a more comprehensive assessment of the LWQS estimator using smooth.spline and gamm. The results are based on 1000 Monte Carlo runs. As shown in Figure 10(a), smooth.spline with GCV offers a slight improvement over no quantile smoothing; a significant improvement can be made by setting spar manually with a range of choices. Table 3 confirms the superiority of gamm over smooth.spline for the LWQS estimator when the smoothing parameter is selected by GCV.
A similar assessment is given by Figure 10(b) and Table 3 for the QSLW estimator where quantile smoothing is performed on the QDFT sequences instead of the LW estimates. Figure 10(b) shows that GCV is not particularly effective for the QSLW estimator using smooth.spline, but significantly smaller estimation errors can be produced for a range of choices of spar. A comparison with Figure 10(a) reveals that the best result of QSLW is inferior to the best result of LWQS. Table 3 shows that gamm is not as effective for the QSLW estimator as it is for the LWQS estimator, which may be partly attributed to increased variability. See Appendix III for the plots of QSLW estimates obtained from the series in Figure 3 using smooth.spline and gamm.
Finally, consider the SQR approach to quantile smoothing. Figure 11 shows an example of trigonometric parameters obtained from SQR (see Appendix I for details). Apparently, with a suitable choice of the smoothing parameter $c$, the SQR method is able to achieve the effect of smoothing across quantiles. Due to joint estimation with additional parameters, the computing time incurs a 25-fold increase when compared to the one-quantile-at-a-time solutions using rq.
Figure 12 shows the SQRLW estimates produced by SQR for the series in Figure 3. The KLD of these estimates equals 0.155. This is superior to the results of the LW estimates in Figure 6 and the LWQS estimates in Figure 7, but inferior to the results of the LWQS estimates in Figures 8 and 9.
Figure 13 and Table 3 compare the KLD of the SQRLW estimator with unweighted and weighted penalty in SQR for different choices of $c$ based on 1000 Monte Carlo runs. Similar to the QSLW estimator using smooth.spline, the SQRLW estimator is able to reduce the estimation error of the unsmoothed LW estimator for a range of choices of $c$; however, it is unable to achieve the best results produced by the LWQS estimator. With weighted penalty, the performance does not deteriorate as rapidly as it does with unweighted penalty when $c$ becomes too large. The unweighted SQR is more effective for other choices of $c$.
## 5 Concluding Remarks
In this paper, we propose a nonparametric method for estimating the quantile spectra and cross-spectra introduced through trigonometric quantile regression in Li (2012; 2014). This method is based on the quantile discrete Fourier transform (QDFT) defined by the trigonometric quantile regression and the quantile series (QSER) defined by the inverse Fourier transform of the QDFT. The autocovariance and cross-covariance functions of the QSER facilitate the construction of a lag-window (LW) spectral estimator for the quantile spectra and cross-spectra. While the window function controls the statistical variability of this estimator across frequencies, we consider three approaches to further reduction of the statistical variability across quantiles. These approaches include the application of smoothing spline techniques to the LW estimates or the QDFT sequences and the use of a new technique called spline quantile regression (SQR) which produces smoothed quantile regression solutions. All these approaches lead to improved spectral estimates. Among these methods, applying a smoother directly to the LW estimates turns out to be more effective, provided the strong positive correlations across quantiles are taken into account. It remains a challenge to automate the smoothing process with greater effectiveness and computational efficiency.
## References
Andriyana, Y., Gijbels, I., and Verhasselt, A. (2014) P-splines quantile regression estimation in varying coefficient models. Test, 23, 153–194. DOI 10.1007/s11749-013-0346-2.
Brockwell, P., and Davis, R. (1991) Time Series: Theory and Methods, 2nd edn, section 11.6. New York: Springer.
Baruník, J., and Kley, T. (2019) Quantile coherency: A general measure for dependence between cyclical economic variables. Econometrics Journal, 22, 131–152.
Berkelaar, M. (2022) Package ‘lpSolve’. https://cran.r-project.org/web/packages/lpSolve/lpSolve.pdf.
Chen, T., Sun, Y., and Li, T.-H. (2021) A semiparametric estimation algorithm for the quantile spectrum with an application to earthquake classification using convolutional neural network. Computational Statistics & Data Analysis, 154, 107069.
Davis, R., and Mikosch, T. (2009) The extremogram: A correlogram for extreme events. Bernoulli, 15, 977–1009.
Dette, H., Hallin, M., Kley, T. and Volgushev, S. (2015) Of copulas, quantiles, ranks and spectra: an $L_1$-approach to spectral analysis. Bernoulli, 21, 781–831.
Kakizawa, Y., Shumway, R. and Taniguchi, M. (1998) Discrimination and clustering for multivariate time series. Journal of the American Statistical Association, 93, 328–340.
Koenker, R. (2005) Quantile Regression. Cambridge, UK: Cambridge University Press.
Koenker, R. and Ng, P. (2005) A Frisch-Newton algorithm for sparse quantile regression. Acta Mathematicae Applicatae Sinica, 21, 225–236.
Koenker, R., Ng, P., and Portnoy, S. (1994) Quantile smoothing splines. Biometrika, 81, 673–680.
Li, T.-H. (2012) Quantile periodograms. Journal of the American Statistical Association, 107, 765–776.
Li, T.-H. (2014) Time Series with Mixed Spectra. Boca Raton, FL: CRC Press.
Li, T.-H. (2020) From zero crossings to quantile-frequency analysis of time series with an application to nondestructive evaluation. Applied Stochastic Models for Business and Industry, 36, 1111–1130.
Portnoy, S., and Koenker, R. (1997) The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators. Statistical Science, 12, 279–300.
Priestley, M. (1981) Spectral Analysis and Time Series, p. 443. New York: Academic Press.
R Core Team (2022) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/.
Wahba, G. (1975) Smoothing noisy data with spline functions. Numerische Mathematik, 24, 383–393.
Wood, S. (2022) Package ‘mgcv’. https://cran.r-project.org/web/packages/mgcv/mgcv.pdf.
## Appendix I: Spline Quantile Regression
Let $\{y_t: t = 1,\dots,n\}$ be a sequence of observations and $\{\mathbf{x}_t\}$ be the corresponding values of a $p$-dimensional regressor. Given an increasing sequence of quantile levels $\{\alpha_\ell: \ell = 1,\dots,L\}$, the spline quantile regression (SQR) problem can be stated as

$$\hat\beta(\cdot) := \operatorname*{argmin}_{\beta(\cdot)\in\mathcal{F}}\bigg\{\sum_{\ell=1}^{L}\sum_{t=1}^{n}\rho_{\alpha_\ell}\big(y_t - \mathbf{x}_t^T\beta(\alpha_\ell)\big) + \sum_{\ell=1}^{L}c_\ell\,\|\ddot\beta(\alpha_\ell)\|_1\bigg\}, \tag{15}$$

where $\mathcal{F}$ is the functional space spanned by $K$ cubic $B$-spline basis functions for some $K > 0$ and the $c_\ell \ge 0$ are penalty parameters.

The SQR problem in (15) is different from the problem of quantile smoothing splines considered by Koenker, Ng, and Portnoy (1994) where splines are used to represent nonparametric functions of the independent variable. It is also different from the problem considered by Andriyana, Gijbels, and Verhasselt (2014) where splines are used to represent regression coefficients as functions of time.

In (15), the $\ell_1$-norm of second derivatives is employed as the roughness measure of $\beta(\cdot)$ in order to retain the LP characteristics of the original quantile regression problem (Koenker 2005). The $\ell_2$-norm, popular for spline smoothing (Wahba 1975), can also be employed in (15), but this makes the problem a quadratic program and we will not consider it here. Furthermore, the quantile-dependent penalty parameters in (15) are able to accommodate the simple case $c_\ell \equiv c$ or the more sophisticated case $c_\ell := c\,w_\ell$, where $\{w_\ell\}$ is a suitable weight sequence controlling the excessive variability of quantile regression at very high or very low quantiles, e.g., weights that increase as $\alpha_\ell$ approaches 0 or 1.
Let $\{\phi_k(\cdot): k = 1,\dots,K\}$ denote the set of spline basis functions. Then, any function

$$\beta(\alpha) := [\beta_1(\alpha),\dots,\beta_p(\alpha)]^T \in \mathcal{F}$$

can be expressed as

$$\beta(\alpha) = \Phi(\alpha)\,\theta, \quad\text{or}\quad \beta_j(\alpha) = \sum_{k=1}^{K}\phi_k(\alpha)\,\theta_{jk} = \phi^T(\alpha)\,\theta_j \quad (j = 1,\dots,p),$$

where

$$\theta := [\theta_1^T,\dots,\theta_p^T]^T \in \mathbb{R}^{pK}, \qquad \theta_j := [\theta_{j1},\dots,\theta_{jK}]^T \in \mathbb{R}^{K} \quad (j = 1,\dots,p),$$
$$\Phi(\alpha) := \mathrm{diag}\{\underbrace{\phi^T(\alpha),\dots,\phi^T(\alpha)}_{p\ \text{times}}\} \in \mathbb{R}^{p\times pK}, \qquad \phi(\alpha) := [\phi_1(\alpha),\dots,\phi_K(\alpha)]^T \in \mathbb{R}^{K}.$$

With this notation, the problem (15) can be restated as

$$\hat\theta := \operatorname*{argmin}_{\theta\in\mathbb{R}^{pK}}\bigg\{\sum_{\ell=1}^{L}\sum_{t=1}^{n}\rho_{\alpha_\ell}\big(y_t - \mathbf{x}_t^T\Phi(\alpha_\ell)\theta\big) + \sum_{\ell=1}^{L}c_\ell\,\|\ddot\Phi(\alpha_\ell)\theta\|_1\bigg\} \tag{16}$$

and

$$\hat\beta(\alpha) := \Phi(\alpha)\,\hat\theta. \tag{17}$$
Like the ordinary QR problem (Koenker 2005), the SQR problem (16) can be reformulated as an LP:

$$\operatorname*{argmin}_{(\gamma,\delta,u_1,v_1,r_1,s_1,\dots,u_L,v_L,r_L,s_L)\in\mathbb{R}^{d}_+}\ \sum_{\ell=1}^{L}\big\{\alpha_\ell\mathbf{1}_n^T u_\ell + (1-\alpha_\ell)\mathbf{1}_n^T v_\ell + \mathbf{1}_p^T r_\ell + \mathbf{1}_p^T s_\ell\big\} \tag{18}$$
$$\text{s.t.}\quad\begin{cases} X\Phi(\alpha_\ell)(\gamma-\delta) + u_\ell - v_\ell = \mathbf{y} & (\ell = 1,\dots,L),\\ c_\ell\,\ddot\Phi(\alpha_\ell)(\gamma-\delta) - (r_\ell - s_\ell) = \mathbf{0} & (\ell = 1,\dots,L), \end{cases}$$

where $\mathbf{1}_n$ and $\mathbf{1}_p$ are the $n$-dimensional and $p$-dimensional vectors of 1's, $X := [\mathbf{x}_1,\dots,\mathbf{x}_n]^T$ is the ordinary regression design matrix, and $d := 2pK + 2nL + 2pL$ is the total number of decision variables to be optimized. Among the decision variables in (18), $\gamma$ and $\delta$ are primary variables which determine the desired solution (17) with

$$\hat\theta := \hat\gamma - \hat\delta. \tag{19}$$

The remaining variables $u_\ell$, $v_\ell$, $r_\ell$, and $s_\ell$ are auxiliary variables which are introduced just for the purpose of linearizing the objective function in (16).
In the canonical form, the LP problem (18) can be expressed as

$$\min\big\{\mathbf{c}^T\xi \;\big|\; A\xi = \mathbf{b};\ \xi\in\mathbb{R}^{2pK+2nL+2pL}_+\big\}, \tag{20}$$

where

$$\mathbf{c} := [\mathbf{0}_{pK}^T, \mathbf{0}_{pK}^T, \alpha_1\mathbf{1}_n^T, (1-\alpha_1)\mathbf{1}_n^T, \mathbf{1}_p^T, \mathbf{1}_p^T, \dots, \alpha_L\mathbf{1}_n^T, (1-\alpha_L)\mathbf{1}_n^T, \mathbf{1}_p^T, \mathbf{1}_p^T]^T,$$
$$\xi := [\gamma^T, \delta^T, u_1^T, v_1^T, r_1^T, s_1^T, \dots, u_L^T, v_L^T, r_L^T, s_L^T]^T,$$
$$A := \begin{bmatrix} X\Phi(\alpha_1) & -X\Phi(\alpha_1) & I_n\ \ -I_n\ \ 0\ \ 0 & & \\ \vdots & \vdots & & \ddots & \\ X\Phi(\alpha_L) & -X\Phi(\alpha_L) & & & I_n\ \ -I_n\ \ 0\ \ 0 \\ c_1\ddot\Phi(\alpha_1) & -c_1\ddot\Phi(\alpha_1) & 0\ \ 0\ \ -I_p\ \ I_p & & \\ \vdots & \vdots & & \ddots & \\ c_L\ddot\Phi(\alpha_L) & -c_L\ddot\Phi(\alpha_L) & & & 0\ \ 0\ \ -I_p\ \ I_p \end{bmatrix},$$

and

$$\mathbf{b} := [\underbrace{\mathbf{y}^T,\dots,\mathbf{y}^T}_{L\ \text{times}}, \underbrace{\mathbf{0}_p^T,\dots,\mathbf{0}_p^T}_{L\ \text{times}}]^T.$$

This problem can be solved numerically by standard LP solvers available in many software packages such as the lp function in the R package 'lpSolve' (Berkelaar 2022).
Associated with the primal LP problem (20) is a dual LP problem of the form

$$\max\big\{\mathbf{b}^T\lambda \;\big|\; A^T\lambda \le \mathbf{c};\ \lambda\in\mathbb{R}^{nL+pL}\big\}, \tag{21}$$

where $\lambda$ may be interpreted as the Lagrange multiplier for the equality constraints in (20). By partitioning $\lambda$ according to the structure of $A$ such that $\lambda = [\lambda_1^T,\dots,\lambda_L^T,\lambda_{L+1}^T,\dots,\lambda_{2L}^T]^T$ with $\lambda_\ell\in\mathbb{R}^n$ and $\lambda_{L+\ell}\in\mathbb{R}^p$ $(\ell = 1,\dots,L)$, the inequalities $A^T\lambda \le \mathbf{c}$ can be written more explicitly as

$$\sum_{\ell=1}^{L}\big\{\Phi^T(\alpha_\ell)X^T\lambda_\ell + c_\ell\ddot\Phi^T(\alpha_\ell)\lambda_{L+\ell}\big\} \le \mathbf{0}_{pK}, \qquad -\sum_{\ell=1}^{L}\big\{\Phi^T(\alpha_\ell)X^T\lambda_\ell + c_\ell\ddot\Phi^T(\alpha_\ell)\lambda_{L+\ell}\big\} \le \mathbf{0}_{pK},$$
$$\lambda_\ell \le \alpha_\ell\mathbf{1}_n, \qquad -\lambda_\ell \le (1-\alpha_\ell)\mathbf{1}_n \quad (\ell = 1,\dots,L),$$
$$-\lambda_{L+\ell} \le \mathbf{1}_p, \qquad \lambda_{L+\ell} \le \mathbf{1}_p \quad (\ell = 1,\dots,L).$$

These inequalities are equivalent to

$$\sum_{\ell=1}^{L}\big\{\Phi^T(\alpha_\ell)X^T\lambda_\ell + c_\ell\ddot\Phi^T(\alpha_\ell)\lambda_{L+\ell}\big\} = \mathbf{0}_{pK}, \qquad \lambda_\ell\in[\alpha_\ell-1,\alpha_\ell]^n, \qquad \lambda_{L+\ell}\in[-1,1]^p \quad (\ell = 1,\dots,L).$$
By a change of variables,

$$\zeta_\ell := \lambda_\ell + (1-\alpha_\ell)\mathbf{1}_n, \qquad \zeta_{L+\ell} := \tfrac{1}{2}(\lambda_{L+\ell} + \mathbf{1}_p),$$

we obtain

$$\mathbf{b}^T\lambda = \sum_{\ell=1}^{L}\mathbf{y}^T\lambda_\ell = \sum_{\ell=1}^{L}\mathbf{y}^T\zeta_\ell - \sum_{\ell=1}^{L}(1-\alpha_\ell)\,\mathbf{y}^T\mathbf{1}_n = \mathbf{b}^T\zeta + \text{constant},$$
$$\sum_{\ell=1}^{L}\big\{\Phi^T(\alpha_\ell)X^T\lambda_\ell + c_\ell\ddot\Phi^T(\alpha_\ell)\lambda_{L+\ell}\big\} = \sum_{\ell=1}^{L}\big\{\Phi^T(\alpha_\ell)X^T\zeta_\ell + 2c_\ell\ddot\Phi^T(\alpha_\ell)\zeta_{L+\ell}\big\} - \sum_{\ell=1}^{L}\big\{(1-\alpha_\ell)\Phi^T(\alpha_\ell)X^T\mathbf{1}_n + c_\ell\ddot\Phi^T(\alpha_\ell)\mathbf{1}_p\big\},$$
$$\lambda_\ell\in[\alpha_\ell-1,\alpha_\ell]^n \;\leftrightarrow\; \zeta_\ell\in[0,1]^n, \qquad \lambda_{L+\ell}\in[-1,1]^p \;\leftrightarrow\; \zeta_{L+\ell}\in[0,1]^p.$$
Therefore, the dual LP problem (21) is equivalent to

$$\max\big\{\mathbf{b}^T\zeta \;\big|\; D^T\zeta = \mathbf{a};\ \zeta\in[0,1]^{nL+pL}\big\}, \tag{22}$$

where

$$D := [\Phi^T(\alpha_1)X^T,\dots,\Phi^T(\alpha_L)X^T,\, 2c_1\ddot\Phi^T(\alpha_1),\dots,2c_L\ddot\Phi^T(\alpha_L)]^T,$$
$$\mathbf{a} := \sum_{\ell=1}^{L}\big\{(1-\alpha_\ell)\,\Phi^T(\alpha_\ell)X^T\mathbf{1}_n + c_\ell\ddot\Phi^T(\alpha_\ell)\mathbf{1}_p\big\}.$$

This dual formulation facilitates the use of a modified version of the rq.fit.fnb function in the 'quantreg' package to solve the SQR problem. The rq.fit.fnb function was developed by Portnoy and Koenker (1997) for the ordinary QR problem, whose dual takes the analogous form $\max\{\mathbf{y}^T\zeta \mid X^T\zeta = (1-\alpha)X^T\mathbf{1}_n;\ \zeta\in[0,1]^n\}$. To solve the SQR problem, we modified this function along the lines of rq.fit.lasso (Koenker, Ng, and Portnoy 1994). In this modified version, we replaced the response vector $\mathbf{y}$ by $\mathbf{b}$, the design matrix $X$ by $D$, and the right-hand-side vector in the equality constraints by $\mathbf{a}$, and set the initial value of the dual variable $\zeta$ accordingly.
To justify this method of solving the SQR problem, it suffices to show that the SQR problem has a primal-dual formulation that conforms to the requirement of the underlying interior point algorithm of Portnoy and Koenker (1997). Toward that end, let

$$\theta := \gamma - \delta, \qquad \mathbf{z} := [u_1^T,\dots,u_L^T,\, 2s_1^T,\dots,2s_L^T]^T, \qquad \mathbf{w} := [v_1^T,\dots,v_L^T,\, 2r_1^T,\dots,2r_L^T]^T.$$

With this change of variables, we can rewrite the equality constraints in (20) as

$$D\theta + \mathbf{z} - \mathbf{w} = \mathbf{b}. \tag{23}$$

Under these constraints, we have $v_\ell = X\Phi(\alpha_\ell)\theta + u_\ell - \mathbf{y}$ and $r_\ell = c_\ell\ddot\Phi(\alpha_\ell)\theta + s_\ell$. Substituting these expressions in (20) yields

$$\mathbf{c}^T\xi = \sum_{\ell=1}^{L}\big\{\alpha_\ell\mathbf{1}_n^Tu_\ell + (1-\alpha_\ell)\mathbf{1}_n^Tv_\ell + \mathbf{1}_p^Tr_\ell + \mathbf{1}_p^Ts_\ell\big\}$$
$$= \sum_{\ell=1}^{L}\big\{\alpha_\ell\mathbf{1}_n^Tu_\ell + (1-\alpha_\ell)\mathbf{1}_n^T\big(X\Phi(\alpha_\ell)\theta + u_\ell - \mathbf{y}\big) + \mathbf{1}_p^T\big(c_\ell\ddot\Phi(\alpha_\ell)\theta + s_\ell\big) + \mathbf{1}_p^Ts_\ell\big\}$$
$$= \sum_{\ell=1}^{L}\big\{(1-\alpha_\ell)\mathbf{1}_n^TX\Phi(\alpha_\ell)\theta + c_\ell\mathbf{1}_p^T\ddot\Phi(\alpha_\ell)\theta + \mathbf{1}_n^Tu_\ell + 2\,\mathbf{1}_p^Ts_\ell\big\} - \sum_{\ell=1}^{L}(1-\alpha_\ell)\mathbf{1}_n^T\mathbf{y}.$$
http://www.research.lancs.ac.uk/portal/en/publications/spectroscopic-properties-of-luminous-lyman-emitters-at-z-approx-6--7-and-comparison-to-the-lymanbreak-population(32319d66-a50b-4bdd-b646-642d1a7e8603).html
### Electronic data
• 1706.06591v1
Rights statement: This is a pre-copy-editing, author-produced PDF of an article accepted for publication in Monthly Notices of the Royal Astronomical Society following peer review. The definitive publisher-authenticated version Jorryt Matthee, David Sobral, Behnam Darvish, Sérgio Santos, Bahram Mobasher, Ana Paulino-Afonso, Huub Röttgering, Lara Alegre; Spectroscopic properties of luminous Ly α emitters at z ≈ 6–7 and comparison to the Lyman-break population, Monthly Notices of the Royal Astronomical Society, Volume 472, Issue 1, 21 November 2017, Pages 772–787, https://doi.org/10.1093/mnras/stx2061 is available online at: https://academic.oup.com/mnras/article/472/1/772/4082107/Spectroscopic-properties-of-luminous-Ly%CE%B1-emitters
Accepted author manuscript, 3.89 MB, PDF document
## Spectroscopic properties of luminous Lyman-α emitters at $z \approx 6 - 7$ and comparison to the Lyman-break population
Research output: Contribution to journal › Journal article
- Publication status: Published
- Journal publication date: 21/11/2017
- Journal: Monthly Notices of the Royal Astronomical Society
- Issue number: 1
- Volume: 472
- Number of pages: 16
- Pages (from-to): 772-787
- Accepted/In press: 12/08/17
- Original language: English
### Abstract
We present spectroscopic follow-up of candidate luminous Ly{\alpha} emitters (LAEs) at $z=5.7-6.6$ in the SA22 field with VLT/X-SHOOTER. We confirm two new luminous LAEs at $z=5.676$ (SR6) and $z=6.532$ (VR7), and also present HST follow-up of SR6. These sources have luminosities L$_{\rm Ly\alpha} \approx 3\times10^{43}$ erg s$^{-1}$, very high rest-frame equivalent widths of EW$_0\gtrsim 200$ {\AA} and narrow Ly{\alpha} lines (200-340 km s$^{-1}$). VR7 is the most UV-luminous LAE at $z>6.5$, with M$_{1500} = -22.5$, even brighter in the UV than CR7. Besides Ly{\alpha}, we do not detect any other rest-frame UV lines in the spectra of SR6 and VR7, and argue that rest-frame UV lines are easier to observe in bright galaxies with low Ly{\alpha} equivalent widths. We confirm that Ly{\alpha} line-widths increase with Ly{\alpha} luminosity at $z=5.7$, while the Ly$\alpha$ lines of faint LAEs become broader at $z=6.6$, potentially due to reionization. We find a large spread of up to 3 dex in UV luminosity for $>L^{\star}$ LAEs, but find that the Ly{\alpha} luminosity of the brightest LAEs is strongly related to UV luminosity at $z=6.6$. Under basic assumptions, we find that several LAEs at $z\approx6-7$ have Ly{\alpha} escape fractions $\gtrsim100$ %, indicating bursty star-formation histories, alternative Ly$\alpha$ production mechanisms, or dust attenuating Ly$\alpha$ emission differently than UV emission. Finally, we present a method to compute $\xi_{ion}$, the production efficiency of ionising photons, and find that LAEs at $z\approx6-7$ have high values of log$_{10}(\xi_{ion}$/Hz erg$^{-1}) \approx 25.51\pm0.09$ that may alleviate the need for high Lyman-Continuum escape fractions required for reionization.
https://www.physicsforums.com/threads/joint-probability-mass-function.802439/
# Joint probability mass function
1. Mar 10, 2015
### zyQuzA0e5esy2y
2 coins
Ω = {HH, HT, TH, TT}
0 = Tails
PX,Y(x,y) = P({TT} n {HH}) for (x,y) = (0,0)
P({HH, HT,TH} n {HH}) for (x,y) = (1,0)
P({TT} n {TT, TH, HT}) for (x,y) = (0,1)
P({HH,HT,TH} n {TH,HT,TT}) for (x,y) = (1,1)
0 otherwise
PX,Y(x,y) = P(0) for (x,y) = (0,0)
P({HH}) for (x,y) = (1,0)
P({TT}) for (x,y) = (0,1)
0 otherwise
PX,Y(x,y) = 0.25 for (x,y) = (1,0)(0,1)
0.50 for(x,y) = (1,1)
0 otherwise
That's basically the layout I was given.
I'm not sure how to read "for (x,y) = (0,0)" and the others similar to it.
Solved it, thank you. If anyone is curious, I forgot to mention that X = 0 if the outcome equals TT, and X = 1 if the outcome equals HH, HT, or TH.
Y = 0 if the outcome is equal to HH
Y = 1 if the outcome is equal to HT TT TH
Last edited: Mar 10, 2015
2. Mar 15, 2015
### Stephen Tashi
If your question is about how to read notation, the way you have presented the notation isn't coherent. For example, I think you are using "n" to mean the intersection symbol $\cap$.
If this notation comes from a problem, I suggest you post the problem in the homework section and quote the entire problem exactly.
Perhaps you can use LaTex: https://www.physicsforums.com/help/latexhelp/
https://codereview.stackexchange.com/questions/194035/c-opengl-debug-utility
# C++ OpenGL Debug Utility
Edit: A follow up post can be found here.
So I've started a c++ project, coming from Java / C# there are many obvious differences.
Below is an example of a class I've been working on:
.h
#pragma once
#include <vector>
#include <GL/glew.h>
#include <glm/glm.hpp>
class GLDebug {
private:
struct Line {
glm::vec3 p0, p1;
};
private:
std::vector<Line> m_lines;
public:
GLDebug();
~GLDebug();
public:
void drawLine(const glm::vec3& p0, const glm::vec3& p1);
void onRender(const glm::mat4& projMatrix, const glm::mat4& viewMatrix);
};
.cpp
#include "GLDebug.h"
#include <glm/gtc/type_ptr.hpp>
GLDebug::GLDebug() { }
GLDebug::~GLDebug() { }
void GLDebug::drawLine(const glm::vec3& p0, const glm::vec3& p1) {
m_lines.push_back({ p0, p1 });
}
void GLDebug::onRender(const glm::mat4& projMatrix, const glm::mat4& viewMatrix) {
glMatrixMode(GL_PROJECTION);
glMatrixMode(GL_MODELVIEW);
glBegin(GL_LINES);
for (auto& line : m_lines) {
glVertex3fv(glm::value_ptr(line.p0));
glVertex3fv(glm::value_ptr(line.p1));
}
glEnd();
m_lines.clear();
}
I've written this basic OpenGL debug utility class based on one I wrote in Java previously. Now it seems fine from what I can tell, but with my limited C++ knowledge I'm wondering if anything I'm doing is redundant / unnecessary and/or performance impacting.
One other thing: I know I can use compiler directives such as #ifdef _DEBUG etc. if I wanted this class to only exist when compiling in debug mode. Obviously I can't wrap the whole class in the ifdef, as that would break other code that calls functions in the GLDebug class. So I was thinking of wrapping the functions' internals instead. Is that a bad approach? I could use the directives everywhere else as well, but that seems kinda bloated and less manageable.
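Regarding the #ifdef question, one common approach is to wrap only the function bodies, so call sites compile identically in every configuration. A minimal sketch (my own, under the assumption that release builds leave _DEBUG undefined; Vec3 is a hypothetical stand-in for glm::vec3, and pending() is a helper added just for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for glm::vec3, just to keep the sketch self-contained.
struct Vec3 { float x, y, z; };

class GLDebug {
    struct Line { Vec3 p0, p1; };
    std::vector<Line> m_lines;
public:
    // Only the body is conditionally compiled, so callers build in all
    // configurations; with _DEBUG undefined this is a no-op.
    void drawLine(const Vec3& p0, const Vec3& p1) {
#ifdef _DEBUG
        m_lines.push_back({p0, p1});
#else
        (void)p0; (void)p1;  // silence unused-parameter warnings
#endif
    }
    std::size_t pending() const { return m_lines.size(); }
};
```

In a release build drawLine compiles away to nothing, so the class can stay in the build without doing any work.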
• What is the purpose of this class? What are you debugging? It has an array of Lines, but removes them after every render. I don't think this can be reasonably reviewed as it is, since it doesn't do anything of significance. It doesn't even keep track of the context its drawing into. And you can't make the class go away in non-debug builds if it does all of your drawing. I'm having trouble making sense of what you're actually trying to accomplish here. Can you elaborate? – user1118321 May 10 '18 at 1:21
• I think you’ve done a great job of picking up C++ language rather than writing Java in C++. – JDługosz May 10 '18 at 2:47
• @user1118321 This class isn't used for drawing everything, rather just rendering debug lines/frustums/transforms/etc in immediate mode (passing what needs to be drawn every frame). For actual model rendering etc, I'll be using VAOs/VBOs with a proper mesh class etc. This class essentially will serve no purpose in "release" mode. – Hex May 10 '18 at 13:08
Just a couple points (also well done properly qualifying namespaces, a rare thing to see).
• #pragma once is non-standard. Meaning it will probably work but is not guaranteed to. Bjarne himself recommends against using it.
• Not following your multiple private and public approach. Usually you have a section for each without repeating the keywords.
• It makes more sense to order your interface from public to private as people using it don't want to look at the internals of your class, they want to know what methods they can use.
• You don't do anything in either your constructor or your destructor. In this case it makes sense to let the compiler take care of them.
• Avoid declaring more than one variable per line
• First off thanks for the response! As for the multiple private/public I was planning on adding more functionality to the class with a mix of private/public functions/member variables etc, I figure having multiple public/private would allow better layout content by providing a public/private for each "section". (Though I should note, I'm coming from a Java/C# background where having a mix of public / private through out the class is common, so if this is an abnormal approach in C++ let me know.) As for the constructor and destructor, I just roughed them in for future use. – Hex May 9 '18 at 19:15
• @HexCrown (1) Abnormal might be a bit much but it certainly is an unusual approach. Most people will use the two section way. (2) No problem but from a reviewers point of view it's simply something to point out. I hope to see your complete class up for review when it's done. – yuri May 9 '18 at 19:26
• I'm new to this site, would I post the completed class when done as an edit or as a separate post? Also one of the things I was hoping to get feedback on was the struct Line {...}; being used as it is, is that the best way to go about encapsulating the data to be held in the vector, or is there a more generic/elegant approach, it seems kinda off to me. – Hex May 9 '18 at 19:53
• @HexCrown (1) Simply post it as a new question. You can mention that it's a follow-up question to one of your previous ones and link to it. (2) The way you use it looks fine to me however I just noticed you actually declare two variables in one line which should be avoided. (I'll update this in the answer) – yuri May 9 '18 at 20:02
• I've made a follow up post as you suggested, it can be found here: codereview.stackexchange.com/questions/194124/… – Hex May 10 '18 at 17:11
http://math.stackexchange.com/questions/120981/evaluating-int-01-sqrt1x2-text-dx
# Evaluating $\int_{0}^{1} \sqrt{1+x^2} \text{ dx}$
I'm learning integral. Here is my homework:
$$\int_0^1 \sqrt{1+x^2}\;dx$$
I think this problem solve by change $x$ to other variable. Can you tell me how please. (just direction how to solve)
thanks :)
-
You could start by setting $x=\tan\theta$. Then $dx=\sec^2\theta\,d\theta$ and the square root simplifies nicely. – David Mitra Mar 16 '12 at 16:12
Also, $\frac{d}{dx} \sinh^{-1} x = \dfrac{1}{\sqrt{x^2+1}}.$ – user2468 Mar 16 '12 at 16:18
If the function under the integral sign has form $f(ax^2+bx+c)$ in which $ax^2+bx+c$ has no root then you should be setting $x=A\tan\theta +B$. – Takasima Senko Mar 16 '12 at 16:19
This reminds me of an other integral – AD. Mar 22 '12 at 17:02
Since the integrand is a function of $x$ and $\sqrt{ax^{2}+bx+c}$ another option is to use the Euler substitution $\sqrt{ax^{2}+bx+c}=\pm \sqrt{a}x\pm t$, with $a>0$. Choosing $\sqrt{1+x^{2}}=t-x$, squaring both sides and solving for $x$, we obtain $x=\frac{t^{2}-1}{2t}$ and $dx=\frac{ t^{2}+1}{2t^{2}}dt$. The integrand becomes an easily integrable rational fraction of $t$ $$\begin{equation*} \sqrt{1+\left( \frac{t^{2}-1}{2t}\right) ^{2}}\frac{t^{2}+1}{2t^{2}}=\frac{1 }{4}\frac{\left( t^{2}+1\right) ^{2}}{t^{3}}=\frac{1}{2t}+\frac{1}{4t^{3}}+\frac{1}{4}t. \end{equation*}$$ So $$\begin{eqnarray*} \int_{0}^{1}\sqrt{1+x^{2}}dx &=&\int_{1}^{\sqrt{2}+1}\left( \frac{1}{2t}+\frac{1}{4t^{3}}+\frac{t}{4}\right) dt \\ &=&\left. \frac{1}{2}\ln t-\frac{1}{8t^{2}}+\frac{1}{8}t^{2}\right\vert _{1}^{\sqrt{2}+1}. \end{eqnarray*}$$
Added: I've checked the final result:
$$\begin{eqnarray*} \left. \frac{1}{2}\ln t-\frac{1}{8t^{2}}+\frac{1}{8}t^{2}\right\vert _{1}^{\sqrt{2}+1} &=&\frac{\ln \left( \sqrt{2}+1\right) }{2}-\frac{1}{8\left( \sqrt{2} +1\right) ^{2}}+\frac{1}{8}\left( \sqrt{2}+1\right) ^{2}-0 \\ &=&\frac{\ln \left( \sqrt{2}+1\right) }{2}+\frac{\sqrt{2}}{2}. \end{eqnarray*}$$
-
Edit in response to a downvote. – Américo Tavares Apr 6 '13 at 13:39
Integrate by parts to reduce to a table integral: $$\int_0^1 \sqrt{1+x^2} \mathrm{d} x = \left. x \sqrt{1+x^2} \right|_0^1 - \int_0^1 \frac{x^2 {\color{green}{+1-1}}}{\sqrt{1+x^2}} \mathrm{d} x = \sqrt{2} - \int_0^1 \sqrt{1+x^2} \mathrm{d} x + \int_0^1 \frac{\mathrm{d} x}{\sqrt{1+x^2}}$$ Now solving the equation for $\int_0^1 \sqrt{1+x^2} \mathrm{d} x$, and using the table anti-derivative $\int \frac{\mathrm{d} x}{\sqrt{1+x^2}} = \operatorname{arcsinh}(x)$: $$\int_0^1 \sqrt{1+x^2} \mathrm{d} x = \frac{1}{2} \left( \sqrt{2} + \operatorname{arcsinh}(1) \right) = \frac{1}{2} \left( \sqrt{2} + \log(1+\sqrt{2}) \right) \approx 1.1478$$
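As a quick numerical cross-check of the value ≈ 1.1478 (my own sketch, not part of the original answer), composite Simpson's rule on [0, 1] agrees with the closed form:

```cpp
#include <cassert>
#include <cmath>

// Composite Simpson's rule for f(x) = sqrt(1 + x^2) on [a, b]; n must be even.
double simpson_sqrt1px2(double a, double b, int n) {
    auto f = [](double x) { return std::sqrt(1.0 + x * x); };
    double h = (b - a) / n;
    double s = f(a) + f(b);
    for (int i = 1; i < n; ++i)
        s += (i % 2 ? 4.0 : 2.0) * f(a + i * h);
    return s * h / 3.0;
}

// Closed form derived above: (sqrt(2) + ln(1 + sqrt(2))) / 2.
double closed_form() {
    return 0.5 * (std::sqrt(2.0) + std::log(1.0 + std::sqrt(2.0)));
}
```

With n = 100 subintervals the quadrature error is far below 1e-8, so the two values coincide to many digits.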
-
If we choose to take a (purely) trigonometric route, we might start like this: \eqalign{ x&=\tan\theta\\ dx&=\sec^2\theta\,d\theta\\ I&=\int_0^1\sqrt{1+x^2}\,dx\\ &=\int_0^\frac{\pi}{4}\sec^3\theta\,d\theta\\ } However at this stage, we actually need the substitution $t=\sec\theta+\tan\theta$, believe it or not: \eqalign{ t&=\sec\theta+\tan\theta\\ dt&=\left(\sec\theta\tan\theta+\sec^2\theta\right)\,d\theta =t\sec\theta\,d\theta\\ \frac{dt}{t}&=\sec\theta\,d\theta } And then we need some trigonometry inspirations: \eqalign{ t & = \sec\theta+\tan\theta = \frac{1+\sin\theta}{\cos\theta} = \frac{\cos\theta}{1-\sin\theta} \\ t-\frac1t & = \frac{\cos\theta}{1-\sin\theta} - \frac{\cos\theta}{1+\sin\theta} = 2\tan\theta \\ \tan\theta & = \frac12 \left( t-\frac1t \right) = \frac12 \left( t-t^{-1} \right) \\ \tan^2\theta & = \frac{t^2+t^{-2}}{4} - \frac12 \\ \sec^2\theta & = \frac{t^2+t^{-2}}{4} + \frac12 } We can can then proceed as follows: \eqalign{ I & = \int_0^\frac{\pi}{4}\sec^3\theta\,d\theta = \int_1^{1+\sqrt{2}} \left( \frac{t^2+t^{-2}}{4}+\frac12 \right)\,\frac{dt}{t} \\& = \frac14 \int_1^{1+\sqrt{2}} \left(t+t^{-3}\right)\,dt + \frac12 \int_1^{1+\sqrt{2}} t^{-1}\,dt \\& = \frac18\left[t^2-t^{-2}\right]_1^{1+\sqrt2} + \frac12\left[\ln t \right]_1^{1+\sqrt2} \\& = \frac{\sqrt2+\ln{\left(1+\sqrt2\right)}}{2} } where it is helpful to notice that $$\left(1+\sqrt2\right)^{-1} = \frac{1}{1+\sqrt2} \cdot \frac{1-\sqrt2}{1-\sqrt2} = \frac{1-\sqrt2}{-1}$$ so that $$\left(1+\sqrt2\right)^{-2} = \left(1-\sqrt2\right)^2.$$
-
Since we are adding multiple answers, here is one using the Differentiation under the integral sign technique.
If $$F(a) = \int_{0}^{a} f(a,x) \text{ dx}$$ then, under suitable hypotheses,
$$F'(a) = f(a,a) + \int_{0}^{a} \frac{\partial f(a,x)}{\partial a} \text{ dx}$$
(A more general version is here: Wiki page on Differentiating under integral sign)
Now let
$$F(a) = \int_{0}^{a} \sqrt{a^2 + x^2} \text{ dx} \tag{1}$$
A substitution $x = at$ shows us that
$$F(a) = a^2 \int_{0}^{1} \sqrt{1 + t^2} \text{ dt} = Ka^2$$
(we are trying to find the value of $K$).
Thus we must have that $$F'(a) = 2Ka \tag{2}$$
Now go back to (1) and use the technique of differentiating under the integral sign.
We get
$$F'(a) = \sqrt{2} a + \int_{0}^{a} \frac{a}{\sqrt{a^2 + x^2}} \text{ dx} = \sqrt{2}a + a \sinh^{-1}(x/a)|_0^a = a (\sqrt{2} + \sinh^{-1}(1) - \sinh^{-1}(0))$$
Compare this with (2) and we get the value of $K$.
A similar approach was used here: Definite integral: $\displaystyle\int^{4}_0 (16-x^2)^{\frac{3}{2}} dx$
-
The way I've seen this done before starts as bgins did with the substitution
$$x=\tan\theta,\quad dx=\sec^2\theta \; d\theta$$
$$\int \sqrt{1+x^2} \; dx=\int\sec^3\theta \; d\theta$$
From here, we use integration by parts.
$$u=\sec\theta,\quad du=\sec\theta\tan\theta \; d\theta$$
$$dv=\sec^2\theta \; d\theta,\quad v=\tan\theta$$
$$\int\sec^3\theta d\theta=\sec\theta\tan\theta-\int\sec\theta\tan^2\theta \; d\theta$$
$$\int\sec\theta\tan^2\theta d\theta=\int\sec\theta(\sec^2\theta-1) \; d\theta=\int\sec^3\theta d\theta-\int\sec\theta d\theta$$
Combining, we get
$$\int\sec^3\theta d\theta=\sec\theta\tan\theta-\int\sec^3\theta \; d\theta+\int\sec\theta d\theta$$
$$2\int\sec^3\theta d\theta=\sec\theta\tan\theta+\int\sec\theta \; d\theta$$
$$\int\sec^3\theta \; d\theta=\frac12\sec\theta\tan\theta+\frac12\ln|\sec\theta+\tan\theta|+C$$
At which point you could substitute back for $x$ or, since your case is a definite integral, evaluate for the corresponding values of $\theta$.
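A quick numerical sanity check of that antiderivative (my own sketch, not part of the answer): a central difference of $F(\theta)=\frac12\sec\theta\tan\theta+\frac12\ln|\sec\theta+\tan\theta|$ should reproduce $\sec^3\theta$.

```cpp
#include <cassert>
#include <cmath>

// Antiderivative derived above: F(t) = (sec t tan t + ln|sec t + tan t|) / 2.
double sec3_antiderivative(double t) {
    double s  = 1.0 / std::cos(t);
    double tn = std::tan(t);
    return 0.5 * (s * tn + std::log(std::fabs(s + tn)));
}

// Central-difference derivative of F, which should recover sec^3(t).
double sec3_numeric(double t, double h = 1e-5) {
    return (sec3_antiderivative(t + h) - sec3_antiderivative(t - h)) / (2.0 * h);
}
```

At, say, θ = 0.5 the difference quotient matches sec³(0.5) to well within 1e-6.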
Let's try another method just for kicks. We'll try to get rid of that square root with the substitution
$$x=\frac{e^y}2-\frac{e^{-y}}2,\quad dx=\frac{e^y}2+\frac{e^{-y}}2 \; dy$$
$$\int\sqrt{1+x^2}dx=\int\sqrt{1+\left(\frac{e^y}2-\frac{e^{-y}}2\right)^2}\left(\frac{e^y}2+\frac{e^{-y}}2\right) \; dy$$
$$=\int \left(\frac{e^y}2+\frac{e^{-y}}2\right)\sqrt{1+\frac{e^{2y}}4+\frac{e^{-2y}}4-\frac12} \; dy=$$
$$\int \left(\frac{e^y}2+\frac{e^{-y}}2\right)\sqrt{\frac{e^{2y}}4+\frac{e^{-2y}}4+\frac12} \; dy$$
But the term outside the parentheses is the square root of the term under the radical. So we have
$$\int \left(\frac{e^{2y}}4+\frac{e^{-2y}}4+\frac12\right) \; dy = \frac{e^{2y}}8-\frac{e^{-2y}}8+\frac y2+C$$
Only problem now is back-substitution is painful.
$$\frac{e^{2y}}8-\frac{e^{-2y}}8+\frac y2+C=\frac12\left(\frac{e^y}2+\frac{e^{-y}}2\right)\left(\frac{e^y}2-\frac{e^{-y}}2\right)+\frac y2+C$$
$$=\frac{x\sqrt{1+x^2}}2+\frac12\ln(x+\sqrt{1+x^2})+C$$
-
Before we proceed let us recall some basics on the hyperbolic functions $$\sinh x=\frac{e^x-e^{-x}}{2}\qquad \text{and }\qquad \cosh x=\frac{e^x+e^{-x}}{2}$$ Below are some relations that are easy to prove $$\cosh^2 x - \sinh^2x = 1\\ \cosh2x=\cosh^2 x + \sinh^2 x\qquad \sinh2x=2\cosh x\sinh x\\ (\cosh x)' =\sinh x\qquad\qquad (\sinh x)' =\cosh x$$
Next, we turn to the integral of the OP.
Let us make the substitution $x=\sinh t$, then $$\int_0^1\sqrt{1+x^2}dx=\int_0^{\rm{arcsinh 1}}\sqrt{1+\sinh^2t}\cosh t\,dt\\ \qquad =\int_0^{\rm{arcsinh 1 }}\cosh^2 t\,dt =\frac12 \int_0^{\rm{arcsinh 1 }}1+\cosh 2 t\,dt\\ =\frac12\left({\rm{arcsinh}}1 + \frac{\sinh2({\rm{arcsinh}} 1)}{2}\right)$$ Now, a calculation based on the definition shows that ${\rm{arcsinh}}x=\log(\sqrt{1+x^2}+x)$, and hence, after using the definition of $\sinh$ once more we arrive at $$\int_0^1\sqrt{1+x^2}dx=\frac{\log(\sqrt{2}+1)}{2} + \frac{\sinh(2\log(\sqrt{2}+1))}{4}\\ = \frac{\log(\sqrt{2}+1)}{2} + \frac{(\sqrt{2}+1)^2 -\frac{1}{(\sqrt{2}+1)^2}}{8} = \frac{\log(\sqrt{2}+1)}{2} + \frac{\sqrt{2}}{2}\\$$
-
The Wikipedia article titled Integral of secant cubed explains how that integral comes from $$\int \sqrt{a^2+x^2} \; dx$$ and explains that that arises in rectification of the parabola, rectification of the Archimedean spiral, and quadrature of the helicoid. Then it explains how to find the integral by two different methods.
-
As a different approach, you can go the usual trigonometric integral way via the transform $x=iu$
$$\int \sqrt{1+x^2}\;dx=i\int \sqrt{1-u^2}\;du$$
and then $u=\sin(p)$
$$i\int \sqrt{1-u^2}\;du=i\int \sqrt{1-\sin(p)^2} \cos(p) \; dp = i\int \cos^2(p) \; dp=i\int \frac{\cos(2p)+1}{2}\;dp=i\frac{\sin(2p)}{4}+i\frac{p}{2}+c=i\frac{\sin(p)\cos(p)}{2}+i\frac{p}{2}+c=i\frac{u \sqrt{1-u^2}}{2}+i\frac{\arcsin(u)}{2}+c$$
$$\int \sqrt{1+x^2}\;dx=i\int \sqrt{1-u^2}\;du=i\frac{u \sqrt{1-u^2}}{2}+i\frac{\arcsin(u)}{2}+c$$$$=\frac{x \sqrt{1+x^2}}{2}+i\frac{\arcsin(-ix)}{2}+c$$
$$\int _0^1 \sqrt{1+x^2}\;dx=\frac{1 \sqrt{2}}{2}+i\frac{\arcsin(-i)}{2}+c - \left(i\frac{\arcsin(0)}{2}+c\right)$$
we need to find $\arcsin(-i)$
$\arcsin(-i)=k$
$\sin(k)=-i$
$\sin(k)= \dfrac{e^{ik}-e^{-ik}}{2i}=-i$
$e^{ik}-e^{-ik}=2$
$e^{ik}=z$
$z^2-2z-1=0$
$z=1+\sqrt{2}$ /// we will need $\ln(z)$ , thus ignore negative root $1-\sqrt{2}$
$ik=\ln(z)=\ln(1+\sqrt{2})$
$k=\dfrac{\ln(1+\sqrt{2})}{i}$
$$\int _0^1 \sqrt{1+x^2}\;dx=\frac{\sqrt{2}}{2}+\frac{\ln(1+\sqrt{2})}{2}$$
Note: I don't claim that this is an easy way to find the solution. I just wanted to show a different approach to the problem. Sometimes we can use complex numbers in such problems too.
-
https://www.physicsforums.com/threads/is-my-justification-acceptable.927699/
# I Is my justification acceptable?
1. Oct 6, 2017
### Tio Barnabe
I wanted to argue that $$\frac{d}{d \cos \theta} \sin \theta, \ \theta \in [0, \pi]$$ should be ignored in the given interval, because integration of it leads to a divergent integral. Is this acceptable as a reason?
2. Oct 7, 2017
### Staff: Mentor
First off, $\frac{d}{d \cos \theta} \sin \theta$ is pretty unwieldy, as $\sin(\theta)$ isn't a function of $\cos(\theta)$ at first glance. However, you could write the derivative as $\frac d {d(\cos(\theta))} (\pm \sqrt{1 - \cos^2{\theta}})$, and then use the chain rule. So, no, I don't see that it's reasonable to ignore it.
Second, integration and differentiation are different operations, so the fact that the integral of some function on some interval is divergent doesn't have any bearing here.
3. Oct 7, 2017
### FactChecker
In addition to what @Mark44 said, the idea that something can be ignored because it is divergent is wrong. If it was very small compared to other terms, that would be different, but being too large makes it impossible to ignore.
4. Oct 7, 2017
### NFuller
Just to further elaborate @Mark44's point, it isn't very hard to actually calculate the derivative and show that it does exist. So to drive home the point, here is the result.
Using the transformation $\xi(\theta)=\text{cos}(\theta)$
$$\text{sin}(\theta)=\pm\sqrt{1-\xi^{2}}$$
so
$$\frac{d}{d\text{cos}(\theta)}\text{sin}(\theta)=\pm\frac{d}{d\xi}\sqrt{1-\xi^{2}}=\pm\frac{\xi}{\sqrt{1-\xi^{2}}}=\pm\frac{\text{cos}(\theta)}{\text{sin}(\theta)}=\pm\text{cot}(\theta)$$
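Since $\theta \in [0, \pi]$ here, $\sin\theta \geq 0$, so on the open interval the derivative takes the $-\cot\theta$ branch. A small finite-difference sketch (my own check, not from the thread) confirms this:

```cpp
#include <cassert>
#include <cmath>

// Finite-difference estimate of d(sin θ)/d(cos θ) at θ in (0, π),
// parametrizing both sin and cos through θ itself.
double dsin_dcos(double theta, double h = 1e-5) {
    return (std::sin(theta + h) - std::sin(theta - h)) /
           (std::cos(theta + h) - std::cos(theta - h));
}
```

At θ = 1, for example, the estimate agrees with −cot(1) essentially to machine precision (the symmetric differences cancel exactly: $2\cos\theta\sin h / (-2\sin\theta\sin h) = -\cot\theta$).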
5. Oct 7, 2017
### Tio Barnabe
Thank you to everyone
http://mathoverflow.net/questions/79160/relating-eigenvectors-of-two-self-adjoints-operators
# Relating eigenvectors of two self-adjoints operators
Suppose I have a self-adjoint operator $\mathbf{L}$ which I separate into two parts which are themselves self-adjoint. I write this in terms of their eigenvalues/eigenvectors:
$\mathbf{v} \Lambda \mathbf{v}^T = \mathbf{v}_1 \Lambda_1 \mathbf{v}^T_1 + \mathbf{v}_2 \Lambda_2 \mathbf{v}^T_2$
The two parts can also be written as
$\mathbf{v}_1 \Lambda_1 \mathbf{v}^T_1= DK_1D^T$
$\mathbf{v}_2 \Lambda_2 \mathbf{v}^T_2= DK_2D^T$
with $K_1$ and $K_2$ both symmetric, $D$ is skew-symmetric. Suppose $K_{1,2}$ are formed by the vector products $\mathbf{b}\mathbf{b}^T$ and $\mathbf{b_\bot}\mathbf{b}^T_\bot$ respectively.
How do I connect the eigenvectors $\mathbf{v_1}$ to $\mathbf{v_2}$? My guess is that $\mathbf{v_1}(i)^T\mathbf{v_2}(i)=0, \quad \forall\, i$, but I don't know how to prove it.
$K_{1,2}$ are formed by the vector products $\mathbf{b}\mathbf{b}^T$ and $\mathbf{b}_\bot\mathbf{b}^T_\bot$ respectively, and $\mathbf{b}$ and $\mathbf{b}_\bot$ are perpendicular to each other. 1) No, they can be written as such, no need for proof there.
So $D\textbf{b}\textbf{b}^TD^T$ has eigenvectors unrelated to the eigenvectors of $D\textbf{b}_\bot\textbf{b}^T_\bot D^T$?
-
I am sorry but I got a bit confused by your notation. This may be standard but I have not encountered it. Could you explain a bit more what is what? More precisely: what is your space? What objects are there ($\Lambda$...)? What is $v_1(i)$? Thanks. – András Bátkai Oct 26 '11 at 13:31
Hi András, thanks for reading :) . $\Lambda$ is a diagonal matrix filled with the eigenvalues, $\mathbf{v}$ is a matrix which columns are formed by the eigenvectors. $\mathbf{v}(i)$ is the $i^{th}$ eigenvector. My main question is basically what $\mathbf{b}\mathbf{b}^T$ versus $\mathbf{b_\bot}\mathbf{b}^T_\bot$ means for the difference between $\mathbf{v}_1$ and $\mathbf{v}_2$. – Bramiozo Oct 26 '11 at 13:53
## 1 Answer
So your question seems to be : what is the connection between the eigenvectors of $A_1=Dbb^TD^T$ and $A_2=Db_\perp b_\perp^TD^T$ ?
Well, it's easy to find these eigenvectors. First case: $Db, Db_\perp$ linearly independent.
Then the eigenspaces of $A_1$ are ${\mathbb R}Db$, and $(Db)^\perp$, and similarly for $A_2$. Since $D$ is skew-symmetric, in particular it does not preserve orthogonality and there is no connection between the eigenvectors of $A_1$ and $A_2$. The second case is obvious.
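A tiny 2-D sanity check of this (my own example, with $b = (1,0)$ and the skew-symmetric $D = [[0,1],[-1,0]]$): since $Dbb^TD^T = (Db)(Db)^T$, the eigenspaces are indeed $\mathbb{R}Db$ (eigenvalue $|Db|^2$) and $(Db)^\perp$ (eigenvalue $0$).

```cpp
#include <array>
#include <cassert>

using Vec2 = std::array<double, 2>;
using Mat2 = std::array<Vec2, 2>;

// D = [[0, 1], [-1, 0]] is skew-symmetric; Db rotates b by -90 degrees.
Vec2 applyD(const Vec2& v) { return {v[1], -v[0]}; }

// A = D b b^T D^T = (Db)(Db)^T, built explicitly as an outer product.
Mat2 build_A(const Vec2& b) {
    Vec2 db = applyD(b);
    return {{Vec2{db[0] * db[0], db[0] * db[1]},
             Vec2{db[1] * db[0], db[1] * db[1]}}};
}

Vec2 apply(const Mat2& M, const Vec2& x) {
    return {M[0][0] * x[0] + M[0][1] * x[1],
            M[1][0] * x[0] + M[1][1] * x[1]};
}
```

With $b=(1,0)$ we get $Db=(0,-1)$ and $A=[[0,0],[0,1]]$: $A(Db)=Db$ (eigenvalue $|Db|^2=1$), while the perpendicular vector $(1,0)$ is sent to zero.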
-
Thanks Fabien. I was wondering, suppose we reverse it and state that the set of eigenvectors $\mathbf{R}$ is the summation of two distinct parts, say $\mathbf{R}=\mathbf{R}_1+\mathbf{R}_2$ where each column represents an eigenvector. Now I want that $\mathbf{R}_1(i)\cdot\mathbf{R}_2(i)=0,\, \forall i$ where $i$ indicates a specific eigenvector $\mathbf{R}(i)$ and of course $\mathbf{R}(i)=\mathbf{R}_1(i)+\mathbf{R}_2(i)$. (Also suppose that the eigenvectors are normalised.) – Bramiozo Aug 7 '12 at 15:28
What kind off requirement would be in place for $\mathbf{R}_1$ and $\mathbf{R}_2$ for this to be true? – Bramiozo Aug 8 '12 at 9:21
https://latex.org/forum/viewtopic.php?f=45&t=7271&p=28197
## LaTeX forum ⇒ Graphics, Figures & Tables ⇒ "Call-chain" diagram using xymatrix
Information and discussion about graphics, figures & tables in LaTeX documents.
natskvi
Posts: 7
Joined: Wed Dec 09, 2009 11:59 pm
### "Call-chain" diagram using xymatrix
I am trying to produce a simple call-chain diagram (similar to a filesystem diagram) that looks like
folder.| ---> subfolder | ---> subsubfolder
The arrows can be curved or angled, I don't care.
My attempts so far culminated in the following:
\begin{center}\makebox{\xymatrix@!0@R=2pc@C=2pc{
  {\phantom{\tt{XX}}} \ar@/_1pc/[rd] & \tt{HF\_ship\_sim} \\
  & {\phantom{\tt{XX}}} \ar@/_1pc/[rd] & \tt{AddWakeField} \\
  & & {\phantom{\tt{XX}}} & \tt{CalcWakeFieldVelocity}}}\end{center}
Which results in
[Attachment: diagram.jpg (7.67 KiB)]
There is clearly a major horizontal alignment problem, and a minor vertical alignment problem. What is a better way to do it? I found the dirtree package, but it doesn't allow to display arrows.
Thanks.
localghost
Site Moderator
Posts: 9204
Joined: Fri Feb 02, 2007 12:06 pm
Provide a minimal working example (MWE) that produces the output as shown in the picture. Attach the according log file. And please upload attachments to the forum server.
Best regards
Thorsten¹
LaTeX Community Moderator
¹ System: openSUSE 42.2 (Linux 4.4.52), TeX Live 2016 (vanilla), TeXworks 0.6.1
natskvi
Posts: 7
Joined: Wed Dec 09, 2009 11:59 pm
Here is an MWE that produces the output:
\documentclass[11pt]{article}
\usepackage[all]{xy}
\begin{document}
\xymatrix@!0@R=2pc@C=2pc{
  {\phantom{\tt{XX}}} \ar@/_1pc/[rd] & \tt{HF\_ship\_sim} \\
  & {\phantom{\tt{XX}}} \ar@/_1pc/[rd] & \tt{AddWakeField} \\
  & & {\phantom{\tt{XX}}} & \tt{CalcWakeFieldVelocity}}
\end{document}
Basically, I guess it's a usage issue (knowing how to use XY-Pic package). The problem is that arrows are drawn from the middle of the bottom of the "from" entry, which is why I use \phantom{XX} entries for the source and destination of the arrows as "placeholders" for the actual text stored in the subsequent entry on the same row, and shifted to the left, except it's not quite working...
If I incorporate the text lines in the diagram not as separate entries but rather between \save and \restore and tweak their position via +<x,y> as follows
\documentclass[11pt]{article}
\usepackage[all]{xy}
\begin{document}
\xymatrix@!0@R=2pc@C=2pc{
  {\phantom{XX}} \ar@/_1pc/[rd] \save[]+<0.8cm,-0.07cm>*\txt<8pc>{\tt{HF\_ship\_sim}}\restore \\
  & {\phantom{XX}} \ar@/_1pc/[rd] \save[]+<0.9cm,0cm>*\txt<8pc>{\tt{AddWakeField}}\restore \\
  && {\phantom{XX}} \save[]+<1.4cm,0.02cm>*\txt<8pc>{\tt{CalcWakeFieldVelocity}}\restore}
\end{document}
then the output looks much better
[Attachment: betterimage.jpg (10.5 KiB)]
except that it cannot be centered as a figure, because apparently the text between the \save and \restore is not taken into account when computing the dimensions of the enclosing box for purposes of centering.
natskvi
Posts: 7
Joined: Wed Dec 09, 2009 11:59 pm
I found a package, xytree
http://www.ctan.org/tex-archive/macros/latex/contrib/xytree/xytree-doc-en.pdf
that can also draw the kind of hierarchical trees that I need. I hope it supports arrows and curved lines.
natskvi
Posts: 7
Joined: Wed Dec 09, 2009 11:59 pm
(I keep answering my own questions...)
The following code does the trick:
\begin{figure}[htbp]
  \begin{center}
    \centerline{
      \yytree[3]{
        \yynode[1]{\tt{HF\_ship\_sim}} \\
        \xyconnect[->](R,L){0,1} & \yynode[1]{\tt{AddWakeField}{\phantom{p}}} \\
        & \xyconnect[->](R,L){0,1} & \yynode{\tt{CalcWakeFieldVelocity}}}
    }
    \caption{Call chain for subroutine}
  \end{center}
\end{figure}
[Attachment: finalversiona.jpg (7.66 KiB)]
The \yytree macro doesn't draw arrows itself, but we can place an arrow directly on top of the horizontal portion of the broken line using the command
\xyconnect[->](R,L){0,1}
Also note that {\phantom{p}} is necessary for consistent vertical alignment of the second entry because it doesn't have any letter (such as g, j, p, q, y) that extends below the baseline.
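For comparison, the same staircase can be drawn with plain \xymatrix arrows running between the text entries themselves, which avoids both the \phantom placeholders and the \save/\restore repositioning (an untested sketch — the spacing options may need tweaking):

```latex
\xymatrix@R=1pc@C=1pc{
  \texttt{HF\_ship\_sim} \ar[dr] & & \\
  & \texttt{AddWakeField} \ar[dr] & \\
  & & \texttt{CalcWakeFieldVelocity}
}
```

Because the arrows connect real entries, the figure's bounding box includes all the text, so centering works as expected.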
Happy New Year!
http://mathhelpforum.com/pre-calculus/46403-whats-domain.html
# Math Help - What's the domain?
1. ## What's the domain?
2. The domain is all the $x$ that make $x(x-18)>0$.
3. I know that... but I need the domain of that function in interval notation.
4. Originally Posted by arbolis
The domain is all the $x$ that make $x(x-18)>0$.
5. Ok. Note that $x(x-18)$ vanishes when $x$ equals 0 or 18. Note also that $y=x(x-18)=x^2-18x$ is the equation of an upward-opening parabola with a global minimum. That means $x(x-18)\geq 0$ when $x \in (-\infty,0] \cup [18,+\infty)$, and that's your domain.
EDIT: I'm sorry, in my first post it should be "The domain is all the $x$ that make $x(x-18)\geq 0$."
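A quick numerical sanity check of that sign analysis (just a sketch; it assumes the function in question requires $x(x-18)\geq 0$, e.g. a square root):

```javascript
// Sample the radicand x*(x - 18) and check where it is non-negative.
function inDomain(x) {
  return x * (x - 18) >= 0;
}

// Points at or outside the roots 0 and 18 pass; points strictly between them fail.
console.log([-3, 0, 9, 18, 25].map(inDomain)); // → [ true, true, false, true, true ]
```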
|
2014-08-01 14:28:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9624072909355164, "perplexity": 520.6670955708765}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510274987.43/warc/CC-MAIN-20140728011754-00337-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/31064/saving-the-most-lives-without-butterfly-effects
|
# Saving the most lives without butterfly effects
I have a time machine and I want to do good with it. I want to save lives.
But I'm worried any change that I could make could have larger and larger ripple effects that change history drastically causing even worse atrocities.
This hasn't been done before, so it is unknown whether a change that affects my past or my ancestors could erase me or my chance to use this marvellous machine. I need to limit the large-scale global ripples, or butterfly effect, of my trip.
My ancestors have lived in an isolated part of the world for the last 100 years but I dare not go back farther than 100 years, for fear of messing with my own past.
I could try to stop terrible wars or kill powerful terrible men but then I know nothing about how the wars and people that come after them will change. I also bear the risk that one of those I save will be a great genius or leader and change history but there is no way around that so I have to accept it.
I can send myself back in a one way trip to any place on the surface of the earth in the last 100 years with up to 200 lbs of equipment available in the world today.
Where and when should I go, and what should I do, to save the most lives without sending large changes rippling through world history?
• How well does the time machine move groups of people? – AndyD273 Dec 8 '15 at 7:33
• @AndyD273 The time machine only moves me one person and is a one way trip. I can act for years after arriving but there is no way back. – sdrawkcabdear Dec 8 '15 at 7:38
• @sdrawkcabdear One way to explain the rules in your comment is to say that time is not looping, but rather you are pushing a new branch of history -- the function of the time machine is to pinch one thread of time forward, the index point of the beginning of one single light cone of potentiality is yanked forward like a sweater yarn pulled on a nail and pushed back in the wrong place. This could be Superman-scienced away plausibly enough that its discovery by the character in the story could be shocked once he realizes that he's not saving lives, he's seeing different echoes of them. – zxq9 Dec 8 '15 at 9:22
• You can't go back in time to save lives and not drastically alter history. (at least it is very very very unlikely) This is potentially idea generation but is in the same vein as many of our questions to I am voting to leave it open. – James Dec 8 '15 at 16:18
• What is your limit on the butterfly effect? People disagree on what it takes to send large changes rippling through history. Some stories require you to kill someone in power to have large ripples. Others believe the mere act of breathing is sufficient. – Cort Ammon - Reinstate Monica Dec 8 '15 at 17:58
Where and when should I go, and what should I do, to save the most lives without sending large changes rippling through world history?
You should go behind your time machine, and pull the power plug out of the wall.
Predicting the effects of causality with time travel is a wicked problem, or a problem with "incomplete, contradictory, and changing requirements that are often difficult to recognize."
But why does that mean you shouldn't go back in time? After all, there's a chance you might save some lives!
Let's look at the two extremes:
# Saving maximal lives: 1.2 billion saved
How many lives could you save? According to the World Health Organization, here are the leading causes of death, worldwide:
Source: WHO
An estimated 56 million people died worldwide in 2012, most of them by natural causes (e.g., diseases are natural causes. Bullets are not.).
So assuming you do not have a cure for old age or heart disease in your kit yet, let's very optimistically say, maximum, you could have saved ~24 million people in 2015. I have no idea how you'd do that, but there it is.
Now, the population of the world has increased dramatically in the past 100 years, from about 1.5 billion to 7.3 billion. I'll assume the death count scales with population (a bit dubious, but more in-depth analysis would be prohibitively difficult). That means you could save about 1/300 of the population, maximum.
Your best bet might be to bring a printed record of every accident, murder, terrorist attack (with documentation on the attackers), etc. So, best case, assuming everyone listens, and doesn't cause other accidents in the process of avoiding the original ones, you could save 1/300 of the population of every year.
That would add up to about 1.2 billion. (See note 1)
# Worst-case scenario: negative 7.3 billion
The Cold War was a very precarious time in world history. There were some well-known close calls, and possibly more that have yet to be declassified and published. However, if your far-reaching life-saving efforts were to cause any "ripples" that affected any of these events (or instigated others), and the US and Soviets unleashed their arsenals, the explosions, fallout, and potential nuclear winter could end life as we know it.
Sure, there might be a few survivors (hopefully your "isolated" kinsfolk have a good fallout shelter and plenty of supplies), but it's an unfortunately plausible worst case.
# But what's likely?
As mentioned, causality is extremely hard to predict.
To quote a strangely apt observation from WOPR (WarGames) on the Cold War, which seems to apply to this question: "a strange game. The only winning move is not to play."
In other words, I don't know, but I hope this answer helps to put some bounds on the possibilities.
Footnotes:
1. I arrived at the 1.2 billion figure by coming up with a rough plot of world population growth, dividing $y$ by 300, and calculating the area under that curve between 1915 and 2015.
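That back-of-the-envelope integral can be reproduced in a few lines (a rough sketch — the intermediate population anchor points are approximate historical values I've supplied, not figures from this answer):

```javascript
// Rough world-population anchor points in billions; the endpoints come from
// the answer above, the intermediate values are approximate historical estimates.
const points = [
  [1915, 1.5],
  [1950, 2.5],
  [1975, 4.1],
  [2000, 6.1],
  [2015, 7.3],
];

// Trapezoidal rule over person-years, then take the 1/300 "savable" fraction.
function savableLives(pts, fraction) {
  let personYears = 0; // in billions
  for (let i = 1; i < pts.length; i++) {
    const [t0, p0] = pts[i - 1];
    const [t1, p1] = pts[i];
    personYears += ((p0 + p1) / 2) * (t1 - t0);
  }
  return (personYears / fraction) * 1e9; // convert billions to people
}

console.log(savableLives(points, 300)); // ≈ 1.27e9, close to the ~1.2 billion figure above
```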
• "hopefully your isolated kinsfolk have a good fallout shelter and plenty of supplies" If they don't, then the protagonist is never born, the time travel never happens the way the protagonist did it, and the die-off never happens that way. – a CVn Dec 8 '15 at 15:08
If all you are concerned about is changes in the present, you can avoid butterfly effects by not going back very far. 1-2 days tops.
You see on the news terrorists blew up a plane last night, so you go back to yesterday morning and take recorded footage of the news reports to the FBI.
When they finally let you talk to someone and they don't believe you, make them a deal: when it happens, they'll give you evidence for you to take back to their past self.
Wait the day for the event to either happen or be stopped, make contact, then take what he gives you to past him.
Repeat this 1 day loop until the terrorists are foiled and arrested before hurting people.
Wait till the next disaster and take it to your contact, which will be easier this time since you already earned his trust.
edit:
You'll probably only be able to save a few people at a time, and it's mostly letting the authorities do the work, but that's ok, since people's lives will be saved.
How to turn it into a dystopia story:
So our hero has been going back to stop major incidents for a while, and the law enforcement community is looking really good, having foiled several major attacks around the globe.
The hero's contact calls him up one day, says "hey, you've helped us out a lot with stopping these major events, but you could do more, if you want. Last week across the country there were 200 murders, 57 rapes, and a bunch of other crimes we didn't do anything about. How'd you like to save those people too? I'll just give you a flash drive, you go back a week, deliver it, and we'll put you up in a nice hotel with room service."
Sounds really good, huh?
Now we have pre-crime, along with a host of other ways that the system could be abused.
• I feel like this would make a good crime show: a time traveler that travels back in time by a few days and informs an agent of an upcoming crime/disaster that will happen soon, and then they have to work together to stop it from happening. – Mike.C.Ford Dec 8 '15 at 9:23
• Like the show "7 Days"? – JDługosz Dec 8 '15 at 13:32
• @JDługosz I don't know, never heard of that one. I picture it being really iterative. So he hears about another attack, takes back whatever information he can get and delivers it to the people that can do something. The attack still happens because of unknown factors, so he collects the new information and takes it back, over and over until they get it right and the bad guys are caught, saving lives. Unless you have all the intelligence the first time, multiple trips are probably unavoidable. – AndyD273 Dec 8 '15 at 13:58
• Like how DIY problems can be classified as "will need 3 trips to the hardware store" even if you plan ahead as best you can. – JDługosz Dec 8 '15 at 14:00
• Note that the OP said it's a one-way, one-time trip to the past, without any ability to jump back and forth. – a CVn Dec 8 '15 at 15:09
I think you could only save a maximum of one or two people.
Saving large numbers of people would increase the potential for changing the world too much. Changing the course of the Titanic for example would put roughly an extra 1500 people into the timestream to cause chaos.
On the negative front, there wouldn't have been the same push to track icebergs, and more shipping could have been affected, especially since the international rules around lifeboats followed that disaster.
https://zbmath.org/?q=an:0972.14015
# zbMATH — the first resource for mathematics
Torsors and rational points. (English) Zbl 0972.14015
Cambridge Tracts in Mathematics. 144. Cambridge: Cambridge University Press. viii, 187 p. (2001).
Contents: 1. Introduction;
Part one: Torsors:
2. Torsors: general theory; 3. Examples of torsors; 4. Abelian torsors;
Part two: Descent and Manin obstruction:
5. Obstructions over number fields; 6. Abelian descent and Manin obstruction; 7. Abelian descent on conic bundle surfaces; 8. Non-abelian descent on bielliptic surfaces; 9. Homogenous spaces and non-abelian cohomology; References; Index.
Let $$X$$ be a smooth projective variety defined over a number field $$k$$. One of the basic questions about rational points on $$X$$ is whether the so-called Hasse principle holds, i.e. whether the existence of points over each completion $$k_v$$ of $$k$$ (or equivalently, the non-emptiness of the set $$X({\mathbb{A}}_k)$$ for $${\mathbb{A}}_k$$ the adele ring of $$k$$) implies $$X(k)\neq\emptyset$$. It has arisen already in the classical case of curves of genus 1 that this question can be attacked by studying torsors, or principal homogeneous spaces over $$X$$, for the following reason. Suppose $$f: Y\to X$$ is a torsor under a (linear algebraic) group $$G$$. Then of course any $$k$$-point of $$Y$$ projects onto a $$k$$-point of $$X$$, but even if the fibre $$Z=Y_P$$ over a point $$P\in X(k)$$ contains no $$k$$-point, there is a well-known twisting operation producing a torsor $$Y_Z\to X$$ under a "twisted" group $$G_Z$$ (equal to $$G$$ if $$G$$ is abelian) whose fibre over $$P$$ already contains one. This constitutes a link between the behaviour of rational points on $$X$$ and those on the $$Y_Z$$; in particular, the absence of rational, or even adelic points on the $$Y_Z$$ constitutes an obstruction to the Hasse principle on $$X$$, and in many cases this can be checked by a finite amount of computation.
In the already mentioned case of genus 1 curves $$G$$ is finite abelian, the torsors are the so-called $$n$$-coverings and one gets the classical theory of descent on elliptic curves.
By examining F. Châtelet’s classical work on the arithmetic of the surfaces named after him, J.-L. Colliot-Thélène and J.-J. Sansuc discovered towards the end of the 1970’s that in the case of rational varieties an equally fruitful theory can be developed by studying torsors under tori [cf. Duke Math. J. 54, 375-492 (1987; Zbl 0659.14028)]. Their descent theory was powerful enough to settle the question of the uniqueness of the so-called Manin obstruction in many cases. This obstruction is defined as follows: Define a pairing $$X({\mathbb{A}}_k)\times \text{Br}(X)\to {\mathbb{Q}}/{\mathbb{Z}}$$ (where $$\text{Br}(X)$$ is the cohomological Brauer group of $$X$$) by evaluating elements of $$\text{Br}(X)$$ at each component and then taking the sum of local invariants (which is known to be finite). By global class field theory, the diagonal image of $$X(k)$$ in $$X({\mathbb{A}}_k)$$ is then contained in the subset $$X({\mathbb{A}}_k)^{\text{Br}}$$ of adeles annihilated by the above pairing and thus the emptiness of $$X({\mathbb{A}}_k)^{\text{Br}}$$ is an obstruction to the Hasse principle if $$X({\mathbb{A}}_k)$$ itself is nonempty. The obstruction is said to be the only one if $$X({\mathbb{A}}_k)^{\text{Br}}=\emptyset$$ is equivalent to $$X(k)=\emptyset$$. In the case of a rational variety $$X$$, J.-L. Colliot-Thélène and J.-J. Sansuc were able to characterise the set $$X({\mathbb{A}}_k)^{\text{Br}}$$ as the set of adelic points coming from so-called universal torsors under the Néron-Severi torus of $$X$$, thereby reducing the uniqueness question of the Manin obstruction to the existence of universal torsors with adelic points, a question which is effectively solvable in many concrete situations.
Almost thirty years later, A. N. Skorobogatov presented [Invent. Math. 135, 399-424 (1999; Zbl 0951.14013)] the first unconditional example of a variety (in fact a bielliptic surface) where the Manin obstruction does not suffice to explain the failure of the Hasse principle; subsequently, D. Harari and the author developed ["Non-abelian cohomology and rational points" (to appear)] a conceptual framework for explaining the new counter-example, relating it to torsors under non-abelian groups.
The book under review offers a clear and polished account of the most important results in the field, with emphasis on the contents of the three works cited above. The two opening chapters, which may be of independent interest, present a nice collection of basic facts and examples about torsors which can hardly be found together elsewhere. Then the main results of the theory of Colliot-Thélène and Sansuc are presented in an elegant and streamlined manner thanks to the insights offered by subsequent developments and also to the use of derived categories which simplifies several proofs. Afterwards, the general theory is applied to treat specific classes of varieties, such as smooth compactifications of tori, the first case successfully handled by Colliot-Thélène and Sansuc, as well as several types of conic bundle surfaces whose study has been pioneered by Swinnerton-Dyer. The book concludes by explaining some aspects of the theory of Harari and the author, with applications to the interpretation of the author’s counter-example mentioned above and of Borovoi’s work on the uniqueness of the Manin obstruction for homogeneous spaces under semisimple simply connected algebraic groups.
An attractive feature of the book is the healthy balance between the abstract and the concrete, in that the author does not refrain from using hard machinery for building up the conceptual framework but as a counterpoint works out several examples in detail. Thus, due to its reasonably self-contained nature and the careful choice of topics, the book provides an excellent account of the subject for the non-expert. As for the experts, they will probably learn less here since most of the material appeared in original sources in a rather similar form, but they will certainly appreciate disposing of a neat and handy reference and may gain stimulus for further applications of the descent method in diophantine geometry.
##### MSC:
14G05 Rational points
14-02 Research exposition (monographs, survey articles) pertaining to algebraic geometry
14G25 Global ground fields in algebraic geometry
https://byjus.com/question-answer/a-physical-process-in-which-a-substance-reacts-with-oxygen-to-give-off-heat-is-called-combustion-true-or-false-also-write-the-false-statements-in-their-correct-form-2/
Question
# A physical process in which a substance reacts with oxygen to give off heat is called combustion
A. True
B. False
Solution
## The correct option is B (False)

Combustion reaction:

- Combustion is a chemical process in which a fuel reacts with oxygen to give off heat.
- Heat and light energy are released during this reaction; the flame develops from that energy.
- The general combustion reaction is: Hydrocarbon + Oxygen → Carbon dioxide + Water + Heat energy (for example, methane burns as CH₄ + 2O₂ → CO₂ + 2H₂O + heat).

Chemical change in combustion:

- Combustion converts the reactants into new products and is irreversible, so it is a chemical change.
- A physical change forms no new substances and is reversible, which is not the case for combustion.
- Examples of combustion: the burning of coal or wood, and cars and buses burning petrol or diesel.

Therefore, the given statement is false: combustion is a chemical process, not a physical one.
http://openstudy.com/updates/4d67f6cc5f368b0b7c6fb6b0
## anonymous 5 years ago How do you integrate e^(2x) sin(3x) dx?
1. bahrom7893
Use integration by parts. Btw when is this due?
2. anonymous
its a revision, we cant figure it out
3. bahrom7893
No i just meant i have to do my own hw too, so do u need this like right away or by tonite or by 2morro mornin?
4. bahrom7893
ill just do it now
5. anonymous
we would just like to know what to do with the sin(3x)
6. anonymous
thanks!
7. bahrom7893
ima write this out on paper and email it to u. what's ur email?
8. anonymous
moca341@hotmail.com .. thank you :)
9. bahrom7893
np
10. bahrom7893
almost done...
11. anonymous
great!
12. bahrom7893
i will take separate pix and email them one by one, the whole solution won't fit lol
13. anonymous
thats fine! haha
14. bahrom7893
okay emailed it
15. bahrom7893
ask me if u have any questions, either here or by email and Fan if I helped!
16. bahrom7893
did u get it?
17. anonymous
It helped a lot, but i dont really get it!, ill try to figure it out! thanks a lot!
18. bahrom7893
no, ask me which part don't you understand?
19. bahrom7893
I think it might the place where I let that one integral be capital i
20. anonymous
i dont even understand the first line, dont worry about it!
21. bahrom7893
Oh okay so first I used integration by parts. See the stuff in the circle: I said Let u = e^(2x), then what is du/dx? du/dx = derivative of u = 2e^(2x), multiply both sides by dx to get: du = 2e^(2x)dx
22. anonymous
i dont see where the 1/3 cos (3x) comes from
23. bahrom7893
Oh okay that's just the integral of Sin(3x)
24. bahrom7893
I will work it out here: After that I said let dv/dx = Sin(3x), well then what's V? v is the integral of dv/dx with respect to x. V = Integral(dv/dx, dx) $v = \int\limits_{}^{}Sin(3x)dx$
25. bahrom7893
Here you have to use a simple substitution: Let a = 3x; then da = 3 dx. Now you have 3x in your integral, but you still need a 3 in front of dx. To do so, multiply and divide by 3, same as multiplying by 1
26. anonymous
ok, i get it, thanks a lot!
27. bahrom7893
$v = \int\limits_{}^{}Sin(3x)*(3/3)*dx = (1/3) *\int\limits_{}^{}Sin(3x)*3*dx$
28. bahrom7893
Now you have both a and da; your integral simplifies to: $(1/3)\int\limits_{}^{}Sin(a)*da$
29. bahrom7893
that's it then integrate as u would a regular sin and in the end replace a by 3x
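For completeness — the thread stops before the final closed form. The standard result (worth checking against your own working) is $\int e^{2x}\sin(3x)\,dx = \frac{e^{2x}}{13}\,(2\sin 3x - 3\cos 3x) + C$. A quick numerical sketch verifying that this antiderivative differentiates back to the integrand:

```javascript
// F(x) is the claimed antiderivative; f(x) is the integrand e^(2x) sin(3x).
const F = (x) => (Math.exp(2 * x) * (2 * Math.sin(3 * x) - 3 * Math.cos(3 * x))) / 13;
const f = (x) => Math.exp(2 * x) * Math.sin(3 * x);

// Central-difference approximation of F'(x).
const dF = (x, h = 1e-6) => (F(x + h) - F(x - h)) / (2 * h);

for (const x of [-1, 0, 0.5, 1, 2]) {
  console.log(x, dF(x), f(x)); // the two columns should agree closely
}
```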
https://codereview.stackexchange.com/questions/276824/cows-and-bulls-game-in-javascript
# Cows and bulls game in JavaScript
I've been working on this game for the last 2 days. The game is working, but I'd like feedback on my code. Can it be made simpler? Sorry if my code is complicated or not well written: I am still learning!
I feel I could save some code and make it clearer for others to read.
Gameplay looks so: you enter your guess code, computer compares it with the secret code and gives you two clues: numbers of "bulls" and "cows". What does this mean? A bull is a digit which is present in both the codes in the same position. And a cow is a digit which is present in both the codes in the different position. For example, if the secret code is 2056 and you ask 9516, an answer will be "one bull and one cow" (but you won't know which digit is a bull and which digit is a cow). That's all!
const prompt = require("prompt-sync")({ sigint: true });
// storing variables
let rulesMessage = `Gameplay looks so: you enter your guess code, computer compares it with the secret code and gives you two clues: numbers of "bulls" and "cows". What does this mean? A bull is a digit which is present in both the codes in the same position. And a cow is a digit which is present in both the codes in the different position. For example, if the secret code is 2056 and you ask 9516, an answer will be "one bull and one cow" (but you won't know which digit is a bull and which digit is a cow). That's all!`;
let cheerMessage = [
"Just a friendly reminder that I believe in you.",
"I predict a big win at the next guess",
"Crossing my fingers for you! Go, go, go",
];
let level = "";
let findTheUser = prompt("what's your name: ");
//1st function generates a random number with no repeated digits; the length of the number is given by the user
function getRandomNumberNoRepeat(level) {
let numberPick = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
return numberPick
.sort(() => Math.random() - 0.5)
.join("")
.slice(0, level);
}
// 2nd function specifies the level of the game the level value will be used in the 1st function
function levelSelector() {
level = prompt("Choose your level easy, medium, hard or extreme: ");
if (level.toLowerCase() === "easy") {
level = 4;
console.log(`\nSo easy level then guess a ${level} digit number!! Good luck ${findTheUser}`);
} else if (level.toLowerCase() === "medium") {
level = 6;
console.log(`\nSo medium level then guess a ${level} digit number!! Good luck ${findTheUser}`);
} else if (level.toLowerCase() === "hard") {
level = 7;
console.log(`\nSo hard level then guess a ${level} digit number!! Good luck ${findTheUser}`);
} else if (level.toLowerCase() === "extreme") {
level = 9;
console.log(`\nSo extreme level then guess a ${level} digit number!! You are brave ${findTheUser}`);
} else {
level = false;
console.log(
"\nthe level should be: easy, medium, hard or extreme! check for typos!"
);
}
return level;
}
// 3rd function will ask the user if he knows the rules
let question = prompt(`Do you know the rules of the game ${findTheUser} Y/N: `);
console.clear();
if (question.toUpperCase() === "N") {
  console.log(`\n${rulesMessage}\n\nLet´s go ${findTheUser}\n`);
} else {
  console.log(`\nLet´s go ${findTheUser}`);
}
// 4th function generate a random message from the array:cheerMessage
function randomMessage() {
return cheerMessage[Math.floor(Math.random() * cheerMessage.length)];
}
// 5th function ask the user if he want to play again if not greeting message
function playAgain() {
let playAgainTheGame = "";
while (
!(
playAgainTheGame.toUpperCase() === "Y" ||
playAgainTheGame.toUpperCase() === "N"
)
) {
playAgainTheGame = prompt("Do you want to play again? Y/N: ");
if (playAgainTheGame.toUpperCase() === "Y") {
return start();
} else if (playAgainTheGame.toUpperCase() === "N") {
console.log(\n Thanks for playing ${findTheUser} ); return }else{ console.log( Please check the input for Y for yes and N for no,any other input is not valid );} } } // 6th function checks for repetitive numbers in users input if it's true it will ask again for valid number // also it will check if the length of the secret number is the same as from the user if not it will ask again for the valid number function validGuess(guess, randomNumberNoRepeat) { let noRepeatInputCheck = guess.split("").sort((a, b) => a - b); let hasDuplicates = false; for (let k = 0; k < noRepeatInputCheck.length - 1; k++) { if (noRepeatInputCheck[k] === noRepeatInputCheck[k + 1]) { hasDuplicates = true; } } if (hasDuplicates === true) { console.log("Check your number!!Remember each number should be unique"); return false; } if (randomNumberNoRepeat.length !== guess.length) { console.log(Not valid number,you need${level} digit number!!!Let's go ${findTheUser}); return false; }if (!/^\d+$/.test(guess)) {
    console.log(`Only numbers are allowed!!!! Let's go ${findTheUser}`);
    return false;
  }
  return true;
}
// 7th function takes the user's level as a parameter; every run of the while loop counts as an attempt
function playTheGame(level) {
  let secretNumber = getRandomNumberNoRepeat(level);
  let attempts = 0;
  let guess; // declared locally so it doesn't leak as a global
  console.log("\n");
  while (true) {
    attempts++;
    guess = prompt("Number: ");
    // if the guess matches the secretNumber, print a "you won" message along with the attempts
    if (secretNumber === guess) {
      console.log(
        attempts === 1
          ? `You won at your first attempt, well done ${findTheUser}`
          : `You won after ${attempts} attempts, very well done ${findTheUser}`
      );
break;
}
// call the function that checks whether the user's input follows the rules of the game
if (!validGuess(guess, secretNumber)) {
continue;
}
// check how many digits of the secretNumber appear in the user's input
// if a digit is in the same position it counts as a bull
// if a digit is included in the guess but in a different position it counts as a cow
let cows = 0;
let bulls = 0;
for (let i = 0; i < secretNumber.length; i++) {
for (let j = 0; j < guess.length; j++) {
if (secretNumber[i] === guess[j]) {
if (i === j) {
bulls++;
} else {
cows++;
}
} else continue;
}
}
// if the user doesn't find anything, print a random cheer message
if (bulls === 0 && cows === 0) {
console.log("\n", randomMessage(), "\n");
}
console.log(
      cows === 1 ? `You found ${cows} cow &&` : `You found ${cows} cows &&`,
      bulls === 1 ? `You found ${bulls} bull` : `You found ${bulls} bulls`
);
}
}
// calling the functions and start the game
function start() {
let level = levelSelector();
while (level === false) {
level = levelSelector();
}
playTheGame(level);
playAgain();
}
start();
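Since validGuess already rules out repeated digits, the nested counting loop above can be flattened into a single pass. A minimal sketch (the helper name `countBullsAndCows` is made up for illustration, not part of the original post):

```javascript
// Hypothetical helper (not in the original post): counts bulls (right digit,
// right position) and cows (right digit, wrong position) for two digit strings,
// assuming neither string contains repeated digits.
function countBullsAndCows(secret, guess) {
  let bulls = 0;
  let cows = 0;
  for (let i = 0; i < secret.length; i++) {
    if (secret[i] === guess[i]) {
      bulls++; // same digit, same position
    } else if (guess.includes(secret[i])) {
      cows++; // digit present, but elsewhere
    }
  }
  return { bulls, cows };
}

console.log(countBullsAndCows("123", "132")); // { bulls: 1, cows: 2 }
```

A single loop works here because each digit is unique, so a digit match can only ever be one bull or one cow, never both.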
• is this just Mastermind? Jun 2 at 6:34
• //2nd function ... - there is no function named 2nd ( or 3rd, 4th ....) Jun 2 at 6:36
• yeah it is the same game Jun 2 at 8:28
• I am trying to keep order for the functions!! Is this wrong??? What could improve my comments for more readable code??? Jun 2 at 8:29
• Eventually the comments will be wrong because code changes over time. Why is precise function listing order relevant? Beethoven symphonies? OK. Grouping functions logically is good but why must they be in this precise order? If not, why label them like that? If order matters then there is likely something seriously wrong with the code. The last time listing order mattered was circa 1971 COBOL, and then coders made the order part of the name re: "127-Do-Something" Jun 2 at 15:19
https://wikieducator.org/Cubic_identity_1
|
# Cubic identity 1
Objective
To verify $(a+b)^3 = a^3+3a^2b+3ab^2+b^3$ using unit cubes
Materials: 27 unit cubes
## What is a unit cube?
A unit cube is nothing but a cube all of whose sides are 1 unit long as shown below.
This is a unit cube
1. We say its dimension is 1 X 1 X 1.
2. The volume of a 3-dimensional unit cube is 1 cubic unit.
3. By joining unit cubes we can form cubes and cuboids of varied dimension.
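Alongside the unit-cube construction, the identity can be checked numerically; a quick sketch (not part of the original activity) in JavaScript:

```javascript
// Check (a+b)^3 = a^3 + 3a^2*b + 3a*b^2 + b^3 for small integers.
function cubeOfSum(a, b) {
  return (a + b) ** 3;
}
function expandedForm(a, b) {
  return a ** 3 + 3 * a ** 2 * b + 3 * a * b ** 2 + b ** 3;
}

// With a = 2 and b = 1 both sides give 27 -- exactly the 27 unit cubes
// listed as materials for this activity.
console.log(cubeOfSum(2, 1), expandedForm(2, 1)); // 27 27
```

With a = 2, b = 1 the four terms count the pieces of the big cube: one 2×2×2 block (8), three 2×2×1 slabs (12), three 2×1×1 rods (6), and one 1×1×1 corner cube (1).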
https://www.illustrativemathematics.org/EE
|
#### 6.EE.A.2.a. Write expressions that record operations with numbers and with letters standing for numbers. For example, express the calculation “Subtract $y$ from 5” as $5 - y$.
• No tasks yet illustrate this standard.
#### 6.EE.A.2.b. Identify parts of an expression using mathematical terms (sum, term, product, factor, quotient, coefficient); view one or more parts of an expression as a single entity. For example, describe the expression $2 (8 + 7)$ as a product of two factors; view $(8 + 7)$ as both a single entity and a sum of two terms.
• No tasks yet illustrate this standard.
#### 8.EE.A.2. Use square root and cube root symbols to represent solutions to equations of the form $x^2 = p$ and $x^3 = p$, where $p$ is a positive rational number. Evaluate square roots of small perfect squares and cube roots of small perfect cubes. Know that $\sqrt{2}$ is irrational.
• No tasks yet illustrate this standard.
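As an illustration (not part of the standards text), solutions to $x^2 = p$ and $x^3 = p$ can be evaluated directly; a small JavaScript sketch:

```javascript
// Solve x^2 = p and x^3 = p for a positive rational p.
const p = 64;
console.log(Math.sqrt(p)); // 8, since 8^2 = 64
console.log(Math.cbrt(p)); // 4, since 4^3 = 64

// sqrt(2) is irrational: its decimal expansion never terminates or repeats,
// so any floating-point value is only an approximation.
console.log(Math.sqrt(2));
```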
#### 8.EE.C.7.a. Give examples of linear equations in one variable with one solution, infinitely many solutions, or no solutions. Show which of these possibilities is the case by successively transforming the given equation into simpler forms, until an equivalent equation of the form $x = a$, $a = a$, or $a = b$ results (where $a$ and $b$ are different numbers).
• No tasks yet illustrate this standard.
#### 8.EE.C.7.b. Solve linear equations with rational number coefficients, including equations whose solutions require expanding expressions using the distributive property and collecting like terms.
• No tasks yet illustrate this standard.
#### 8.EE.C.8.b. Solve systems of two linear equations in two variables algebraically, and estimate solutions by graphing the equations. Solve simple cases by inspection. For example, $3x + 2y = 5$ and $3x + 2y = 6$ have no solution because $3x + 2y$ cannot simultaneously be $5$ and $6$.
• No tasks yet illustrate this standard.
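The "no solution" case in the example above can be detected mechanically: when the coefficient rows are proportional but the constants are not, the system is inconsistent. A sketch (the function name is made up for illustration):

```javascript
// Classify the 2x2 system: a*x + b*y = c, d*x + e*y = f.
function classifySystem(a, b, c, d, e, f) {
  const det = a * e - b * d;
  if (det !== 0) return "one solution";
  // det == 0: coefficient rows are proportional; the system is consistent
  // only if the constant column is proportional in the same ratio.
  const consistent = a * f === d * c && b * f === e * c;
  return consistent ? "infinitely many solutions" : "no solution";
}

// 3x + 2y = 5 and 3x + 2y = 6: same left-hand side, different right-hand sides.
console.log(classifySystem(3, 2, 5, 3, 2, 6)); // prints "no solution"
```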
https://jzhao.xyz/thoughts/Rhizome-Research-Log/
|
jzhao.xyz
Rhizome Research Log
Last updated July 3, 2022
I think research logs tend to generally focus too much on what one did rather than what one felt. This log aspired to have a healthy mix of both.
# July
# July 2-3rd
• An ‘aha’ moment caught in 4k… watch me try to figure out why asynchronous and partially synchronous system models aren’t the same thing (s/o Sebastien for being so kind and patient). This was super satisfying!
# July 1st
• Internet went out today halfway through watching lectures :((
• Spent a bunch of time just reading books + thinking
• More notes from Tim Roughgarden’s foundation course on PKI, BB, impossibility theorems, etc.
# June
# June 30th
• Settling into a better work rhythm I think.
• Food here is surprisingly expensive, but groceries are still miles cheaper than just getting Uber Eats every day.
• Have a sudden urge to work on my personal site but I will ignore that for the time being…
• Sebastien sent a YouTube playlist on the foundations of blockchains that have some sections which seem highly relevant. Slowly making my way through these
# June 29th
• Finally wrapped up school! Anson is headed back to Arizona today too :((
• Living together has been a fun dance of trying to balance our energy levels, but felt very much like a team throughout. I’m really glad I chose to prioritize relationship, truly some moments over the past month where I was like “wow, is this real”. It feels like I’m selectively giving deep attention in-turn to the things I care most about.
• Now is the era to just fully focus my attention on research and this project though
• I think this finally means that the vast majority of my waking hours will be on research. Uninstalled a game I was spending way too much time playing.. it is grind time!!
# June 27-28th
• Nearing the end of my literature review era. Still need to go through Braid/Redwood, SSB, Yjs, and Hypercore inner-workings.
• Thinking it might be good to do a general overview of CRDTs before delving any further
# June 26th
• Belinda and Athena from Incepto told me about an SF writer event which happens every week and I’m currently at it right now. So many people here are just working on such really cool things and I’m excited to potentially have this space as incredibly condensed research + thinking time. I think this is a great forcing function every Sunday to just… orient myself for the week and get shit done.
• Talked with some really really cool people at a birthday party in SF which were surprisingly receptive and interested in my work. Will definitely follow up on these conversations.
• More research on CouchDB and other database replication mechanisms to see what I can learn from it
# June 25th
• HackLodge meetup today, also met up with Spencer and Liam. Talked lots about the project then realized I haven’t spent much time just… sitting down and grinding out work.
• A decent chunk of it is 1) summer courses taking up much more time than I expected them to and 2) wanting to meet people in SF and spend time with Anson while she is still in SF… priorities priorities
• To borrow words from Anson, it’s “hermit time”. I feel like I am definitely behind schedule in terms of what I wanted to get done by this point of summer and I need to put in some serious work and thinking into this project.
# June 24th
• Reading about Hyper Hyper Space, doesn’t seem to place a big deal of emphasis on finality which seems important for a large chunk of applications.
• Open questions:
• Append-only log or append-only Merkle-DAG? Leaning more towards log still for easy understandability + debug even though Merkle-DAGs are more expressive (and battletested in blockchains and git)
# June 20th - 23rd
• Reading about VDFS’ (specifically Alluxio) and
• Open Questions
• Handling cases where data > storage availability
• Checkpoint heuristics: when to checkpoint? especially important if Rhizome is to run indefinitely
• “Lineage chains can grow very long in a long-running system like Alluxio, therefore the checkpointing algorithm should provide a bound on how long it takes to recompute data in the case of failures”
• Settling into new place, we cleaned out the garage (which is where I am staying) and made it somewhat liveable?? Took a lot of work, the previous tenant didn’t even properly move out which was a stressor for a little while
• Because there is no proper heating/cooling, sometimes I literally work with the garage door open for good circulation which gets me weird looks from the neighbours but it’s fun
• Incepto people have all been super nice and they are all working on/exploring cool things. I get a little distracted sometimes just working in the garage so it’s really nice I can just hop over to the hackerspace in the house to get some more focused work done.
# June 16 - 19th
• Interact Retreat! Lots of good conversations about the work I’m doing which has been super clarifying for what type of explanation gets through to certain types of people
• Generally find framing it in terms of net neutrality but applied to data gets a lot of people excited about it, as well as meaningfully explaining + differentiating from Tim Berners-Lee’s Solid project and how Rhizome focuses on addressing main retro points from major p2p protocols.
# June 14 - 15th
• Mostly trying to answer questions around how decentralized marketplaces for demand work, looking at Golem and Orchid
• Lots of moving around (moved from Tempe to SF, about to head to Interact retreat!)
# June 11-12th
• Roadtrip with Anson! Much needed break to get a mental break and reset
# June 10th
• Spicy day today… Jack Dorsey just announced TBD working on Web5, supposedly an extra decentralized web platform ( https://twitter.com/jack/status/1535314738078486533)
• web5 seems to focus on the philosophy side a lot more than actual usability
• Very similar to WebID except anchored on bitcoin (lots of interesting stuff using Sidetree)
• Feel like a little boat in a big ocean where huge battleships drift by every now and then
• Makes me doubt what I can really do as this small little boat
• But reminded that steering my own little boat gives me agency as to what I can explore and do
• The little boat that could
# June 9th
• Lots of research, mostly around FOAF, LDP, RDF
• Looked more into decentralized marketplaces like Raiden and Orchid to see how they handle payments
• Mostly just reading articles and specifications, your average day of research
# June 8th
• Got my first grant rejection from Emergent Ventures today :((
• Feeling… kinda numb? I feel like in the grand scheme of things it doesn’t matter, but this is the first hard no that I’ve gotten
• Spent some time looking for some other grants but my conclusion is that I should spend more time getting shit done before asking for more funding.
• I have enough in savings to last me until end of summer but it means I’ll have to start contracting during the school year which isn’t ideal, but gives me pure focused time this summer to just do research.
• Onwards!
• Lots of really great bits from Browser Co’s piece on Optimizing for Feelings
• “Anything new is by nature without precedent — meaning, without data to know whether it will work or not. So when we approach building new things, we don’t optimize for metrics. We optimize for feelings”
• “How do you feel when you finally step foot in your own living room, after weeks away from home? When you plop down on your own bed, or whip up a meal in your own kitchen? It conjures up a specific feeling, doesn’t it? That’s because these spaces are a reflection of you — created by you, for you. Software can feel the same way if individuals have agency and sovereignty over what is on their screens.”
# June 4th - June 7th
• Getting back into a working groove after moving again, Arizona is ridiculously hot. Made the dumb mistake of walking to the grocery instead of taking transit lol
• Learned more about underlying datastructures of IPFS including CIDs
• More notes on DHTs and Kademlia in particular
# June 1st - June 3rd
• Had a call with a few others folks working adjacent to decentralized infrastructure and people seemed pretty excited about the proposal! It was the first time in the past month that I felt pretty confident about the project when talking about this with others, definitely a personal milestone :)
# May
# May 28th - May 29th
• Attending friends’ graduation for the past few days, crazy to think that this will be the last time I see some of these friends for a long time.
• Worked on thinking about and polishing my grant proposal, finally getting to some phrasings that resonate and sound good
# May 27th
• Finishing up miniraft, added tests for voting and fixed up some workflow stuff to auto-test and publish documentation!!! It’s published now on GitHub :))
• Notes on DID which seems particularly applicable to the notion of identity + identity documents
• Once again had a breakdown :)) Constantly feel like I’m not doing enough and that time is slipping between my fingertips…
# May 21st - May 22nd
• Packing + flights! I am now in Vancouver for the next week :))
• Hectic flying experience… didn’t get much done
# May 20th
• Chatted with Justin Glibert who gave some very piercing advice
• What is the most you can cut from your current proposal and have it still be meaningful?
• via negativa: essentially the study of what not to do
• In action, it is a recipe for what to avoid, what not to do—subtraction, not addition.
• You can’t know what is going to work but also you know there are things that are obviously not.
• Don’t try to think you are a god and reinvent everything from scratch. Don’t catch NIH syndrome.
• You only have 10 beautiful idea tokens in your life you want to do it so you should just do it
• Don’t just do the plumbing and make stuff you already are good at if you’re trying to learn
• If this is something you just want to work on (true in this case) then work on it with your full heart
• Not being harsh because it’s a bad idea
• But rather I don’t want you to waste your time. This is your last summer without ‘real-world’ responsibilities. I would trade so much to be in your position right now.
• I am being harsh so that you spend your time wisely and don’t do something stupid.
• Technical thoughts
• Is Rhizome actually a generalized form of state channels?
• EVM + Solidity on top of little chains between people
• Minecraft on top of this to build engines like https://www.worldql.com/
# May 19th
• Proposal re-writes + more research today, got a lot done in office today and still had time to head to Central Park to read… a great day all things considered.
• Open questions from today’s reading + writing:
• How do identity ‘clusters’ or organizations/groups of people work? How are they represented?
• Perhaps instead of having separate instantiation of your identity on fixed set of apps, we can have the same identity with separate instantiations of the app?
• Who runs cloud peers?
• Have a global marketplace where people can list/sell spare compute and storage
• Who does the compute?
• Most apps are lightweight to run on people’s own devices
• The main reason we’ve needed massive datastores and compute centers in the first place is because large companies have centralized billions of people’s data into their own servers
• Cloud peers can offload and perform heavy lifting if necessary
• More meditations on identity and data
• Thinking about how data exists only as relations between things… how do we preserve this?
• ‘Data’ is data in the context of that user (or group of users) using that specific application
• Learned about the concept of petnames in more depth today and there’s a really cool way of thinking about identity here perhaps
• Almost all of the contexts in which we collaborate are not global. The you I know is likely different from the you your family knows. Identity should be relational rather than standalone?
# May 18th
• Grant writing + Verses proposal wrangling
• Had Anson tear apart my proposal today
• It was so incredibly helpful to get that level of honest feedback but I just feel in the dumps right now LOL I need to figure out how to untie my own self-worth from my work
• I expect something similar will happen when I meet with Justin.. and many more times this summer
• Good feedback is equal parts bitter and sweet
• Bitter in that it tells you the harsh truths that few have the courage to
• Sweet in that they truly care enough and have enough faith to point harsh truths out
• “When you’re screwing up and nobody says anything to you anymore that means they’ve given up on you…you may not want to hear it but your critics are often the ones telling you they still love you and care about you and want to make you better.” ― Randy Pausch, The Last Lecture
# May 17th
• Went to NYC to work at the Thrive Capital office with some Interact folks and wow… the difference being outside and in a good working environment makes is ridiculous.
• Migrated all the tracing stuff out of server.rs and log.rs into its own file. Makes the code a lot cleaner to work with.
• Deleted transport.rs (and moved the contents into tests/common.rs) now that it is no longer a part of the server. Realizing now I’ll probably need to do another refactor of the transport layer to support simulating network partitions, dropping packets, etc. so I have more surface area to test with.
• Talked with Sebastien who has been doing independent research for almost a decade now. Mentioned that I was really feeling like I was in the depths of the Valley of Despair and he just laughed and said “that was me 10 years ago and I still feel that way.” Horrifying but also weirdly comforting? He gave me some advice and thoughts (mostly with regards to independent research but honestly a lot of sage life advice too):
• In independent research, one often pendulums between two brains that drive your day-to-day
1. Brain 1: I want to make change in the world, I want to ship and build
2. Brain 2: I want to understand why this works the way it does
• It is almost always Brain 2 thinking that leads to incredibly high payoffs in clarity and increased conviction.
• Still, breaking things into legible pieces is important. If not for other people, for yourself to have small wins.
• Don’t build for the sake of building, build as a by-product of understanding
• Don’t get trapped in the mindset of having every little thing you do fit perfectly in your grand master plan.
• It is sufficient to do things to learn and to understand (even if just about yourself)
• Don’t have conviction that you are right because that will lead to disappointment. Have conviction that you will learn regardless.
• He flew out every weekend from SF to San Diego just to attend a lecture from a professor he really liked and he said it was worth every flight.
• Often times, it is one core principle that if followed to its natural conclusion/end will result in a fundamental perspective shift (e.g. quantum mechanics).
• What is that core principle that sits at the heart of everything you find interesting? The connection between the dots is only evident in hindsight so don’t spend too long thinking about it. But just follow your gut; it’s right more often than not.
• To be honest, I don’t really understand all of this advice yet and I don’t pretend to but at the moment, it gives me comfort that even if there isn’t light at the end of the tunnel, the darkness will still be enjoyable
# May 16th
• Grant writing again… Finished rough draft for Protocol Labs RFP 000 and writing EV grant proposal + getting feedback
• Had a mini-breakdown today after realizing I am just not enjoying this as much as I thought I would be. I’m often spending 12+ hour days writing code or grants and I just feel so behind. And I don’t get why!!!! I’ve been looking forward to this summer for so long.
• I think financial uncertainty is becoming more real day after day… really hoping that one of these goes through and is successful
• It’s too early to quit. There’s still so much more to build/learn/do/write and I’m not ready to throw in the towel just yet.
# May 15th
• Family roadtrip, no work today :)
# May 14th
• Finish testing harness - it looks so pretty!
• Finally updating research proposals after putting it off for 3 days. I suspect I’m using miniraft as an excuse to avoid the grant writing because making things legible is hard!! I’d much rather write code and look at pretty command line outputs instead but this is important work that needs to be done.
# May 12th - May 13th
• Reaffirming myself that a lot of this is necessary learning and this is a worthwhile project
• Not sure if this is actually true
• But more so convincing myself of it so that I have the energy/motivation to go through with it
• A lot of technical refactoring going on to accommodate unit testing
• Removed a lot of unnecessary lifetimes while changing RaftServer functions to return a vector of sendable messages rather than directly having each server hold a mutable reference to the transport layer (Rust doesn’t allow multiple mutable references without a RefCell!)
Let’s say you want to become good at [x].
It’s almost impossible to do it because every day on Twitter you have friends who’ve raised 6 million to do crazy stuff. And so every single day, you open your books, and you take your notes and you start writing stuff, and you have to solve those equations.
And every single day you tell yourself, why am I doing this?
I could just go out and bullshit investors and build a company. And I think too many people actually do that. Myself included. I managed to resist for a while and I spent a lot of time learning different, difficult things, but it’s very hard not to have ADD in this world. It’s very hard to stay focused on important things that take a while to be learned.
# May 11th
• Finished the first pass of implementation of miniraft! In the midst of adding test infrastructure and verifying correctness of the implementation.
• Probably spent tooo long making it look nice but hey, if I’m going to be spending hours looking at this it might as well be good to look at
• Also spent an hour trying to debug a test only before realizing cargo test runs in parallel so debug messages were out of whack
• Feeling quite demotivated regarding overall self-belief in the project even though I’m only 11 days in! Been trying to explain Rhizome to a few folks who have experience in the space and it is often so intimidating.
• Like yeah, I know this probably isn’t the best way to go about it. Maybe they’ll tell me what I’m working on is a long solved problem and I’m wasting my time. Or “couldn’t you just use x and y to achieve the same effect”? I can’t help but sometimes feel like I’m wasting my time – there are so many smart people working on the same problem, what makes me feel like I can be the one to make a meaningful contribution to it?
• I know that regardless of whether this project succeeds a lot, I’m already learning a lot in terms of technical skills and also about myself in the face of uncertainty and more independent work so I will take that as a win regardless.
# May 10th
• Discussing grant proposals with Verses folks, doing a lot more grant/proposal writing than I’d like these days
• Finished most of miniraft logic up until commit_log_entries. Still need to add tests though :’)
• Tech bear market isn’t promising for raising funding, esp for more experimental/greenfield work like this :((
# May 9th
• Literally just wrestling with Rust’s borrow checker because dyn traits are funky :((
• Ran into a really weird design problem where I wasn’t sure how to order the lifetimes of the log or the state machine (should the app own the log which owns the state machine? should the log hold a reference to the app)?
• I opted to construct the application first then pass a pointer to the log so that when appending entries to the log, it can just call self.entries.iter().for_each(|entry| self.app.transition_fn(entry))
• Finally caved and watched an hour long video on closures, Fn, FnOnce, FnMut, boxed closures, and function pointers ( Jon Gjengset, I owe you my life)
• Feels really stupid but it was literally a change from &'s mut dyn App<'s, T, S> to Box<dyn App<T, S>>
• When lifetimes get as messy as they did, there’s probably a cleaner way to do it with a heap allocated value :)) Use Box more often!
# May 8th
• Sketching out grant proposals to Emergent Ventures + Protocol Labs
• Had a chat with Sebastien about research institutes and what long-term support for work like this could look like in the context of Verses
• More implementation work for miniraft, about halfway done I think?
• More of a slower day to spend time with family for Mothers Day :)
# May 7th
• Does not seem promising that my research work will be supported by Verses this summer…
• Looking for other places to apply for funding but ugh this is unfortunate
• Lots of coding today for miniraft! Finally feeling like I’m becoming more fluent in Rust. Figured out some nasty named lifetime stuff today by drawing a few diagrams and kinda feel like a wizard!!! Small wins
# May 6th
• Mostly writing up recent learnings and incorporating them into the research proposal… lots of words today
• Sometimes I feel like I’m doing research to be able to do more research…
• I think I am finally getting to a point where Rhizome is making more and more sense and obvious why it is necessary
• I started this project/research very much like “oh wow, this is a cool set of technologies and here are some vague words and feelings about what I think is inadequate in the space” and it has sort of refined itself into a clear use case!
• Came across the concept of a “cloud peer” today in Hypercore documentation and it was like “WOW I had this exact same idea and they already have a name for it” and it was so cool
• Really excited about this future of ‘personal cloud computing’
• I think this summer will be mostly focused on the data replication / identity aspect of Rhizome, realizing that I think I was way too ambitious with my first proposal
• More implementation on miniraft. Rust feels so slow to get back into a ‘de-rusted state’ (hah) where code just ‘flows’. It feels fun though! Type system reminds me a lot of Haskell.
# May 5th
• Finishing up Martin Kleppmann’s Course on Distributed Systems
• Cleaning up notes into atomic concepts that I can reference
• Continuing implementation of miniraft
• What if… Rhizome had built in mechanisms for managing ‘branches’
• Default branches are single stream
• To make a collaborative doc you can ‘fuse’ or ‘join’ branches together temporarily to sync them with each other
• What if we made something on top of git like this that actually functions on a syntax level rather than a character level… one for the idea list
• Pace layers for collaboration
• Real-time (keystroke-by-keystroke)
• On-click (manually click refresh)
• Suggest changes (like Google Docs, accept/reject)
• Agreeing on what operations a CRDT can perform still seems to be difficult ( see 1hr into this talk)
• Possible room for data lensing on public schemas to be useful here
# May 3rd
• Read about more NAT traversal and holepunching efficacy, turns out hole punching is just not as reliable as I thought it was
• Compared more traditional consensus algorithms like Raft to Solana.
• First formal architecture sketch?
• Need to read more about DID and IPFS but this seems like a promising start?
• Each user is essentially a DID that is associated with an IPFS document that references a bunch of other things
• Each device in the devices array runs a Rhizome Node which is essentially a wrapped IPFS node that pins the user IPFS object and can edit it
• Right now, this means that if all a user’s devices are offline, those files are unreachable. For people who still want their stuff to be replicated online, perhaps can integrate FileCoin to incentivize other nodes to pin their document?
• The devices array is also used by Raft to coordinate what devices should be included in the cluster
• Modifications in the devices array leads to a Raft configuration update
• All devices that are reachable sync via Raft to keep an appState object up to date for the user
• When any appState log gets too long, it is snapshotted by the leader and persisted in IPFS.
• All the questions that are unanswered right now are in red. Lots of unanswered questions :))
• How does auth work for applications?
• How will schemas be published? Is there an app store?
• Who runs the web host? Is it self-hosted?
• What about non-technical people?
• How is a user created?
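The devices-array / Raft flow sketched above could look roughly like this in Python. Everything here is hypothetical (names like `UserDocument` and `apply_config` are invented for illustration, not actual Rhizome, IPFS, or Raft APIs); it only shows how edits to a device list could map onto cluster membership changes.

```python
# Hypothetical sketch: a user's identity document lists devices,
# and edits to that list become Raft configuration changes.
from dataclasses import dataclass, field

@dataclass
class UserDocument:
    did: str                                      # decentralized identifier for the user
    devices: list = field(default_factory=list)   # device ids that should form the Raft cluster

@dataclass
class RaftCluster:
    members: set = field(default_factory=set)

    def apply_config(self, devices):
        """Compute adds/removes versus current membership and apply them."""
        desired = set(devices)
        added = desired - self.members
        removed = self.members - desired
        self.members = desired
        return added, removed

doc = UserDocument(did="did:example:alice", devices=["laptop", "phone"])
cluster = RaftCluster()
cluster.apply_config(doc.devices)

# Adding a device to the identity document triggers a membership change.
doc.devices.append("tablet")
added, removed = cluster.apply_config(doc.devices)
print(sorted(added), sorted(removed))  # ['tablet'] []
```

A real implementation would of course route this through Raft's joint-consensus configuration change rather than mutating membership directly.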
# May 2nd
• Mostly reading about Raft consensus algorithm today and understanding how it works
• Always wondered how these consensus algorithms deal with bad actors – turns out they don't! That's where BFT comes in
• Seems to be promising for replicating between trusted peers (potentially applicable)
• Starting a very minimal stripped down implementation of Raft in Rust I am nicknaming miniraft. Code here (but will most likely be private until it is done).
# May 1st
• Settling back into home, general research reading + writing the proposal
• Read various papers
• Learned about the basic premise of SSI
http://www.slideshare.net/waynemarc1/what-is-a-renewable-energy-resource
# What is a renewable energy resource
1. Renewable energy is energy which comes from natural resources such as sunlight, wind, rain, tides, and geothermal heat, which are renewable (naturally replenished).
2. Solar panels capture the energy and help convert it to electricity.
3. Wind turns turbines to produce electricity. Windmills have been used since 1180 AD.
4. The impact of rain produces energy: the impact of each raindrop has the potential to produce energy.
5. Tidal action can produce energy: the ocean can produce thermal energy from the sun's heat and mechanical energy from the tides and waves.
6. Geothermal energy captures the earth's internal heat to produce many types of energy.
7. It is estimated that less than 35 percent of the world's energy is produced by a renewable resource. It is likely that as world energy demands increase, so will the use of the many types of renewable energy producers.
http://math.stackexchange.com/questions/218870/does-de-franchis-theorem-hold-over-any-base-field
# Does de Franchis' theorem hold over any base field
Let $k$ be a field and let $X$ be a hyperbolic curve over $k$.
Then, there are only finitely many hyperbolic curves $Y$ over $k$ dominated by $X$.
I know this statement holds over $k=\mathbf{C}$. In particular, it holds over $k=\overline{\mathbf{Q}}$.
Does it hold over any field $k$?
What is a hyperbolic curve over $k$ ? – Georges Elencwajg Oct 22 '12 at 18:42
A smooth projective geometrically connected curve of genus $\geq 2$. – Harry Oct 22 '12 at 19:05
Thanks. Where did you see that definition? – Georges Elencwajg Oct 22 '12 at 19:22
I can't remember...I think sometimes people also use it for an integral curve whose normalization is a smooth projective geometrically connected curve of genus $\geq 2$. – Harry Oct 22 '12 at 19:32
Over an algebraically closed field, a hyperbolic curve is either a non-empty open subset of a projective curve of genus $g\ge 2$, or an elliptic curve minus at least one point, or the projective line minus at least three points. Equivalently, this is a smooth geometrically connected curve (not necessarily projective) with finite automorphism group.
http://soscholar.com/domain/detail?domain_id=edf44642-333a-b414-69ff-e98b38ac42a3
# Symmetric Algebra
In mathematics, the symmetric algebra S(V) (also denoted Sym(V)) on a vector space V over a field K is the free commutative unital associative algebra over K containing V. It corresponds to polynomials with indeterminates in V, without choosing coordinates. The dual, S(V*), corresponds to polynomials on V. It should not be confused with the symmetric tensors in V. A Frobenius algebra whose bilinear form is symmetric is also called a symmetric algebra, but is not discussed here.
https://www.techwhiff.com/learn/using-mathematica-how-do-you-type-1-and-2-in/411061
# Using Mathematica, how do you write the commands for problems 1 and 2?
###### Question:
Using Mathematica, how do you write the commands for problems 1 and 2 below?
exList[[2]]
exList[[7]]
exList[[Length[exList]]]

Note that the last command gives us the last element of the list. We can also use exList[[-1]] for this. Negative numbers count backwards from the end of the list:

exList[[-1]]
exList[[-2]]
exList[[-3]]

1. Write a command accessing individual list elements to find the sum of the second and fifth element of our list exList.

Summing Over a List

Another important tool for working with lists is the Sum[] function. If you just want to add up all the elements of a list, you can use Total[], but for anything more complicated (like Simpson's Rule) we'll need more sophistication. The basic syntax for a sum is Sum[f[i], {i, a, b}], which gives us the sum of f[i] as i runs from a to b. Thus to add all the elements of our list, we could use:

Sum[exList[[i]], {i, 1, Length[exList]}]

Sum[] is very powerful, and has many more options for more complicated formulas. For example, you can increment i by more than just 1 each time like this: Sum[f[i], {i, a, b, Δi}], where Δi is how much you want to change i each time. So to add only the even-numbered elements in our list, we would use:

Sum[exList[[i]], {i, 2, Length[exList], 2}]

2. Write a Sum[] command to add only the odd-numbered elements of our example list.
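For readers without Mathematica, a rough Python analogue of the list indexing and stepped Sum[] commands above (the list values here are made up for illustration; Mathematica's 1-based positions are emulated with slicing):

```python
ex_list = [3, 1, 4, 1, 5, 9, 2, 6]

# Sum[exList[[i]], {i, 1, Length[exList]}]  ->  sum everything
total = sum(ex_list)

# Sum[exList[[i]], {i, 2, Length[exList], 2}]  ->  even-numbered (1-based) elements
even_positions = sum(ex_list[1::2])

# Sum[exList[[i]], {i, 1, Length[exList], 2}]  ->  odd-numbered (1-based) elements
odd_positions = sum(ex_list[0::2])

print(total, even_positions, odd_positions)  # 31 17 14
```

The step argument of `Sum[]` corresponds directly to the slice step in Python.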
http://math.stackexchange.com/questions/64427/on-dilworths-theorem
# On Dilworth's theorem
Dilworth's theorem on posets states that if $P$ is a poset and $w(P)$ is the maximum cardinality of antichains in $P$, then there exists a decomposition of $P$ of size $w(P)$.
The question is: why is this theorem not trivial?
Consider that there is a whole paper on Annals of Mathematics devoted to it: Dilworth, Robert P. (1950), "A Decomposition Theorem for Partially Ordered Sets", Annals of Mathematics 51 (1): 161–166, doi:10.2307/1969503.
If that were Dilworth’s theorem, it would be trivial, but in fact Dilworth’s theorem says that the maximum size of an antichain in $P$ equals the minimum size of a partition of $P$ into chains. It’s that last requirement that makes the result non-trivial. – Brian M. Scott Sep 14 '11 at 9:53
@Brian, perhaps you'd like to make that comment an answer. – Gerry Myerson Sep 14 '11 at 13:33
If Dilworth’s theorem just said that every poset $P$ has a decomposition of size $w(P)$, where $w(P)$ is the maximum size of an antichain in $P$, it would indeed be trivial, but Dilworth’s theorem is actually a much stronger statement. It says that if $w(P)$ is finite, then $w(P)$ is equal to the minimum size of any partition of $P$ into chains. The requirement that each member of the partition be a chain is what makes the theorem non-trivial. It isn’t enormously difficult $-$ there are nice short proofs by H. Tverberg (On Dilworth’s decomposition theorem for partially ordered sets, J. Combinatorial Theory 3 (1967), 305-306) and Fred Galvin (A proof of Dilworth’s chain decomposition theorem, Amer. Math. Monthly 101 (1994), 352-353) $-$ but it’s definitely not trivial.
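To make the non-trivial content concrete, here is a brute-force check of the theorem on a small example, the divisors of 30 ordered by divisibility (a sketch for illustration only; the exhaustive search is exponential and only suitable for tiny posets):

```python
from itertools import combinations

# Poset: divisibility on the divisors of 30.
elems = [1, 2, 3, 5, 6, 10, 15, 30]
leq = lambda a, b: b % a == 0  # a <= b  iff  a divides b

def is_antichain(s):
    return all(not (leq(a, b) or leq(b, a)) for a, b in combinations(s, 2))

def is_chain(s):
    return all(leq(a, b) or leq(b, a) for a, b in combinations(s, 2))

# Width: maximum size of an antichain.
width = max(len(s) for r in range(1, len(elems) + 1)
            for s in combinations(elems, r) if is_antichain(s))

def min_chain_partition(remaining):
    """Minimum number of chains covering `remaining`, by exhaustive search."""
    if not remaining:
        return 0
    first, rest = remaining[0], remaining[1:]
    best = len(remaining)  # singleton chains always work
    # The first uncovered element must lie in some chain; try them all.
    for r in range(len(rest) + 1):
        for extra in combinations(rest, r):
            chain = (first,) + extra
            if is_chain(chain):
                left = [x for x in remaining if x not in chain]
                best = min(best, 1 + min_chain_partition(left))
    return best

print(width, min_chain_partition(elems))  # Dilworth: both equal 3
```

Here the width is 3 (e.g. the antichain {2, 3, 5}) and a minimum chain partition is {1, 2, 6, 30}, {3, 15}, {5, 10}, so the two quantities agree as the theorem predicts.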
https://www.hydrol-earth-syst-sci.net/22/943/2018/
Hydrology and Earth System Sciences: an interactive open-access journal of the European Geosciences Union
Hydrol. Earth Syst. Sci., 22, 943-956, 2018
https://doi.org/10.5194/hess-22-943-2018
Research article | 02 Feb 2018
# A dimensionless approach for the runoff peak assessment: effects of the rainfall event structure
Ilaria Gnecco, Anna Palla, and Paolo La Barbera
• Department of Civil, Chemical and Environmental Engineering, University of Genova, Genoa, 16145, Italy
Abstract
The present paper proposes a dimensionless analytical framework to investigate the impact of the rainfall event structure on the hydrograph peak. To this end a methodology to describe the rainfall event structure is proposed based on the similarity with the depth–duration–frequency (DDF) curves. The rainfall input consists of a constant hyetograph where all the possible outcomes in the sample space of the rainfall structures can be condensed. Soil abstractions are modelled using the Soil Conservation Service method and the instantaneous unit hydrograph theory is undertaken to determine the dimensionless form of the hydrograph; the two-parameter gamma distribution is selected to test the proposed methodology. The dimensionless approach is introduced in order to implement the analytical framework to any study case (i.e. natural catchment) for which the model assumptions are valid (i.e. linear causative and time-invariant system). A set of analytical expressions are derived in the case of a constant-intensity hyetograph to assess the maximum runoff peak with respect to a given rainfall event structure irrespective of the specific catchment (such as the return period associated with the reference rainfall event). Looking at the results, the curve of the maximum values of the runoff peak reveals a local minimum point corresponding to the design hyetograph derived according to the statistical DDF curve. A specific catchment application is discussed in order to point out the dimensionless procedure implications and to provide some numerical examples of the rainfall structures with respect to observed rainfall events; finally their effects on the hydrograph peak are examined.
1 Introduction
The ability to predict the hydrologic response of a river basin is a central feature in hydrology. For a given rainfall event, estimating rainfall excess and transforming it to a runoff hydrograph is an important task for planning, design and operation of water resources systems. For these purposes, design storms based on the statistical analysis of the annual maximum series of rainfall depth are used in practice as input data to evaluate the corresponding hydrograph for a given catchment. Several models are documented in the literature to describe the hydrologic response (e.g. Chow et al., 1988; Beven, 2012): the simplest and most successful is the unit hydrograph concept first proposed by Sherman (1932). Due to a limited availability of observed streamflow data mainly in small catchment, the attempts in improving the peak flow predictions have been documented in the literature since the last century (e.g. Henderson, 1963; Meynink and Cordery, 1976) to date. Recently, Rigon et al. (2011) investigated the dependence of peak flows on the geomorphic properties of river basins. In the framework of flood frequency analysis, Robinson and Sivapalan (1997) presented an analytical description of the peak discharge irrespective of the functional form assumed to describe the hydrologic response. Goel et al. (2000) combine a stochastic rainfall model with a deterministic rainfall–runoff model to obtain a physically based probability distribution of flood discharges; results demonstrate that the positive correlation between rainfall intensity and duration impacts the flood flow quantiles. Vogel et al. (2011) developed a simple statistical model in order to simulate observed flood trends as well as the frequency of floods in a nonstationary context including changes in land use, climate and water uses. Iacobellis and Fiorentino (2000) proposed a derived distribution of flood frequency, identifying the combined role played by climatic and physical factors on the catchment scale. 
Bocchiola and Rosso (2009) developed a derived distribution approach for flood prediction in poorly gauged catchments to shift the statistical variability of a rainfall process into its counterpart in terms of statistical flood distribution. Baiamonte and Singh (2017) investigated the role of the antecedent soil moisture condition in the probability distribution of peak discharge and proposed a modification of the rational method in terms of a priori modification of the rational runoff coefficients.
In this framework, the present research study takes a different approach by exploring the role of the rainfall event features on the peak flow rate values. Therefore the main objective is to implement a dimensionless analytical framework that can be applied to any study case (i.e. natural catchment) in order to investigate the impact of the rainfall event structure on the hydrograph peak. Since the catchment hydrologic response and in particular the hydrograph peak is subjected to a very broad range of climatic, physical, geomorphic and anthropogenic factors, the focus is posed on catchments where lumped rainfall–runoff models are suitable for deterministic event-based analysis. In the proposed approach, the rainfall event structure is described by investigating the maximum rainfall depths for a given duration d in the range of durations $[d/2;\, 2d]$ within that specific rainfall event, differently from the statistical analysis of the extreme rainfall events. Other authors (e.g. Alfieri et al., 2008) have previously discussed the accuracy of literature design hyetographs (such as the Chicago hyetograph) for the evaluation of peak discharges during flood events; conversely the proposed methodology allows the investigation of the impact of the above-mentioned rainfall event structure on the magnification of the runoff peak neglecting the expected rainfall event features condensed in the depth–duration–frequency (DDF) curves.
The first specific objective is to define a structure relationship of the rainfall event able to describe the sample space of the rainfall event structures by means of a simple power function. The second specific objective is to implement a dimensionless approach that allows the generalization of the assessment of the hydrograph peak irrespective of the specific catchment characteristic (such as the hydrologic response time, the variability of the infiltration process, etc.), thus focusing on the impact of the rainfall event structure.
Finally a specific catchment application is discussed in order to point out the dimensionless procedure implications and to provide some numerical examples of the rainfall structures with respect to observed rainfall events; furthermore their effects on the hydrograph peak are examined.
2 Methodology
A dimensionless approach is proposed in order to define an analytical framework that can be applied to any study case (i.e. natural catchment). It follows that both the rainfall depth and the rainfall–runoff relationship, which are strongly related to the climatic and morphologic characteristics of the catchment, are expressed through dimensionless forms. In this paper, [L] refers to length and [T] refers to time.
The rainfall event is then described as constant hyetographs of a given durations; this simplification is consistent with the use of deterministic lumped models based on the linear system theory (e.g. Bras, 1990). The proposed approach is therefore valid within a framework that assumes that the watershed is a linear causative and time-invariant system, where only the rainfall excess produces runoff. In detail, the rainfall–runoff processes are modelled using the Soil Conservation Service (SCS) method for soil abstractions and the instantaneous unit hydrograph (IUH) theory. Consistently with the assumptions of the UH theory, the proposed approach is strictly valid when the following conditions are maintained: the known excess rainfall and the uniform distribution of the rainfall over the whole catchment area.
## 2.1 The dimensionless form of the rainfall event structure function
Rainfall DDF curves are commonly used to describe the maximum rainfall depth as a function of duration for given return periods. In particular for short durations, rainfall intensity has often been considered rather than rainfall depth, leading to intensity–duration–frequency (IDF) curves (Borga et al., 2005). Power laws are commonly used to describe DDF curves in Italy (e.g. Burlando and Rosso, 1996) and elsewhere (e.g. Koutsoyiannis et al., 1998). The proposed approach describes the internal structure of rainfall events based on the similarity with the DDF curves. Referring to a rainfall event, the maximum rainfall depth observed for a given duration is described in terms of a power function similarly to the DDF curve, as follows:
$$h(d) = a'\, d^{n}, \tag{1}$$
where h [L] is the maximum rainfall depth, and $a'$ [L T$^{-n}$] and n [–] are respectively the coefficient and the structure exponent of the power function for a given duration, d [T]. For each duration $d_i$, the corresponding power function exponent, n, is estimated based on the maximum rainfall depth values observed in the range of durations $[d/2;\, 2d]$ by means of a simple linear regression analysis. Based on such assumptions, the structure exponent n allows the description of the rainfall event based on a simple rectangular hyetograph, thus representing the rainfall event structure at a given duration. In other words, a rainfall event that is characterized by a specific n-structure exponent at a given duration is only one of the possible outcomes in the sample space of the rainfall structures. The n-structure exponent mathematically ranges between 0 and 1: the two extreme values represent unrealistic events characterized by opposite internal structures; when the structure exponent n tends to zero the internal structure of the rainfall event is comparable to a Dirac impulse, while it is comparable to a constant-intensity rainfall for n close to 1. As an example, Fig. 1 describes the rainfall event structure according to the approach illustrated above. In Fig. 1, the observed rainfall depth (at the top), the observed maximum rainfall depths (at the centre) and the corresponding rainfall structure exponent (at the bottom) are reported on an hourly basis.
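As a numerical illustration of the estimation step above (synthetic data, not taken from the paper), the structure exponent n at a duration d can be recovered by least-squares regression of log h against log d over the window [d/2, 2d]:

```python
import math

# Synthetic maximum rainfall depths h(d) = a' * d**n with a' = 10 mm h^-n, n = 0.5,
# sampled at hourly durations around d = 2 h, i.e. within [d/2, 2d] = [1 h, 4 h].
a_prime, n_true = 10.0, 0.5
durations = [1.0, 2.0, 3.0, 4.0]                     # hours
depths = [a_prime * d ** n_true for d in durations]  # mm

# Least-squares slope of log h versus log d gives the structure exponent n (Eq. 1).
xs = [math.log(d) for d in durations]
ys = [math.log(h) for h in depths]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
n_hat = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(round(n_hat, 3))  # recovers n = 0.5 exactly for noise-free data
```

With observed (noisy) depths the same regression returns the empirical structure exponent for that event and duration.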
Figure 1Rainfall event structure: the observed rainfall depth (a), the observed maximum rainfall depths (b) and the corresponding rainfall structure exponent (c) are reported.
In order to correlate the rainfall event structure function to the DDF curve, a reference rainfall event has to be defined in terms of the maximum rainfall depth, hr, occurring for the reference duration, tr. Focusing on a given catchment, the reference duration, tr, is assumed to be equal to the hydrologic response time of the catchment; thus, assuming a specific return period Tr [T], the reference value of the maximum rainfall depth, hr [L], is defined according to the corresponding DDF curves, as follows:
$$h_{\mathrm{r}}(T_{\mathrm{r}}, t_{\mathrm{r}}) = a(T_{\mathrm{r}})\, t_{\mathrm{r}}^{b}, \tag{2}$$
where $a(T_{\mathrm{r}})$ [L T$^{-b}$] and b [–] are respectively the coefficient and the scaling exponent of the DDF curve.
Referring to a rainfall duration corresponding to tr, the rainfall depth is assumed to be equal to the reference value of the maximum rainfall depth. Based on this assumption a relationship between the parameters of the DDF curve and the rainfall event structure function can be derived as follows:
$$h(t_{\mathrm{r}}) = h_{\mathrm{r}}(T_{\mathrm{r}}, t_{\mathrm{r}}) \;\Rightarrow\; a'\, t_{\mathrm{r}}^{n} = a(T_{\mathrm{r}})\, t_{\mathrm{r}}^{b} \;\Rightarrow\; \frac{a'}{a(T_{\mathrm{r}})} = \frac{t_{\mathrm{r}}^{b}}{t_{\mathrm{r}}^{n}}. \tag{3}$$
From Eq. (3) it is possible to derive the coefficient of the rainfall event structure function, $a'$, for a given reference duration, $t_{\mathrm{r}}$. Note that the $a'$ coefficient is assumed to be valid in the range $[t_{\mathrm{r}}/2;\, 2t_{\mathrm{r}}]$, similarly to the n-structure exponent.
The dimensionless approach is then introduced since it allows an analytical framework to be defined which can be applied to any study case (i.e. natural catchment) for which the model assumptions are valid (i.e. linear causative and time-invariant system). The reference values hr and tr are directly linked to the climatic and morphologic characteristics of the specific catchment, and therefore the dimensionless approach based on the hr and tr values allows the generalization of the results irrespective of the specific catchment characteristic (such as the return period associated with the reference rainfall event).
Based on the proposed approach, the dimensionless rainfall depth, $h_{*}$, is defined as the ratio of the rainfall depth, h, to the reference value of the maximum rainfall depth, $h_{\mathrm{r}}$; similarly the dimensionless duration, $d_{*}$, is expressed as the ratio of the duration, d, to the reference time, $t_{\mathrm{r}}$. Therefore, the dimensionless form of the rainfall structure relationship may be expressed utilizing Eqs. (1), (2) and (3):
$$h_{*}(d_{*}) = \frac{h}{h_{\mathrm{r}}} = \frac{a'\, d^{n}}{a(T_{\mathrm{r}})\, t_{\mathrm{r}}^{b}} = \frac{d^{n}}{t_{\mathrm{r}}^{n}} = d_{*}^{n}. \tag{4}$$
## 2.2 The dimensionless form of the unit hydrograph
The hydrologic response of a river basin is here predicted through a deterministic lumped model: the interaction between rainfall and runoff is analysed by viewing the catchment as a lumped linear system (Bras, 1990). The response of a linear system is uniquely characterized by its impulse response function, called the instantaneous unit hydrograph. For the IUH, the excess rainfall of unit amount is applied to the drainage area in zero time (Chow et al., 1988).
To determine the dimensionless form of the unit hydrograph a functional form for the IUH and thus the S-hydrograph has to be assumed. In this paper the IUH shape is described with the two-parameter gamma distribution (Nash, 1957):
$$f(t) = \frac{1}{k\,\Gamma(\alpha)} \left(\frac{t}{k}\right)^{\alpha-1} e^{-t/k}, \tag{5}$$
where f(t) [T−1] is the IUH, Γ [–] is the gamma function, α [–] is the shape parameter and k [T] is the scale parameter. In the well-known two-parameter Nash model, the parameters α and k represent the number of linear reservoirs in series and the time constant of each reservoir, respectively. The product αk is the first-order moment, thus corresponding to the mean lag time of the IUH. Note that the IUH parameters can be related to the watershed geomorphology; in these terms the geomorphologic unit hydrograph (GIUH) theory attempts to relate the IUH of a catchment to the geometry of the stream network (e.g. Rodriguez-Iturbe and Valdes, 1979; Rosso, 1984). The use of the Nash IUH allows an analytical framework to be defined which assesses the relationship between the maximum dimensionless peak and the n-structure exponent for a given dimensionless duration, and a similar analytical derivation can be carried out for simple synthetic IUHs. The dimensionless form of the IUH is obtained by using the dimensionless time, t∗, defined as follows:
$\begin{array}{}\text{(6)}& {t}_{\ast }=\frac{t}{\mathit{\alpha }k}.\end{array}$
The proposed dimensionless approach uses the mean lag time of the IUH as the reference time of the hydrologic response (i.e. tr=αk). Using the first-order moment in the dimensionless procedure, the proposed approach can be applied to any IUH form, even though for experimentally derived IUHs an analytical solution of the problem is not feasible.
By applying the change of variable $t=\mathit{\alpha }k{t}_{\ast }$, the IUH may be expressed as follows:
$\begin{array}{}\text{(7)}& f\left(t\right)=\frac{\mathrm{1}}{k\mathrm{\Gamma }\left(\mathit{\alpha }\right)}{\left(\frac{\mathit{\alpha }k{t}_{\ast }}{k}\right)}^{\mathit{\alpha }-\mathrm{1}}{e}^{-\left(\frac{\mathit{\alpha }k{t}_{\ast }}{k}\right)}.\end{array}$
The dimensionless form of the IUH, f(t∗), is defined and derived from Eq. (7) as follows:
$\begin{array}{}\text{(8)}& f\left({t}_{\ast }\right)=\phantom{\rule{0.125em}{0ex}}f\left(t\right)\cdot \mathit{\alpha }k=\frac{\mathit{\alpha }}{\mathrm{\Gamma }\left(\mathit{\alpha }\right)}{\left(\mathit{\alpha }{t}_{\ast }\right)}^{\mathit{\alpha }-\mathrm{1}}{e}^{-\left(\mathit{\alpha }{t}_{\ast }\right)}.\end{array}$
Note that for the dimensionless IUH the first-order moment is equal to 1 and the time to peak, tI∗, can be expressed as follows:
$\begin{array}{}\text{(9)}& \frac{\mathrm{d}f\left({t}_{\ast }\right)}{\mathrm{d}{t}_{\ast }}=\mathrm{0}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\to \phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}{t}_{\mathrm{I}\ast }=\frac{\mathit{\alpha }-\mathrm{1}}{\mathit{\alpha }}.\end{array}$
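A quick numerical check of Eqs. (8) and (9) can be sketched as follows, assuming α = 3 as in the paper's later examples (the grid and helper names are ours):

```python
import math

ALPHA = 3.0  # shape parameter, as in the paper's examples

def f_star(t):
    """Dimensionless Nash IUH, Eq. (8)."""
    return ALPHA / math.gamma(ALPHA) * (ALPHA * t) ** (ALPHA - 1.0) * math.exp(-ALPHA * t)

# Riemann-sum checks: the dimensionless IUH has unit area and unit first moment,
# and its peak sits at t_I* = (ALPHA - 1)/ALPHA (Eq. 9).
dt = 1e-4
grid = [i * dt for i in range(1, 200000)]   # t* in (0, 20]
area = sum(f_star(t) for t in grid) * dt
mean = sum(t * f_star(t) for t in grid) * dt
t_peak = max(grid, key=f_star)

print(round(area, 3), round(mean, 3), round(t_peak, 3))  # 1.0 1.0 0.667
```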
The dimensionless unit hydrograph (UH) is derived by integrating the dimensionless IUH:
$\begin{array}{}\text{(10)}& S\left({t}_{\ast }\right)=\underset{\mathrm{0}}{\overset{{t}_{\ast }}{\int }}f\left({\mathit{\tau }}_{\ast }\right)d{\mathit{\tau }}_{\ast },\end{array}$
where S(t∗) is the dimensionless S curve (e.g. Henderson, 1963).
For a dimensionless unit of rainfall of a given dimensionless duration, d∗, the dimensionless UH is obtained by subtracting two consecutive S curves lagged by d∗:
$\begin{array}{}\text{(11)}& U\left({t}_{\ast }\right)=\left\{\begin{array}{l}S\left({t}_{\ast }\right)\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}\mathrm{for}\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}{t}_{\ast }<{d}_{\ast }\\ S\left({t}_{\ast }\right)-S\left({t}_{\ast }-{d}_{\ast }\right)\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}\mathrm{for}\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}{t}_{\ast }\ge {d}_{\ast },\end{array}\right\\phantom{\rule{0.125em}{0ex}}\end{array}$
where U(t∗) is the dimensionless UH. The time to peak of the dimensionless UH, tp∗, is derived by solving $\mathrm{d}U\left({t}_{\ast }\right)/\mathrm{d}{t}_{\ast }=\mathrm{0}$. Using Eqs. (8) and (11) and recognizing that ${t}_{\mathrm{p}\ast }\ge \phantom{\rule{0.125em}{0ex}}{d}_{\ast }$ gives the following equation for tp∗:
$\begin{array}{ll}f\left({t}_{\mathrm{p}\ast }\right)& =f\left({t}_{\mathrm{p}\ast }-{d}_{\ast }\right)\phantom{\rule{0.125em}{0ex}}\to \phantom{\rule{0.125em}{0ex}}{t}_{\mathrm{p}\ast }={d}_{\ast }\frac{{e}^{\frac{\mathit{\alpha }{d}_{\ast }}{\mathit{\alpha }-\mathrm{1}}}}{{e}^{\frac{\mathit{\alpha }{d}_{\ast }}{\mathit{\alpha }-\mathrm{1}}}-\mathrm{1}}\\ \text{(12)}& & ={d}_{\ast }\frac{\mathrm{1}}{\mathrm{1}-{e}^{-\frac{\mathit{\alpha }{d}_{\ast }}{\mathit{\alpha }-\mathrm{1}}}}.\end{array}$
Similar expressions for the time to peak are available in the literature (e.g. Rigon et al., 2011; Robinson and Sivapalan, 1997). Consequently the peak value of the dimensionless UH may be expressed as a function of d∗ by the following:
$\begin{array}{}\text{(13)}& {U}_{max}\left({d}_{\ast }\right)=\phantom{\rule{0.125em}{0ex}}S\left({t}_{\mathrm{p}\ast }\right)-S\left({t}_{\mathrm{p}\ast }-{d}_{\ast }\right).\end{array}$
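Equations (10)–(13) lend themselves to a compact numerical verification. For integer α the S curve has a closed form (the regularized lower incomplete gamma function); the sketch below hard-codes α = 3 and is a verification aid of ours, not the authors' implementation:

```python
import math

ALPHA = 3  # shape parameter; the closed-form S curve below is specific to ALPHA = 3

def f_star(t):
    """Dimensionless IUH, Eq. (8)."""
    return ALPHA / math.gamma(ALPHA) * (ALPHA * t) ** (ALPHA - 1) * math.exp(-ALPHA * t)

def S_curve(t):
    """Dimensionless S curve, Eq. (10): P(3, 3t*) = 1 - e^(-x)(1 + x + x^2/2), x = 3t*."""
    if t <= 0.0:
        return 0.0
    x = ALPHA * t
    return 1.0 - math.exp(-x) * (1.0 + x + 0.5 * x * x)

def U_star(t, d):
    """Dimensionless UH, Eq. (11): difference of two S curves lagged by d*."""
    return S_curve(t) - S_curve(t - d)

def t_peak(d):
    """Closed-form time to peak of the dimensionless UH, Eq. (12)."""
    return d / (1.0 - math.exp(-ALPHA * d / (ALPHA - 1.0)))

d = 1.0
tp = t_peak(d)
U_max = U_star(tp, d)                                 # Eq. (13)
volume = sum(U_star(i * 1e-3, d) for i in range(1, 30000)) * 1e-3

print(round(tp, 3))                                   # 1.287 for d* = 1, alpha = 3
print(abs(f_star(tp) - f_star(tp - d)) < 1e-9)        # True: peak condition f(t_p*) = f(t_p* - d*)
print(round(volume, 3))                               # 1.0: the UH carries a unit depth spread over d*
```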
## 2.3 The dimensionless runoff peak analysis
Based on the unit hydrograph theory and assuming a rectangular hyetograph of duration d, the dimensionless convolution equation for a given catchment becomes
$\begin{array}{}\text{(14)}& Q\left({t}_{\ast }\right)=\phantom{\rule{0.125em}{0ex}}{i}_{\mathrm{e}}\left({d}_{\ast }\right)U\left({t}_{\ast }\right),\end{array}$
where Q(t∗) is the dimensionless hydrograph and ie∗(d∗) is the dimensionless excess rainfall intensity.
Note that the rectangular hyetograph hypothesis is not introduced to simplify the methodology but to describe the rainfall event structure. Based on such an approach, the rainfall event structure at a given duration is represented through the n-structure exponent, and it follows that the rainfall event is described by a simple rectangular hyetograph. It has to be noticed that the constant hyetograph derived from a given n structure is assumed to be valid in the same range of durations from which it is derived, $\left[{d}_{i}/\mathrm{2};\phantom{\rule{0.125em}{0ex}}\mathrm{2}{d}_{i}\right]$.
In the following sections the dimensionless hydrograph and the corresponding peak are examined in the case of constant and variable runoff coefficients.
### 2.3.1 The analysis in the case of a constant runoff coefficient
By considering a constant runoff coefficient, φ0 [–], similarly to the dimensionless rainfall depth h∗ the dimensionless excess rainfall depth he∗ is defined by
$\begin{array}{}\text{(15)}& {h}_{\mathrm{e}\ast }=\frac{{\mathit{\phi }}_{\mathrm{0}}h}{{\mathit{\phi }}_{\mathrm{0}}{h}_{\mathrm{r}}}={d}_{\ast }^{n}.\end{array}$
The corresponding dimensionless excess rainfall intensity becomes
$\begin{array}{}\text{(16)}& {i}_{\mathrm{e}\ast }={d}_{\ast }^{n-\mathrm{1}}.\end{array}$
From Eqs. (13), (14) and (16), the dimensionless hydrograph and the corresponding peak may be expressed by
$\begin{array}{ll}\text{(17)}& & Q\left({t}_{\ast }\right)=\phantom{\rule{0.125em}{0ex}}{d}_{\ast }^{n-\mathrm{1}}U\left({t}_{\ast }\right),& {Q}_{max}\left({d}_{\ast }\right)=\phantom{\rule{0.125em}{0ex}}{d}_{\ast }^{n-\mathrm{1}}{U}_{max}\left({d}_{\ast }\right)\\ \text{(18)}& & \phantom{\rule{1em}{0ex}}={d}_{\ast }^{n-\mathrm{1}}\left[S\left({t}_{\mathrm{p}\ast }\right)-S\left({t}_{\mathrm{p}\ast }-{d}_{\ast }\right)\right].\end{array}$
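The duration dependence encoded in Eq. (18) can be explored with a short script (same α = 3 and closed-form S curve assumptions as above; helper names are ours):

```python
import math

ALPHA = 3

def S_curve(t):
    # Dimensionless S curve for ALPHA = 3 (regularized lower incomplete gamma)
    if t <= 0.0:
        return 0.0
    x = ALPHA * t
    return 1.0 - math.exp(-x) * (1.0 + x + 0.5 * x * x)

def Q_max(d, n):
    """Dimensionless runoff peak, Eq. (18), constant runoff coefficient."""
    tp = d / (1.0 - math.exp(-ALPHA * d / (ALPHA - 1.0)))   # Eq. (12)
    return d ** (n - 1.0) * (S_curve(tp) - S_curve(tp - d))

# For d* < 1 a lower n gives the higher peak; for d* > 1 the ordering reverses
# (the behaviour visible in Figs. 3 and 4); at d* = 1 the exponent drops out.
print(Q_max(0.5, 0.2) > Q_max(0.5, 0.8))  # True
print(Q_max(2.0, 0.2) < Q_max(2.0, 0.8))  # True
```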
In order to investigate the critical condition that maximizes the runoff peak for a given catchment, the partial derivative of Eq. (18) with respect to d∗ is calculated:
$\begin{array}{ll}\frac{\partial {Q}_{max}\left({d}_{\ast }\right)}{\partial {d}_{\ast }}& =\mathrm{0}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\to \phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\frac{f\left({t}_{\mathrm{p}\ast }\right){d}_{\ast }}{\mathrm{1}-n}\\ \text{(19)}& & =S\left({t}_{\mathrm{p}\ast }\right)-S\left({t}_{\mathrm{p}\ast }-{d}_{\ast }\right)={U}_{max}\left({d}_{\ast }\right)\end{array}$
The analytical expression for estimating the critical rainfall duration that maximizes the peak flow was first derived by Meynink and Cordery (1976). Similarly, from Eq. (19) it is possible to analytically derive the n-structure value that maximizes the dimensionless runoff peak for a specific duration d∗ of a given catchment:
$\begin{array}{}\text{(20)}& n=\mathrm{1}-\phantom{\rule{0.125em}{0ex}}\frac{f\left({t}_{\mathrm{p}\ast }\right){d}_{\ast }}{{U}_{max}\left({d}_{\ast }\right)}.\end{array}$
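Equation (20) can be evaluated directly; with α = 3 it reproduces the critical exponent n ≈ 0.31 at d∗ = 1 quoted in Sect. 3.1 (a sketch under the same closed-form S curve assumption; helper names are ours):

```python
import math

ALPHA = 3

def f_star(t):
    # Dimensionless IUH, Eq. (8)
    return ALPHA / math.gamma(ALPHA) * (ALPHA * t) ** (ALPHA - 1) * math.exp(-ALPHA * t)

def S_curve(t):
    # Dimensionless S curve for ALPHA = 3
    if t <= 0.0:
        return 0.0
    x = ALPHA * t
    return 1.0 - math.exp(-x) * (1.0 + x + 0.5 * x * x)

def critical_n(d):
    """Eq. (20): the n-structure exponent maximizing the peak at duration d*."""
    tp = d / (1.0 - math.exp(-ALPHA * d / (ALPHA - 1.0)))   # Eq. (12)
    U_max = S_curve(tp) - S_curve(tp - d)                   # Eq. (13)
    return 1.0 - f_star(tp) * d / U_max

print(round(critical_n(1.0), 2))  # 0.31, the saddle-point exponent discussed in Sect. 3.1
```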
### 2.3.2 The analysis in the case of a variable runoff coefficient
The variability of the infiltration process across the rainfall event, as well as the initial soil moisture conditions, significantly affects the hydrological response of the catchment. In order to take these elements into account, a variable runoff coefficient, φ, is introduced. The variable runoff coefficient is estimated based on the SCS method for computing soil abstractions (SCS, 1985). Since the analysis deals with high-rainfall-intensity events, it is reasonable to force the SCS method to always produce runoff (Boni et al., 2007). The assumption that the rainfall depth always exceeds the initial abstraction is implemented in the model by supposing that a previous rainfall depth at least equal to the initial abstraction has occurred; therefore, the excess rainfall depth he is evaluated as follows:
$\begin{array}{}\text{(21)}& {h}_{\mathrm{e}}=\mathit{\phi }h=\frac{{h}^{\mathrm{2}}}{h+S}\phantom{\rule{0.125em}{0ex}}\to \phantom{\rule{0.125em}{0ex}}\mathit{\phi }=\frac{h}{h+S},\end{array}$
where S is the soil abstraction [L]. The variable runoff coefficient is therefore a monotonically increasing function of the rainfall depth. It follows that the runoff component is affected by the variability of the infiltration process: the runoff is reduced for small rainfall events and enhanced for heavy events.
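Equation (21) is straightforward to sketch (function names are ours; the soil abstraction value is the one used later for the Bisagno–La Presa application and serves only as an illustration):

```python
def runoff_coefficient(h, S):
    """Variable runoff coefficient, Eq. (21): phi = h / (h + S)."""
    return h / (h + S)

def excess_depth(h, S):
    """Excess rainfall depth, Eq. (21): h_e = phi * h = h^2 / (h + S)."""
    return h * h / (h + S)

S = 41.0  # soil abstraction in mm (illustrative; the Bisagno-La Presa value)
# phi increases monotonically with rainfall depth: runoff is damped for small
# events and approaches phi -> 1 for very heavy ones.
for h in (10.0, 40.0, 80.0, 400.0):
    print(h, round(runoff_coefficient(h, S), 2))
```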
The dimensionless excess rainfall depth, he∗, is defined by
$\begin{array}{}\text{(22)}& {h}_{\mathrm{e}\ast }=\frac{{h}_{\mathrm{e}}}{{h}_{{\mathrm{e}}_{\mathrm{r}}}}=\frac{\mathit{\phi }h}{{\mathit{\phi }}_{\mathrm{r}}{h}_{\mathrm{r}}}=\frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}{h}_{\ast }=\frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}{d}_{\ast }^{n},\end{array}$
where ${h}_{{\mathrm{e}}_{\mathrm{r}}}$ [L] is the reference excess rainfall depth and φr [–] is the corresponding reference runoff coefficient.
The corresponding dimensionless excess rainfall intensity becomes
$\begin{array}{}\text{(23)}& {i}_{\mathrm{e}\ast }=\frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}{d}_{\ast }^{n-\mathrm{1}}\end{array}$
From Eq. (21) the ratio $\frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}$ may be determined in terms of h∗:
$\begin{array}{}\text{(24)}& \frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}=\frac{h/\left(h+S\right)}{{h}_{\mathrm{r}}/\left({h}_{\mathrm{r}}+S\right)}={h}_{\ast }\left(\frac{{h}_{\mathrm{r}}+S}{h+S}\right)={h}_{\ast }\left(\frac{\mathrm{1}+{S}_{\ast }}{{h}_{\ast }+{S}_{\ast }}\right),\end{array}$
where S∗ is the dimensionless soil abstraction defined by the ratio of S to hr.
According to the dimensionless approach proposed in the present paper, different initial moisture conditions can be analysed by considering different S∗ values associated with different CN conditions (i.e. CNI or CNIII) or different soil characteristics for the same reference rainfall depth.
The ratio $\frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}$ is lower than 1 when the dimensionless rainfall depth is lower than 1, and vice versa. In the domain ${h}_{\ast }<\mathrm{1}$ (i.e. ${d}_{\ast }<\mathrm{1}$), the variable runoff coefficient implies that the runoff component is reduced with respect to the reference case, and vice versa. The impact of the ratio $\frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}$ on the runoff production is enhanced if S∗ increases, thus causing a wider range of runoff coefficients.
From Eqs. (13), (14) and (23), the dimensionless hydrograph and the corresponding peak may be expressed by
$\begin{array}{ll}\text{(25)}& & Q\left({t}_{\ast }\right)=\phantom{\rule{0.125em}{0ex}}\frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}{d}_{\ast }^{n-\mathrm{1}}U\left({t}_{\ast }\right),& {Q}_{max}\left({d}_{\ast }\right)=\phantom{\rule{0.125em}{0ex}}\frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}{d}_{\ast }^{n-\mathrm{1}}{U}_{max}\left({d}_{\ast }\right)\\ \text{(26)}& & \phantom{\rule{1em}{0ex}}=\frac{\mathit{\phi }}{{\mathit{\phi }}_{\mathrm{r}}}{d}_{\ast }^{n-\mathrm{1}}\left[S\left({t}_{\mathrm{p}\ast }\right)-S\left({t}_{\mathrm{p}\ast }-{d}_{\ast }\right)\right].\end{array}$
Similarly to the runoff peak analysis carried out for the constant runoff coefficient, the partial derivative of Eq. (26) with respect to d∗ is calculated:
$\begin{array}{ll}\frac{\partial {Q}_{max}\left({d}_{\ast }\right)}{\partial {d}_{\ast }}=& \phantom{\rule{0.125em}{0ex}}\mathrm{0}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\to \phantom{\rule{0.125em}{0ex}}f\left({t}_{\mathrm{p}\ast }\right){d}_{\ast }\\ =& \left[S\left({t}_{\mathrm{p}\ast }\right)-S\left({t}_{\mathrm{p}\ast }-{d}_{\ast }\right)\right]\\ \text{(27)}& & \left[\mathrm{1}-\mathrm{2}n+\frac{n{d}_{\ast }^{n}}{{d}_{\ast }^{n}+{S}_{\ast }}\right].\end{array}$
From Eq. (27) it is possible to implicitly derive the n-structure value that maximizes the dimensionless runoff peak for a specific duration d∗ of a given catchment.
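The implicit condition of Eq. (27) can be solved numerically, for instance by bisection; with α = 3 and S∗ = 0.25 (the values used in Sect. 3.2) it reproduces the critical exponent n ≈ 0.26 at d∗ = 1 (a sketch of ours, under the same closed-form S curve assumption):

```python
import math

ALPHA = 3

def f_star(t):
    # Dimensionless IUH, Eq. (8)
    return ALPHA / math.gamma(ALPHA) * (ALPHA * t) ** (ALPHA - 1) * math.exp(-ALPHA * t)

def S_curve(t):
    # Dimensionless S curve for ALPHA = 3
    if t <= 0.0:
        return 0.0
    x = ALPHA * t
    return 1.0 - math.exp(-x) * (1.0 + x + 0.5 * x * x)

def critical_n_variable(d, S_star):
    """Solve Eq. (27) for the n maximizing the peak at duration d* with a
    variable runoff coefficient (bisection; the residual g is decreasing in n here)."""
    tp = d / (1.0 - math.exp(-ALPHA * d / (ALPHA - 1.0)))   # Eq. (12)
    U_max = S_curve(tp) - S_curve(tp - d)                   # Eq. (13)
    lhs = f_star(tp) * d

    def g(n):
        dn = d ** n
        return U_max * (1.0 - 2.0 * n + n * dn / (dn + S_star)) - lhs

    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(round(critical_n_variable(1.0, 0.25), 2))  # 0.26, as quoted for the local minimum
```

Note that for S∗ = 0 the bracket in Eq. (27) collapses to (1 − n) and the constant-coefficient result of Eq. (20) is recovered.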
# 3 Results and discussion
The proposed dimensionless approach is derived using the two-parameter gamma distribution with the shape parameter equal to 3. This assumption follows from the Nash model relation proposed by Rosso (1984) to estimate the shape parameter from Horton order ratios, according to which the α parameter is generally in the neighbourhood of 3 (La Barbera and Rosso, 1989; Rosso et al., 1991). In Fig. 2, the dimensionless rainfall duration is plotted vs. the dimensionless time to peak together with the dimensionless IUH and the corresponding dimensionless UH for ${d}_{\ast }=\mathrm{1.0}$. Note that the dotted grey line indicates the UH peak while the dashed grey lines show tp∗, f(tp∗) and $f\left({t}_{\mathrm{p}\ast }-{d}_{\ast }\right)$, respectively.
Figure 2. Dimensionless rainfall duration vs. dimensionless time to peak; dimensionless instantaneous unit hydrograph and the corresponding dimensionless unit hydrograph for ${d}_{\ast }=\mathrm{1.0}$. Note that the shape parameter α is equal to 3.
The dimensionless UH is evaluated, varying the dimensionless rainfall duration in the range between 0.5 and 2, in accordance with the n-structure definition in the range of durations $\left[{d}_{i}/\mathrm{2};\phantom{\rule{0.125em}{0ex}}\mathrm{2}{d}_{i}\right]$; then the runoff peak analysis is carried out for constant and variable runoff coefficients. The achieved results are presented with respect to the above-mentioned dimensionless duration range [0.5; 2], which is wide enough to include the duration of the rainfall able to generate the maximum peak flow for a given catchment (Robinson and Sivapalan, 1997).
Finally the dimensionless procedure is applied to a small Mediterranean catchment. In the catchment application the dimensionless procedure is fully specified, from the evaluation of the rainfall structures associated with three observed rainfall events to the determination of the reference peak flow and, consequently, of the dimensionless hydrograph peaks for the three observed rainfall structures.
## 3.1 Maximum dimensionless runoff peak with constant runoff coefficient
The dimensionless form of the hydrograph is shown in Fig. 3 with variation of the rainfall structure exponents, n, for the selected dimensionless rainfall duration. The hydrographs are obtained for excess rainfall intensities characterized by a constant runoff coefficient and rainfall structure exponents of 0.2, 0.3, 0.5 and 0.8.
Figure 3. Dimensionless flow rates obtained for excess rainfall intensities characterized by a constant runoff coefficient and different rainfall structure exponents, n (n=0.2, 0.3, 0.5 and 0.8), at assigned dimensionless rainfall durations, d∗ (${d}_{\ast }=\mathrm{0.5}$, 1.0, 1.5 and 2.0). Note that the shape parameter α is equal to 3.
The impact of the rainfall structure exponents on the hydrograph form depends on the rainfall duration: for d∗ lower than 1, the higher the n the lower the peak flow rate, and vice versa. Figure 4 illustrates the 3-D mesh plot and the contour plot of the dimensionless runoff peak as a function of the rainfall structure exponent and the dimensionless rainfall duration. In the 3-D mesh plot as well as in the contour plot, it is possible to observe a saddle point located in the neighbourhood of d∗ and n values equal to 1 and 0.3, respectively. Note that the intersection line (reported as a bold line in Fig. 4) between the saddle surface and the plane of the principal curvatures where the saddle point is a minimum indicates the highest values of the runoff peak for a given n-structure exponent.
Figure 4. 3-D mesh plot (a) and contour plot (b) of the dimensionless hydrograph peak as a function of the rainfall structure exponent and the dimensionless rainfall duration in the case of a constant runoff coefficient. The maximum dimensionless hydrograph peak curve is also reported (bold line).
Figure 5. Maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent vs. dimensionless time to peak in the case of a constant runoff coefficient; dimensionless instantaneous unit hydrograph and the corresponding dimensionless unit hydrograph for ${d}_{\ast }=\mathrm{1.0}$. Note that the shape parameter α is equal to 3.
In Fig. 5, the maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent are plotted vs. the dimensionless time to peak. Further, the dimensionless IUH and the corresponding dimensionless UH for ${d}_{\ast }=\mathrm{1.0}$ are reported as an example. The reference line (indicated as a short–short–short dashed grey line in Fig. 5) illustrates the lower control line corresponding to an infinitesimally small rainfall duration. Note that the rainfall structure exponent that maximizes the runoff peak for a given duration can be simply derived as a function of the dimensionless time to peak (see Eq. 20). The maximum dimensionless hydrograph peak curve tends to one for long dimensionless rainfall durations (${d}_{\ast }>\mathrm{3}$) when the corresponding n-structure exponent tends to one (see Eq. 18): for high values of the n structure, the critical conditions occur for long durations, which correspond to paroxysmal events for which the rainfall intensity remains fairly constant. The local minimum of the maximum dimensionless runoff peak curve (see Fig. 5) occurs at tp∗ of 1.29, corresponding to an n-structure value of 0.31 and d∗ of 1, thus pointing out that the least critical runoff peak occurs at n-structure exponent values corresponding to those typically derived by the statistical analysis of the annual maximum rainfall depth series in the Mediterranean climate. Furthermore, it can be observed that different rainfall event conditions (i.e. rainfall structure exponent n and duration d∗) in the neighbourhood of the local minimum point could determine comparable effects in terms of the runoff peak value.
## 3.2 Maximum dimensionless runoff peak with variable runoff coefficient
The excess rainfall depth, in the case of variable runoff coefficient, is evaluated by assigning a value to the reference runoff coefficient. In particular, the reference runoff coefficient is defined as follows, utilizing Eq. (21):
$\begin{array}{}\text{(28)}& {\mathit{\phi }}_{\mathrm{r}}=\frac{{h}_{\mathrm{r}}}{{h}_{\mathrm{r}}+S}\to \phantom{\rule{0.125em}{0ex}}{\mathit{\phi }}_{\mathrm{r}}=\frac{\mathrm{1}}{\mathrm{1}+{S}_{\ast }}.\end{array}$
In order to provide an example of the proposed approach, the presented results are obtained assuming a dimensionless soil abstraction S∗ of 0.25. It follows that the reference runoff coefficient φr is equal to 0.8.
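Equations (24) and (28) can be checked in a few lines (with S∗ = 0.25 as in the text; function names are ours):

```python
def phi_ref(S_star):
    """Reference runoff coefficient, Eq. (28): phi_r = 1 / (1 + S*)."""
    return 1.0 / (1.0 + S_star)

def phi_ratio(h_star, S_star):
    """Ratio phi/phi_r, Eq. (24): h* (1 + S*) / (h* + S*)."""
    return h_star * (1.0 + S_star) / (h_star + S_star)

S_star = 0.25
print(phi_ref(S_star))               # 0.8, as stated in the text
print(phi_ratio(0.5, S_star) < 1.0)  # True: runoff damped below the reference depth
print(phi_ratio(2.0, S_star) > 1.0)  # True: runoff enhanced above it
print(phi_ratio(1.0, S_star))        # 1.0: the ratio equals 1 at the reference depth
```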
Similarly to the results presented for the case of constant runoff coefficient, Fig. 6 illustrates the dimensionless hydrographs obtained for excess rainfall intensities characterized by a variable runoff coefficient and n-structure exponents of 0.2, 0.3, 0.5 and 0.8 at assigned dimensionless rainfall durations (${d}_{\ast }=\mathrm{0.5}$, 1.0, 1.5 and 2.0). The dimensionless hydrographs obtained for the variable runoff coefficient show the same behaviour as those derived for the constant runoff coefficient (see Figs. 3 and 6), even if they differ in magnitude, thus confirming the role of the variable runoff coefficient on the runoff peak. In particular, due to the variability of the infiltration process, the runoff peaks slightly decrease for rainfall durations lower than 1 (i.e. ${d}_{\ast }=\mathrm{0.5}$) when compared with those observed in the case of a constant runoff coefficient, while they rise for durations larger than 1 (i.e. ${d}_{\ast }=\mathrm{1.5}$ and 2).
Figure 6. Dimensionless flow rates obtained for excess rainfall intensities characterized by a variable runoff coefficient and different rainfall structure exponents, n (n=0.2, 0.3, 0.5 and 0.8), at assigned dimensionless rainfall durations, d∗ (${d}_{\ast }=\mathrm{0.5}$, 1.0, 1.5 and 2.0). Note that the shape parameter α is equal to 3.
Figure 7. 3-D mesh plot (a) and contour plot (b) of the dimensionless hydrograph peak as a function of the rainfall structure exponent and the dimensionless rainfall duration in the case of a variable runoff coefficient. The maximum dimensionless hydrograph peak curve is also reported (bold line).
Figure 7 shows the 3-D mesh plot and the contour plot of the dimensionless runoff peak as a function of the rainfall structure exponent and the dimensionless rainfall duration in the case of a variable runoff coefficient. By comparing Figs. 7 and 4, it emerges that the contour lines observed in the case of a variable runoff coefficient reveal a steeper trend than those for a constant runoff coefficient; indeed, the impact of the n-structure exponent on the hydrograph peak is enhanced when the runoff coefficient is assumed to be variable. The saddle point is again located in the neighbourhood of d∗ and n values equal to 1 and 0.3, respectively, while the curve of the maximum values of the runoff peak (reported as a bold line in Fig. 7) is moved to the left.
Figure 8. Maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent vs. dimensionless time to peak in the case of a variable runoff coefficient; dimensionless instantaneous unit hydrograph and the corresponding dimensionless unit hydrograph for ${d}_{\ast }=\mathrm{1.0}$. Note that the shape parameter α is equal to 3.
In Fig. 8, the maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent are plotted vs. the dimensionless time to peak in the case of a variable runoff coefficient. Results plotted in Fig. 8 confirm that the maximum runoff peak curve reveals the local minimum point at tp∗ of 1.29, corresponding to n of 0.26 and d∗ of 1. Referring to S∗ of 0.25, the maximum dimensionless runoff peak tends to 1.25 for long dimensionless rainfall durations (${d}_{\ast }>\mathrm{3}$) when, consequently, the n-structure exponent tends to 1 (see Eqs. 24 and 26). Figure 9 illustrates the influence of different variable runoff coefficients (i.e. for instance different initial moisture conditions or different soil characteristics) on the maximum dimensionless runoff peak. Similarly to Fig. 8, the maximum dimensionless hydrograph peak (see the top graph) and the corresponding rainfall structure exponent (see the centre graph) are plotted vs. the dimensionless time to peak in the case of a variable runoff coefficient (for S∗ values of 0.25 and 0.67) together with the comparison to the case of a constant runoff coefficient. The maximum dimensionless runoff peak is similar for short rainfall durations (i.e. tp∗ lower than 1.5), when the variable runoff coefficient reduces the runoff component with respect to the reference runoff case (which coincides with the constant runoff case, i.e. ${S}_{\ast }=\mathrm{0}$). On the contrary, the maximum dimensionless runoff peak increases with increasing dimensionless soil abstraction for long rainfall durations. Such behaviour is due to the rate of change in the runoff production with respect to the rainfall duration: as the rainfall volume increases, the relevance of runoff with respect to the soil abstraction rises. In other words, the n-structure exponent that maximizes the runoff peak decreases when the dimensionless soil abstraction is increased (see Eq. 27).
Figure 9. Maximum dimensionless hydrograph peak and the corresponding rainfall structure exponent vs. dimensionless time to peak in the case of variable runoff coefficients with respect to dimensionless maximum retention S∗ of 0.25 and 0.67. The comparison to the case of a constant runoff coefficient is also reported.
## 3.3 Catchment application
In order to point out the dimensionless procedure implications and to provide some numerical examples of the rainfall event structures, the proposed methodology has been implemented for the Bisagno catchment at La Presa station, located at the centre of Liguria region (Genoa, Italy).
The Bisagno–La Presa catchment has a drainage area of 34 km2 with an index flood of about 95 m3 s−1. The upstream river network is characterized by a main channel length of 8.36 km and mean streamflow velocity of 2.4 m s−1. Regarding the geomorphology of the catchment, the area (RA), bifurcation (RB) and length (RL) ratios that are evaluated according to the Horton–Strahler ordering scheme are respectively equal to 5.9, 5.6 and 2.5. By considering the altimetry, vegetation and limited anthropogenic exploitation of the territory, the Bisagno–La Presa is a mountain catchment characterized by an average slope of 33 %. The soil abstraction, SII, is assumed to be equal to 41 mm; its evaluation is based on the land use analysis provided in the framework of the EU Project CORINE (EEA, 2009). The mean value of the annual maximum rainfall depth for unit duration (hourly) and the scaling exponent of the DDF curves are respectively equal to 41.31 mm h−1 and 0.39. Detailed hydrologic characterization of the Bisagno catchment can be found elsewhere (Bocchiola and Rosso, 2009; Rulli and Rosso, 2002; Rosso and Rulli, 2002). With regard to the rainfall–runoff process, the two parameters of the gamma distribution are evaluated based on the Horton order ratio relationship (Rosso, 1984). The shape and scale parameters are estimated to be equal to 3.4 and 0.25 h respectively, thus corresponding to the lag time of 0.85 h.
In this application, three rainfall events observed in the catchment area have been selected in order to analyse the different runoff peaks that occurred for the three rainfall event structures. For comparison purposes, the selected events are characterized by an analogous magnitude of the maximum rainfall depth observed for the duration equal to the reference time (i.e. hr=80 mm, tr=0.85 h).
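The dimensionless parameters of the application follow directly from the values quoted above; a quick check (numbers from the text, helper names ours):

```python
alpha, k = 3.4, 0.25        # Nash shape and scale parameters estimated for Bisagno-La Presa
h_r, S_abs = 80.0, 41.0     # reference rainfall depth and soil abstraction S_II (mm)

t_r = alpha * k             # reference time = mean lag time alpha*k
S_star = S_abs / h_r        # dimensionless soil abstraction
phi_r = 1.0 / (1.0 + S_star)  # reference runoff coefficient, Eq. (28)

print(round(t_r, 2))        # 0.85 h, the reference time used in the application
print(round(S_star, 2))     # 0.51, rounded to S* = 0.5 in the text
print(round(phi_r, 2))      # 0.66, the reference runoff coefficient quoted
```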
Figure 10. Rainfall event structure of three events observed in Genoa (Italy): the observed rainfall depths (a) and the estimated rainfall structure exponents (b) are reported. At the bottom, the rainfall structure and depth–duration–frequency curves, evaluated for the reference time of the Bisagno–La Presa catchment, are reported.
Figure 10 illustrates the rainfall event structure curves derived for the three selected rainfall events. The graphs at the top report the observed rainfall depths while the central graphs show the estimated rainfall structure exponents. At the bottom of Fig. 10, by considering the three structure exponents corresponding to the Bisagno–La Presa reference time (i.e. n=0.55, 0.62, 0.71), the rainfall event structure curves are derived for rainfall durations ranging between 0.5⋅tr and 2⋅tr; for comparison purposes, the DDF curve is also reported. Based on each rainfall structure curve, four rectangular hyetographs with durations of 0.425, 0.85, 1.275 and 1.7 h in the range $\left[{t}_{\mathrm{r}}/\mathrm{2};\phantom{\rule{0.125em}{0ex}}\mathrm{2}{t}_{\mathrm{r}}\right]$ are derived to evaluate the impact on the hydrograph peak of the Bisagno–La Presa catchment. Note that the analysis is performed in the case of a variable runoff coefficient whose reference value is equal to 0.66 (i.e. ${S}_{\ast }=\mathrm{0.5}$; S=41 mm). In Fig. 11, the excess rainfall hyetographs, the corresponding hydrographs and the reference value of the runoff peak flow are plotted for the three investigated rainfall structure exponents. The reference value of the runoff peak flow (dash–dot line) is evaluated by assuming a constant-intensity hyetograph of infinite duration and having excess rainfall intensity equal to that estimated for the reference time. The role of the rainfall structure exponent emerges in the different decreasing rate of the excess rainfall intensity with the duration, thus resulting in the corresponding increasing rate of the peak flow values.
Figure 11. The excess rainfall hyetographs, the corresponding hydrographs and the reference value of the hydrograph peak flow evaluated for three rainfall structure exponents applied to the Bisagno–La Presa catchment. Note that each graph includes four rainfall durations (i.e. 0.5, 1.0, 1.5 and 2.0 times the reference time).
Figure 12. Contour plot of the dimensionless hydrograph peak evaluated for the Bisagno–La Presa catchment in the case of a variable runoff coefficient (${S}_{\ast }=\mathrm{0.5}$). The maximum dimensionless runoff peak curve is also reported (bold line) together with the dimensionless hydrograph peaks (grey-filled stars) for the selected rainfall structure exponents (n=0.55, 0.62, 0.71) and durations (${d}_{\ast }=\mathrm{0.5}$, 1.0, 1.5 and 2.0).
Figure 12 shows the contour plot of the dimensionless hydrograph peak in the case of a variable runoff coefficient (${S}_{\ast }=\mathrm{0.5}$). The maximum runoff peak curve is also reported (bold line) together with the dimensionless hydrograph peaks (grey-filled stars) for the selected rainfall structure exponents (n=0.55, 0.62, 0.71) and durations (${d}_{\ast }=\mathrm{0.5}$, 1.0, 1.5 and 2.0). Note that these selected rainfall structures represent only three of the possible outcomes in the sample space of the rainfall structures that are described in the contour plot. Similarly to Fig. 7, the Bisagno–La Presa catchment application shows a curve of the highest values of the runoff peak characterized by a local minimum (saddle point) in the neighbourhood of d∗ and n values equal to 1 and 0.3, respectively.
# 4 Conclusions
The proposed analytical dimensionless approach allows the investigation of the impact of the rainfall event structure on the hydrograph peak. To this end, a methodology to describe the rainfall event structure is proposed based on the similarity with the depth–duration–frequency curves. The rainfall input consists of a constant hyetograph in which all the possible outcomes in the sample space of the rainfall structures can be condensed through the n-structure exponent. The rainfall–runoff processes are modelled using the Soil Conservation Service method for soil abstractions and the instantaneous unit hydrograph theory. In the present paper the two-parameter gamma distribution is adopted as the IUH form; however, the analysis can be repeated using other synthetic IUH forms, obtaining similar results.
The proposed dimensionless approach allows an analytical framework to be defined which can be applied to any case study for which the model assumptions are valid; the site-specific characteristics (such as the morphologic and climatic characteristics of the catchment) are no longer relevant, as they are included within the parameters of the dimensionless procedure (i.e. hr(Tr) and tr), thus allowing the implications for the hydrograph peak to be assessed irrespective of the absolute value of the rainfall depth (i.e. the corresponding return period). A set of analytical expressions has been derived to provide the estimation of the maximum peak with respect to a given n-structure exponent. Results reveal the impact of the rainfall event structure on the runoff peak, thus pointing out the following features:
• The curve of the maximum values of the runoff peak reveals a local minimum point (saddle point).
• Different combinations of n-structure exponent and rainfall duration may determine similar conditions in terms of runoff peak.
• Analogous behaviour of the maximum dimensionless runoff peak curve is observed for different runoff coefficients, although wider ranges of variation are observed with increasing soil abstraction values.
Referring to the Bisagno–La Presa catchment application (hr=80 mm, tr=0.85 h and $S_*=0.5$), the saddle point of the runoff peaks is located in the neighbourhood of an n value equal to 0.3 and a rainfall duration corresponding to the reference time ($d_*=1$). Further, it emerges that the maximum runoff peak value, corresponding to the scaling exponent of the DDF curve, is comparable to the least critical value (saddle point). The findings of the present research suggest that the derived flood distribution approaches that couple the information on precipitation via DDF curves with the catchment response, based on the iso-frequency hypothesis, need further review. Future research on the structure of extreme rainfall events is also needed; in particular, the analysis of several rainfall data series belonging to a homogeneous climatic region is required in order to investigate the frequency distribution of specific rainfall structures.
The developed approach, besides suggesting noteworthy issues for further research and going beyond a merely analytical exercise, succeeds in highlighting once more the complexity involved in assessing the maximum runoff peak.
Data availability
The rainfall data used in the catchment application are freely available for download (http://www.dicca.unige.it/meteo/text_files/piogge/, DICCA, 2017).
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
We thank Fabrizio Fenicia, Giorgio Baiamonte and the anonymous reviewer for having contributed to the improvement of the original manuscript with their valuable comments.
Edited by: Fabrizio Fenicia
Reviewed by: Giorgio Baiamonte and one anonymous referee
References
Alfieri, L., Laio, F., and Claps, P.: A simulation experiment for optimal design hyetograph selection, Hydrol. Process., 22, 813–820, https://doi.org/10.1002/hyp.6646, 2008.
Baiamonte, G. and Singh, V. P.: Modelling the probability distribution of peak discharge for infiltrating hillslopes, Water Resour. Res., 53, 6018–6032, https://doi.org/10.1002/2016WR020109, 2017.
Beven, K.: Rainfall-Runoff Modelling: The Primer: Second Edition, Wiley-Blackwell, Chichester UK, 2012.
Bocchiola, D. and Rosso, R.: Use of a derived distribution approach for flood prediction in poorly gauged basins: A case study in Italy, Adv. Water Resour., 32, 1284–1296, https://doi.org/10.1016/j.advwatres.2009.05.005, 2009.
Boni, G., Ferraris, L., Giannoni, F., Roth, G., and Rudari, R.: Flood probability analysis for un-gauged watersheds by means of a simple distributed hydrologic model, Adv. Water Resour., 30, 2135–2144, https://doi.org/10.1016/j.advwatres.2006.08.009, 2007.
Borga, M., Vezzani, C., and Dalla Fontana, G.: Regional rainfall depth-duration-frequency equations for an alpine region, Nat. Hazards, 36, 221–235, https://doi.org/10.1007/s11069-004-4550-y, 2005.
Bras, R. L.: Hydrology: An Introduction to Hydrological Science, Addison-Wesley Publishing Company, New York, 1990.
Burlando, P. and Rosso, R.: Scaling and multiscaling models of depth–duration–frequency curves for storm precipitation, J. Hydrol., 187, 45–64, https://doi.org/10.1016/S0022-1694(96)03086-7, 1996.
Chow, V. T., Maidment, D. R., and Mays, L. W.: Applied Hydrology, McGraw-Hill, New York, 1988.
DICCA: Rainfall data, Dipartimento di Ingegneria Civile, Chimica e Ambientale, University of Genova, available at: http://www.dicca.unige.it/meteo/text_files/piogge/, 2017.
European Environment Agency (EEA): Corine Land Cover (CLC2006) 100 m-version 12/2009, Copenhagen, Denmark, 2009.
Goel, N., Kurothe, R., Mathur, B., and Vogel, R.: A derived flood frequency distribution for correlated rainfall intensity and duration, J. Hydrol., 228, 56–67, https://doi.org/10.1016/S0022-1694(00)00145-1, 2000.
Henderson, F. M.: Some Properties of the Unit Hydrograph, J. Geophys. Res., 68, 4785–4794, 1963.
Iacobellis, V. and Fiorentino, M.: Derived distribution of floods based on the concept of partial area coverage with a climatic appeal, Water Resour. Res., 36, 469–482, https://doi.org/10.1029/1999WR900287, 2000.
Koutsoyiannis, D., Kozonis, D., and Manetas, A.: A mathematical framework for studying intensity–duration–frequency relationships, J. Hydrol., 206, 118–135, https://doi.org/10.1016/S0022-1694(98)00097-3, 1998.
La Barbera, P. and Rosso, R.: On the fractal dimension of stream networks, Water Resour. Res., 25, 735–741, https://doi.org/10.1029/WR025i004p00735, 1989.
Meynink, W. J. C. and Cordery, I.: Critical duration of rainfall for flood estimation, Water Resour. Res., 12, 1209–1214, https://doi.org/10.1029/WR012i006p01209, 1976.
Nash, J. E.: The form of the instantaneous unit hydrograph, International Union of Geodesy and Geophysics Assembly of Toronto, 3, 114–120, 1957.
Rigon, R., D'Odorico, P., and Bertoldi, G.: The geomorphic structure of the runoff peak, Hydrol. Earth Syst. Sci., 15, 1853–1863, https://doi.org/10.5194/hess-15-1853-2011, 2011.
Robinson, J. S. and Sivapalan, M.: An investigation into the physical causes of scaling and heterogeneity of regional flood frequency, Water Resour. Res., 33, 1045–1059, https://doi.org/10.1029/97WR00044, 1997.
Rodriguez-Iturbe, I. and Valdes, J. B.: The geomorphic structure of hydrologic response, Water Resour. Res., 18, 877–886, 1979.
Rosso, R.: Nash model relation to Horton order ratios, Water Resour. Res., 20, 914–920, https://doi.org/10.1029/WR020i007p00914, 1984.
Rosso, R. and Rulli, M. C.: An integrated simulation method for flash-flood risk assessment: 2. Effects of changes in land-use under a historical perspective, Hydrol. Earth Syst. Sci., 6, 285–294, https://doi.org/10.5194/hess-6-285-2002, 2002.
Rosso, R., Bacchi, B., and La Barbera, P.: Fractal relation of mainstream length to catchment area in river networks, Water Resour. Res., 27, 381–388, https://doi.org/10.1029/90WR02404, 1991.
Rulli, M. C. and Rosso, R.: An integrated simulation method for flash-flood risk assessment: 1. Frequency predictions in the Bisagno River by combining stochastic and deterministic methods, Hydrol. Earth Syst. Sci., 6, 267–284, https://doi.org/10.5194/hess-6-267-2002, 2002.
Sherman, L. K.: Streamflow from rainfall by the unit-graph method, Engineering News-Record, 108, 501–505, 1932.
Soil Conservation Service (SCS): Hydrology, National Engineering Handbook, Section 4 – Hydrology, U.S.D.A., Washington DC, 1985.
Vogel, R. M., Yaindl, C., and Walter, M.: Nonstationarity: Flood magnification and recurrence reduction factors in the United States, J. Am. Water Resour. As., 47, 464–474, https://doi.org/10.1111/j.1752-1688.2011.00541, 2011.
# Generators and weights of polynomial codes
Cazaran, J and Kelarev, A (1997) Generators and weights of polynomial codes. Archiv Math. (Basel, Germany), 69. pp. 479-486.
## Abstract
Several authors have established that many classical codes are ideals in certain ring constructions. Berman, in the case of characteristic two, and Charpin, in the general case, proved that all generalized Reed-Muller codes coincide with powers of the radical of the quotient ring
$A=F_q[x_1,\ldots,x_n]/(x_1^{q_1}-1,\ldots,x_n^{q_n}-1),$
where $F_q$ is a finite field, $p=\operatorname{char} F_q>0$ and $q_i=p^{c_i}$, for $i=1,\ldots,n$,
and gave formulas for their Hamming weights. These codes form an important class containing many codes of practical value. Error-correcting codes in similar ring constructions $A$ have also been considered by Poli. Our paper contains new results that generalise and strengthen several facts obtained earlier by other authors.
Item Type: Article · Keywords: error-correcting codes · Journal: Archiv Math. (Basel, Germany), pp. 479–486 · Deposited: 16 Jun 2005 · Last Modified: 18 Nov 2014
1
### IIT-JEE 2011 Paper 1 Offline
Phase space diagrams are useful tools in analyzing all kinds of dynamical problems. They are especially useful in studying the changes in motion as initial position and momentum are changed. Here we consider some simple dynamical systems in one dimension. For such systems, phase space is a plane in which position is plotted along the horizontal axis and momentum is plotted along the vertical axis. The phase space diagram is the x(t) vs. p(t) curve in this plane. The arrow on the curve indicates the time flow. For example, the phase space diagram for a particle moving with constant velocity is a straight line, as shown in the figure. We use the sign convention in which position or momentum upwards (or to the right) is positive and downwards (or to the left) is negative.
Consider the spring-mass system, with the mass submerged in water, as shown in the figure. The phase space diagram for one cycle of this system is
Options A–D are phase space diagrams (figures not reproduced here).
## Explanation
Due to upthrust, the spring will be compressed. Due to damping by the liquid, the final position will be smaller than the initial position. Hence choices (c) and (d) are not possible. Due to buoyancy, the block will move upwards. Hence, according to the given sign convention, position (x) is positive initially. When the system is released, x will decrease and momentum (p) will increase becoming maximum when the system reaches the mean position (x = 0) after which the momentum will decrease to zero when the oscillator reaches the extreme position, after which the momentum becomes negative. Hence the correct graph is (b).
2
### IIT-JEE 2011 Paper 1 Offline
The phase space diagram for simple harmonic motion is a circle centred at the origin. In the figure, the two circles represent the same oscillator but for different initial conditions, and E1 and E2 are the total mechanical energies respectively. Then
A
E1 = $$\sqrt2$$E2
B
E1 = 2E2
C
E1 = 4E2
D
E1 = 16E2
## Explanation
The energy of a simple harmonic oscillator is
$$E = {1 \over 2}k{A^2}$$
where k is the force constant and A the amplitude of the oscillator. Since the oscillator is the same, the value of k is the same. Hence
$${E_1} = {1 \over 2}kA_1^2$$ and $${E_2} = {1 \over 2}kA_2^2$$
$$\therefore$$ $${{{E_1}} \over {{E_2}}} = {\left( {{{{A_1}} \over {{A_2}}}} \right)^2}$$
Now A1, the maximum displacement of the oscillator with energy E1, equals 2a, and A2 = a. Therefore
$${{{E_1}} \over {{E_2}}} = {\left( {{{2a} \over a}} \right)^2} = 4$$. So, $${E_1} = 4{E_2}$$
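The derivation above can be cross-checked numerically; the values of k and a below are arbitrary assumptions:

```python
# E = (1/2) k A^2; doubling the amplitude quadruples the energy,
# independently of the (assumed) force constant k and amplitude a.
k, a = 5.0, 0.3
E1 = 0.5 * k * (2 * a) ** 2
E2 = 0.5 * k * a ** 2
ratio = E1 / E2  # equals 4 (up to rounding)
```

The ratio is 4 for any choice of k and a, since both cancel in E1/E2.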
3
### IIT-JEE 2011 Paper 1 Offline
The phase space diagram for a ball thrown vertically up from ground is
Options A–D are phase space diagrams (figures not reproduced here).
## Explanation
Let the ball of mass m be thrown up with an initial velocity u. Its velocity v and displacement x are related by v2 $$-$$ u2 = $$-$$2gx, where g is the acceleration due to gravity. The momentum (p = mv) is given by
p2 = m2u2 $$-$$ 2m2gx,
which gives
$$p = \pm \sqrt {{m^2}{u^2} - 2{m^2}gx}$$.
At x = 0, the momentum is mu when the ball starts going up and it becomes $$-$$mu when the ball comes back. At the maximum height, x = u2/(2g), the momentum becomes zero.
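The parabolic phase-space relation above is easy to tabulate. A small sketch follows (the mass and launch speed are assumed values; the square-root argument is clamped at zero to absorb floating-point rounding at the apex):

```python
import math

def momentum_up(x, m, u, g=9.8):
    """Upward-branch momentum p(x) = +sqrt(m^2 u^2 - 2 m^2 g x),
    clamped at zero to avoid rounding just past the apex."""
    return math.sqrt(max(0.0, m * m * u * u - 2.0 * m * m * g * x))

m, u = 0.5, 14.0           # assumed mass (kg) and launch speed (m/s)
x_top = u * u / (2 * 9.8)  # maximum height, 10 m here
p0 = momentum_up(0.0, m, u)       # m*u at launch
p_top = momentum_up(x_top, m, u)  # momentum vanishes at the top
```

The downward branch is the mirror image, p = −sqrt(...), traversed as the ball falls back, which closes the parabolic loop of the phase diagram.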
4
### IIT-JEE 2011 Paper 2 Offline
A wooden block performs $$SHM$$ on a frictionless surface with frequency $${v_0}$$. The block carries a charge $$+Q$$ on its surface. If a uniform electric field $$\overrightarrow E$$ is now switched on as shown, the $$SHM$$ of the block will be
A
of the same frequency and with shifted mean position.
B
of the same frequency and with the same mean position
C
of changed frequency and with shifted mean position.
D
of changed frequency and with the same mean position.
## Explanation
The force exerted on charge +Q by the electric field $$\overrightarrow E$$ is
$$\overrightarrow F = Q\overrightarrow E$$
in the direction of $$\overrightarrow E$$. Since $$\overrightarrow F$$ is constant, a constant force is added to the applied force. Hence only the mean position will change.
The frequency will remain the same, since $${v_0} = {1 \over {2\pi }}\sqrt {{k \over m}}$$ does not depend on the constant external force.
# ZOO
The umbral transfer-matrix method. III: Counting animals. This is the third part of the five-part saga on the umbral transfer-matrix method, based on Gian-Carlo Rota's seminal notion of the umbra. In this article we describe the Maple package ZOO that, for any specific $k$, automatically constructs an umbral scheme for enumerating "$k$-board" lattice animals (polyominoes) on the two-dimensional square lattice. Such umbral schemes enable counting these important classes of animals in polynomial time, as opposed to the exponential time that is required for counting all animals.
## References in zbMATH (referenced in 4 articles )
## anonymous 3 years ago A right triangle has a hypotenuse of length 10 and a leg of length 7. What is the area of the triangle?
1. anonymous
a=35?
2. ash2326
(sketch of the right triangle) Find the missing side
3. ash2326
@marcoduuuh Are you trying?
4. anonymous
yes!
5. ash2326
Find the side, then use the area formula $\frac{1}{2} \times base \times height$
6. anonymous
hey just an idea here take half of seven and than half of 10 and times them together
7. anonymous
I got 35?
8. anonymous
1/2 of 7 is 3.5 and half of 10 is 5 so what is the answer?
9. ash2326
(sketch) You need to find the height first; the formula doesn't involve the hypotenuse's length
10. anonymous
17.5!
11. ash2326
@marcoduuuh Can you use Pythagoras to find height?
12. anonymous
better use a2+b2=c2 may be that would work here
13. anonymous
that's Pythagoras
14. anonymous
yes
15. anonymous
hey ash am I missing something here??
16. anonymous
10^2-7^2= 51. sqrt of 51 is: 7.1414
17. anonymous
does this problem have answer choices or anything??
18. anonymous
Nope, it doesn't. ):
19. anonymous
one sec checking something here
20. anonymous
ok
21. anonymous
wait I know now
22. anonymous
opp/ adj what is the function?
23. anonymous
you still there?
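The thread stops short of the final answer; for completeness, a minimal sketch of the intended computation (Pythagoras for the missing leg, then the area formula):

```python
import math

hyp, leg = 10.0, 7.0
other_leg = math.sqrt(hyp ** 2 - leg ** 2)  # sqrt(100 - 49) = sqrt(51)
area = 0.5 * leg * other_leg                # about 25.0
```

The earlier guess of 35 comes from treating the hypotenuse as a leg (½ · 7 · 10); the correct area is ½ · 7 · √51 ≈ 25.0.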
Published in Recent Advances in Textile Membranes and Inflatable Structures, Springer Verlag, pp. 89 - 108, 2005
DOI: 10.1007/1-4020-3317-6_6
## 1 Abstract
The present work summarizes the experience of the author in the modeling of membrane systems. The first subsection describes an efficient membrane model, together with a reliable solution procedure. The following section addresses the simulation of wrinkling phenomena, providing details of a new solution procedure. The last one proposes an efficient technique for obtaining the solution of the fluid–structure interaction problem.
## 2 The Membrane Model
A membrane is basically a 2D solid that "lives" in a 3D environment. Given the lack of flexural stiffness, membranes can react to applied loads only by using their in-plane resistance, "choosing" the spatial disposition that is best suited to resist the external forces. The consequence is that membrane structures tend naturally to find the optimal shape (compatible with the applied constraints) for any given load. In this search for shape, they typically undergo large displacements and rotations.
From a numerical point of view, this reflects an intrinsic geometrical non-linearity that has to be taken into account in the formulation of the finite element model. In particular, an efficient membrane element should be able to represent correctly arbitrary rotations, both of the element as a whole and internally to each element. The possibility of unrestricted rigid body motions constitutes a source of ill-conditioning, or even of singularity, of the tangent stiffness matrix, introducing the need for carefully designed solution procedures.
### 2.1 Finite Element Model
The current section describes a finite element model that meets all of the requirements for the correct simulation of general membrane systems. The derivation makes use exclusively of orthogonal bases, simplifying the calculations and allowing all terms to be expressed directly in Voigt notation, which eases the implementation.
Einstein's summation over repeated indices is assumed unless specified otherwise:
• ${\textstyle \mathbf {x_{I}} =\left\{x_{I},y_{I},z_{I}\right\}^{T}}$ is the position vector of the I–th node in the cartesian space (3D space)
• ${\textstyle \mathbf {\xi } =\left\{\xi ,\eta \right\}^{T}}$ describes the position of a point in the local system of coordinates
• Capital letters are used to refer to the reference configuration
• ${\textstyle N_{I}(\mathbf {\xi } )}$ is the value of the shape function centered on node I at the point of local coordinates ${\textstyle \mathbf {\xi } }$
The use of the standard iso-parametric approach allows the position of any point to be expressed as ${\textstyle \mathbf {x} (\mathbf {\xi } )=N_{I}(\mathbf {\xi } )\mathbf {x} _{I}}$.
In the usual assumptions of the continuum mechanics it is always possible to define the transformation between the local system of coordinates and the cartesian system as
${\displaystyle \left\{\xi {+}d\xi ,\eta \right\}^{T}-\left\{\xi ,\eta \right\}^{T}\rightarrow {\frac {\partial \mathbf {x} (\xi ,\eta )}{\partial \xi }}d\xi =\mathbf {g_{\xi }} d\xi }$
(1)
${\displaystyle \left\{\xi ,\eta {+}d\eta \right\}^{T}-\left\{\xi ,\eta \right\}^{T}\rightarrow {\frac {\partial \mathbf {x} (\xi ,\eta )}{\partial \eta }}d\eta =\mathbf {g_{\eta }} d\eta }$
(2)
in which we introduced the symbols
${\displaystyle \mathbf {g_{\xi }} ={\frac {\partial N_{I}(\xi ,\eta )}{\partial \xi }}\mathbf {x_{I}} }$
(3)
${\displaystyle \mathbf {g_{\eta }} ={\frac {\partial N_{I}(\xi ,\eta )}{\partial \eta }}\mathbf {x_{I}} }$
(4)
Since the vectors ${\textstyle \mathbf {g_{\xi }} }$ and ${\textstyle \mathbf {g_{\eta }} }$ of the 3D space can be considered linearly independent (otherwise interpenetration or self-contact would manifest), it follows immediately that they can be used in the construction of a base of the 3D space. In particular, an orthogonal base can be defined as
${\displaystyle \mathbf {v_{1}} ={\frac {\mathbf {g_{\xi }} }{\left\|\mathbf {g_{\xi }} \right\|}}}$
(5)
${\displaystyle \mathbf {n} ={\frac {\mathbf {\mathbf {\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {\mathbf {g_{\eta }} } } }{\left\|\mathbf {\mathbf {\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {\mathbf {g_{\eta }} } } \right\|}}\rightarrow \mathbf {v_{2}} =\mathbf {\mathbf {\mathbf {n} } } \times \mathbf {\mathbf {\mathbf {v_{1}} } } }$
(6)
${\displaystyle \mathbf {v_{3}} =\mathbf {\mathbf {\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {\mathbf {g_{\eta }} } } }$
(7)
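The construction (5)–(7) can be checked numerically. Below is a small self-contained sketch; the tangent vectors `g_xi` and `g_eta` are assumed illustrative values, and minimal helpers replace a linear-algebra library:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    return math.sqrt(dot(a, a))

def local_base(g_xi, g_eta):
    """Orthogonal base of Eqs. (5)-(7): v1 along g_xi, n normal to the
    tangent plane, v2 = n x v1; v3 = g_xi x g_eta is not normalized."""
    v1 = tuple(x / norm(g_xi) for x in g_xi)
    v3 = cross(g_xi, g_eta)
    n = tuple(x / norm(v3) for x in v3)
    v2 = cross(n, v1)
    return v1, v2, n, v3

# Assumed tangent vectors of a skewed surface patch (illustrative only).
g_xi, g_eta = (2.0, 0.0, 0.0), (1.0, 1.0, 1.0)
v1, v2, n, v3 = local_base(g_xi, g_eta)
```

By construction v1, v2 and n are mutually orthogonal, and both tangent vectors are orthogonal to n, which is exactly what the text requires of the local base.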
Vectors ${\textstyle \mathbf {v_{1}} }$ and ${\textstyle \mathbf {v_{2}} }$ describe the local tangent plane to the membrane, while the third base vector is always orthogonal to it. It follows that a local transformation rule can be defined linking the local coordinates ${\textstyle \mathbf {\xi } }$ and the coordinates ${\textstyle \mathbf {\widehat {x}} }$ in the local tangent-plane base. This can be achieved by considering that an increment ${\textstyle \left\{d\xi ,0\right\}}$ maps to the new base as
${\displaystyle {\begin{pmatrix}d\xi \\0\end{pmatrix}}\rightarrow {\begin{pmatrix}\mathbf {\mathbf {\mathbf {g_{\xi }} } } \bullet \mathbf {\mathbf {\mathbf {v_{1}} } } \\\mathbf {\mathbf {\mathbf {g_{\xi }} } } \bullet \mathbf {\mathbf {\mathbf {v_{2}} } } \end{pmatrix}}d\xi \,;\,{\begin{pmatrix}0\\d\eta \end{pmatrix}}\rightarrow {\begin{pmatrix}\mathbf {\mathbf {\mathbf {g_{\eta }} } } \bullet \mathbf {\mathbf {\mathbf {v_{1}} } } \\\mathbf {\mathbf {\mathbf {g_{\eta }} } } \bullet \mathbf {\mathbf {\mathbf {v_{2}} } } \end{pmatrix}}d\eta }$
(8)
this is synthesized by the definition of the linear map
${\displaystyle {\begin{pmatrix}d{\widehat {x}}_{1}\\d{\widehat {x}}_{2}\end{pmatrix}}={\begin{pmatrix}\mathbf {\mathbf {\mathbf {g_{\xi }} } } \bullet \mathbf {\mathbf {\mathbf {v_{1}} } } &\mathbf {\mathbf {\mathbf {g_{\eta }} } } \bullet \mathbf {\mathbf {\mathbf {v_{1}} } } \\\mathbf {\mathbf {\mathbf {g_{\xi }} } } \bullet \mathbf {\mathbf {\mathbf {v_{2}} } } &\mathbf {\mathbf {\mathbf {g_{\eta }} } } \bullet \mathbf {\mathbf {\mathbf {v_{2}} } } \end{pmatrix}}{\begin{pmatrix}d\xi \\d\eta \end{pmatrix}}\rightarrow d{\widehat {\mathbf {x} }}=\mathbf {j} d\mathbf {\xi } }$
(9)
It should be noted that ${\textstyle \mathbf {\mathbf {\mathbf {g_{\xi }} } } \bullet \mathbf {\mathbf {\mathbf {v_{3}} } } }$ and ${\textstyle \mathbf {\mathbf {\mathbf {g_{\eta }} } } \bullet \mathbf {\mathbf {\mathbf {v_{3}} } } }$ are identically zero; consequently, no components are lost in representing the membrane in the new coordinate system. Taking into account the definition of the base vectors, the tensor ${\textstyle \mathbf {j} }$ becomes (after some calculations)
${\displaystyle \mathbf {j} ={\begin{pmatrix}\left\|\mathbf {g_{\xi }} \right\|&{\frac {\mathbf {\mathbf {\mathbf {g_{\xi }} } } \bullet \mathbf {\mathbf {\mathbf {g_{\eta }} } } }{\left\|\mathbf {g_{\xi }} \right\|}}\\0&{\frac {\left\|\mathbf {v_{3}} \right\|}{\left\|\mathbf {g_{\xi }} \right\|}}\end{pmatrix}}}$
(10)
and its determinant
${\displaystyle det(\mathbf {j} )=\left\|\mathbf {v_{3}} \right\|=\left\|\mathbf {\mathbf {\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {\mathbf {g_{\eta }} } } \right\|}$
(11)
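A numerical check of (10) and (11) on assumed tangent vectors confirms that det(j) equals the norm of the cross product:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a):
    return math.sqrt(dot(a, a))

def j_matrix(g_xi, g_eta):
    """Upper-triangular map of Eq. (10) from (d_xi, d_eta) to the
    local tangent-plane coordinates, as a 2x2 nested tuple."""
    n_xi = norm(g_xi)
    v3_norm = norm(cross(g_xi, g_eta))
    return ((n_xi, dot(g_xi, g_eta) / n_xi),
            (0.0, v3_norm / n_xi))

# Assumed illustrative tangent vectors.
g_xi, g_eta = (2.0, 0.0, 0.0), (1.0, 1.0, 1.0)
j = j_matrix(g_xi, g_eta)
det_j = j[0][0] * j[1][1] - j[0][1] * j[1][0]  # Eq. (11): ||g_xi x g_eta||
```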
As the interest is focused on the purely membranal behavior, there is no need to take into account the deformation of the structure over the thickness, as this can be calculated "a posteriori" once the deformation of the mid-plane is known. On the basis of this observation, the deformation gradient, which describes the deformation of the membrane as a 3D body,
${\displaystyle \mathbf {F} _{3\times {3}}={\frac {\partial \mathbf {x} }{\partial \mathbf {X} }}}$
(12)
can be replaced by
${\displaystyle \mathbf {\widehat {F}} _{2\times {2}}={\frac {\partial \mathbf {\widehat {x}} }{\partial \mathbf {\widehat {X}} }}={\frac {\partial \mathbf {\widehat {x}} }{\partial \xi }}{\frac {\partial \xi }{\partial \mathbf {\widehat {X}} }}=\mathbf {j} \mathbf {J} ^{-1}}$
(13)
taking into account the behavior over the thickness in the definition of the (two-dimensional) constitutive model to be used (for example, making the assumption of plane stress). The symbol ${\textstyle \mathbf {J} }$ is used here and in the following to indicate ${\textstyle \mathbf {j} }$ calculated in the reference position.
Under these considerations, the subsequent development of the finite element follows closely the standard procedure for a non-linear 2D finite element, the only difference being that the local base changes over the whole domain, which makes the linearization slightly more involved.
To proceed further we therefore need to write the right Cauchy–Green strain tensor ${\textstyle \mathbf {C} =\mathbf {F} ^{T}\mathbf {F} }$, which takes the form
${\displaystyle \mathbf {C} =\left(\mathbf {J} ^{-T}\mathbf {j} ^{T}\mathbf {j} \mathbf {J} ^{-1}\right)=\left(\mathbf {G} ^{T}\mathbf {g} \mathbf {G} \right)}$
(14)
where we introduced the symbols ${\textstyle \mathbf {g} =\mathbf {j} ^{T}\mathbf {j} }$ and ${\textstyle \mathbf {G} =\mathbf {J} ^{-1}}$. Operator ${\textstyle \mathbf {g} }$ takes, after some calculations, the simple form
${\displaystyle \mathbf {g} ={\begin{pmatrix}\mathbf {\mathbf {\mathbf {g_{\xi }} } } \bullet \mathbf {\mathbf {\mathbf {g_{\xi }} } } &\mathbf {\mathbf {\mathbf {g_{\eta }} } } \bullet \mathbf {\mathbf {\mathbf {g_{\xi }} } } \\\mathbf {\mathbf {\mathbf {g_{\xi }} } } \bullet \mathbf {\mathbf {\mathbf {g_{\eta }} } } &\mathbf {\mathbf {\mathbf {g_{\eta }} } } \bullet \mathbf {\mathbf {\mathbf {g_{\eta }} } } \end{pmatrix}}}$
(15)
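Equations (13)–(15) chain together: C computed directly as F^T F must coincide with the pull-back G^T g G of (14). This can be verified numerically; the matrices J and j below are assumed illustrative values (upper-triangular, as in (10)):

```python
def matmul2(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

def transpose2(m):
    return ((m[0][0], m[1][0]), (m[0][1], m[1][1]))

def inv2(m):
    d = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return ((m[1][1] / d, -m[0][1] / d),
            (-m[1][0] / d, m[0][0] / d))

# Assumed local maps in the reference (J) and current (j) configuration.
J = ((2.0, 0.5), (0.0, 1.5))
j = ((2.2, 0.7), (0.0, 1.4))

G = inv2(J)
F = matmul2(j, G)                                 # Eq. (13)
C_direct = matmul2(transpose2(F), F)              # C = F^T F
g = matmul2(transpose2(j), j)                     # Eq. (15)
C_pulled = matmul2(transpose2(G), matmul2(g, G))  # Eq. (14)
```

The two computations of C agree up to rounding, which is the identity stated in (14).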
From the definition of the Green–Lagrange strain tensor ${\textstyle \mathbf {E} ={\frac {1}{2}}(\mathbf {C-I} )}$ we obtain immediately ${\textstyle \delta \mathbf {E} ={\frac {1}{2}}\delta \mathbf {C} }$. This allows the equation of virtual work to be written in compact form as (taking into consideration only body forces and pressure forces)
${\displaystyle \delta W_{int}=\delta W_{ext}+\delta W_{press}}$
(16)
${\displaystyle {\frac {h_{0}}{2}}\int _{\Omega }{\delta \mathbf {C} :\mathbf {S} }=h_{0}\int _{\Omega }{\mathbf {\mathbf {\delta \mathbf {x} } } \bullet \mathbf {\mathbf {\mathbf {b} } } }+\int _{\omega }{p\mathbf {\mathbf {\delta \mathbf {x} } } \bullet \mathbf {\mathbf {\mathbf {n} } } }}$
(17)
#### 2.1.1 Internal Work
The term ${\textstyle {\frac {h_{0}}{2}}\int _{\Omega }{\delta \mathbf {C} :\mathbf {S} }}$ describes the work of the internal forces during the deformation process. The operator ${\textstyle \mathbf {G} =\mathbf {J} ^{-1}}$ refers to the reference configuration and is therefore strictly constant; it follows immediately that
${\displaystyle \delta \mathbf {C} =\mathbf {G} ^{T}\delta \mathbf {g} \mathbf {G} }$
(18)
The term ${\textstyle \delta \mathbf {C} :\mathbf {S} }$ becomes in Einstein's notation
${\displaystyle {\frac {1}{2}}\delta \mathbf {C} :\mathbf {S} ={\frac {1}{2}}\delta C_{IJ}S_{IJ}={\frac {1}{2}}\delta g_{ij}G_{iI}G_{jJ}S_{IJ}}$
(19)
introducing the symbols
${\displaystyle {\frac {1}{2}}\delta g_{ij}\rightarrow {\frac {1}{2}}\delta \left\{\mathbf {g} \right\}={\frac {1}{2}}{\begin{pmatrix}\delta g_{11}\\\delta g_{22}\\2\delta g_{12}\end{pmatrix}}\,;\,S_{IJ}\rightarrow \left\{\mathbf {S} \right\}={\begin{pmatrix}S_{11}\\S_{22}\\S_{12}\end{pmatrix}}}$
(20)
${\displaystyle G_{iI}G_{jJ}\rightarrow \left[\mathbf {Q} \right]^{T}={\begin{pmatrix}(G_{11})^{2}&(G_{12})^{2}&2G_{11}G_{12}\\0&(G_{22})^{2}&0\\0&G_{12}G_{22}&G_{11}G_{22}\end{pmatrix}}}$
(21)
it is possible to express (19) in Voigt form as
${\displaystyle {\frac {1}{2}}\delta \mathbf {C} :\mathbf {S} ={\frac {1}{2}}\left\{\mathbf {\delta g} \right\}^{T}\left[\mathbf {Q} \right]^{T}\left\{\mathbf {S} \right\}={\frac {1}{2}}\left\{\mathbf {\delta g} \right\}^{T}\left\{\mathbf {s} \right\}\,;\,\left\{\mathbf {s} \right\}=\left[\mathbf {Q} \right]^{T}\left\{\mathbf {S} \right\}}$
(22)
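Eq. (21) is written for a local basis in which ${\textstyle \mathbf {G} =\mathbf {J} ^{-1}}$ is upper triangular. Under that assumption the mapping ${\textstyle \left\{\mathbf {s} \right\}=\left[\mathbf {Q} \right]^{T}\left\{\mathbf {S} \right\}}$ can be sketched as follows; the check at the bottom verifies numerically that ${\textstyle \delta \mathbf {C} :\mathbf {S} =\left\{\mathbf {\delta g} \right\}^{T}\left\{\mathbf {s} \right\}}$ (function names and numerical values are illustrative assumptions):

```python
import numpy as np

def q_matrix_T(G):
    """[Q]^T of Eq. (21); assumes G = J^{-1} is upper triangular (G21 = 0)."""
    G11, G12, G22 = G[0, 0], G[0, 1], G[1, 1]
    return np.array([[G11**2, G12**2,  2*G11*G12],
                     [0.0,    G22**2,  0.0],
                     [0.0,    G12*G22, G11*G22]])

# numerical check of delta_C : S = {delta_g}^T [Q]^T {S}
G  = np.array([[2.0, 0.3], [0.0, 1.5]])
dg = np.array([[0.1, 0.02], [0.02, -0.05]])   # symmetric variation of g
S  = np.array([[3.0, 1.0], [1.0, -2.0]])      # symmetric PK2 stress
dC = G.T @ dg @ G                             # pull-back, Eq. (18)
lhs = np.tensordot(dC, S)                     # double contraction delta_C : S
dg_voigt = np.array([dg[0, 0], dg[1, 1], 2*dg[0, 1]])
S_voigt  = np.array([S[0, 0], S[1, 1], S[0, 1]])
rhs = dg_voigt @ q_matrix_T(G) @ S_voigt
```

For ${\textstyle \mathbf {G} =\mathbf {I} }$ the matrix ${\textstyle \left[\mathbf {Q} \right]^{T}}$ reduces to the identity, as expected.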
Considering the definition (15), introducing the symbol ${\textstyle \left\{\mathbf {\delta x} \right\}^{T}={\begin{pmatrix}\left\{\mathbf {\delta x_{1}} \right\}^{T}&\ldots &\left\{\mathbf {\delta x_{k}} \right\}^{T}\end{pmatrix}}}$ and taking into account the isoparametric approximation, one obtains
${\displaystyle {\frac {1}{2}}\delta g_{11}={\frac {\partial N_{I}}{\partial \xi }}\delta \mathbf {\mathbf {\mathbf {x} _{I}} } \bullet \mathbf {\mathbf {\mathbf {g_{\xi }} } } =\left\{\mathbf {\delta x} \right\}^{T}{\begin{pmatrix}{\frac {\partial N_{1}}{\partial \xi }}\left\{\mathbf {\mathbf {g_{\xi }} } \right\}\\\ldots \\{\frac {\partial N_{k}}{\partial \xi }}\left\{\mathbf {\mathbf {g_{\xi }} } \right\}\end{pmatrix}}}$
(23)
${\displaystyle {\frac {1}{2}}\delta g_{22}={\frac {\partial N_{I}}{\partial \xi }}\delta \mathbf {\mathbf {\mathbf {x} _{I}} } \bullet \mathbf {\mathbf {\mathbf {g_{\eta }} } } =\left\{\mathbf {\delta x} \right\}^{T}{\begin{pmatrix}{\frac {\partial N_{1}}{\partial \eta }}\left\{\mathbf {\mathbf {g_{\eta }} } \right\}\\\ldots \\{\frac {\partial N_{k}}{\partial \eta }}\left\{\mathbf {\mathbf {g_{\eta }} } \right\}\end{pmatrix}}}$
(24)
${\displaystyle {\frac {1}{2}}\delta 2g_{12}={\frac {\partial N_{I}}{\partial \eta }}\delta \mathbf {\mathbf {\mathbf {x} _{I}} } \bullet \mathbf {\mathbf {\mathbf {g_{\xi }} } } +{\frac {\partial N_{I}}{\partial \xi }}\delta \mathbf {\mathbf {\mathbf {x} _{I}} } \bullet \mathbf {\mathbf {\mathbf {g_{\eta }} } } =\left\{\mathbf {\delta x} \right\}^{T}{\begin{pmatrix}{\frac {\partial N_{1}}{\partial \xi }}\left\{\mathbf {\mathbf {g_{\eta }} } \right\}+{\frac {\partial N_{1}}{\partial \eta }}\left\{\mathbf {\mathbf {g_{\xi }} } \right\}\\\ldots \\{\frac {\partial N_{k}}{\partial \xi }}\left\{\mathbf {\mathbf {g_{\eta }} } \right\}+{\frac {\partial N_{k}}{\partial \eta }}\left\{\mathbf {\mathbf {g_{\xi }} } \right\}\\\end{pmatrix}}}$
(25)
by defining the matrix
${\displaystyle \left[\mathbf {b} \right]^{T}={\begin{pmatrix}{\frac {\partial N_{1}}{\partial \xi }}\left\{\mathbf {\mathbf {g_{\xi }} } \right\}&{\frac {\partial N_{1}}{\partial \eta }}\left\{\mathbf {\mathbf {g_{\eta }} } \right\}&{\frac {\partial N_{1}}{\partial \xi }}\left\{\mathbf {\mathbf {g_{\eta }} } \right\}+{\frac {\partial N_{1}}{\partial \eta }}\left\{\mathbf {\mathbf {g_{\xi }} } \right\}\\\ldots &\ldots &\ldots \\{\frac {\partial N_{k}}{\partial \xi }}\left\{\mathbf {\mathbf {g_{\xi }} } \right\}&{\frac {\partial N_{k}}{\partial \eta }}\left\{\mathbf {\mathbf {g_{\eta }} } \right\}&{\frac {\partial N_{k}}{\partial \xi }}\left\{\mathbf {\mathbf {g_{\eta }} } \right\}+{\frac {\partial N_{k}}{\partial \eta }}\left\{\mathbf {\mathbf {g_{\xi }} } \right\}\\\end{pmatrix}}}$
(26)
it is then possible to write
${\displaystyle {\frac {1}{2}}\left\{\mathbf {\delta C} \right\}^{T}\left\{\mathbf {S} \right\}=\left\{\mathbf {\delta E} \right\}^{T}\left\{\mathbf {S} \right\}=\left\{\mathbf {\delta x} \right\}^{T}\left[\mathbf {b} \right]^{T}\left[\mathbf {Q} \right]^{T}\left\{\mathbf {S} \right\}}$
(27)
Defining the symbol ${\textstyle \left[\mathbf {B} \right]}$
${\displaystyle \left[\mathbf {B} \right]=\left[\mathbf {Q} \right]\left[\mathbf {b} \right]}$
(28)
we finally obtain
${\displaystyle \left\{\mathbf {f_{int}} \right\}=\int _{\Omega }{h_{0}\left[\mathbf {B} \right]^{T}\left\{\mathbf {S} \right\}d\Omega }}$ (29) ${\displaystyle \delta W_{int}=\left\{\mathbf {\delta x} \right\}^{T}\left\{\mathbf {f_{int}} \right\}}$ (30)
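The assembly of ${\textstyle \left[\mathbf {b} \right]}$ (Eq. 26) and of the internal force vector (Eq. 29) can be sketched as follows for a single integration point. The one-point quadrature, the linear-triangle shape-function derivatives and all numerical values are assumptions for illustration only:

```python
import numpy as np

def b_matrix(dN_dxi, dN_deta, g_xi, g_eta):
    """Assemble [b] of Eq. (26), stored as 3 Voigt rows x (3*k) dofs
    (i.e. the transpose of the layout printed in the paper)."""
    k = len(dN_dxi)
    b = np.zeros((3, 3 * k))
    for I in range(k):
        cols = slice(3 * I, 3 * I + 3)
        b[0, cols] = dN_dxi[I] * g_xi
        b[1, cols] = dN_deta[I] * g_eta
        b[2, cols] = dN_dxi[I] * g_eta + dN_deta[I] * g_xi
    return b

def f_int(h0, area0, B, S_voigt):
    """Eq. (29) with one Gauss point: f_int = h0 * A0 * B^T {S},
    where B = Q b already includes the mapping of Eq. (28)."""
    return h0 * area0 * B.T @ S_voigt

# linear triangle: dN/dxi = (-1, 1, 0), dN/deta = (-1, 0, 1)
g_xi, g_eta = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
B = b_matrix([-1.0, 1.0, 0.0], [-1.0, 0.0, 1.0], g_xi, g_eta)
```

A zero stress state produces, as it must, a zero internal force vector.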
#### 2.1.2 External Work
Derivation of the work of external conservative forces follows the standard procedure and can be found in any book on nonlinear finite elements. The expression of the work of the follower (pressure) forces is, on the other hand, a little more involved. In the following the pressure is considered constant, the non-linearity being introduced by the change of direction of the normal. For the derivation of the pressure contributions it is much easier to perform the integration over the actual domain than over the reference one.
${\displaystyle \delta W_{pr}=\int _{\omega }{p\mathbf {\mathbf {\delta \mathbf {x} } } \bullet \mathbf {\mathbf {\mathbf {n} } } }d\omega =\int _{\xi ,\eta }{p\mathbf {\mathbf {\delta \mathbf {x} } } \bullet \mathbf {\mathbf {\mathbf {n} } } }det(j)d\xi d\eta }$
(31)
taking into account the definition of the base vectors (6)(7), and considering (11), we obtain immediately
${\displaystyle \left\{\mathbf {f_{I}} \right\}=\int _{\xi ,\eta }{N_{I}(\xi ,\eta )p(\xi ,\eta )\mathbf {v_{3}} (\xi ,\eta )d\xi d\eta }}$
(32)
${\displaystyle \left\{\mathbf {f_{pr}} \right\}={\begin{pmatrix}\left\{\mathbf {f_{1}} \right\}^{T}&\ldots &\left\{\mathbf {f_{k}} \right\}^{T}\end{pmatrix}}^{T}}$
(33)
${\displaystyle \delta W_{pr}=\left\{\mathbf {\delta \mathbf {x} } \right\}^{T}\left\{\mathbf {f_{pr}} \right\}}$
(34)
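The nodal follower-force vector of Eq. (32) can be sketched for a 3-node triangle as below. The one-point quadrature at the centroid (weight 1/2 on the unit parent triangle) is an illustrative choice of integration rule, and the numerical values are assumptions:

```python
import numpy as np

def pressure_forces_tri(p, x_nodes):
    """Follower-pressure nodal forces, Eq. (32), for a linear triangle
    with one Gauss point at the centroid."""
    x1, x2, x3 = x_nodes
    g_xi  = x2 - x1                 # Eq. (3): sum_I dN_I/dxi  * x_I
    g_eta = x3 - x1                 # Eq. (4): sum_I dN_I/deta * x_I
    v3 = np.cross(g_xi, g_eta)      # non-normalized normal = det(j) * n
    N = np.array([1/3, 1/3, 1/3])   # shape functions at the centroid
    w = 0.5                         # Gauss weight on the parent triangle
    return np.outer(N, w * p * v3)  # row I is f_I

# flat unit right triangle in the xy-plane, p = 2
f = pressure_forces_tri(2.0, [np.zeros(3),
                              np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 1.0, 0.0])])
```

For a flat element the nodal forces sum to ${\textstyle p\cdot A\cdot \mathbf {n} }$, here ${\textstyle 2\cdot 0.5=1}$ along the z axis.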
#### 2.1.3 Linearization
Equation (16) is nonlinear; its practical use therefore requires linearization. The best rate of convergence is theoretically given by the Newton–Raphson technique, which guarantees quadratic convergence to the solution. Defining ${\textstyle \Psi =\delta W_{int}-\delta W_{ext}-\delta W_{pr}}$, each Newton–Raphson step takes the form
${\displaystyle d\Psi +\Psi =0}$
(35)
The term ${\textstyle \Psi }$ can be made explicit using expressions (30) and (34); the only missing term is therefore the differential ${\textstyle d\Psi }$, which can be evaluated from the linearization of the different contributions.
Linearization of internal work
The term connected to the internal works can be linearized as follows
${\displaystyle d\left(W_{int}\right)=d\left({\frac {h_{0}}{2}}\int _{\Omega }{\delta \mathbf {C} :\mathbf {S} }\right)=}$ ${\displaystyle ={\frac {h_{0}}{2}}\int _{\Omega }{d\left(\delta \mathbf {C} \right):\mathbf {S} }+{\frac {h_{0}}{2}}\int _{\Omega }{\delta \mathbf {C} :d\left(\mathbf {S} \right)}}$ (36)
the first terms gives, by using (22)
${\displaystyle {\frac {h_{0}}{2}}\int _{\Omega }{d\left(\delta \mathbf {C} \right):\mathbf {S} }={\frac {h_{0}}{2}}\int _{\Omega }{d\left(\left\{\mathbf {\delta g} \right\}^{T}\right)\left\{\mathbf {s} \right\}}=}$ ${\displaystyle ={\frac {h_{0}}{2}}\int _{\Omega }{d\left({\frac {\partial \left\{\mathbf {\delta g} \right\}^{T}}{\partial \left\{\mathbf {x} \right\}}}d\left\{\mathbf {x} \right\}\right)\left\{\mathbf {s} \right\}}}$ (37)
now it can be seen that
${\displaystyle d\left({\frac {1}{2}}\left\{\mathbf {\delta g} \right\}^{T}\right)\left\{\mathbf {s} \right\}={\begin{pmatrix}\mathbf {\delta g_{\xi }} \bullet \mathbf {dg_{\xi }} &\mathbf {\delta g_{\eta }} \bullet \mathbf {dg_{\eta }} &\mathbf {\delta g_{\eta }} \bullet \mathbf {dg_{\xi }} +\mathbf {\delta g_{\xi }} \bullet \mathbf {dg_{\eta }} \end{pmatrix}}\left\{\mathbf {s} \right\}=s_{11}\,\mathbf {\delta g_{\xi }} \bullet \mathbf {dg_{\xi }} +s_{22}\,\mathbf {\delta g_{\eta }} \bullet \mathbf {dg_{\eta }} +s_{12}\left(\mathbf {\delta g_{\eta }} \bullet \mathbf {dg_{\xi }} +\mathbf {\delta g_{\xi }} \bullet \mathbf {dg_{\eta }} \right)}$ (38)
Substitution of the shape functions gives immediately a set of equalities of the form
${\displaystyle s_{11}\mathbf {\mathbf {\delta \mathbf {g_{\xi }} } } \bullet \mathbf {\mathbf {d\mathbf {g_{\xi }} } } =s_{11}{\frac {\partial N_{I}}{\partial \xi }}{\frac {\partial N_{J}}{\partial \xi }}\delta _{ij}\mathbf {\mathbf {\delta x_{I}} } \bullet \mathbf {\mathbf {dx_{jJ}} } =s_{11}{\frac {\partial N_{I}}{\partial \xi }}{\frac {\partial N_{J}}{\partial \xi }}\delta _{ij}\delta x_{iI}dx_{jJ}}$
(39)
which makes it possible to write
${\displaystyle d\left({\frac {1}{2}}\left\{\mathbf {\delta g} \right\}^{T}\right)\left\{\mathbf {s} \right\}=\left(s_{11}{\frac {\partial N_{I}}{\partial \xi }}{\frac {\partial N_{J}}{\partial \xi }}+s_{22}{\frac {\partial N_{I}}{\partial \eta }}{\frac {\partial N_{J}}{\partial \eta }}+s_{12}\left({\frac {\partial N_{I}}{\partial \eta }}{\frac {\partial N_{J}}{\partial \xi }}+{\frac {\partial N_{I}}{\partial \xi }}{\frac {\partial N_{J}}{\partial \eta }}\right)\right)\delta _{ij}\,\delta x_{iI}\,dx_{jJ}}$ (40)
introducing the vectors ${\textstyle \mathbf {a} ={\begin{pmatrix}{\frac {\partial N_{1}}{\partial \xi }}&\ldots &{\frac {\partial N_{k}}{\partial \xi }}\end{pmatrix}}}$ and ${\textstyle \mathbf {b} ={\begin{pmatrix}{\frac {\partial N_{1}}{\partial \eta }}&\ldots &{\frac {\partial N_{k}}{\partial \eta }}\end{pmatrix}}}$ together with the new tensor
${\displaystyle \mathbf {A} =\left(s_{11}\mathbf {a} \mathbf {a} +s_{22}\mathbf {b} \mathbf {b} +s_{12}\left(\mathbf {b} \mathbf {a} +\mathbf {a} \mathbf {b} \right)\right)\,;\,\mathbf {a} \mathbf {a} =\mathbf {a} \otimes \mathbf {a} }$
(41)
we can greatly simplify the expression to the form
${\displaystyle d\left({\frac {1}{2}}\left\{\mathbf {\delta g} \right\}^{T}\right)\left\{\mathbf {s} \right\}=A_{IJ}\,\delta _{ij}\,\delta x_{iI}\,dx_{jJ}}$
(42)
or, in Voigt form
${\displaystyle d\left({\frac {1}{2}}\left\{\mathbf {\delta g} \right\}^{T}\right)\left\{\mathbf {s} \right\}={\begin{pmatrix}\left\{\mathbf {\delta x_{1}} \right\}^{T}&\ldots &\left\{\mathbf {\delta x_{k}} \right\}^{T}\end{pmatrix}}{\begin{pmatrix}A_{11}\left[\mathbf {I} \right]&\ldots &A_{1k}\left[\mathbf {I} \right]\\\ldots &\ldots &\ldots \\A_{k1}\left[\mathbf {I} \right]&\ldots &A_{kk}\left[\mathbf {I} \right]\\\end{pmatrix}}{\begin{pmatrix}\left\{\mathbf {dx_{1}} \right\}\\\ldots \\\left\{\mathbf {dx_{k}} \right\}\end{pmatrix}}=\left\{\mathbf {\delta x} \right\}^{T}\left[\mathbf {K_{geo}} \right]\left\{\mathbf {dx} \right\}}$ (43)
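The geometric stiffness of Eq. (43) can be assembled compactly with a Kronecker product. A minimal sketch, with illustrative function name and values:

```python
import numpy as np

def k_geo(s_voigt, dN_dxi, dN_deta):
    """Geometric stiffness of Eq. (43): K_geo = A kron I3, with
    A = s11 a a + s22 b b + s12 (b a + a b)   (Eq. 41)."""
    s11, s22, s12 = s_voigt
    a = np.asarray(dN_dxi)
    b = np.asarray(dN_deta)
    A = (s11 * np.outer(a, a) + s22 * np.outer(b, b)
         + s12 * (np.outer(b, a) + np.outer(a, b)))
    return np.kron(A, np.eye(3))   # expand each A_IJ into A_IJ * [I]

# 3-node linear triangle example
K = k_geo([3.0, -1.0, 0.5], [-1.0, 1.0, 0.0], [-1.0, 0.0, 1.0])
```

Since ${\textstyle \mathbf {A} }$ is symmetric by construction, the resulting geometric stiffness is symmetric as well.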
The derivation of the “material” contribution to the stiffness matrix follows the standard path. Assuming, as usual,
${\displaystyle d\mathbf {S} ={\frac {\partial \mathbf {S} }{\partial \mathbf {E} }}:d\mathbf {E} \rightarrow \left\{\mathbf {dS} \right\}=\left[\mathbf {D_{tan}} \right]\left\{\mathbf {dE} \right\}}$
(44)
we obtain immediately
${\displaystyle \int _{\Omega }{{\frac {h_{0}}{2}}\delta \mathbf {C} :d\left(\mathbf {S} \right)}=\left(\int _{\Omega }{h_{0}\delta \left\{\mathbf {x} \right\}^{T}\left[\mathbf {B} \right]^{T}\left[\mathbf {D_{tan}} \right]\left[\mathbf {B} \right]d\Omega }\right)\left\{\mathbf {dx} \right\}}$ ${\displaystyle =\delta \left\{\mathbf {x} \right\}^{T}\left[\mathbf {K_{mat}} \right]\left\{\mathbf {dx} \right\}}$
(45)
${\displaystyle \left[\mathbf {K_{mat}} \right]=\int _{\Omega }{h_{0}\left[\mathbf {B} \right]^{T}\left[\mathbf {D_{tan}} \right]\left[\mathbf {B} \right]d\Omega }}$
(46)
Linearization of external forces

The linearization of the work ${\textstyle W_{ext}}$ is not needed, as it describes the work of constant forces. The only missing term is therefore the one relative to the work of the follower forces.
${\displaystyle d\left(\int _{\omega }{p\mathbf {\mathbf {\delta \mathbf {x} } } \bullet \mathbf {\mathbf {\mathbf {n} } } d\omega }\right)\rightarrow \delta \left\{\mathbf {x_{I}} \right\}^{T}d\mathbf {f_{I}} =\delta \left\{\mathbf {x_{I}} \right\}^{T}d\left(\int _{\omega }{pN_{I}\left\{\mathbf {\mathbf {\mathbf {\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {\mathbf {g_{\eta }} } } } \right\}d\omega }\right)}$
(47)
Differentiating ${\textstyle pN_{I}\left\{\mathbf {\mathbf {\mathbf {\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {\mathbf {g_{\eta }} } } } \right\}}$ we obtain
${\displaystyle d\left(pN_{I}\left\{\mathbf {\mathbf {\mathbf {\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {\mathbf {g_{\eta }} } } } \right\}\right)=pN_{I}\left\{\mathbf {\mathbf {\mathbf {d\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {\mathbf {g_{\eta }} } } } \right\}+pN_{I}\left\{\mathbf {\mathbf {\mathbf {\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {d\mathbf {g_{\eta }} } } } \right\}=}$ ${\displaystyle =pN_{I}\left\{\mathbf {\mathbf {\mathbf {\mathbf {g_{\xi }} } } \times \mathbf {\mathbf {d\mathbf {g_{\eta }} } } } \right\}-pN_{I}\left\{\mathbf {\mathbf {\mathbf {\mathbf {g_{\eta }} } } \times \mathbf {\mathbf {d\mathbf {g_{\xi }} } } } \right\}}$ (48)
Considering that it is possible to write the cross product of two vectors in Voigt format as
${\displaystyle \mathbf {c} =\mathbf {\mathbf {a} } \times \mathbf {\mathbf {b} } \rightarrow {\begin{pmatrix}c_{1}\\c_{2}\\c_{3}\end{pmatrix}}={\begin{pmatrix}0&-a_{3}&a_{2}\\a_{3}&0&-a_{1}\\-a_{2}&a_{1}&0\end{pmatrix}}{\begin{pmatrix}b_{1}\\b_{2}\\b_{3}\end{pmatrix}}\rightarrow \left\{\mathbf {c} \right\}=\left[\mathbf {a\times } \right]\left\{\mathbf {b} \right\}}$
(49)
and taking in account (3) and (4) we obtain
${\displaystyle d\left(pN_{I}\left\{\mathbf {\mathbf {g_{\xi }} \times \mathbf {g_{\eta }} } \right\}\right)=\left(pN_{I}{\frac {\partial N_{J}}{\partial \eta }}\left[\mathbf {\mathbf {g_{\xi }} \times } \right]-pN_{I}{\frac {\partial N_{J}}{\partial \xi }}\left[\mathbf {\mathbf {g_{\eta }} \times } \right]\right)\left\{\mathbf {dx_{J}} \right\}}$
(50)
${\displaystyle \left[\mathbf {K_{pr}} \right]={\begin{pmatrix}K_{11}&\ldots &K_{1k}\\\ldots &\ldots &\ldots \\K_{k1}&\ldots &K_{kk}\end{pmatrix}}\,;\,\left[\mathbf {K_{IJ}} \right]=\left(pN_{I}{\frac {\partial N_{J}}{\partial \eta }}\left[\mathbf {\mathbf {g_{\xi }} \times } \right]-pN_{I}{\frac {\partial N_{J}}{\partial \xi }}\left[\mathbf {\mathbf {g_{\eta }} \times } \right]\right)}$
(51)
${\displaystyle d(\delta W_{pr})=\left\{\mathbf {\delta x} \right\}^{T}\left[\mathbf {K_{pr}} \right]\left\{\mathbf {dx} \right\}}$
(52)
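The skew operator of Eq. (49) and a single ${\textstyle 3\times 3}$ block of ${\textstyle \left[\mathbf {K_{pr}} \right]}$ can be sketched as below; the second skew matrix is built from ${\textstyle \mathbf {g_{\eta }} }$, as follows from Eq. (48). Function names and values are illustrative assumptions:

```python
import numpy as np

def skew(a):
    """[a x] of Eq. (49): skew(a) @ b equals the cross product a x b."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

def k_pr_block(p, N_I, dNJ_dxi, dNJ_deta, g_xi, g_eta):
    """One 3x3 block K_IJ of the follower-pressure stiffness, Eq. (51)."""
    return p * N_I * (dNJ_deta * skew(g_xi) - dNJ_dxi * skew(g_eta))

a = np.array([1.0, 2.0, 3.0])
b = np.array([-1.0, 0.5, 2.0])
K_IJ = k_pr_block(1.0, 0.5, 0.2, 0.3, a, b)
```

Note that ${\textstyle \left[\mathbf {K_{pr}} \right]}$ is in general non-symmetric, which is typical of follower-load stiffness contributions.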
Linearized formulation
The only step missing is to merge all the terms in (35) to find the final expression. The result of this operation is
${\displaystyle \left\{\mathbf {\delta x} \right\}^{T}\left(\left[\mathbf {K_{geo}} \right]+\left[\mathbf {K_{mat}} \right]-\left[\mathbf {K_{pr}} \right]\right)\left\{\mathbf {dx} \right\}=\left\{\mathbf {\delta x} \right\}^{T}\left(\left\{\mathbf {f_{ext}} \right\}-\left\{\mathbf {f_{int}} \right\}\right)}$
(53)
invoking the arbitrariness of ${\textstyle \left\{\mathbf {\delta x} \right\}}$ and introducing the definitions
${\displaystyle \left[\mathbf {K_{tan}} \right]=\left[\mathbf {K_{geo}} \right]+\left[\mathbf {K_{mat}} \right]-\left[\mathbf {K_{pr}} \right]}$
(54)
${\displaystyle \left\{\mathbf {R} \right\}=\left\{\mathbf {f_{ext}} \right\}-\left\{\mathbf {f_{int}} \right\}}$
(55)
the principle of virtual work gives for each element
${\displaystyle \left[\mathbf {K_{tan}} \right]\left\{\mathbf {dx} \right\}=\left\{\mathbf {R} \right\}}$
(56)
### 2.2 Solution procedure
As briefly outlined at the beginning of the section, membrane systems are possibly subjected to large rigid body motions, which results in singular or ill-conditioned “static” stiffness matrices. In addition, convergence of the Newton–Raphson algorithm is often difficult, as the final solution can be very “far” from the initial guess even for small variations of the applied loads.
Dynamic solution techniques on the other hand are not affected by such problems. Mass and damping contributions remove the singularities from the system and generally provide a better conditioning to the problem. The introduction of dynamic terms provides as well an excellent source of stabilization for the solution (physically the solution can't change much in a small time), ending up with better convergence properties inside each solution step.
Any standard (non–linear) time integration technique can theoretically be used in conjunction with the proposed FE model for the study of the dynamic response of the systems of interest. Some care should however be taken in the choice, because the high geometric non–linearities tend to challenge the stability of the chosen time-integration scheme.
Generally speaking, “statics” can be seen as the limit to which a dynamic process tends (under a given constant load). Dynamic systems show a “transient” phase that vanishes in time to reach the so-called “steady state”; the presence of damping in the system gradually reduces the oscillations, making the system tend to a constant configuration that is the “static” solution. The time needed for the system to reach this final configuration is controlled by the amount of damping. For values of the system's damping exceeding a critical value, the transient phase disappears and the system reaches the final solution directly, without any oscillation.

In many situations the main engineering interest is focused on “static” solutions rather than on the complete dynamic analysis of the system. The previous considerations immediately suggest that “statics” could be obtained efficiently by studying the dynamics of overdamped systems. This can be achieved by simply adding a fictitious damping source to the “standard” dynamic problem. The “only” problem is therefore the choice of a suitable form for such damping. Unfortunately this choice is not trivial; however, it is possible to observe [1],[2] that the “steady state” solution of the system
${\displaystyle \mathbf {M} {\ddot {\mathbf {x} }}+\mathbf {D} {\dot {\mathbf {x} }}+\mathbf {K} \mathbf {x} =\mathbf {f} \left(\mathbf {x} \right)}$
(57)
is (statically) equivalent to that of the system
${\displaystyle \mathbf {D} {\dot {\mathbf {x} }}+\mathbf {K} \mathbf {x} =\mathbf {f} \left(\mathbf {x} \right)}$
(58)
which can be seen as the previous one for the case of zero density. The advantage of this equivalent system is that the inertia terms are always zero; consequently the system converges smoothly in time to its solution. This final solution is not affected by the particular choice of the damping; however, in the author's experience, an effective choice is ${\textstyle \mathbf {D} =\beta \mathbf {M} }$ as proposed by [1].
Table (1) gives the details of the proposed solution procedure, making use of Newmark's integration scheme. The procedure described differs from a “real” dynamic simulation only in the choice of the damping and of the mass matrix. Any other choice is possible for the time integration scheme to be used. It is of interest to observe that the system described is highly dissipative; energy stability of the time integration scheme is therefore not crucial.
Table. 1 Pseudo–Static solution procedure
• for pseudo–static strategy: calculate the constant matrices
${\displaystyle D=\beta \mathbf {M} \,}$
set ${\textstyle \mathbf {M} =0}$ after initializing the damping matrix (if ${\textstyle \mathbf {M} }$ is not set to 0, a “real” dynamic simulation can be performed)
• choose Newmark constants: a classical choice is
${\displaystyle \delta ={\frac {1}{2}}\,;\,\alpha ={\frac {1}{4}}}$
• evaluate the constants
${\displaystyle a_{0}={\frac {1}{\alpha \Delta t^{2}}}\,;\,a_{1}={\frac {\delta }{\alpha \Delta t}}\,;\,a_{2}={\frac {1}{\alpha \Delta t}}}$
${\displaystyle a_{3}={\frac {1}{2\alpha }}-1\,;\,a_{4}={\frac {\delta }{\alpha }}-1\,;\,a_{5}={\frac {\Delta t}{2}}\left({\frac {\delta }{\alpha }}-2\right)}$
• predict the solution at time ${\textstyle t+\Delta t}$ using for example
${\displaystyle \mathbf {x} _{t+\Delta t}^{0}=\mathbf {x} _{t}+{\dot {\mathbf {x} }}_{t}\Delta t}$
${\displaystyle {\dot {\mathbf {x} }}_{t+\Delta t}={\dot {\mathbf {x} }}_{t}}$
${\displaystyle {\ddot {\mathbf {x} }}_{t+\Delta t}=0}$
• iterate until convergence
• calculate the system's contributions
${\displaystyle \left[\mathbf {K_{tan}^{dyn}} \right]=\left[\mathbf {K_{tan}} \right]+a_{0}\left[\mathbf {M} \right]+a_{1}\left[\mathbf {D} \right]}$
${\displaystyle \left\{\mathbf {R^{dyn}} \right\}=\left\{\mathbf {R} \right\}-\left[\mathbf {M} \right]\left\{\mathbf {{\ddot {\mathbf {x} }}_{t+\Delta t}^{i}} \right\}-\left[\mathbf {D} \right]\left\{\mathbf {{\dot {\mathbf {x} }}_{t+\Delta t}^{i}} \right\}}$
• solve the system for the correction ${\textstyle \mathbf {dx} }$
• update the results as
${\displaystyle \mathbf {x} _{t+\Delta t}^{i+1}=\mathbf {x} _{t+\Delta t}^{i}+\mathbf {dx} }$
${\displaystyle \Delta \mathbf {x} =\mathbf {x} _{t+\Delta t}^{i+1}-\mathbf {x} _{t}}$
${\displaystyle {\dot {\mathbf {x} }}_{t+\Delta t}=a_{1}\Delta \mathbf {x} -a_{4}{\dot {\mathbf {x} }}_{t}-a_{5}{\ddot {\mathbf {x} }}_{t}}$
${\displaystyle {\ddot {\mathbf {x} }}_{t+\Delta t}=a_{0}\Delta \mathbf {x} -a_{2}{\dot {\mathbf {x} }}_{t}-a_{3}{\ddot {\mathbf {x} }}_{t}}$
• go to next time step
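As a minimal check of the logic in Table 1, the loop below applies the pseudo-static procedure to a single-dof linear spring ${\textstyle kx=f}$. All numerical values (${\textstyle \beta }$, ${\textstyle k}$, ${\textstyle f}$, ${\textstyle \Delta t}$) are arbitrary illustrations, not taken from the paper; the displacement should approach the static solution ${\textstyle f/k}$:

```python
# pseudo-static Newmark solution of k*x = f (single dof, linear spring)
beta, k, f, dt = 10.0, 100.0, 50.0, 0.01

M = 1.0
D = beta * M      # damping built from the initial mass matrix
M = 0.0           # pseudo-static: drop the inertia afterwards

delta, alpha = 0.5, 0.25                      # classical Newmark constants
a0 = 1.0 / (alpha * dt**2); a1 = delta / (alpha * dt); a2 = 1.0 / (alpha * dt)
a3 = 1.0 / (2 * alpha) - 1.0
a4 = delta / alpha - 1.0
a5 = dt / 2.0 * (delta / alpha - 2.0)

x = v = acc = 0.0
for step in range(1500):
    x_old, v_old, a_old = x, v, acc
    x, v, acc = x_old + v_old * dt, v_old, 0.0   # predictor
    for it in range(20):                         # Newton loop
        K_dyn = k + a0 * M + a1 * D              # tangent + dynamic terms
        R_dyn = (f - k * x) - M * acc - D * v    # dynamic residual
        dx = R_dyn / K_dyn
        if abs(dx) < 1e-14:
            break
        x += dx                                  # corrector updates
        Dx = x - x_old
        v = a1 * Dx - a4 * v_old - a5 * a_old
        acc = a0 * Dx - a2 * v_old - a3 * a_old
# x now approximates the static solution f / k = 0.5
```

The fictitious damping makes the transient decay geometrically, so after enough steps the velocity vanishes and the displacement settles on the static equilibrium.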
## 3 Wrinkling Simulation
Given the lack of flexural stiffness, membrane systems are easily subject to buckling in the presence of any compressive load. The idea is that when a compressive stress tends to appear on a part of the structure, it is immediately removed by local instability phenomena, which manifest themselves in the formation of small "waves" perpendicular to the direction of the stresses. Prediction of the size of those "waves", commonly called "wrinkles", is not generally possible, as their disposition is somewhat random and connected to initial imperfections. However, their average size is strictly connected to the bending stiffness, meaning in particular that for the problems of interest the wrinkles tend to become quite small in comparison with the total size of the structure. It has proved feasible [3],[4] to describe correctly the formation of the wrinkles using extensive mesh refinement procedures together with low-order thin-shell elements. An analogous approach using higher-order shells and a fixed reference mesh, together with some comparisons with experimental data, can be found in [5]. A key point to be taken into account is that these procedures need a mesh density inversely proportional to the expected size of the wrinkles. In other words, the smaller the wrinkles, the more elements are needed to describe the phenomenon correctly. As in our structures the thickness is very small compared to the other dimensions, the referenced approaches would soon become too expensive.
An alternative procedure is based on the "enrichment" of the elements involved in the simulation. The idea is to renounce a description of the single wrinkle and to focus the analysis on the average stress and displacement fields. This allows the use of elements of size bigger than the expected wrinkle size, introducing the effect of the local instability in the calculation of the stress or strain field at the integration-point level. We would like to stress that this approach is not necessarily less "precise" than the former. Indeed, no information on the wrinkle size is provided; however, the global stress field is correctly described. It is also important to highlight that the position of the wrinkles is never known, given its strong dependence on the initial imperfections; therefore the only reliable result is the identification of the "wrinkled zone", which can be correctly described by both methods.
### 3.1 Enriched material model
Over the years many different proposals to perform the element enrichment were presented. Essentially two different approaches have survived: one based on manipulations of the deformation gradient, the second connected with a redefinition of the constitutive model.
The former, proposed by Roddeman in [6] and [7], is based on the definition of an effective deformation gradient, obtained by superimposing on the normal displacement field a term connected with the formation of wrinkles. This modification allows a correct description of the shortening of the average plane of the membrane in the presence of compressive stresses.
The latter is based on a modification of the stress–strain relationship, meaning that the constitutive law is modified so as not to allow compressive stresses. The main advantage of this second technique is that the implementation is completely independent of the element used, a characteristic that makes it very attractive for practical implementation.
A "new" material model, based on the modification of a standard linear material is introduced in current section. This formulation is based on the penalization of the elastic characteristics of the material in the system of the principal stresses. In simple words, the material is softened in the direction of the compressive stresses and keeps its characteristics in the other direction. This is achieved by a two step procedure, based on a phase of assessment of the state of the membrane and on a phase of modification of the material tangent matrix.
Many different choices are theoretically possible in combining the two phases; however, in the writer's experience, iterative application of the wrinkling correction inside the same time step generally leads to a very slow or unstable convergence behavior. The proposed solution procedure is therefore based on an “explicit” approach in the form
• standard pseudo–static solution step
• check state of each element
• modify material
• go to next “time” step
This procedure is very efficient, as it takes full advantage of the pseudo–static solution procedure, the only additional cost being linked to the evaluation of the state and to the penalization of the constitutive matrix. As during each time step the material is "constant", no additional source of non-linearity is introduced, and the element therefore retains its convergence properties. The stabilization of the stress field is guaranteed by the dynamic process which, together with the stabilization introduced in the material model, effectively damps out the oscillations.
The reader should note that the aim of the proposed technique is to obtain a reliable static solution. There is absolutely no guarantee that “on the way” to the static solution the wrinkling procedure converges inside each time step; however, once all the movement is dissipated and the structure has reached its final configuration, the wrinkling procedure has arrived at a constant solution.
Assessment of the state of the membrane
One of the crucial steps in the procedure is the evaluation of the state of the membrane. In particular it is necessary to “decide” whether the membrane is (or rather should be) in biaxial tension, in uniaxial tension, or completely unstressed because of the formation of wrinkles. The assessment procedure is based on the introduction of the fictitious stress ${\textstyle \mathbf {\sigma } ^{*}}$, which represents the stress that would exist in the membrane if the formation of wrinkles were not allowed. This is obtained from the total strain through the relation
${\displaystyle \left[\mathbf {\sigma ^{*}} \right]=\left[\mathbf {D_{original}} \right]:\left\{\mathbf {E} \right\}}$
(59)
the principal directions of ${\textstyle \mathbf {\sigma } ^{*}}$ can be calculated as
${\displaystyle c_{1}={\frac {\sigma _{11}^{*}+\sigma _{22}^{*}}{2}}\,;\,c_{2}=\sigma _{11}^{*}-\sigma _{22}^{*}\,;\,c_{3}={\sqrt {\left({\frac {c_{2}}{2}}\right)^{2}+(\sigma _{12}^{*})^{2}}}}$ ${\displaystyle \sigma _{I}^{*}=c_{1}+c_{3}\,;\,\sigma _{II}^{*}=c_{1}-c_{3}\,;\,\alpha ^{*}={\frac {1}{2}}\tan ^{-1}\left({\frac {2\sigma _{12}^{*}}{c_{2}}}\right)}$
(60)
by introducing the tensors
${\displaystyle \left[\mathbf {E} \right]={\begin{pmatrix}\epsilon _{11}&{\frac {\gamma _{12}}{2}}\\{\frac {\gamma _{12}}{2}}&\epsilon _{22}\end{pmatrix}}\,;\,\left\{\mathbf {n_{\sigma ^{*}}} \right\}={\begin{pmatrix}cos(\alpha ^{*})\\sin(\alpha ^{*})\end{pmatrix}}}$
(61)
it is possible to express the strain in the direction of the first principal stress as
${\displaystyle \left\{\mathbf {\epsilon ^{*}} \right\}=\left\{\mathbf {n_{\sigma ^{*}}} \right\}^{T}\left[\mathbf {E} \right]\left\{\mathbf {n_{\sigma ^{*}}} \right\}}$
(62)
These strains can be used, together with the corresponding principal stresses, to assess the state of the membrane using the so-called “mixed criterion”. The “decision” proceeds as follows:
• ${\textstyle (\sigma _{II}^{*}>0)}$ biaxial tension ${\textstyle \rightarrow }$ "taut state"
• ${\textstyle (\sigma _{II}^{*}<0)}$ and ${\textstyle (\epsilon ^{*}>0)}$ uniaxial tension ${\textstyle \rightarrow }$ "wrinkled state"
• otherwise (all compressed) ${\textstyle \rightarrow }$ "slack state"
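The assessment above can be condensed into a small routine. The plane-stress constitutive matrix used in the test and the function name are assumptions for illustration:

```python
import numpy as np

def assess_state(D_original, E_voigt):
    """Mixed-criterion state assessment.
    E_voigt = (e11, e22, g12): Green-Lagrange strain in Voigt form."""
    s11, s22, s12 = D_original @ E_voigt           # fictitious stress, Eq. (59)
    c = 0.5 * (s11 + s22)                          # Mohr's circle center
    r = np.hypot(0.5 * (s11 - s22), s12)           # Mohr's circle radius
    sII = c - r                                    # second principal stress
    if sII > 0:
        return "taut"
    alpha = 0.5 * np.arctan2(2 * s12, s11 - s22)   # first principal direction
    n = np.array([np.cos(alpha), np.sin(alpha)])
    E = np.array([[E_voigt[0], E_voigt[2] / 2],
                  [E_voigt[2] / 2, E_voigt[1]]])
    eps_I = n @ E @ n                              # strain along sigma_I, Eq. (62)
    return "wrinkled" if eps_I > 0 else "slack"

# plane-stress matrix for E = 1, nu = 0 (illustrative)
D = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.5]])
```

The three branches correspond exactly to the three bullets of the mixed criterion.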
Modification of the material
Once the state is known, the material has to be modified to remove the undesired compression. This is obtained by modifying the stiffness in the directions in which compression appears, basically removing the stiffness contribution in those directions. The procedure is distinguished for the various cases:
• “taut state”: ${\textstyle \left[\mathbf {D_{tan}} \right]}$ is the original matrix, as the whole membrane is in tension and acts with its whole stiffness
• “wrinkled state”: in this case one direction has to be penalized, leaving the other unchanged. Introducing the matrix
${\displaystyle c=\cos(\alpha ^{*})\,;\,s=\sin(\alpha ^{*})\,;\,\left[\mathbf {R(\alpha ^{*})} \right]={\begin{pmatrix}c^{2}&s^{2}&-2sc\\s^{2}&c^{2}&2sc\\sc&-sc&c^{2}-s^{2}\end{pmatrix}}}$
(63)
the penalization can be applied following the steps
1. ${\textstyle \left[\mathbf {D_{rotated}} \right]=\left[\mathbf {R[-\alpha ]} \right]\left[\mathbf {D_{original}} \right]\left[\mathbf {R[-\alpha ]^{T}} \right]}$
2. ${\textstyle \left[\mathbf {D_{modified}} \right]={\begin{pmatrix}D_{rot_{11}}&PD_{rot_{12}}&D_{rot_{13}}\\PD_{rot_{21}}&PD_{rot_{22}}&PD_{rot_{23}}\\D_{rot_{31}}&PD_{rot_{32}}&D_{rot_{33}}\\\end{pmatrix}}}$
3. ${\textstyle \left[\mathbf {D_{modified}} \right]=\left[\mathbf {R[\alpha ]} \right]\left[\mathbf {D_{modified}} \right]\left[\mathbf {R[\alpha ]} \right]^{T}}$
• “slack state”: the membrane is compressed in all directions. No contribution to the stiffness should be provided; consequently the whole constitutive matrix can be penalized as ${\textstyle \left[\mathbf {D_{modified}} \right]=P\left[\mathbf {D_{original}} \right]}$
This modification procedure guarantees that the stress ${\textstyle \left\{\mathbf {S} \right\}=\left[\mathbf {D_{modified}} \right]\left\{\mathbf {E} \right\}}$ presents no compression.
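The three-step modification listed above can be sketched as follows; the mask matrix encodes step 2, while the penalty value and the test matrix are illustrative assumptions:

```python
import numpy as np

def voigt_rotation(a):
    """R(alpha) of Eq. (63) for plane-stress Voigt quantities."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c*c,  s*s, -2*s*c],
                     [s*s,  c*c,  2*s*c],
                     [s*c, -s*c,  c*c - s*s]])

def penalize(D_original, state, alpha, P):
    """Soften the constitutive matrix in the compressed direction(s)."""
    if state == "taut":
        return D_original.copy()
    if state == "slack":
        return P * D_original
    # wrinkled: rotate to principal axes, penalize the second direction,
    # rotate back (steps 1-3 of the procedure)
    R, Rm = voigt_rotation(alpha), voigt_rotation(-alpha)
    Dr = Rm @ D_original @ Rm.T
    mask = np.array([[1, P, 1],
                     [P, P, P],
                     [1, P, 1]], dtype=float)
    return R @ (Dr * mask) @ R.T

D0 = np.array([[1.0, 0.3, 0.0], [0.3, 1.0, 0.0], [0.0, 0.0, 0.35]])
D_w = penalize(D0, "wrinkled", 0.0, 1e-3)
```

For ${\textstyle \alpha ^{*}=0}$ the rotation is the identity, so only the entries coupled to the second principal direction are scaled by ${\textstyle P}$.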
The penalty factor "${\textstyle P}$" plays a central role in the stability of the wrinkling procedure. The problem is that when some parts of the structure are softened in some direction, the stress redistributes, often causing a cyclic change of state in other parts of the structure. The use of a constant penalty factor, as proposed for example in [8], causes some parts of the structure to be basically switched on and off when they change state. An improvement can be obtained through the definition of a variable penalty factor, which makes the transition smoother, helping convergence. Introducing the parameter ${\textstyle \sigma _{max}}$, which indicates the maximum tolerable compression, ${\textstyle P_{max}}$ as the maximum penalty factor, and defining ${\textstyle P_{\sigma }={\frac {\sigma _{max}}{\sigma }}}$, a suitable formulation for the penalty parameter can be obtained as
${\displaystyle 0\leq P_{\sigma }\leq 1\rightarrow P=P_{\sigma }}$
(64)
${\displaystyle P_{\sigma }<0\ {\hbox{or}}\ P_{\sigma }>1\rightarrow P=1.0}$
(65)
Stability can be further improved by taking into account the loading history of each element. This should be considered a purely numerical artifice to minimize oscillations of the stress field, and can be expressed as:
${\displaystyle {\hbox{if }}(State\equiv OldState)\rightarrow {\hbox{leave P unchanged}}}$
${\displaystyle {\hbox{otherwise}}\rightarrow P=P_{old}\cdot coeff}$
It should however be checked that the modified value of P is allowable.
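A minimal Python sketch of this update logic follows. The text does not fully specify how the stress-based rule of Eqs. (64)–(65) combines with the state-change relaxation, so the combination used here (relax by coeff, then clamp to the allowable range) is an assumption:

```python
def update_penalty(sigma, sigma_max, P_old, state, old_state, coeff=0.5):
    # Same state as in the previous step: leave P unchanged.
    if state == old_state:
        return P_old
    # Stress-based candidate, Eqs. (64)-(65): P_sigma = sigma_max / sigma.
    P_sigma = sigma_max / sigma
    if P_sigma < 0 or P_sigma > 1:
        return 1.0
    # State changed: relax the previous penalty to damp oscillations,
    # then check that the modified value is still allowable.
    P = P_old * coeff
    return min(max(P, P_sigma), 1.0)
```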
### 3.2 Validation
A few examples are proposed in this subsection to validate the presented procedure. Given the nature of the problem, it is very difficult to obtain an analytical or experimental proof of the effectiveness of the procedure; validation is therefore based on a set of numerical experiments.
It has already been highlighted that a realistic representation of the wrinkling behavior can be obtained using a sufficiently high number of elements; simulations can therefore be performed on dense meshes, introducing initial imperfections to initiate the formation of the wrinkles. This way the compressive stresses are correctly removed, and the results obtained can be used to validate the proposed wrinkling procedure.
A few test examples are proposed here showing the results obtained with the proposed approach.
Figure 1: Inflated circular airbag. (a), (c), (e) plots of principal PK2 stresses; (b), (d), (f) deformed vs. reference configuration.
CIRCULAR AIRBAG: The circular airbag is probably one of the best examples to be used in testing the efficacy of the wrinkling procedure. The simulation proposed was carried using
${\displaystyle \rho =2700\left[{\frac {Kg}{m^{3}}}\right]\,;\,E=7000\left[{\frac {N}{mm^{2}}}\right]\,;\,\nu =0.3}$
${\displaystyle Thickness=0.001\left[mm\right]\,;\,Radius=1000\left[mm\right]}$
Symmetry boundary conditions were applied, and the problem was evaluated with and without the wrinkling algorithm. The same airbag was simulated using different meshes, progressively increasing the mesh density. The results reported here refer to a coarse mesh of 236 elements and to a denser one of 4802 elements. For this example a very dense mesh is needed to capture the formation of the folds and wrinkles that eliminate the compression. Figs. 1c and 1d show immediately how the formation of wrinkles and deep folds (larger wrinkles) correctly removes the compressive stresses. It is relevant to highlight that the location of the folds changes between simulations, but their spacing tends to be the same.
It can easily be checked that even different runs of the same structure with the same mesh can lead to different wrinkling patterns. The only realistic result is therefore the extent of the wrinkled zone.
The solution obtained on the coarse mesh without any improvement (see Figs. 1a, 1b) is poor both in terms of stresses and displacements. The introduction of the wrinkling correction allows the correct behavior to be captured using a much coarser mesh. Taking the results on the dense mesh as the reference solution, Figs. 1e and 1f clearly show that a remarkable improvement is obtained both in terms of stresses and displacements using the wrinkling correction. Table 2 in particular highlights that the results of the analysis on the coarse mesh with the wrinkling correction are practically coincident with the reference solution, confirming the efficacy of the approach.
Table 2: Inflated circular airbag, comparison of results.

| | Dense | Coarse, no correction | Coarse, corrected |
|---|---|---|---|
| Displacement [m] | 0.465 | 0.37 | 0.47 |
| ${\displaystyle S_{I}}$ ${\displaystyle \left[{\frac {N}{mm^{2}}}\right]}$ | 9.55E7 | 11.2E7 | 9.51E7 |
| ${\displaystyle S_{II}}$ ${\displaystyle \left[{\frac {N}{mm^{2}}}\right]}$ | 9.05E7 | 7.21E7 | 9.09E7 |
SHEAR TEST: A simple shear test is performed by imposing displacements on one side of a square membrane. The parameters used are the same as for the previous example. The side length is ${\textstyle 1000[mm]}$ and the imposed displacement is ${\textstyle 200[mm]}$. Two cases are considered: the first (Figs. 2c, 2d) uses the standard approach on a dense mesh, the second (Figs. 2a, 2b) applies the proposed correction on a coarser mesh. Local buckling is correctly reproduced by the first approach, which is considered a representation of the "true" behavior of the membrane; this result is achieved by imposing an initial imperfection in the form of a very small out-of-plane load. The formation of the tensile diagonal is correctly reproduced in the second simulation using the enriched material model. The improved procedure also correctly describes the deformed shape of the square (it can easily be checked that the "normal" solution has straight sides).
Figure 2: Quadrilateral under shear. (a) values of principal PK2 stresses; (b) plot of principal PK2 stresses; (c) wrinkled configuration; (d) plot of principal PK2 stresses.
ANNULUS UNDER SHEAR: The last proposed example is an annulus under shear, constituted by a thin membrane blocked by two rigid disks on the inner and outer boundaries. The inner disk is rotated by 10° counterclockwise, causing the membrane to wrinkle. Fig. 3b shows the results of the wrinkling procedure applied to a coarse mesh. Comparison with the reference results (Fig. 3d) shows excellent agreement in terms of principal PK2 stresses.
Figure 3: Annulus subjected to torsion. (a), (c) principal PK2 stresses; (b), (d) values of the first principal PK2 stress.
## 4 Coupling issues - The case of the sails
Coupled fluid–membrane analysis is a challenging problem involving high non-linearities both on the side of the structure and on that of the fluid. The physical problem is however quite clear: the membrane is immersed in a fluid field. The presence of the structure influences the flow of the fluid, which exerts a force on the membrane. This force causes a deformation, changing the boundary conditions for the fluid flow and consequently the force exerted. Given the high flexibility of the structure, the coupling becomes strong.
This section addresses the coupled simulation of membrane systems, with particular reference to the “static” simulation of boat sails.
Figure 4: Interaction of a genoa and a main sail: pressures at the end of the coupled analysis.
Figure 5: Pressure field on the genoa at the end of the coupled analysis. (a) leeward face; (b) windward face.
Before proceeding to the description of “our” method we should observe that sails are aerodynamic bodies which tend to a “stable” configuration with the fluid flow reaching a sort of steady state condition. The main engineering interest is therefore connected to the determination of this “final” configuration which represents a sort of “static solution” of the problem.
It is theoretically possible to deal with the coupled process using different strategies, including in particular "implicit" coupling procedures, as proposed for example in [9], or "explicit" ones, as described in [10]. Classical arguments in favor of one or the other are connected to the numerical stability and computational efficiency of the different techniques. The traditional objection to the use of "explicit" schemes is linked to the stringent requirements on the time step: the time step constraint for the stability of the coupling procedure is in fact normally more stringent than the one for the single-field solution. This strong requirement is connected to the lack of energy conservation at the interface between the various fields, which tends to introduce spurious energy contributions into the system.
It is however possible to observe that the pseudo-dynamic solution procedure presented in Section 2 has very high dissipative properties and is perfectly suited to the search for coupled "static" solutions.
The objection to the use of "explicit" coupling schemes is therefore no longer applicable, as the artificial damping can easily remove any spurious energy contribution introduced by the coupling process. Given this observation, "explicit" procedures are much more efficient than the corresponding "implicit" ones, as the single step is much cheaper. Table 3 proposes an efficient coupled solution strategy.
Fig. 4 presents the results of the coupled analysis of a genoa and a main sail; a genoa alone is presented in Fig. 5.
Table 3: Explicit coupled solution strategy.

1. Predict the structural solution.
2. Deform the mesh of the fluid domain according to the predicted shape of the structure (the variables needed for the ALE formulation of the fluid should be calculated). The mesh movement should preserve the quality of the mesh, minimizing the deformation of the elements close to the structure, see [11].
3. Advance the fluid in time on the deformed mesh.
4. Transfer stresses FROM the fluid boundary TO the structural boundary (stresses can be transferred as calculated).
5. Advance the structure using the pseudo-static solution technique.
6. Proceed to the next time step.
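As an illustration only, the staggered loop above reduces, for a single degree of freedom, to the toy model below: the "fluid" load depends on the current displacement, and a heavily damped (pseudo-dynamic) structural update drives the system to the coupled static state. All parameter values are invented:

```python
def coupled_static_solution(k=100.0, p0=50.0, c=20.0,
                            damping=5.0, dt=0.01, steps=2000):
    """One-DOF analogue of the explicit staggered scheme: the load the
    'fluid' exerts decreases as the 'membrane' deflects, and the
    structure is advanced with a strongly damped explicit update."""
    u = 0.0
    for _ in range(steps):
        load = p0 - c * u                    # fluid solve on current shape
        u += dt / damping * (load - k * u)   # damped structural step
    return u

# The converged value matches the analytic coupled equilibrium p0/(k + c);
# the heavy damping removes the spurious energy of the staggered coupling.
```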
## References
[1] R.L. Taylor. (2001) "Finite Element Analysis of membrane structures". CIMNE
[2] R. Rossi. (2003) "A finite element formulation for 3D membrane structures including a wrinkling modified material model". CIMNE 226
[3] Cirak F., Ortiz M., Schroeder P. (2000) "Subdivision Surfaces: a new paradigm for thin shell finite element analysis", Volume 47. IJNME 2039–2072
[4] Cirak F., Ortiz M. (2001) "Fully C1 conforming subdivision elements for finite deformation thin shell analysis", Volume 51. IJNME 813–833
[5] Wong Y.W, Pellegrino S. (2002) "Computation of Wrinkling Amplitudes in Thin Membranes". 43rd AIAA/ASME/ASCE/AHS/ASC conference
[6] Roddeman D.G., Drukker J. et al. (1987) "The wrinkling of Thin Membranes: Part 1 - Theory", Volume 54. Journal of Applied Mechanics 884–887
[7] Roddeman D.G., Drukker J. et al. (1987) "The wrinkling of Thin Membranes: Part 2 - Numerical Analysis", Volume 54. Journal of Applied Mechanics 888-892
[8] Liu X., Jenkins C., Schur W. (2001) "Large deflection analysis of pneumatic envelopes using a penalty parameter modified material model", Volume 37. Finite Elements in Analysis and Design 233–251
[9] D.P. Mok, W.A. Wall. (2001) "Partitioned Analysis Schemes for the transient interaction of incompressible flows and nonlinear flexible structures". trends in computational structural mechanics
[10] S. Piperno, C. Farhat, B. Larrouturou. (1995) "Partitioned procedures for the transient solution of coupled aeroelastic problems - Part 2 - energy transfer analysis and three dimensional applications", Volume 124. Computer Methods in Applied Mechanics and Engineering 79-112
[11] E. Onate, J. Garcia, G.Bugeda, S.R. Idelsohn. (2002) "A general Stabilized formulation for incompressible fluid flow using Finite Calculus and the Finite Element method". Towards a New Fluid Dynamics with its challenges in Aeronautics
### Document information
Published on 18/05/19
Submitted on 10/05/19
DOI: 10.1007/1-4020-3317-6_6
http://mathhelpforum.com/algebra/210919-logarithm-help-need-explanation-print.html
# Logarithm help need explanation
• January 7th 2013, 08:17 AM
dzomberg
Logarithm help need explanation
Write 1 - 2log_7(x) as a single logarithm.
Thanks!
• January 7th 2013, 08:27 AM
Plato
Re: Logarithm help need explanation
Quote:
Originally Posted by dzomberg
Write 1 - 2log_7(x) as a single logarithm.
Thanks!
$\log_7(7)-\log_7(x^2)=\log_7\left(\frac{7}{x^2}\right)~.$
• January 7th 2013, 08:28 AM
Re: Logarithm help need explanation
$1 - 2\log_7 x = \log_7 7 - \log_7 x^2 = \log_7 \frac{7}{x^2}$
• January 7th 2013, 08:30 AM
russo
Re: Logarithm help need explanation
Hi, this kind of problem involves the use of properties. So you want that expression to be simplified into one logarithm.
First, let's write everything as a logarithm. We know that: $log_n (n) = 1$ so we could say that $n = 7$ (in our case). Thus $1 = log_7 (7)$
Then, for the second part, we have to put the coefficient "inside" the logarithm. We also know that: $a \cdot log_n(x) = log_n(x^a)$. Finally $2 \cdot log_7(x) = log_7(x^2)$
We put it all together (using the subtraction property of logarithms with the same base) $log_n(x)-log_n(y) = log_n\left(\frac{x}{y}\right)$:
$1 - 2\cdot log_7(x) = log_7(7) - log_7(x^2) = log_7\left(\frac{7}{x^2}\right)$
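A quick numerical spot-check of the identity (in Python, with an arbitrary x > 0):

```python
import math

x = 3.0  # any positive value works
lhs = 1 - 2 * math.log(x, 7)      # 1 - 2*log_7(x)
rhs = math.log(7 / x**2, 7)       # log_7(7/x^2)
assert abs(lhs - rhs) < 1e-12
```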
https://indico.cern.ch/event/219436/contributions/1523048/
# Quark Matter 2014 - XXIV International Conference on Ultrarelativistic Nucleus-Nucleus Collisions
19-24 May 2014
Europe/Zurich timezone
## New approach to lattice QCD thermodynamics from Yang-Mills gradient flow
20 May 2014, 11:30
20m
### platinum
Contributed Talk New Theoretical Developments
### Speaker
Dr Tetsuo Hatsuda (RIKEN)
### Description
A novel method to study the bulk thermodynamics in lattice gauge theory is proposed on the basis of the Yang-Mills gradient flow with a fictitious time t. The energy density ($\epsilon$) and the pressure ($P$) of SU(3) gauge theory at fixed temperature are calculated directly on $32^3 \times (6, 8, 10)$ lattices from the thermal average of the well-defined energy-momentum tensor ($T_{\mu \nu}^R(x)$) obtained by the gradient flow. It is demonstrated that the continuum limit can be taken in a controlled manner from the t-dependence of the flowed data. [1] M. Asakawa, T. Hatsuda, E. Itou, M. Kitazawa and H. Suzuki [FlowQCD Coll.], arXiv:1312.7492 [hep-lat].
### Primary author
Dr Tetsuo Hatsuda (RIKEN)
### Co-authors
Dr Etsuko Itou (KEK) Prof. Hiroshi Suzuki (Kyushu Univ.) Dr Masakiyo Kitazawa (Osaka Univ.) Prof. Masayuki Asakawa (Osaka Univ.)
https://math.stackexchange.com/questions/3577227/pdf-of-min-and-max-of-n-iid-random-variables
# PDF of $\min$ and $\max$ of $n$ iid random variables
The Problem: Suppose that $$X_1,\dots,X_n$$ are independent random variables with the same absolutely continuous distribution. Let $$f$$ denote their common marginal PDF. Set $$Y=\min(X_1,\dots,X_n)$$ and $$Z=\max(X_1,\dots,X_n)$$. Show that $$Y$$ and $$Z$$ are both absolutely continuous, and find their marginal PDFs.
My Thoughts: We begin by finding the CDF of $$Y$$. For $$t\in\mathbb R$$ we have $$$$\begin{split} F_Y(t)&=P(Y\leq t)=P(\min(X_1,\dots,X_n)\leq t)=1-P(X_1>t,\dots,X_n>t)\\ &=1-P(X_1>t)\cdots P(X_n>t)\\ &=1-[1-F(t)]^n, \end{split}$$$$ where in the fourth step we used the independence of the random variables $$X_1,\dots,X_n$$, and in the last step we used the fact the latter random variables have the same distribution which we call $$F$$. By the absolute continuity of the random variables $$X_1,\dots,X_n$$, we may use the chain rule to differentiate the expression for $$F_Y$$ to obtain $$f_Y(t)=nf(t)[1-F(t)]^{n-1}=nf(t)\left[1-\int_{-\infty}^tf(s)\,ds\right]^{n-1}.$$ It follows that $$Y$$ is an absolutely continuous random variable.
Next, we find the CDF of $$Z$$. For $$t\in\mathbb R$$ we have $$\begin{equation*}\begin{split} F_Z(t)&=P(Z\leq t)=P(\max(X_1,\dots,X_n)\leq t)=P(X_1\leq t,\dots,X_n\leq t)\\ &=P(X_1\leq t)\cdots P(X_n\leq t)\\ &=F(t)^n, \end{split}\end{equation*}$$ where in the fourth step we used the independence of the random variables $$X_1,\dots,X_n$$. Since the latter mentioned random variables are absolutely continuous, we may use the chain rule to differentiate the expression for $$F_Z$$ to obtain $$f_Z(t)=nF(t)^{n-1}f(t)=nf(t)\left[\int_{-\infty}^tf(s)\,ds\right]^{n-1}.$$ It follows that $$Z$$ is an absolutely continuous random variable.
Could anyone please provide feedback on the correctness of my proof above?
The derivative $$F_X'$$ of the distribution function of a random variable $$X$$, if it exists, is always measurable and non-negative, but its integral need not be $$1$$. So just proving that the derivative exists is not enough. In both cases you can see that the derivative you obtained actually integrates to $$1$$ and the formula $$F_X(x)=\int_{-\infty} ^{x} F_X'(t)dt$$ holds. Hint for computing the integral: putting $$h(t)=\int_{-\infty}^{t} f(s)ds$$ helps in both cases.
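The two CDF formulas are also easy to sanity-check by simulation; here with uniform variables on $$[0,1]$$, for which $$F(t)=t$$ (Python, illustrative only):

```python
import random

random.seed(0)
n, trials, t = 5, 200_000, 0.5
count_min = count_max = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    count_min += min(xs) <= t   # event {Y <= t}
    count_max += max(xs) <= t   # event {Z <= t}

F = t
# Empirical frequencies should approach 1-(1-F)^n and F^n respectively.
assert abs(count_min / trials - (1 - (1 - F) ** n)) < 0.01
assert abs(count_max / trials - F ** n) < 0.01
```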
http://www.digitalmars.com/d/archives/digitalmars/D/learn/1265.html
## digitalmars.D.learn - another std.path problem: getBaseName
Greetings!
I already have a problem with std.path.getDirName, but that's another on-going
post. However, what is the use of getBaseName? Shouldn't it be getting the
last entry of a path?
Take a look at this piece of code:
import std.stdio;
import std.path;
void main ()
{
char[] dir0 = "c:\\this\\is\\a\\path\\long.pog";
char[] dir1 = "c:\\this\\is\\also\\a\\path\\";
writefln(std.path.getBaseName(dir0));
writefln(std.path.getBaseName(dir1));
}
when run, it outputs:
long.pog
And as you can see, the second line is empty. That is wrong. There is no such
path as a blank or null value. Yes, if I take the \ from the end it works.
However, this should be taken care of by the function. I should not need to do any
cleanup for a function when I have a legitimate path. This also goes for
getDirName. It should return the last directory name in a path, even though it
is a directory. Otherwise, confusing outcomes will result.
thanks,
josé
Jul 19 2005
"Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"jicman" <jicman_member pathlink.com> wrote in message
news:dbjis4$iet$1 digitaldaemon.com...
I already have a problem with std.path.getDirName, but that's another
on-going
post. However, what is the use of getBaseName? Shouldn't it be getting
the
last entry of a path?
Maybe it's just a badly-named function. Seems to me that it should be called getFileName(), not getBaseName(). Here. Have this.

char[] getLastPathElement(char[] path)
{
    if(path.length==0)
        return null;
    if(path[length-1]=='\\' || path[length-1]=='/')
        path.length=path.length-1;
    return getBaseName(path);
}
Take a look at this piece of code:
import std.stdio;
import std.path;
void main ()
{
char[] dir0 = "c:\\this\\is\\a\\path\\long.pog";
char[] dir1 = "c:\\this\\is\\also\\a\\path\\";
writefln(std.path.getBaseName(dir0));
writefln(std.path.getBaseName(dir1));
}
when run, it outputs:
long.pog
And as you can see, the second line is empty. That is wrong.
If the name of the function were getFileName (which it should be), then no, it's not wrong.
There is no such
a path as a blank or null value. Yes, if I take the \ from the end it
works.
That's because the function then thinks that "path" is the filename, and returns that.
However, this should be taken care by the function. I should not need to
do any
cleanup for a function when I have a legitimate path. This also goes for
getDirName. It should return the last directory name in a path. Even
though it
is a directory. Otherwise, confusing outcomes will be the result.
What do you mean? getDirName functions just fine.
Jul 19 2005
Jarrett Billingsley says...
"jicman" <jicman_member pathlink.com> wrote in message
news:dbjis4$iet$1 digitaldaemon.com...
I already have a problem with std.path.getDirName, but that's another
on-going
post. However, what is the use of getBaseName? Shouldn't it be getting
the
last entry of a path?
Maybe it's just a badly-named function. Seems to me that it should be called getFileName(), not getBaseName().
Ok, I guess I saw getBaseName and I thought it would always return the last item of a path. My mistake.
Here. Have this.
char[] getLastPathElement(char[] path)
{
if(path.length==0)
return null;
if(path[length-1]=='\\' || path[length-1]=='/')
path.length=path.length-1;
return getBaseName(path);
}
Yep, already did it, except that I splitted to an array and use the last entry of the array. :-) Yours is too hard to read. ;-) Just kidding.
Take a look at this piece of code:
import std.stdio;
import std.path;
void main ()
{
char[] dir0 = "c:\\this\\is\\a\\path\\long.pog";
char[] dir1 = "c:\\this\\is\\also\\a\\path\\";
writefln(std.path.getBaseName(dir0));
writefln(std.path.getBaseName(dir1));
}
when run, it outputs:
long.pog
And as you can see, the second line is empty. That is wrong.
If the name of the function were getFileName (which it should be), then no, it's not wrong.
Right. Again, the lack of documentation and the few hours trying to find a bug was what caused my complaining. My apologies.
There is no such
a path as a blank or null value. Yes, if I take the \ from the end it
works.
That's because the function then thinks that "path" is the filename, and returns that.
Well, that would have worked the way I wanted it to work, but it does not. "path" does not get returned. Instead, a null value is returned. The other question is, what if that "path" is a file without a .txt or a .* ending?
However, this should be taken care by the function. I should not need to
do any
cleanup for a function when I have a legitimate path. This also goes for
getDirName. It should return the last directory name in a path. Even
though it
is a directory. Otherwise, confusing outcomes will be the result.
What do you mean? getDirName functions just fine.
Well, again, lack of documentation and expecting it to work like JScript was my problem. For example,

var fso, s = "";
var w = WScript;
fso = new ActiveXObject("Scripting.FileSystemObject");
w.Echo(fso.GetBaseName("c:\\logs\\JICW2klog"));

if you run it with this command,

cscript filetest.js

you will get,

Microsoft (R) Windows Script Host Version 5.6
Copyright (C) Microsoft Corporation 1996-2001. All rights reserved.

JICW2klog

Also, take a look at this D program,

import std.stdio;
import std.path;
import std.file;
void main ()
{
char[] dir1 = "c:\\this\\is\\a\\path\\";
char[] dir2 = "c:\\this\\is\\also\\a\\path";
char[] dir3 = "c:\\this\\is\\another\\path\\file.txt";
writefln(std.path.getDirName(dir1));
writefln(std.path.getDirName(dir2));
writefln(std.path.getDirName(dir3));
}

if you compile it and run it, you'll get,

c:\this\is\a\path
c:\this\is\also\a
c:\this\is\another\path

Which is not consistent, but anyway, I don't mean to go on. I apologize. thanks and sorry, josé
Jul 19 2005
"Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"jicman" <jicman_member pathlink.com> wrote in message
news:dbkf7j$19h2$1 digitaldaemon.com...
Also, take a look at this D program,
import std.stdio;
import std.path;
import std.file;
void main ()
{
char[] dir1 = "c:\\this\\is\\a\\path\\";
char[] dir2 = "c:\\this\\is\\also\\a\\path";
char[] dir3 = "c:\\this\\is\\another\\path\\file.txt";
writefln(std.path.getDirName(dir1));
writefln(std.path.getDirName(dir2));
writefln(std.path.getDirName(dir3));
}
if you compile it and run it, you'll get,
c:\this\is\a\path
c:\this\is\also\a
c:\this\is\another\path
Which is not consistent, but anyway, I don't mean to go on. I apologize.
Umm, yes, it actually is consistent! The getDirName() function starts at the end of the string and searches for the first / or \ it finds. Then it returns the slice of the string from the beginning until one before that character.

dir1 has a \ at the end. So the function returns immediately.

With dir2, "path" is _not_ part of the path, as far as the function is concerned. It is a filename. It will go back to after "a", and return up to that.

dir3 is the same situation as dir2.

Remember, these functions are just string manip! No file system checking! If they did do file system checking, you wouldn't even be able to do this:

if(std.file.exists(std.path.join(std.file.getcwd(),"blah.txt")))
    ...

Or:

char[] newname=std.path.join(std.file.getcwd(), "something.bmp");
File f=new File(newname, FileMode.Out);
Jul 20 2005
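(Editorial aside: the purely lexical convention being debated here is not unique to Phobos. Python's posixpath module, for comparison, behaves the same way: the part after the last separator is the base name, even when it is empty.)

```python
import posixpath

# basename is everything after the last separator -- possibly empty:
assert posixpath.basename("/this/is/a/path/long.pog") == "long.pog"
assert posixpath.basename("/this/is/also/a/path/") == ""

# dirname is the slice before the last separator:
assert posixpath.dirname("/this/is/a/path/") == "/this/is/a/path"
assert posixpath.dirname("/this/is/also/a/path") == "/this/is/also/a"
```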
In article <dblnfi$7el$1 digitaldaemon.com>, Jarrett Billingsley says...
"jicman" <jicman_member pathlink.com> wrote in message
news:dbkf7j$19h2$1 digitaldaemon.com...
Also, take a look at this D program,
import std.stdio;
import std.path;
import std.file;
void main ()
{
char[] dir1 = "c:\\this\\is\\a\\path\\";
char[] dir2 = "c:\\this\\is\\also\\a\\path";
char[] dir3 = "c:\\this\\is\\another\\path\\file.txt";
writefln(std.path.getDirName(dir1));
writefln(std.path.getDirName(dir2));
writefln(std.path.getDirName(dir3));
}
if you compile it and run it, you'll get,
c:\this\is\a\path
c:\this\is\also\a
c:\this\is\another\path
Which is not consistent, but anyway, I don't mean to go on. I apologize.
Umm, yes, it actually is consistent!
It may be consistent with the search-and-split logic, but not consistent with the directory structure. This entry,

c:\path\to\file\

is the same as

c:\path\to\file

my friend. There is no "" directory. If you run these two examples of the same "directory structure" with getDirName(), you'll get a different outcome. I am no longer talking about checking if it's a file or directory. I am now talking about plain directory structure definition.
The getDirName() function starts at
the end of the string and searches for the first / or \ it finds. Then it
returns the slice of the string from the beginning until one before that
character.
Yes, which is exactly what is going on here. However, it should check if the last character is a \ and make an appropriate "directory structure" result.
dir1 has a \ at the end. So the function returns immediately.
With dir2, "path" is _not_ part of the path, as far as the function is
concerned. It is a filename. It will go back to the after "a", and return
up to that.
dir3 is the same situation as dir2.
Remember, these functions are just string manip! No file system checking!
If they did do file system checking, you wouldn't even be able to do this:
if(std.file.exists(std.path.join(std.file.getcwd(),"blah.txt")))
...
Or:
char[] newname=std.path.join(std.file.getcwd(), "something.bmp");
File f=new File(newname, FileMode.Out);
I understand. Thanks. jic
Jul 20 2005
"Jarrett Billingsley" <kb3ctd2 yahoo.com> writes:
"jicman" <jicman_member pathlink.com> wrote in message
news:dbmdkb$rei$1 digitaldaemon.com...
It may be consistent with the search and split logic, but not consistent
with
the directory structure. This entry,
c:\path\to\file\
is the same as
c:\path\to\file
my friend.
The first is nothing but a pathname. The second is a directory called "c:\path\to\" with a filename of "file". What exactly are you finding so difficult about this?
There is no "" directory. If you run these two same "directory
structure" examples with getDirName(), you'll get a different outcome. I
am no
longer talking about checking if it's file or directory. I am now talking
just
plain directory structure definition.
Yes, you'll get a different outcome, because they _are_ different. In "directory structure definition", directories end in a \. Full filenames end with a letter. End.
Yes, which is exactly what is going on here. However, it should check if
the
last character is a \ and make an appropiate "directory structure" result.
It does!
I understand. Thanks.
But.. but.. you're still arguing that you don't!
Jul 20 2005
Jarrett Billingsley says...
"jicman" <jicman_member pathlink.com> wrote in message
news:dbmdkb$rei$1 digitaldaemon.com...
It may be consistent with the search and split logic, but not consistent
with
the directory structure. This entry,
c:\path\to\file\
is the same as
c:\path\to\file
my friend.
The first is nothing but a pathname. The second is a directory called "c:\path\to\" with a filename of "file". What exactly are you finding so difficult about this?
Never mind. I see that you're completely missing the point and so, why explain it. :-) Don't worry about it. I get it.
There is no "" directory. If you run these two same "directory
structure" examples with getDirName(), you'll get a different outcome. I
am no
longer talking about checking if it's file or directory. I am now talking
just
plain directory structure definition.
Yes, you'll get a different outcome, because they _are_ different. In "directory structure definition", directories end in a \. Full filenames end with a letter. End.
Yes, which is exactly what is going on here. However, it should check if the
last character is a \ and make an appropriate "directory structure" result.
It does!
I understand. Thanks.
But.. but.. you're still arguing that you don't!
Jul 20 2005
Derek Parnell <derek psych.ward> writes:
On Tue, 19 Jul 2005 19:04:36 +0000 (UTC), jicman wrote:
Greetings!
I already have a problem with std.path.getDirName, but that's another on-going
post. However, what is the use of getBaseName? Shouldn't it be getting the
last entry of a path?
Take a look at this piece of code:
import std.stdio;
import std.path;
void main ()
{
char[] dir0 = "c:\\this\\is\\a\\path\\long.pog";
char[] dir1 = "c:\\this\\is\\also\\a\\path\\";
writefln(std.path.getBaseName(dir0));
writefln(std.path.getBaseName(dir1));
}
when run, it outputs:
long.pog
And as you can see, the second line is empty. That is wrong. There is no such
thing as a blank or null path. Yes, if I take the \ from the end it works.
However, this should be taken care by the function. I should not need to do
any
cleanup for a function when I have a legitimate path. This also goes for
getDirName. It should return the last directory name in a path. Even though
it
is a directory. Otherwise, confusing outcomes will be the result.
I guess the names of these functions could be better, but the functions in std.path are string manipulation functions and not file system functions. You use them to get and set strings which represent files and paths, and not actually touch the file or path itself. It means you can use them on non-existing files and they still work. Check out the stuff in std.file to check for the actual existence or not of files and paths. I have had to supplement both modules with extra functions for my D applications to date. I can post those if you'd like to check them out ;-) -- Derek Parnell Melbourne, Australia 20/07/2005 8:18:29 AM
Jul 19 2005
Derek Parnell says...
On Tue, 19 Jul 2005 19:04:36 +0000 (UTC), jicman wrote:
Greetings!
I already have a problem with std.path.getDirName, but that's another on-going
post. However, what is the use of getBaseName? Shouldn't it be getting the
last entry of a path?
Take a look at this piece of code:
import std.stdio;
import std.path;
void main ()
{
char[] dir0 = "c:\\this\\is\\a\\path\\long.pog";
char[] dir1 = "c:\\this\\is\\also\\a\\path\\";
writefln(std.path.getBaseName(dir0));
writefln(std.path.getBaseName(dir1));
}
when run, it outputs:
long.pog
And as you can see, the second line is empty. That is wrong. There is no such
thing as a blank or null path. Yes, if I take the \ from the end it works.
However, this should be taken care by the function. I should not need to do
any
cleanup for a function when I have a legitimate path. This also goes for
getDirName. It should return the last directory name in a path. Even though
it
is a directory. Otherwise, confusing outcomes will be the result.
I guess the names of these functions could be better, but the functions in std.path are string manipulation functions and not file system functions. You use them to get and set strings which represent files and paths, and not actually touch the file or path itself. It means you can use them on non-existing files and they still work. Check out the stuff in std.file to check for the actual existence or not of files and paths. I have had to supplement both modules with extra functions for my D applications to date. I can post those if you'd like to check them out ;-)
if you don't mind, I would surely use it. Since I use your build tool all the time, might as well start using some of your code also. ;-) thanks, josé
Jul 19 2005
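For comparison (not part of the thread): Python's standard library treats these functions the same way, as pure string manipulation, so a path string ending in a separator has an empty basename there too:

```python
import ntpath  # the Windows flavor of os.path, usable on any OS

# same two example strings as in the D snippet above
print(ntpath.basename("c:\\this\\is\\a\\path\\long.pog"))      # -> long.pog
print(repr(ntpath.basename("c:\\this\\is\\also\\a\\path\\")))  # -> ''
print(ntpath.dirname("c:\\this\\is\\also\\a\\path\\"))         # -> c:\this\is\also\a\path
```

As in Phobos, none of these calls touch the file system; to test whether a string names an existing file or directory you have to reach for the file-system module instead.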
https://mathoverflow.net/questions/377011/an-integral-identity
An integral identity
It appears that $$\int_{\mathbb R} \frac{1-e^{itu}}{e^{itu}-1-it}\,\frac{dt}t=\pi i\,\frac u{1-u}$$ for $$u\in(0,1)$$, with the integral understood in the principal value sense. However, I have not been able to prove this, even with the help of Mathematica.
How can this be proved?
• Maybe it helps to set $s=(e^{itu}-1)/(it)$, identify the geometric series and thus write it as $i\int_{\mathbb R} \sum_{n=0}^\infty\left((e^{itu}-1)^{n+1}\,/\,(i t)^{n+2}\right)dt$. Having gotten rid of the fraction, I'd try to formally switch the integral and sum and see if you can compute the principal value. Maybe they are $\pi\,u^{n+1}$ and the sum gives you the result. – Nikolaj-K Nov 20 at 19:02
• The numeric calculation with Mathematica NIntegrate[(1 - Exp[I*t*1/2])/(Exp[I*t*1/2] - 1 - I*1/2)/ t, {t, -1000, -0.001}, AccuracyGoal -> 3, PrecisionGoal -> 3] + NIntegrate[(1 - Exp[I*t*1/2])/(Exp[I*t*1/2] - 1 - I*1/2)/t, {t, 0.001, 1000}, AccuracyGoal -> 3, PrecisionGoal -> 3] produces $1.25389\, +2.51409 i$ and does not confirm your hypothesis. – user64494 Nov 20 at 19:07
• Also NIntegrate[(1 - Exp[I*t*1/2])/(Exp[I*t*1/2] - 1 - I*1/2)/ t, {t, -10000, -0.0001}, AccuracyGoal -> 3, PrecisionGoal -> 3, WorkingPrecision -> 30] + NIntegrate[(1 - Exp[I*t*1/2])/(Exp[I*t*1/2] - 1 - I*1/2)/t, {t, 0.0001, 10000}, AccuracyGoal -> 3, PrecisionGoal -> 3, WorkingPrecision -> 30] produces $1.2564281632324901625528374684+2.51331913735615084972161764584 i$. – user64494 Nov 20 at 19:31
• @user64494: It is no longer a hypothesis. See the two proofs below. – GH from MO Nov 20 at 19:36
• @user64494 -- there is a way to avoid the need to take a principal value, which returns a value close to the expected answer (I worked this out in the answer box, it's a method which I have found quite useful). – Carlo Beenakker Nov 20 at 21:07
3 Answers
I would close the contour in the upper half of the complex plane, the principal value picks up $$i\pi$$ times the residue$$^\ast$$ at $$t=0$$, which is $$u/(1-u)$$. There are no other poles.$$^{\ast\ast}$$
$$^\ast$$ $$\frac{1-e^{i t u}}{e^{i t u}-i t-1}=\frac{u}{1-u}+{\cal O}(t^2).$$
$$^{\ast\ast}$$ poles are at $$t=i\tau$$ with $$e^{-\tau u}+\tau=1$$ (excluding $$\tau=0$$, which is canceled by the numerator); these remain at $$\tau<0$$ for all $$u\in(0,1)$$, approaching $$-2(1-u)$$ for $$u\rightarrow 1$$.
In the comments there was an issue with the numerical evaluation. Principal value integrals of this type can be evaluated more accurately by replacing $$1/t$$ by $$\frac{d\log |t|}{dt}$$ and carrying out a partial integration. This gives $$\int_{-\infty}^\infty dt\,\frac{1-e^{itu}}{e^{itu}-1-it}\,\frac{1}t= -2i\Im\int_{0}^\infty dt\,\ln|t|\frac{d}{dt}\frac{1-e^{itu}}{e^{itu}-1-it}.$$ For the case $$u=1/2$$ considered in the comments, Mathematica gives 3.1406.
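A further numerical cross-check (my own sketch with SciPy, not from the answers): symmetrizing the integrand cancels the $(u/(1-u))/t$ pole at the origin, turning the principal value into an ordinary convergent integral over $(0,\infty)$; the integrand already decays like $1/t^2$ at infinity. For $u=1/2$ the result is close to the claimed value $\pi i$:

```python
import numpy as np
from scipy.integrate import quad

u = 0.5

def f(t):
    # the integrand of the principal-value integral
    return (1 - np.exp(1j * t * u)) / (np.exp(1j * t * u) - 1 - 1j * t) / t

def symmetrized(t):
    # f(t) + f(-t): the 1/t poles at t = 0 cancel, so the principal value
    # becomes an ordinary integral over (0, inf); truncation at t = 2000
    # costs only O(1/2000) since the tail decays like 1/t^2
    return f(t) + f(-t)

re_part, _ = quad(lambda t: symmetrized(t).real, 0, 2000, limit=1000)
im_part, _ = quad(lambda t: symmetrized(t).imag, 0, 2000, limit=1000)

print(re_part, im_part)     # approximately 0 and pi
print(np.pi * u / (1 - u))  # claimed coefficient of i
```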
• Why are there no other poles? (I was thinking about the same argument and haven't been able so far to show this.) – Christian Remling Nov 20 at 17:53
• Sorry, I still don't understand this. How does it follow that the solutions of $e^{-u\tau}+\tau =1$ are all real? – Christian Remling Nov 20 at 18:31
• Thank you. I don't know why, before seeing your answer, I decided to deal with the poles in the lower half-plane. :-) – Iosif Pinelis Nov 20 at 19:18
• The integration by parts in an improper integral should be grounded. In other case this is done in the L. Euler's style. – user64494 Nov 21 at 5:46
$$\newcommand\eps\varepsilon$$ We want to show that, under $$R\to\infty$$ and $$\eps\to 0+$$, we have $$\int_{(-R,-\eps)\cup(\eps,R)} \frac{1-e^{itu}}{e^{itu}-1-it}\,\frac{dt}t=\pi i\,\frac u{1-u}+o(1).$$ Equivalently, $$\int_{(-R,-\eps)\cup(\eps,R)}\left(\frac{1-e^{itu}}{e^{itu}-1-it}+1\right)\,\frac{dt}t=\pi i\,\frac u{1-u}+o(1).$$ In other words, $$\int_{(-R,-\eps)\cup(\eps,R)}\frac{dt}{e^{itu}-1-it}=\pi\,\frac u{u-1}+o(1).$$ The integrand is holomorphic in an open set containing $$\{t\in\mathbb{C}: \Im(t)\geq 0 \text{ and } t\neq 0\}$$, hence by Cauchy's theorem it suffices to show that $$\int_{\gamma(R)}\frac{dt}{e^{itu}-1-it}=-\pi+o(1)\qquad\text{and}\qquad \int_{\gamma(\eps)}\frac{dt}{e^{itu}-1-it}=\frac{\pi}{u-1}+o(1),$$ where $$\gamma(r)$$ is the semicircle in $$\{t\in\mathbb{C}:\Im(t)\geq 0\}$$ going from $$r$$ to $$-r$$. For large $$r$$, the integrand on $$\gamma(r)$$ is $$i/t+O(1/t^2)$$. For small $$r$$, the integrand on $$\gamma(r)$$ is $$-i/(t(u-1))+O_u(1)$$. The result follows.
This is to detail Carlo Beenakker's assertion about the poles of the integrand. Suppose that $$t=x+iy$$ is such a pole, where $$x$$ and $$y$$ are real. Then $$1-y=e^{-uy}\cos ux,\quad x=e^{-uy}\sin ux.$$ Suppose that $$y>0$$. If $$x=0$$ then $$1-y=e^{-uy}\ge1-uy$$, so that $$(u-1)y\ge0$$, which contradicts the conditions $$y>0$$ and $$u\in(0,1)$$. So, $$x\ne0$$ and hence $$\frac{\sin ux}{ux}=\frac{e^{uy}}u>1,$$ which contradicts the inequality $$\frac{\sin v}{v}\le1$$ for all real $$v\ne0$$.
So, $$y\le0$$.
If now $$y=0$$ then $$1=\cos ux$$ and hence $$x=\sin ux=0$$.
Thus, the only pole $$x+iy$$ with $$y\ge0$$ is $$0$$.
• @GHfromMO : Yes indeed, this is much simpler. – Iosif Pinelis Nov 20 at 19:37
• @ChristianRemling : Indeed. When I saw $t$ in the inequality $1\le|1+it|$ I somehow forgot that $t$ is complex. :-) So, it again looks like the assertion about the poles is not quite trivial. – Iosif Pinelis Nov 20 at 20:57
• @IosifPinelis: For what it's worth, I tried it for about 10 minutes unsuccessfully (and then Carlos posted his answer), so I think it has to be something like the argument you give here. – Christian Remling Nov 20 at 21:52
https://nbviewer.org/github/jaanos/kirv/blob/master/notebooks/ReblockingProblem.ipynb
# The reblocking problem¶
Suppose that Alice wants to send Bob a signed and encrypted message $m$. She will use RSA for both signing and encrypting. She first uses her private key $(n_A, d_A)$ to obtain the signature $s = m^{d_A} \bmod{n_A}$, and then uses Bob's public key $(n_B, e_B)$ to obtain the ciphertext $c = s^{e_B} \bmod{n_B}$. She sends $c$ to Bob, who then decrypts it with his private key $(n_B, d_B)$ to obtain $s' = c^{d_B} \bmod{n_B}$, and finally recovers the message $m' = s'^{e_A} \bmod{n_A}$ using Alice’s public key $(n_A, e_A)$.
In [1]:
nA = 62894113
eA = 3
dA = 41918819
nB = 55465219
eB = 17
dB = 26094257
m = 2
Let us first sign the message $m$ with Alice's private key, and then encrypt the signature using Bob's public key.
In [2]:
s = Integer(pow(m, dA, nA))
s
Out[2]:
19714331
In [3]:
c = Integer(pow(s, eB, nB))
c
Out[3]:
20507849
We now decrypt the ciphertext using Bob's private key, and then extract the message with Alice's public key.
In [4]:
ss = Integer(pow(c, dB, nB))
ss
Out[4]:
19714331
In [5]:
mm = Integer(pow(ss, eA, nA))
mm
Out[5]:
2
Let us try with another message.
In [6]:
m = 5
s = Integer(pow(m, dA, nA))
c = Integer(pow(s, eB, nB))
ss = Integer(pow(c, dB, nB))
mm = Integer(pow(ss, eA, nA))
(m, s, c, ss, mm)
Out[6]:
(5, 62006589, 51623723, 6541370, 42173672)
We see that since the original signature $s$ was larger than Bob's modulus $n_B$, decrypting $c$ only gives $s' = s \bmod{n_B}$, leading to $m' \ne m$.
The probability of this happening can be expressed as $(n_A - n_B)/n_A$, where $n_A > n_B$. Let us compute it for our case.
In [7]:
N((nA - nB)/nA)
Out[7]:
0.118117477863151
Several measures can be taken to prevent this from happening.
• Set a threshold $t$, and use different keys for encryption and signing: the signing keys should have moduli $n < t$, while the encryption keys should have moduli $n > t$.
• Use hash functions and hybrid encryption schemes: to sign the message, compute the RSA signature of its digest, and append it to the message. Then, randomly choose a key $k$ for a symmetric encryption scheme (say, AES), and encrypt the signed message with $k$ (say, using the CBC mode of operation). Then, encrypt $k$ with RSA ($k$ will be much smaller than the modulus), and prepend its encryption to the symmetric encryption of the message.
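The first measure can be sketched in plain Python (my own illustration, reusing the two key pairs above with the roles chosen so that the signing modulus is the smaller one; function names are mine):

```python
# Bob's pair has the smaller modulus, so it is used for signing, and
# Alice's larger-modulus pair for encrypting; the signature then always
# fits under the encryption modulus, so reblocking cannot occur.

def sign_then_encrypt(m, sign_priv, enc_pub):
    n_s, d_s = sign_priv
    n_e, e_e = enc_pub
    assert n_s < n_e, "reblocking hazard: signing modulus must be smaller"
    s = pow(m, d_s, n_s)      # sign: s < n_s
    return pow(s, e_e, n_e)   # encrypt: safe, since s < n_s < n_e

def decrypt_then_verify(c, enc_priv, sign_pub):
    n_e, d_e = enc_priv
    n_s, e_s = sign_pub
    s = pow(c, d_e, n_e)      # decryption recovers s exactly
    return pow(s, e_s, n_s)   # verification recovers m

sign_priv, sign_pub = (55465219, 26094257), (55465219, 17)
enc_priv, enc_pub = (62894113, 41918819), (62894113, 3)

c = sign_then_encrypt(5, sign_priv, enc_pub)
print(decrypt_then_verify(c, enc_priv, sign_pub))  # -> 5
```

With this ordering the message $m = 5$ that failed above now round-trips correctly, and so does any $m$ below the signing modulus.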
https://www.biostars.org/p/460784/
Number of features obtained when applying HHblits and PSSM to a protein sequence
13 months ago
Can anyone please tell me the number of features we obtain when applying HHblits and a PSSM to a protein sequence?
What is the minimum number of features required for processing?
What are the advantages of both PSSM and HHblits?
Is there any server or software to run PSSM and HHblits?
Any link that would help me learn more?
Thank you so much in advance
hhblits pssm
The title "hhblits and pssm" does not qualify as a good/informative title to your post. Please elaborate the title.
Why don't you just give hhsuite a try? It's not hard to install (via conda), and the wiki is extensive.
https://anndata.readthedocs.io/en/latest/anndata.h5py.html
anndata.h5py¶
Wraps h5py to handle sparse matrices.
anndata.h5py is based on and uses the conventions of h5sparse by Appier Inc.. See the copyright and license note in the source code.
The design choices of anndata.h5py, however, are different. In particular, anndata.h5py allows handling sparse and non-sparse data at the same time. It achieves this by extending the functionality of File and Group objects in the h5py API, and by providing a new SparseDataset object.
For examples and further information, see this blog post.
File(name[, mode, driver, libver, …])   Like h5py.File, but able to handle sparse matrices.
Group(h5py_group[, force_dense])        Like h5py.Group, but able to handle sparse matrices.
Dataset                                 Equivalent to h5py.Dataset.
SparseDataset(h5py_group)               Analogous to h5py.Dataset, but for sparse matrices.
https://cs.stackexchange.com/questions/3367/known-facets-of-the-travelling-salesman-problem-polytope
# Known facets of the Travelling Salesman Problem polytope
For the branch-and-cut method, it is essential to know many facets of the polytopes generated by the problem. However, it is currently one of the hardest problems to actually calculate all facets of such polytopes as they rapidly grow in size.
For an arbitrary optimization problem, the polytope used by branch-and-cut or also by cutting-plane methods is the convex hull of all feasible vertices. A vertex is an assignment of all variables of the model. As a (very simple) example: if one maximizes $2\cdot x+y$ s.t. $x+y \leq 1$ and $0\leq x,y\leq 1.5$, then the vertices $(0,0)$, $(0,1)$ and $(1,0)$ are feasible vertices. $(1,1)$ violates the inequality $x+y\leq 1$ and is therefore not feasible. The (combinatorial) optimization problem would be to choose among the feasible vertices. (In this case, obviously $(1,0)$ is the optimum). The convex hull of these vertices is the triangle with exactly these three vertices. The facets of this simple polytope are $x\geq0$, $y\geq 0$ and $x+y\leq 1$. Note that the description through facets is more accurate than the model. In most hard problems - such as the TSP - the number of facets exceeds the number of model inequalities by several orders of magnitude.
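To make the toy example concrete, here is a small sketch (using `scipy.optimize.linprog`, not part of the original post) that solves the little LP and recovers the optimal vertex $(1,0)$:

```python
from scipy.optimize import linprog

# Toy LP from the example above: maximize 2x + y
# subject to x + y <= 1 and 0 <= x, y <= 1.5.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-2, -1], A_ub=[[1, 1]], b_ub=[1],
              bounds=[(0, 1.5), (0, 1.5)])

print(res.x)  # the optimum is attained at the vertex (1, 0)
```

As the example suggests, an LP optimum is always attained at a vertex of the feasible polytope, which is why knowing the facets of the convex hull matters for cutting-plane methods.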
Considering the Travelling Salesman Problem, for which number of nodes is the polytope fully known and how much facets are there. if it is not complete, what are lower bounds on the number of facets?
I'm particularly interested in the so-called hamiltonian path formulation of the TSP:
$$\min \sum_{i=0}^{n-1}\Bigl(\sum_{j=0}^{i-1}c_{i,j}\cdot x_{i,j}+\sum_{j=i+1}^{n-1}c_{i,j}\cdot x_{i,j}\Bigr)$$ s.t.
$$\forall i \neq j:\ \ 0 \leq x_{i,j}\leq 1$$ $$\forall i \neq j\ \ \ x_{i,j}+x_{j,i}\leq 1$$ $$\forall j \ \ \sum_{i=0}^{j-1}x_{i,j}+\sum_{i=j+1}^{n-1}x_{i,j}\leq 1$$ $$\forall j \ \ \sum_{i=0}^{j-1}x_{j,i}+\sum_{i=j+1}^{n-1}x_{j,i}\leq 1$$ $$\sum_{i=0}^{n-1}(\sum_{j=0}^{i-1}x_{i,j}+\sum_{j=i+1}^{n-1}x_{i,j})=n-1$$
If you have any information about polytopes of other formulations of the TSP, feel free to share that too.
• Personally, I am not sure what "polytopes of a problem" means. But then, I have little background in complexity theory. – Raphael Sep 1 '12 at 22:33
• It's not actually complexity theory (it wasn't me tagging this tag). Actually there is no suitable tag for this kind of question yet. A suitable tag would be branch-and-cut or cutting-plane-method. I will add some information about what polytope I'm talking about shortly – stefan Sep 1 '12 at 22:35
• @Raphael: I've updated the question, so you can read something about facets and polytopes. – stefan Sep 1 '12 at 22:46
• @stefan: Ah, so it's just the space of feasible solutions. In that case, the search space of TSP is clearly exponential in size; otherwise we'd have had P=NP ages ago. Even more, TSP is usually defined on undirected, complete graphs, so there are exactly $n!$ feasible solutions. So I don't see what else you are looking for; maybe I don't get an important detail of your question. Maybe that you have written down the relaxed LP, not the IP? – Raphael Sep 2 '12 at 13:24
• @Raphael it's the convex hull of feasible solutions. you are right that unless P=NP this convex hull will have exponentially many facets. however, the number of vertices has nothing to do with that: the convex hull of the binary vectors $\{0, 1\}^n$ is the boolean cube which has only $2n$ facets. moreover, having exponentially many facets also doesn't mean that there isn't a higher dimensional polytope that projects to the given one. for example take the convex hull of of the standard basis vectors, which has $2^n$ facets, but is the projection of a small linear program. – Sasho Nikolov Sep 5 '12 at 18:32
For asymptotic bounds, Fiorini, Massar, Pokutta, Tiwari, and de Wolf recently showed exponential lower bounds on the number of facets of any polytope that projects to the TSP polytope (the TSP polytope, being the convex hull of feasible TSP solutions). This is stronger than what you ask for, and implies that even adding extra variables will not make the TSP polytope efficiently representable.
Their paper is follow up to the classical 1988 paper by Yannakakis, who showed the same result but only for polytopes that satisfy a certain symmetry condition.
• Thank you for this link! It is certainly an impressive result, even though it would have been odd to have a nice (=non-exponentially growing) polytope for an NP problem. – stefan Sep 5 '12 at 17:59
• the surprising part is being able to prove it :) – Sasho Nikolov Sep 5 '12 at 18:20
• @stefan afaik a polynomial growing polytope for an NP problem would imply P=NP as raphael states above... also has anyone seen a statement/discussion of what would be reqd to extend the Fiorini et al to a P!=NP proof? – vzn Jan 10 '14 at 20:02
• the short answer is that the result is about a computational model weaker than polytime-bounded TMs, and you'd like a version of it for a model that is as strong as P. for evidence that extended formulations are weaker than P, Rothvoss recently proved that the matching polytope has exponential extension complexity; nevertheless, arbitrary linear functions over the matching polytope can be solved using either Edmonds' algorithm, or the ellipsoid method. – Sasho Nikolov Jan 10 '14 at 20:27
• more technically, there are many reasons why the results are far from P vs NP: the results are for a fixed encoding of problem solutions as vectors, and do not rule out a more clever encoding can allow for polysize formulations; also, the results say that for the given encoding, every compact LP fails on some objective function, but it might be possible to use different LPs for different objective functions; finally, we still have essentially no explicit lower bounds against SDPs, and then there is the ellipsoid method which can solve exponential size LPs – Sasho Nikolov Jan 10 '14 at 20:32
There is a library called SMAPO (short for library of linear descriptions of SMAll problem instances of POlytopes in combinatorial optimization) for a lot of polytopes, including the symmetric TSP as well as the graphical TSP.
For the STSP, this is the list of number of facets for small polytopes
Nodes in STSP | # of facets
----------------+--------------
6 | 100
7 | 3437
8 | 194187
9 | 42104442
10 | 51043900866
https://stats.stackexchange.com/questions/346599/svm-support-vector-has-margin-of-0
# SVM : support vector has margin of 0?
I am trying to generate the separating hyperplane of binary classification for 3D points.
Here are my points, which are linearly separable.
Class 0: [[0,0,0], [0,1,1], [1,0,1], [0.5,0.4,0.4]]
Class 1: [[1,3,1], [2,0,2], [1,1,1]]
From sklearn.svm.SVC(kernel='linear'), the following is produced:
w = clf.coef_ = [ 1. 0.5 0.5]
b = clf.intercept_ = -2.0
sv = clf.support_vectors_ =
array([[ 0., 1., 1.],
[ 1., 0., 1.],
[ 2., 0., 2.],
[ 1., 1., 1.]])
The understanding is, if w.dot(x)+b returns a negative value, then x is of Class 0; if positive value, then Class 1. However, w.dot([1,1,1])+b = 0 !! This means that [1,1,1], which is a support vector from Class 1, lies on the separating plane..... while no SVs from Class 0 lie on the sep. plane.
SO MY QUESTION IS...
My data is linearly separable, so theoretically an SVM should have margins >0 for both classes. But here, my SVM has a margin of 0 for class 1 and a margin >0 for class 0. Why is this the case? And if my hyperplane is incorrect, how can I calculate the correct hyperplane? Thank you.
CODE
from sklearn import svm
X0 = [[0,0,0], [0,1,1], [1,0,1], [0.5,0.4,0.4]]
Y0 = [0] * len(X0)
X1 = [[1,3,1], [2,0,2], [1,1,1]]
Y1 = [1] * len(X1)
X = X0 + X1
Y = Y0 + Y1
clf = svm.SVC(kernel='linear')
clf.fit(X, Y)
sv = clf.support_vectors_
w = clf.coef_[0]
b = clf.intercept_[0]
print([w.dot(X0[i])+b for i in range(len(X0))]) # negative class
print([w.dot(X1[i])+b for i in range(len(X1))]) # positive class
• You are asking a question about the fundamentals of svm, but you are using a bit of python that makes it look like a programming question. I don't think this should be closed. Can you plot the convergence (classification error, or Cohens kappa) of the fitted classification function as a function of iterations? – EngrStudent May 22 '18 at 16:30
By default, most SVM implementations are soft-margin SVM, which allows a point to be within the margin, or even on the wrong side of the decision boundary, even if the data is linearly separable. However, there is a penalty associated with each point which violates the traditional SVM constraints.
If we set the coefficient on the penalty to some very large value, we get something closer to hard-margin SVM which does what you'd expect:
>>> clf = svm.SVC(kernel='linear', C = 10000.)
>>> clf.fit(X, Y)
>>> w = clf.coef_[0]
>>> b = clf.intercept_[0]
>>> print([w.dot(X0[i])+b for i in range(len(X0))]) # negative class
[-2.9999999999999987, -0.9999999999999987, -0.9999999999999991, -1.1999999999999988]
>>> print([w.dot(X1[i])+b for i in range(len(X1))]) # positive class
[5.000000000000002, 1.0000000000000004, 1.0000000000000009]
As you can see, the margins are now equal on both sides.
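As a quick sanity check (my own sketch, not from the thread), the separating plane reported in the comments for the hard-margin fit, $2x+2y+0z=3$, can be verified with plain NumPy: the closest points of the two classes sit at equal distances $1/\lVert w\rVert$ on either side:

```python
import numpy as np

# hyperplane reported for the hard-margin fit (large C):
# 2x + 2y + 0z = 3, i.e. w = [2, 2, 0], b = -3
w = np.array([2.0, 2.0, 0.0])
b = -3.0

X0 = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [0.5, 0.4, 0.4]])
X1 = np.array([[1, 3, 1], [2, 0, 2], [1, 1, 1]])

# signed distances to the plane: negative for class 0, positive for class 1
d0 = (X0 @ w + b) / np.linalg.norm(w)
d1 = (X1 @ w + b) / np.linalg.norm(w)

# for a separating hard-margin hyperplane, the closest points of the two
# classes lie at equal distances 1/||w|| on either side
print(d0.max(), d1.min())
```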
• Hi @shimao, thanks for your answer! I tried your C=10000, but it returned hyperplane values w [2. 2. 0.] b -2.9999999999999987 which is equates to inf*x + inf*y + 0*z = -3, so that doesn't help with respect to the hyperplane. So I tried C=2, and then values make more sense :) – tamtam May 20 '18 at 13:48
• @tamtam $2x+2y+0z = -3$ is a perfectly valid hyperplane – shimao May 20 '18 at 13:56
http://www.physicsforums.com/showthread.php?t=98601
# Power and Amplitude of sound wave
by Trooko
Tags: amplitude, power, sound, wave
The displacement amplitude $A$ is given by $$A\,=\,\frac{\Delta{p_o}}{\omega\,\rho\,c},$$ where $\Delta{p_o}$ is the pressure amplitude, $\omega = 2\,\pi\,f$ is the angular frequency, $\rho$ is the material density, and $c$ is the speed of sound in the material, which is given by $$c = \sqrt{\frac{E}{\rho}},$$ where $E$ is Young's (elastic) modulus. The intensity of the sound wave is $I = P/a$, where $P$ is the power of the wave per unit transverse area $a$, and $$P = \tfrac{1}{2}\,\omega^2 A^2 \rho\, c.$$
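Plugging in some illustrative numbers (a 1 kHz tone in air; values chosen by me, not from the thread) shows how the formulas chain together, and that eliminating $A$ gives the equivalent form $\Delta p_o^2/(2\rho c)$ for the power per unit area:

```python
import math

# illustrative numbers: a 1 kHz tone in air
f = 1000.0   # frequency, Hz
dp0 = 0.1    # pressure amplitude, Pa
rho = 1.2    # density of air, kg/m^3
c = 343.0    # speed of sound in air, m/s

omega = 2 * math.pi * f

# displacement amplitude: A = dp0 / (omega * rho * c)
A = dp0 / (omega * rho * c)

# power per unit transverse area: P = (1/2) * omega^2 * A^2 * rho * c
P = 0.5 * omega**2 * A**2 * rho * c

# eliminating A gives the equivalent form P = dp0^2 / (2 * rho * c)
P_alt = dp0**2 / (2 * rho * c)

print(A, P, P_alt)
```

The displacement amplitude comes out around a few tens of nanometers, a reminder of how small the particle motions behind audible sound are.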
https://www.physicsforums.com/threads/the-unruh-effect-and-expansion-of-the-universe.730378/
The Unruh Effect and Expansion of the Universe
1. Dec 29, 2013
JPBenowitz
The Unruh Effect predicts that a uniformly accelerating observer in a vacuum field (full of perturbations) will observe an effective temperature. We know that space is expanding at an accelerating rate. My question is then, in all inertial reference frames would all 'observers' in the universe hypothetically observe this temperature? Could the observation of this temperature be the missing microstates from within blackholes which leads to the information paradox?
2. Dec 29, 2013
phinds
Given the Cosmological Principle, I'd say yes. If such a thing exists, it should be the same everywhere (away from massive bodies)
3. Dec 29, 2013
Staff: Mentor
I would be surprised if the expansion of space (accelerated or not) resembles the acceleration considered for the Unruh effect.
Here is a tricky question: the acceleration (expressed in m/s^2) depends on the distance, but our locally observed temperature cannot depend on that. Which distance would we have to choose?
4. Dec 29, 2013
Bill_K
No. Unruh radiation is observed by an accelerating observer. An observer in an inertial reference frame (even in an expanding universe) is not an accelerating observer.
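For scale (not derived in the thread): the standard Unruh formula $T = \hbar a / (2\pi c k_B)$ predicts a minuscule temperature for any everyday acceleration. A quick sketch:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant [J s]
c = 2.99792458e8         # speed of light [m/s]
k_B = 1.380649e-23       # Boltzmann constant [J/K]

def unruh_temperature(a):
    """Unruh temperature T = hbar * a / (2 * pi * c * k_B) for proper acceleration a [m/s^2]."""
    return hbar * a / (2 * math.pi * c * k_B)

# Even at 1 g the predicted temperature is absurdly small:
print(f"{unruh_temperature(9.81):.2e} K")   # ~4e-20 K
```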
|
2017-08-20 12:00:10
|
|
http://hal.in2p3.fr/view_by_stamp.php?label=IN2P3&langue=fr&action_todo=view&id=in2p3-00712952&version=1
|
HAL : in2p3-00712952, version 1
16th International Conference in Quantum ChromoDynamics (QCD12), Montpellier, France (2012)
Future prospects for the determination of the Wilson coefficient $C_{7\gamma}^\prime$
(2013)
We discuss the possibilities of assessing a non-zero $C_{7\gamma}^\prime$ from the direct and the indirect measurements of the photon polarization in the exclusive $b \to s \gamma^{(*)}$ decays. We focus on three methods and explore the following three decay modes: $B \to K^{*}(\to K_S \pi^0)\gamma$, $B \to K_1(\to K\pi\pi)\gamma$, and $B \to K^{*}(\to K\pi)\ell^+\ell^-$. By studying different New Physics scenarios we show that the future measurement of conveniently defined observables in these decays could provide us with the full determination of $C_{7\gamma}$ and $C_{7\gamma}^\prime$.
Research team(s): Groupe Théorie
Subject(s): Physics / High Energy Physics - Phenomenology; Physics / High Energy Physics - Experiment
in2p3-00712952, version 1 http://hal.in2p3.fr/in2p3-00712952 oai:hal.in2p3.fr:in2p3-00712952 Contributor: Françoise Marechal <> Submitted on: Thursday, June 28, 2012, 15:51:17 Last modified: Thursday, April 4, 2013, 11:52:10
|
2013-05-25 01:18:54
|
|
https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.112.213601
|
# Synopsis: Seeing Just One Photon
Scientists have measured the quantum efficiency of retinal cells using single photons.
Scientists have explored the visual sensitivity of both humans and animals using classical light sources such as lamps and light-emitting diodes. But understanding how retinal cells respond to single photons would enable a precise calculation of their quantum efficiency—the ratio of the number of detected photons to the number of incident photons. Such measurements could lead to creative ways of engineering single-photon detectors based on nature’s design of the eye.
Leonid Krivitsky at the Data Storage Institute in Singapore and his collaborators have now devised an experiment to reliably generate single photons and measure the quantum efficiency of retinal rod cells, which were taken from ten adult male African clawed toads. Krivitsky and his team produced pairs of $532$ nanometer (nm) photons using spontaneous parametric down-conversion—a technique that involves passing a laser beam through a nonlinear crystal to produce two photons with twice the wavelength. One of the $532$-nm photons was sent toward a photodiode, which triggered the release of the other photon into an optical fiber directed to the rod cells. The researchers, reporting in Physical Review Letters, measured current pulses induced by the absorption of the single photons and determined that the quantum efficiency of the rod cells was $29 \pm 4.7\%$, remarkably similar to estimates of human rod cell quantum efficiency based on behavioral experiments.
The authors’ setup is capable of measuring the quantum efficiency of rod cells at different wavelengths, which would provide a better understanding of the functional properties of the eye. Their experiments could furthermore lead to new clues about the origins of diseases resulting in blindness. – Katherine Kornei
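A toy sketch (not the authors' analysis) of how a quantum efficiency is estimated as a detected/incident ratio over many single-photon trials, with its binomial uncertainty:

```python
import random

random.seed(0)

# Toy model (assumed, not from the paper): each incident single photon is
# detected with probability equal to the cell's quantum efficiency
true_qe = 0.29
n_trials = 10_000

detected = sum(random.random() < true_qe for _ in range(n_trials))
qe_hat = detected / n_trials
se = (qe_hat * (1 - qe_hat) / n_trials) ** 0.5   # binomial standard error

print(f"estimated QE = {qe_hat:.3f} +/- {se:.3f}")
```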
|
2020-01-27 14:17:18
|
|
https://picsar.net/performance/warppicsar-performance/
|
## WARP+PICSAR performance
The particle-in-cell code WARP and the library PICSAR have been coupled in order to run WARP efficiently on KNL and also improve performance on older architectures.
### WARP+PICSAR versus WARP alone
To compare WARP+PXR and WARP alone, the considered test case is a homogeneous thermalized hydrogen plasma with thermal velocity $v_{th} = 0.1c$, where $c$ is the speed of light in vacuum. The plasma fills the entire domain of dimension 64 x 64 x 64 μm. The discretization is 400x400x400 cells. There are 40 macro-particles per cell per species to represent the plasma (composed of two species, electrons and protons). This corresponds to a total of 5,120,000,000 particles. Simulations are run on 128 KNL nodes configured in quadrant cache mode with 4 MPI ranks per node. There are therefore 512 MPI subdomains that divide the main domain into cubes of 50x50x50 cells. The tiling then divides each MPI subdomain again into cubes of approximately 8x8x8 cells. Each MPI rank has 16 OpenMP threads to handle tiles. Current deposition is performed with an order-3 B-spline shape factor. On each node, the MCDRAM can contain all allocations.
The original version of WARP appears extremely slow on KNL. Using WARP alone is similar to using PICSAR with the most non-optimized subroutines and MPI only: there is no tiling, and OpenMP is not as efficient as with PICSAR. We therefore obtain a simulation time per particle per iteration per node of around 150 ns. Using WARP+PICSAR, the time per particle per iteration per node drops to 19 ns. This corresponds to a speedup of almost 8.
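A sketch of the figure of merit used throughout this page (the run times below are the reported round numbers, not new measurements):

```python
def ns_per_part_iter_node(wall_s, n_particles, n_iters, n_nodes):
    """Normalize a run's wall-clock time to ns per particle per iteration per node."""
    return wall_s * 1e9 * n_nodes / (n_particles * n_iters)

# Test-case size given in the text: 400^3 cells, 40 macro-particles per cell
# per species, 2 species (electrons and protons)
n_particles = 400**3 * 40 * 2
print(n_particles)  # 5120000000

# One hypothetical iteration at the reported WARP+PICSAR rate on 128 nodes:
# 19 ns corresponds to ~0.76 s of wall-clock per iteration
rate = ns_per_part_iter_node(wall_s=0.76, n_particles=n_particles, n_iters=1, n_nodes=128)
print(round(rate))  # 19

# Reported WARP-alone vs WARP+PICSAR rates: 150 ns vs 19 ns
print(round(150 / 19, 1))  # 7.9, i.e. "a speedup of almost 8"
```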
The same test case has been run on the Haswell partition of CORI and on Edison. We keep the same number of nodes, meaning that the number of cores and the computational power are not equivalent. On CORI, WARP performs the simulation in around 150 ns per particle per iteration per node. WARP+PICSAR behaves very similarly to KNL, with a time per particle per iteration per node of 19 ns (speedup of 8). On Edison, WARP performs the test case in 96 ns per particle per iteration per node. With PICSAR, this drops to 33 ns (speedup of 3), showing that Haswell and KNL perform better than the previous-generation Ivy Bridge.
We have also tested running on KNL using the code compiled for Haswell (`-xCORE-AVX2` instead of `-xMIC-AVX512`). KNL architectures can execute AVX2 instructions, but these instructions do not use the 512-bit vector registers as efficiently as AVX-512 vector instructions. We obtain a time of 27 ns per particle per iteration and per node.
Some performance tests have also been performed with large physical cases. The first one is a 2D physical case of harmonic generation in the interaction of a high-intensity laser with a solid target. The domain is of 4300×6600 cells with 400 million particles. We use Yee's scheme with 16000 iterations. This simulation includes diagnostics every 1000 time steps. Run on 96 nodes on both Edison and CORI KNL, this simulation proves twice as fast on Edison, with a simulation time of 6377 seconds against 13504 seconds on KNL.
### Best KNL configuration
#### Hyper-threading
Hyper-threading is first tested with 4 parallel configurations on a KNL node configured in quadrant cache mode. The 4 parallel configurations are given on the abscissa of Fig. 1; the ordinate is the time per iteration per particle (and per node). For each case, 1, 2 and 4 threads per core are tested (respectively corresponding to the blue, orange and green markers). Fig. 1 shows that WARP+PICSAR does not benefit from hyper-threading: the best performance is obtained with 1 thread per core. Using 2 threads per core, which is the best choice with PICSAR stand-alone, slightly slows down the code. Using 4 threads per core is useless for WARP+PICSAR.
Fig. 1 also reveals that the most efficient hybrid parallel distribution between MPI and OpenMP is the last case, with 32 MPI ranks and 2 cores per rank. This is studied further in the next section.
#### NUMA and memory KNL configurations
The best NUMA and memory KNL configuration is now studied. We focus on 3 configurations: quadrant flat, quadrant cache and SNC4 flat. SNC2 is not studied here. The results of this study are shown in Fig. 2. The abscissa represents different hybrid parallel configurations, as in Fig. 1 but with more data points. The ordinate is the time per iteration per particle and per node in nanoseconds. Fig. 2 reveals that quadrant cache is surprisingly faster than quadrant flat, even though here the code fits in MCDRAM. Averaged over all hybrid parallel distributions, the speedup factor is 1.15. SNC4 is almost as efficient as quadrant cache, although more data points are required. The difference between all configurations is nonetheless very small. Since the quadrant cache mode is the default one on CORI (these nodes have their own partition, so there is no need to reboot KNL nodes), we recommend using this mode for large production cases.
This is also a study of the best hybrid parallel distribution between MPI and OpenMP. As with Fig. 1, Fig. 2 confirms that using a large number of MPI ranks is the most efficient choice for WARP+PICSAR. It seems that 32 MPI ranks with 2 OpenMP threads per rank is the best choice. Performance is quite close with 8 or 64 MPI ranks. Using only OpenMP (1 MPI rank) is clearly slower than the best quadrant cache configuration, by a factor of 1.8. Carefully setting up a run with the best hybrid parallel distribution can therefore have a real impact on performance. In our case, using a fully MPI code is still an efficient possibility for WARP+PICSAR. In any case, even with 1 OpenMP thread per MPI rank, tiling is activated.
#### DRAM versus MCDRAM
We now study the impact of the MCDRAM when the problem fits in the 16 available GB. Fig. 3 shows the results of the study performed on KNL configured in flat mode, with the MCDRAM only and then the DDR only. Having the code in MCDRAM speeds up the runs for every parallel configuration, but by a small average factor of 1.13 in our case.
#### Huge pages
Huge pages are virtual memory pages which are bigger than the default base page size of 4 KB. On NERSC systems, using larger pages can speed up applications. To use larger pages, the corresponding module has to be loaded. For instance, to use 16 MB pages, the module `craype-hugepages16M` needs to be loaded. The code has to be recompiled and run with this module.
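The module swap described above can be sketched as follows (the rebuild step is an assumption; use your project's own build command):

```shell
# On a NERSC Cray system, swap in the 16 MB huge-pages module,
# then rebuild and rerun with the same module loaded:
module load craype-hugepages16M
# ...recompile WARP+PICSAR with your usual build command, then resubmit the job
```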
We have studied the effect of huge pages on WARP+PICSAR for 16 and 128 KNL nodes in quadrant cache mode. The results are shown in Fig. 4. For WARP+PICSAR, the times with huge pages and with the defaults are very similar. Using huge pages therefore does not seem to speed up the code, at least for the considered test case.
### Weak scaling
This section presents weak scaling studies performed with WARP+PICSAR on KNL and on Haswell on CORI.
### Conclusion
WARP, when coupled with PICSAR, is faster than WARP alone, with an average speedup of 8 on CORI (both on Intel Haswell and KNL). According to the results of this study, we recommend keeping the quadrant cache mode on KNL: it provides good performance and is the default KNL mode on CORI. Up to 512 KNL nodes, using many MPI ranks (32 MPI ranks per node) is more efficient than using a lot of OpenMP threads. For our case, 32 MPI ranks per node with 2 OpenMP threads per rank and no hyper-threading is the best hybrid configuration.
Mathieu Lobet, last update: February 9, 2017
|
2018-02-18 08:20:14
|
|
https://tutorials.methodsconsultants.com/tags/stratification/
|
## What Makes Analyzing Survey Data Different?
When most students begin to study statistics, they are generally taught formulas which assume that all of the observations they are analyzing had an equal probability of being selected into the sample. The reason for this assumption is that, given simple random selection, estimation of statistics of interest (e.g. means, regression coefficients, etc.) is straightforward. In addition, assuming equal selection probabilities makes it possible to calculate a statistic’s variance (a measure of the precision of an estimate) without much difficulty.
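A minimal sketch (with made-up numbers) of why ignoring unequal selection probabilities biases an estimate, and how inverse-probability weighting corrects it:

```python
import random

random.seed(1)

# Made-up population: 900 units with y = 10 sampled at p = 0.1,
# and 100 units with y = 50 oversampled at p = 0.5
population = [(10.0, 0.1)] * 900 + [(50.0, 0.5)] * 100
true_mean = sum(y for y, _ in population) / len(population)   # 14.0

# Draw a sample with unequal inclusion probabilities
sample = [(y, p) for y, p in population if random.random() < p]

# Naive mean ignores the design and over-weights the oversampled group
naive = sum(y for y, _ in sample) / len(sample)

# Inverse-probability weighting (weight each unit by 1/p) restores the estimate
weights = [1 / p for _, p in sample]
weighted = sum(w * y for w, (y, _) in zip(weights, sample)) / sum(weights)

print(f"true {true_mean:.1f}  naive {naive:.1f}  weighted {weighted:.1f}")
```

The naive mean lands far above the truth because the high-value group was sampled five times as often; the weighted estimate recovers it.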
|
2019-07-16 08:14:32
|
|
http://math.stackexchange.com/questions/112987/properties-of-a-valid-dfa?answertab=votes
|
# Properties of a valid DFA
Is a DFA required to have transitions on each input symbol from each state defined? If there isn't a path from state q1 to another state on input a, for example, does that invalidate the DFA itself? I'd think it simply means that if the machine reads 'a' when in q1, it doesn't have a corresponding transition defined and hence rejects the input string. The definition of a DFA only requires it not to have multiple transitions per input symbol per state; does that also mean it has to have exactly one transition per input symbol per state?
It all depends on your definition of DFA, but it doesn't particularly matter, because the two notions of DFA are fairly easily proven equivalent. – Cam McLeman Feb 24 '12 at 20:16
@CamMcLeman: The word “equivalent” in your comment has to be interpreted with caution, because people care about the difference between, say, DFA and NFA, although they are fairly easily proven “equivalent” under some standard terminology. – Tsuyoshi Ito Feb 24 '12 at 23:17
Certainly, agreed. For example, if you were in a situation where you cared about the number of states you used, then you'd want to be more careful (though as in Henning's answer, a single extra state is enough in this case). – Cam McLeman Feb 25 '12 at 0:24
## 1 Answer
That depends on the precise definitions you're working with.
It's not uncommon to allow states where not all input symbols have a transition. The standard semantics is then that the automaton will reject any input string that would need one of the missing transitions.
On the other hand, it is trivial to convert such an automaton to one where all symbols always have a transition, just by adding a single global non-accepting state where all of the new transitions will end up, and which has transitions to itself for every symbol.
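That completion construction can be sketched as follows (helper names are hypothetical):

```python
def complete_dfa(states, alphabet, delta, sink="DEAD"):
    """Totalize a partial DFA: route every missing transition to a single
    non-accepting sink state, which loops to itself on every symbol."""
    full = dict(delta)
    for q in list(states) + [sink]:
        for a in alphabet:
            full.setdefault((q, a), sink)
    return states | {sink}, full

def accepts(delta, start, accepting, word):
    q = start
    for a in word:
        q = delta[(q, a)]        # total by construction after completion
    return q in accepting

# Partial DFA over {a, b} accepting exactly "ab"; (q0, "b") etc. are undefined
states = {"q0", "q1", "q2"}
delta = {("q0", "a"): "q1", ("q1", "b"): "q2"}
states, delta = complete_dfa(states, {"a", "b"}, delta)

print(accepts(delta, "q0", {"q2"}, "ab"))   # True
print(accepts(delta, "q0", {"q2"}, "ba"))   # False (falls into the sink)
```

Both automata accept the same language; the completed one simply makes the rejection explicit.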
The convention where transitions can be missing is nearly always used when drawing automata, for obvious reasons of avoiding irrelevant, visually distracting clutter in the diagram. On the other hand, it is useful when proving theorems about automata to assume that there's always a relevant transition. In actual implementations, either of the conventions may end up being the most convenient.
In general, you're assumed to be able to pass from one representation to the other without much thought.
|
2015-06-30 10:29:40
|
|
http://math.stackexchange.com/questions/90419/combination-with-repetition-with-an-upper-bound
|
# Combination with repetition with an upper bound
I am trying to calculate the number of ways to divide $30$ oranges between $10$ kids, with the restriction that each kid will get no more then $5$ oranges. So, as far as I know I need to use the Inclusion–exclusion principle but I'm not sure how exactly. Here's what I have so far:
Count the number of divisions with no restrictions, so we have $\binom{30+9}{10}$. Subtract the number of ways with the restriction that one kid (or two, or three) gets at least $6$, so we have $-\binom{24+9}{10}$, $-\binom{18+9}{10}$, and so on...
How do I continue from here? How do I calculate the intersection of $\binom{24+9}{10}$ and $\binom{18+9}{10}$?
Thanks...
-
Your formula for the unconstrained case is wrong: you should have $\binom{30+10-1}{30}=\binom{39}9$ possibilities. It's the oranges that get chosen by the kids, not the kids that get chosen by the oranges. – Marc van Leeuwen Dec 11 '11 at 14:15
You can use inclusion-exclusion when the problem with a more general but opposite constraint is easier. Here you've already computed the solution without constraints. If the constraint would be that the first kid gets at least $6$ oranges, then start giving those oranges, and find $\binom{30-6+9}{30-6}=\binom{33}9$ solutions. But not only could it be another kid that gets too many oranges, there could be more than one kid at once that gets too much. So the "more general opposite" constraint would be, for any set $S$ of kids, that all kids from $S$ get at least $6$ oranges. For this there are $\binom{30-6s+9}{30-6s}=\binom{39-6s}9$ solutions if $S$ has $s$ kids in it.
By inclusion-exclusion you need to count the solutions for $S=\emptyset$, subtract those for $S$ a singleton, add back for $S$ a doubleton, etc. All in all you get $$\binom{39}9-\binom{10}1\binom{33}9+\binom{10}2\binom{27}9 -\binom{10}3\binom{21}9+\binom{10}4\binom{15}9 -\binom{10}5\binom99$$ solutions, which gives $2\,930\,455$ possibilities, less than $1.5$% of the original $211\,915\,132$.
Added (much later). Alternatively, one could compute the coefficient of $X^{30}$ in $(1+X+X^2+X^3+X^4+X^5)^{10}=\frac{(1-X^6)^{10}}{(1-X)^{10}}$. This is quite easy (the numerator has only $6$ terms of degree${}\leq30$) and gives the same result, in fact even via the same formula.
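Both computations are easy to check by machine; a short sketch doing the inclusion–exclusion sum and, independently, extracting the coefficient of $X^{30}$ from $(1+X+\cdots+X^5)^{10}$:

```python
from math import comb

# Inclusion-exclusion sum from the answer above
ie = sum((-1)**s * comb(10, s) * comb(39 - 6*s, 9) for s in range(6))

# Independent check: coefficient of X^30 in (1 + X + ... + X^5)^10,
# built by repeated polynomial multiplication
poly = [1]
for _ in range(10):
    new = [0] * (len(poly) + 5)
    for i, ci in enumerate(poly):
        for j in range(6):
            new[i + j] += ci
    poly = new

print(ie, poly[30])   # both give 2930455
```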
|
2015-10-08 20:53:55
|
|
http://rlv-stormarn.de/rlv-workbench/rlv-installation/
|
RLV-Installer:
The first (alpha) version of RLV-Installer is ready for testing.
RLVInstaller.zip:
Manuals:
RLVInstaller-DE:
RLVInstaller-EN:
Manually install the RLV for Fortius
The files are to be placed as shown here:
Place the DE_Nortorf24.avi in the directory C:\Program Files\Tacx\Fortius\catalyst\video\DE_Nortorf24
Place the DE_Nortorf24.rlv in the directory C:\Program Files\Tacx\Fortius\catalyst\video
Place the DE_Nortorf24.pgmf in the directory C:\Program Files\Tacx\Fortius\catalyst\programs
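The three copy steps above, as a hypothetical Windows command-prompt sketch (run from the folder holding the unpacked RLV files; adjust the paths to your installation):

```shell
REM Hypothetical sketch of the manual Fortius install steps above
copy DE_Nortorf24.avi  "C:\Program Files\Tacx\Fortius\catalyst\video\DE_Nortorf24\"
copy DE_Nortorf24.rlv  "C:\Program Files\Tacx\Fortius\catalyst\video\"
copy DE_Nortorf24.pgmf "C:\Program Files\Tacx\Fortius\catalyst\programs\"
```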
Manually install the RLV for TTS2.x and TTS3.x
For TTS2.x you need to place the DE_Nortorf24.tts in the directory C:\Users\All Users\Tacx\TTS2\RLV
For TTS3.x you need to place the DE_Nortorf24.tts into C:\Users\All Users\Tacx\TrainerSoftware\TTS3\Trainings\RLV
Depending on your operating system (Windows 7 in the example) the directories may differ. Should that be the case, some additional handiwork is needed:
For the Fortius software: Using RLV-Workbench you can correct the directory of the avi-file.
For TTS 2.x:
1. Do not copy the DE_Nortorf24.TTS.
2. Adjust the paths like you would do for the Fortius.
3. Import the RLV in the TTS program.
Alternatively: Edit the file installedvideos.xml as described below.
Manually install the RLV for the Tacx Video Player
After the videoplayer is installed, the tts file is added to the list of installed RLVs.
You need to find the folder of the TacxVideoPlayer in the All-Users directory (see TTS installation):
In the settings-folder you will find the installedvideos.xml file:
There you can add the new video:
Done 🙂
|
2017-12-15 21:26:03
|
|
https://math.stackexchange.com/questions/3856306/do-these-triangles-with-parallel-hypotenuses/3856321#3856321
|
# Can these triangles with parallel hypotenuses be solved?
This originated as a question that I encountered while woodworking, but nerdy me had to try and see if there was a solution. Say I have two right triangles, and each of them have the same angles, but one is slightly larger than the other (with the right angle for both at the origin). Arranging the triangles on top of one another in this way means that the hypotenuses are parallel and separated by a certain amount. If I pick the most acute angle and name it theta, I know and can express three specific pieces of information (in addition to the obvious piece that one angle is 90 degrees for each triangle):
1. The length of the opposite side of the larger triangle is 0.5.
2. The length of the adjacent side of the smaller triangle is 1.5.
3. The separation between the two hypotenuses is 0.125.
The question is, can I solve the remaining pieces of information to fully define these triangles? I know that you generally need three pieces of info to solve for any given triangle, and I don't have that for each individual triangle, I only have two pieces of information per triangle. But I do have a fixed and known relationship between the two (the separation of the parallel hypotenuses) which I feel like I ought to be able to use in some way as my third piece of information. For the life of me though, I can't figure out how to do it. Any ideas? Am I trying to solve the unsolvable? Is there a way to do this? Thanks in advance for the help!
• It would be good if you could include a diagram to show what you mean. Oct 8 '20 at 8:23
Say the larger triangle is $$AOB$$ with $$A$$ on the $$+y$$-axis, $$O$$ at the origin and $$B$$ on the $$+x$$-axis, and similarly for the smaller triangle $$A'OB'$$. We have $$AO=0.5$$ and $$OB'=1.5$$. Now let $$A'O=x$$; we then have by similar triangles $$\frac{1/2-x}{1/8}=\frac{\sqrt{x^2+9/4}}{3/2}$$ This is a quadratic equation. Solving: $$6-12x=\sqrt{x^2+9/4}$$ $$36-144x+144x^2=x^2+9/4$$ $$143x^2-144x+135/4=0$$ $$x=\frac{144-3\sqrt{159}}{286}=0.3712288\dots\qquad(x<1/2)$$ From there, since we know the triangles are similar, all other data can be computed.
• Your solution should be correct. I have verified that your solution gives around the right value of $x$ on GeoGebra. Oct 8 '20 at 8:52
• Can you share what the base formula is to generate the equation you started with? I understand all the math after that, and I validated with my woodworking project that 0.3712... is in fact the length of the small triangle short side, but I have no idea how you got that equation to start with... apologies in advance if that should have been obvious, I am in no way a studied mathematician lol... Oct 8 '20 at 9:38
• @Valiant See diagram at cdn.discordapp.com/attachments/247802651096514560/… and remember the Pythagorean theorem too! (All triangles in here are similar, actually.) Oct 8 '20 at 9:41
• That diagram is what I have on my paper here, but I don't understand why you took the data from that diagram and arranged it in the equation you made the way you did. I know it works, because I measured it, but I don't know how you got there. From what I deduced so far, the left side is the difference between the vertical sides of the two triangles divided by the 0.125 separation, and the right side is small triangle hypotenuse divided by the horizontal side of the small triangle. I have no idea why you put those values there, is what I am asking, apologies if I'm not explaining properly... Oct 8 '20 at 9:54
• @Valiant No! The numerator on each side represents the ratio of hypotenuse to adjacent, which should be the same since the triangles are similar. The LHS is the ratio for the tiny triangle near $A$, and the RHS is the ratio for $A'OB'$. In particular, the separation of 0.125 is the adjacent of the tiny triangle. Oct 8 '20 at 9:56
In the general case:
$$b\cos\theta = d + a\sin\theta \tag{1}$$
Knowing any three of $$a$$, $$b$$, $$d$$, $$\theta$$, you can find the fourth. Of course, $$\theta$$ is the tricky one, but we can write $$(d+a\sin\theta)^2=b^2\cos^2\theta\quad\to\quad d^2+2a d\sin\theta+a^2\sin^2\theta=b^2(1-\sin^2\theta) \tag{2}$$ Solving the quadratic in $$\sin\theta$$, we get
$$\sin\theta = \frac{-ad\pm b\sqrt{a^2+b^2-d^2}}{a^2+b^2} \tag{3}$$
For non-negative acute $$\theta$$ (and non-negative $$a$$ and $$b$$), we take the "$$\pm$$" to be "$$+$$".
As a woodworker, you would probably prefer to know $$\tan\theta$$. A little work yields
$$\tan\theta = \frac{ab-d\sqrt{a^2+b^2-d^2}}{(a+d)(a-d)} \tag{4}$$
For the specific case described in the question, we have $$a=3/2$$, $$b=1/2$$, $$d=1/8$$, so $$\tan\theta = \frac{1}{143} (48 - \sqrt{159}) = 0.2475\ldots \quad\to\quad \theta\approx 13.9^\circ \tag{5}$$
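Plugging the closed forms above into a short script confirms the numbers (a Python sketch; the function name is mine):

```python
import math

def brace_angle(a, b, d):
    """Acute angle satisfying b*cos(t) = d + a*sin(t), from eq. (3)."""
    s = (-a * d + b * math.sqrt(a*a + b*b - d*d)) / (a*a + b*b)
    return math.asin(s)

a, b, d = 1.5, 0.5, 0.125
t = brace_angle(a, b, d)
assert abs(b * math.cos(t) - (d + a * math.sin(t))) < 1e-9   # eq. (1) holds
print(round(math.degrees(t), 1))   # 13.9
print(round(math.tan(t), 4))       # 0.2475
# matches eq. (4)
assert abs(math.tan(t) - (a*b - d*math.sqrt(a*a + b*b - d*d)) / ((a+d)*(a-d))) < 1e-12
```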
Let $$\angle BEA$$ be $$\theta$$. Then $$\angle FDE = \theta$$ by similarity, so $$\sin \theta = \frac{0.125}{ED} \Rightarrow ED = \frac{0.125}{\sin \theta}$$.
Similarly, $$\angle BCG = 90º - \theta$$, so $$\angle CBG = \theta$$ as well, and $$\cos \theta = \frac{0.125}{BC} \Rightarrow BC = \frac{0.125}{\cos \theta}$$.
Since $$\Delta ABE \sim \Delta ACD$$, we have that:
$$\frac{AB}{AE} = \frac{AC}{AD} \Rightarrow \frac{0.5 - 0.125/\cos \theta}{1.5} = \frac{0.5}{1.5 + 0.125/\sin \theta}$$ $$\Rightarrow \sin \theta \cos \theta (0.5 - 0.125/\cos \theta)(1.5 + 0.125/\sin \theta) = 1.5 \cdot 0.5 \sin \theta \cos \theta$$ $$\Rightarrow \sin \theta \cos \theta (0.75 + 0.0625/\sin \theta - 0.1875/\cos \theta - 0.015625/ (\sin \theta \cos \theta)) = 0.75 \sin \theta \cos \theta$$ $$\Rightarrow 0.75 \sin \theta \cos \theta + 0.0625 \cos \theta - 0.1875 \sin \theta - 0.015625 = 0.75 \sin \theta \cos \theta$$ $$\Rightarrow 0.0625 \cos \theta - 0.1875 \sin \theta - 0.015625 = 0 \Rightarrow \theta \approx 13.9º$$
which gives $$AB, AC, AE, AD$$, and you can find the hypotenuses $$BE$$ and $$CD$$ using Pythagoras.
http://blog.nguyenvq.com/page/31/
## Custom background in LaTeX’s Beamer
In powerpoint or keynote, you can easily insert a background image in your slides. In Beamer, this can be done with little effort. I found instructions here. Just put the following in the preamble:
\usebackgroundtemplate{
\includegraphics{Path to image}
}
## Open remote file while in emacs ansi-term buffer/window: ansi-term + tramp
In emacs, I can edit files remotely using tramp. While ssh’d to a remote server in ansi-term at a specific location, I can open the remote files in emacs as if that remote location is my working directory. This is taken form here. Put the following in the remote server’s .bashrc file:
## Emacs: ansi-term + tramp integration
## in ansi-term, ssh to this remote computer, can do C-x C-f and find file in REMOTE working directory
## http://www.enigmacurry.com/2008/12/26/emacs-ansi-term-tricks/
#Emacs ansi-term directory tracking
# track directory, username, and cwd for remote logons
if [ $TERM = eterm-color ]; then
    # wrap directory-changing commands so the cwd gets reported to emacs
    function eterm-set-cwd {
        $@
        echo -e "\033AnSiTc" $(pwd)
    }

    # set hostname, user, and cwd
    function eterm-reset {
        echo -e "\033AnSiTu" $(whoami)
        echo -e "\033AnSiTc" $(pwd)
        echo -e "\033AnSiTh" $(hostname)
    }

    for temp in cd pushd popd; do
        alias $temp="eterm-set-cwd $temp"
    done

    # set hostname, user, and cwd now
    eterm-reset
fi
For SunOS servers, /usr/ucb is not in path, and whoami is not found. I need to put /usr/ucb in PATH in my .bashrc file. Credit belongs to this thread. Now while ssh’d to a remote server in ansi-term, try C-x C-f, and see the working directory on the remote server available by default.
## edit files remotely: emacs + tramp
Suppose I want to edit a file remotely. I don’t want to download/ftp the file to my computer, edit, and send it back to the remote server. In emacs, I can edit it remotely using tramp via the ssh or rcp protocol. Put following in the .emacs file after installing tramp.
;; tramp stuff
(require 'tramp)
(setq tramp-default-method "ssh")
Read a remote file by C-x C-f /user@your.host.com:/path/to/file. Note that we need the ‘/’ before the username. This is a good reference for tramp.
## Run screen in emacs with ansi-term (combine this with emacs + ess + remote R)
This is actually an update to this post, but since I discovered a few more things, I’ll write a new post. To run screen within a shell buffer in emacs, I tried M-x shell and fired up screen (ditto with M-x term). It gave me this error: Clear screen capability required. I found the solution to this here. To fix this issue, do M-x ansi-term (use /bin/bash when asked of course). screen now works inside emacs. Combine this with running a remote R session in emacs, and there you have it, the perfect work flow for developing and running computationally intensive R code! I can utilize screen to not have my R sessions interrupted, and I can utilize ESS to send code to an R session/buffer. I have to say, this WILL be the way I use R for any computationally-intensive project!
## UPDATE
So screen doesn’t work in emacs after I ssh to a remote server inside ansi-term. I get the error: Cannot find terminfo entry for 'eterm-color'. To fix this, I put the following in my remote server’s .bashrc file (info from here.):
if [ "$(uname)" = "SunOS" ]; then
    if [ "$TERM" = "eterm-color" ]; then
        TERM="xterm"
    fi
fi

I hope there aren’t any more issues. What the previous trick does is check if the system is SunOS, and if so, use xterm. I got the unix command information from here. I got the uname command info from here.

## FINAL UPDATE

To get eterm-color to work in SunOS, put the following in my .bashrc file:

##following to get eterm-color working in SunOS
TERMINFO=${HOME}/.terminfo
export TERMINFO
I guess I’ve been doing this, but I never exported TERMINFO. Didn’t know this was needed. Make sure the eterm-files are copied over (see top of post). Now everything should work, hopefully flawlessly. To summarize, copy eterm files into ~/.terminfo, and put the TERMINFO stuff in ~/.bashrc.
Now screen works in emacs. An issue that arose from this method is that when screen is run inside emacs, I can’t execute ess-remote anymore because I can’t press M-x. In ansi-term:

- C-c C-j : behave like emacs, cursor can go anywhere
- C-c C-k : behave like a terminal (default)

This is documented here and here. Press C-c C-j and I can press M-x again. However, ess-remote still doesn’t work.
I guess when I use screen, I am forced to copy and paste code. If I really must use screen with ESS, do the regular M-x shell. After logging into the remote server, execute TERM='vt100' in the shell. Now, run screen -> R -> ess-remote. I can send code with keypresses now, but screen steals some of my emacs key bindings. To fix this, put the following,
escape ^Oo
in my remote ~/.screenrc file to switch the default command key from C-a to C-o so it doesn’t conflict with my emacs key bindings (documented here).
More information on ansi term (like remaping C-x to C-c) can be found here.
This was a long post. Summary:
1. ansi-term in emacs behaves VERY much like a terminal. I can run vi, emacs, etc, inside of it. Emacs behavior is ‘term’ and ‘shell’.
2. I can change things by editing the env variable, TERM.
3. Change keybinding in the remote .screenrc file.
NEED TO DO: get ess-remote to work with ansi-term and screen in emacs!
UPDATE2: It seems the best way to do things so far is to use ansi-term -> ssh to remote server -> screen -> R, then go to line run (C-c C-j) and copy and paste code from there. To get screen commands to work (like detach, etc), need to go back to char run (C-c C-k). Remember, I now use C-o instead of C-a (defined in .screenrc); this only works on a regular terminal or in emacs with ansi-term, not using ‘shell’ in emacs with the hack I mentioned up there.
## Displaying code with Syntax Highlighting on websites and blogger
This actually took me quite some time as some information out on the web are deprecated and because I have little to no knowledge of html, javascript, and css.
Use Syntax Highlighting, the latest version being here. Set up instructions here. Note that the code for the two css files is not closed correctly, so you need to change the closing to ‘/>’. For blogger, there is some kind of line-break issue, and I don’t want to turn the line break feature on blogger off, so based on the comment at the end of the page, add ‘SyntaxHighlighter.config.bloggerMode = true;’ to the code.
To use in blogger, type, in html mode,
<pre class="brush:js">
code in here
</pre>
where you type whatever language you want after brush:. Here is a list.
Example 1:
testing;
123;
123;
Paste the following in after HEAD declaration in the blogger template. I am using the Emacs theme of course. NOTE I GUESS SYNTAX HIGHLIGHTER IS NOT DISPLAYING the “/>” when the two css files end, REMEMBER to do so!
<link href="http://alexgorbatchev.com/pub/sh/current/styles/shCore.css" rel="stylesheet" type="text/css" />
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shCore.js" type="text/javascript"></script>
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJScript.js" type="text/javascript"></script>
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCSharp.js" type="text/javascript"></script>
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushSql.js" type="text/javascript"></script>
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushXml.js" type="text/javascript"></script>
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushBash.js" type="text/javascript"></script>
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushCpp.js" type="text/javascript"></script>
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushJava.js" type="text/javascript"></script>
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPerl.js" type="text/javascript"></script>
<script src="http://alexgorbatchev.com/pub/sh/current/scripts/shBrushPython.js" type="text/javascript"></script>
<script type="text/javascript">
SyntaxHighlighter.config.bloggerMode = true;
SyntaxHighlighter.all();
</script>
...
When pasting html / javascript, etc, if blogger gives an error, just hit ignore!
Copy and paste code by hitting the view source icon!
## Run a remote R session in emacs: emacs + ESS + R + ssh
I don’t know how, but somehow, I stumbled on how to run a remote R session in emacs.
Since Spring 2006 I’ve always used emacs with ESS to run R (did it on windows, switched to linux for years, and most recently, on my macbook). I liked this workflow because I get the same usual interface across multiple platforms. Plus, I use emacs for everything computery or scientific, like coding in Python or C.
Regarding the same interface across multiple platforms, I use, for example, emacs + ESS + R whenever I remotely log into the remote servers dedicated to computing. I just need to ssh into the server, fire up emacs and fire up R. However, I almost always write all of my code on the local computer, and when I’m ready to run the final code, I either run it as a batch script (utilizing nohup and &), through screen (to keep the session running after I log out, see my post on R with unix tools), or through emacs. These days, I’ve been doing it with screen mainly so disconnects to the server won’t interrupt my script.
I just found yet another way to do this: write code on my local computer and then send code to a remote R session in my local emacs. I ran into this by googling ‘emacs ess multiple R session’. Instructions are described here (section 3.3: ESS on remote computers). We need this file for things to work. This site clarified how to get graphics to work.
Instructions as follow:
1. Download the ssh.el file.
2. Install it the usual way or place it in ~/elisp or ~/.emacs.d. In your emacs init file, add:
;; add path to emacs
;; http://edward.oconnor.cx/2005/09/installing-elisp-files
;; load ssh.el file from elisp
;; this is to run ESS remotely on another computer in my own emacs, or just plain
;;
(require 'ssh)
3. Fire up emacs. Type ‘M-x ssh’. For the host settings, do something like ‘-X -C username@server.com’ (X is for X windows forwarding, C is for compression of graphics, so plots can be displayed faster). Type in password.
4. You will be logged into a shell session on your server. Fire up R by typing R then enter. R is now running in an emacs buffer. Type M-x ess-remote. For dialect, select r.
5. Open up any .R file on your computer, and use the usual keyboard shortcuts to send code to the remote R session.

We can also achieve the same results without the ssh.el file. In emacs, type M-x shell. In the shell buffer, ssh into the server and then run R. Type M-x ess-remote and everything should still work.
Next thing to get working is to open remote files in my local emacs.
## .emacs file
I decided to share my .emacs file, which I will put on my google sites. Here is the file.
## Emacs: AucTeX + Rubber + Sweave
I got rubber to work with AucTeX and Sweave (Rnw) files with the help of this.
Basically, combined with my other stuff, I tweaked my .emacs file to look like:
;;following is AucTeX with Sweave -- works
(setq TeX-file-extensions
      '("Snw" "Rnw" "nw" "tex" "sty" "cls" "ltx" "texi" "texinfo"))
(add-hook 'LaTeX-mode-hook
          (lambda ()
            (add-to-list 'TeX-command-list
                         '("Sweave" "R CMD Sweave %s"
                           TeX-run-command nil (latex-mode) :help "Run Sweave") t)
            (add-to-list 'TeX-command-list
                         '("LatexSweave" "%l %(mode) %s"
                           TeX-run-TeX nil (latex-mode) :help "Run Latex after Sweave") t)
            ;; following for rubber, taken from same site as next paragraph, http://www.nabble.com/sweave-and-auctex-td23492805.html, open to view the pdf
            (add-to-list 'TeX-command-list
                         '("RubberSweave" "rubber -d %s && open '%s.pdf'"
                           TeX-run-command nil t) t)
            (setq TeX-command-default "Sweave")))
;; AucTeX with rubber
;; http://www.nabble.com/sweave-and-auctex-td23492805.html
;; '("Rubber" "rubber -d %t && xpdf '%s.pdf'" TeX-run-command nil t) t ;; change by vinh: open instead of xpdf
(add-to-list 'TeX-command-list
             '("Rubber" "rubber -d %t && open '%s.pdf'" TeX-run-command nil t) t)
Now, when an Rnw file is open, I can press C-c C-c, select Sweave. Then repeat, select RubberSweave (or LatexSweave).
## LaTeX in blogger, pt 2
In my previous post on this topic, I didn’t get LaTeX to work in Blogger because forkosh closed their mimetex service to the public. For LaTeX to work in blogs, I would either have to switch to wordpress or get my own host and install mimetex. The First option wasn’t too appealing as I’d like to keep everything google since a lot of my personal services are hosted here (yes, I’m not afraid of google having too much information about myself). Second option also wasn’t feasible. I found out from some more searching that codecogs is generous enough to host this kind of service. I updated wolverine’s script in firefox/greasemonkey with this, and now I have an UnLaTeX button as well! Really cool. To use, in compose mode in blogger, type dollar sign dollar sign LaTeX code dollar sign dollar sign, then hit the latex button. Bamm! To see original code, hit UnLatex. Here is an example.
Looks good ehh? Optimally I would like blogger to have a LaTeX feature, but this suffices for now. This is different than before because I now have an unlatex command. This is useful because when codecogs goes down I am able to recover the original LaTeX code.
Hopefully for “LaTeX in blogger, pt 3” a native LaTeX feature in blogger will be available. UPDATE: forgot to mention that I found codecogs on here first.
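As an aside, the codecogs service works by URL-encoding the LaTeX source into an image URL, which is roughly what the greasemonkey button produces. A minimal Python sketch (the exact endpoint path and the helper name are my assumptions):

```python
from urllib.parse import quote

def codecogs_url(latex: str) -> str:
    """Build an image URL that renders the given LaTeX via codecogs."""
    return "https://latex.codecogs.com/png.latex?" + quote(latex)

print(codecogs_url(r"\frac{a}{b}"))
# https://latex.codecogs.com/png.latex?%5Cfrac%7Ba%7D%7Bb%7D
```

Because the original LaTeX survives in the URL, it can always be decoded back, which is exactly what makes an UnLaTeX button possible.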
## the next big thing from google: WAVE
Google just recently unveiled Google WAVE (keynote presentation at Google I/O 2009). I read it on the news, googled it, went to the blog, and watched the video.
The video is quite long (1+hr, 300+mb). I downloaded it and viewed it. Looks like it is Google’s next big product. Email has been invented for 40+ years. They are attempting to re-invent email as if it were invented in today’s time. As always, based on cloud computing (hosted by google), uses html5 as the platform. It combines email, photos, IM, discussion board, etc., all into one interface. The API is going to allow other sites to embed and use it as well. Super cool!
https://inverseprobability.com/talks/lawrence-cwtec17/where-next-for-ai.html
at CWTech AI Conference on Oct 3, 2017
Abstract
Our current generation of artificial intelligence techniques are driven by data. But also we expect to be able to deploy artificial intelligence techniques on data. What does that mean, is it a contradiction? How will this effect the wider technology landscape? Is it simply a matter of refining deep neural nets? Or are more disruptive technologies needed? What will be the challenges of deploying AI systems?
Introduction
The Gartner Hype Cycle
The Gartner Hype Cycle tries to assess where an idea is in terms of maturity and adoption. It splits the evolution of technology into a technological trigger, a peak of expectations followed by a trough of disillusionment and a final ascension into a useful technology. It looks rather like a classical control response to a final set point.
import pods
from ipywidgets import IntSlider
pods.notebook.display_plots('ai-bd-dm-dl-ml-google-trends{sample:0>3}.svg',
'../slides/diagrams/data-science/', sample=IntSlider(0, 1, 4, 1))
Google trends gives us insight into how far along various technological terms are on the hype cycle.
Examining Google trends for ‘artificial intelligence’, ‘big data’, ‘data mining’, ‘deep learning’ and ‘machine learning’ we can see that ‘artificial intelligence’ may be entering a plateau of productivity, ‘big data’ is entering the trough of disillusionment, and ‘data mining’ seems to be deeply within the trough. On the other hand ‘deep learning’ and ‘machine learning’ appear to be ascending to the peak of inflated expectations having experienced a technology trigger.
For deep learning that technology trigger was the ImageNet result of 2012 (Krizhevsky, Sutskever, and Hinton 2012). This step change in performance on object detection in images was achieved through convolutional neural networks, popularly known as ‘deep learning’.
Lies and Damned Lies
There are three types of lies: lies, damned lies and statistics
Benjamin Disraeli 1804-1881
Benjamin Disraeli said that there are three types of lies: lies, damned lies and statistics. Disraeli died in 1881, 30 years before the first academic department of applied statistics was founded at UCL. If Disraeli were alive today, it is likely that he’d rephrase his quote:
There are three types of lies: lies, damned lies and big data.
Why? Because the challenges of understanding and interpreting big data today are similar to those that Disraeli faced in governing an empire through statistics in the latter part of the 19th century.
The quote lies, damned lies and statistics was credited to Benjamin Disraeli by Mark Twain in his autobiography. It characterizes the idea that statistics can be made to prove anything. But Disraeli died in 1881 and Mark Twain died in 1910. The important breakthrough in overcoming our tendency to overinterpret data came with the formalization of the field through the development of mathematical statistics.
Data has an elusive quality, it promises so much but can deliver little, it can mislead and misrepresent. To harness it, it must be tamed. In Disraeli’s time during the second half of the 19th century, numbers and data were being accumulated, the social sciences were being developed. There was a large scale collection of data for the purposes of government.
The modern ‘big data era’ is on the verge of delivering the same sense of frustration that Disraeli experienced, the early promise of big data as a panacea is evolving to demands for delivery. For me, personally, peak-hype coincided with an email I received inviting collaboration on a project to deploy “Big Data and Internet of Things in an Industry 4.0 environment”. Further questioning revealed that the actual project was optimization of the efficiency of a manufacturing production line, a far more tangible and realizable goal.
The antidote to this verbiage is found in increasing awareness. When dealing with data the first trap to avoid is the games of buzzword bingo that we are wont to play. The first goal is to quantify what challenges can be addressed and what techniques are required. Behind the hype fundamentals are changing. The phenomenon is about the increasing access we have to data. The manner in which customers’ information is recorded and processes are codified and digitized with little overhead. Internet of things is about the increasing number of cheap sensors that can be easily interconnected through our modern network structures. But businesses are about making money, and these phenomena need to be recast in those terms before their value can be realized.
Mathematical Statistics
Karl Pearson (1857-1936), Ronald Fisher (1890-1962) and others considered the question of what conclusions can truly be drawn from data. Their mathematical studies act as a restraint on our tendency to over-interpret and see patterns where there are none. They introduced concepts such as randomized control trials that form a mainstay of our decision making today, from government, to clinicians to large scale A/B testing that determines the nature of the web interfaces we interact with on social media and shopping.
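As an illustration of the kind of large-scale A/B testing mentioned above, the workhorse is a two-proportion z-test comparing conversion rates between the control and treatment interfaces. A minimal sketch (the numbers are made up):

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Pooled z statistic for comparing two conversion rates."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p = (successes_a + successes_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))      # standard error
    return (p_b - p_a) / se

# hypothetical experiment: 12% vs 15% conversion on 1000 users each
z = two_proportion_z(120, 1000, 150, 1000)
print(round(z, 2))   # 1.96, right at the usual 5% significance threshold
```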
Their movement did the most to put statistics to rights, to eradicate the ‘damned lies’. It was known as ‘mathematical statistics’. Today I believe we should look to the emerging field of data science to provide the same role. Data science is an amalgam of statistics, data mining, computer systems, databases, computation, machine learning and artificial intelligence. Spread across these fields are the tools we need to realize data’s potential. For many businesses this might be thought of as the challenge of ‘converting bits into atoms’. Bits: the data stored on computer, atoms: the physical manifestation of what we do; the transfer of goods, the delivery of service. From fungible to tangible. When solving a challenge through data there are a series of obstacles that need to be addressed.
Firstly, data awareness: what data you have and where it’s stored. Sometimes this includes changing your conception of what data is and how it can be obtained. From automated production lines to apps on employee smart phones. Often data is locked away: manual log books, confidential data, personal data. For increasing awareness an internal audit can help. The website data.gov.uk hosts data made available by the UK government. To create this website the government’s departments went through an audit of what data they each hold and what data they could make available. Similarly, within private businesses this type of audit could be useful for understanding their internal digital landscape: after all the key to any successful campaign is a good map.
Secondly, availability. How well are the data sources interconnected? How well curated are they? The curse of Disraeli was associated with unreliable data and unreliable statistics. The misrepresentations this leads to are worse than the absence of data as they give a false sense of confidence to decision making. Understanding how to avoid these pitfalls involves an improved sense of data and its value, one that needs to permeate the organization.
The final challenge is analysis, the accumulation of the necessary expertise to digest what the data tells us. Data requires interpretation, and interpretation requires experience. Analysis is proving to be a bottleneck due to a skill shortage, a skill shortage made more acute by the fact that, ideally, analysis should be carried out by individuals not only skilled in data science but also equipped with the domain knowledge to understand the implications in a given application, and to see opportunities for improvements in efficiency.
‘Mathematical Data Science’
As a term ‘big data’ promises much and delivers little, to get true value from data, it needs to be curated and evaluated. The three stages of awareness, availability and analysis provide a broad framework through which organizations should be assessing the potential in the data they hold. Hand waving about big data solutions will not do, it will only lead to self-deception. The castles we build on our data landscapes must be based on firm foundations, process and scientific analysis. If we do things right, those are the foundations that will be provided by the new field of data science.
Today the statement “There are three types of lies: lies, damned lies and ‘big data’” may be more apt. We are revisiting many of the mistakes made in interpreting data from the 19th century. Big data is laid down by happenstance, rather than actively collected with a particular question in mind. That means it needs to be treated with care when conclusions are being drawn. For data science to succeed it needs the same form of rigour that Pearson and Fisher brought to statistics, a “mathematical data science” is needed.
You can also check my blog post on Lies, Damned Lies and Big Data.
Electricity
There are parallels between the deployment of machine learning solutions and the adoption of electricity as a means of powering industry. Tim Harford explores the reasons why it took time to exploit the power of electricity in the manufacturing industry. The true benefit of electricity came when machinery had electric motors incorporated. Substituting a centralized steam engine in a manufacturing plant with a centralized electric motor didn’t reduce costs or improve the reconfigurability of a factory. The real advantages came when the belt drives that were necessary to redistribute power were replaced with electric cables and energy was transformed into motion at the machine rather than centrally. This gives a manufacturing plant reconfigurability.
We can expect to see the same thing with our machine learning capabilities. In the analogy our existing software systems are the steam power, and data driven systems are equivalent to electricity. Currently software engineers create information processing entities (programs) in a centralized manner, whereas data driven systems are reactive and responsive to their environment. Just as with electricity this brings new flexibility to our systems, but new dangers as well.
Boulton and Watt’s Steam Engine
James Watt’s steam engine contained an early machine learning device. In the same way that modern systems are component based, his engine was composed of components. One of these is a speed regulator, sometimes known as Watt’s governor. The two balls in the center of the image, when spun fast, rise, and through a linkage mechanism reduce the flow of steam into the engine, regulating its speed.
References
Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. 2012. “ImageNet Classification with Deep Convolutional Neural Networks.” In Advances in Neural Information Processing Systems 25, 1097–1105.
https://docs.microsoft.com/en-us/dotnet/visual-basic/language-reference/modifiers/notoverridable
# NotOverridable (Visual Basic)
Specifies that a property or procedure cannot be overridden in a derived class.
## Remarks
The NotOverridable modifier prevents a property or method from being overridden in a derived class. The Overridable modifier allows a property or method in a class to be overridden in a derived class. For more information, see Inheritance Basics.
If the Overridable or NotOverridable modifier is not specified, the default setting depends on whether the property or method overrides a base class property or method. If the property or method overrides a base class property or method, the default setting is Overridable; otherwise, it is NotOverridable.
An element that cannot be overridden is sometimes called a sealed element.
You can use NotOverridable only in a property or procedure declaration statement. You can specify NotOverridable only on a property or procedure that overrides another property or procedure, that is, only in combination with Overrides.
## Combined Modifiers
You cannot specify Overridable or NotOverridable for a Private method.
You cannot specify NotOverridable together with MustOverride, Overridable, or Shared in the same declaration.
## Usage
The NotOverridable modifier can be used in these contexts:
Function Statement
Property Statement
Sub Statement
https://direct.mit.edu/evco/article-abstract/26/1/145/1065/Co-Optimization-Free-Lunches-Tractability-of?redirectedFrom=fulltext
Co-optimization problems often involve settings in which the quality (utility) of a potential solution is dependent on the scenario within which it is evaluated, and many such scenarios exist. Maximizing expected utility is simply the goal of finding the potential solution whose expected utility value over all possible scenarios is best. Such problems are often approached using coevolutionary algorithms. We are interested in the design of generally well-performing black-box algorithms for this problem, that is, algorithms which have access to the utility function only via input–output queries. We research this matter by focusing on three main questions: 1) are some algorithms strictly better than others when judged in aggregation over all possible instances of the problem? that is, is there “free lunch”? 2) do optimal algorithms exist? and 3) if so, do they have a tractable implementation? For a specific expected-utility maximization context, involving several assumptions and performance choices, we answer all three questions affirmatively and concretely: we provide examples of free lunch; we describe the general operation of optimal algorithms; we characterize situations when this operation has a very simple and efficient implementation, situations when the computational cost can be significantly reduced, and situations when tractability of optimal algorithms might be out of reach.
|
2022-06-29 14:26:02
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8259164690971375, "perplexity": 598.1333340607094}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103639050.36/warc/CC-MAIN-20220629115352-20220629145352-00319.warc.gz"}
|
http://mathhelpforum.com/algebra/20961-find-numerical-value-x-y-print.html
|
# Find the numerical value of x/y
• Oct 20th 2007, 06:22 PM
DivideBy0
Find the numerical value of x/y
If I try this question in different ways I get a different answer:
For positive $x,y,z$,
$\frac{y}{x-z}=\frac{x+y}{z}=\frac{x}{y}$
Find the numerical value of $\frac{x}{y}$
(Using the fact that $\frac{a}{b}=\frac{c}{d}=\frac{a+kc}{b+kd}$),
Solution 1:
$\frac{y+(x+y)}{x-z+(z)}=\frac{x}{y}$
$\implies \frac{2y+x}{x}=\frac{x}{y}$
$\implies 2y+x=y$
$\frac{x}{y}=-1$
Solution 2:
$\frac{x}{y}=\frac{y+(x+y)+(x)}{x-z+(z)+(y)}=\frac{2(x+y)}{x+y}=2$
• Oct 20th 2007, 06:28 PM
DivideBy0
Nope, it works for all k, supposedly.
$\frac{a+kc}{b+kd}=\frac{a(1+k(\frac{c}{a}))}{b(1+k(\frac{d}{b}))}=\frac{a(1+k(\frac{c}{a}))}{b(1+k(\frac{c}{a}))}=\frac{a}{b}$
• Oct 20th 2007, 06:28 PM
Jhevon
Quote:
Originally Posted by topsquark
Where did you get this identity from? It is not true for all k:
$\frac{1}{2} = \frac{3}{6} = \frac{1 + 3k}{2 + 6k}$
is only valid for k = 0.
-Dan
actually, it is true otherwise. did you try other values for k?
• Oct 20th 2007, 06:30 PM
DivideBy0
Ooops, look how I canceled the x's in the first solution. Happens way too much. :eek:
• Oct 20th 2007, 06:32 PM
ticbol
And
if (2y +x)/x = x/y
Then (2y + x) is not equal to y; cross-multiplying gives $y(2y+x)=x^2$.
• Oct 20th 2007, 06:33 PM
topsquark
Quote:
Originally Posted by DivideBy0
For positive $x,y,z$,
$\frac{y}{x-z}=\frac{x+y}{z}=\frac{x}{y}$
Find the numerical value of $\frac{x}{y}$
Take the equations like so:
$\frac{y}{x-z} = \frac{x+y}{z}$
and
$\frac{y}{x-z} = \frac{x}{y}$
(The third equation is implied by these two.)
The first one says:
$yz = (x + y)(x - z)$
$\frac{z}{y} = \left ( \frac{x}{y} + 1 \right ) \left ( \frac{x}{y} - \frac{z}{y} \right )$
and the second says
$y^2 = x(x - z)$
$1 = \left ( \frac{x}{y} \right ) \left ( \frac{x}{y} - \frac{z}{y} \right )$
Now define $a = \frac{x}{y}$ and $b = \frac{z}{y}$ and you get the system:
$b = (a + 1)(a - b)$
and
$1 = a(a - b)$
You are after the value of a.
-Dan
• Oct 20th 2007, 06:33 PM
DivideBy0
thanks, lol :o
Interesting approach @ topsquark
geeze, you guys are replying so faast today
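For the record, Solution 2 gives the right value. Clearing denominators in the two independent ratios yields $y^2=x(x-z)$ and $y(x+y)=xz$; eliminating $z$ gives $2y^2+xy-x^2=(2y-x)(y+x)=0$, and positivity forces $x=2y$, i.e. $\frac{x}{y}=2$ (for example $x=2$, $y=1$, $z=\frac{3}{2}$ makes all three ratios equal 2). A quick sympy verification (my sketch, not a post from the thread):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

# y/(x-z) = x/y  and  (x+y)/z = x/y, with denominators cleared
eqs = [sp.Eq(y**2, x * (x - z)), sp.Eq(y * (x + y), x * z)]
sols = sp.solve(eqs, [x, z], dict=True)

# positivity rules out x = -y, leaving x = 2*y, z = 3*y/2
ratios = {sp.simplify(s[x] / y) for s in sols}
print(ratios)
```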
|
2017-11-18 19:42:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 24, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8709052205085754, "perplexity": 4379.715888783883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805023.14/warc/CC-MAIN-20171118190229-20171118210229-00327.warc.gz"}
|
https://www.physicsforums.com/threads/conservative-electric-field.634094/
|
# Conservative electric field
## Homework Statement
Is the field (given below) in a simply connected region of space conservative
E = A[y e_x + x e_y]
## The Attempt at a Solution
What is a conservative electric field & how do I find whether it is . . . . .
## Answers and Replies
Ah!
If a vector field is conservative, then the line integral around any closed loop in the domain is zero; on a simply connected domain this is equivalent to the curl of the field vanishing.
Oops, sorry.
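To make the replies concrete: on a simply connected region the closed-loop line integral of E vanishes for every loop exactly when the curl of E is zero. For E = A(y e_x + x e_y), the z-component of the curl is ∂E_y/∂x − ∂E_x/∂y = A − A = 0, so the field is conservative, with potential V = −Axy. A small sympy check (my sketch, not part of the original thread):

```python
import sympy as sp

x, y, z, A = sp.symbols('x y z A')

# The field from the problem: E = A*(y e_x + x e_y), no z-component
Ex, Ey, Ez = A * y, A * x, sp.Integer(0)

# curl E = (dEz/dy - dEy/dz, dEx/dz - dEz/dx, dEy/dx - dEx/dy)
curl = (sp.diff(Ez, y) - sp.diff(Ey, z),
        sp.diff(Ex, z) - sp.diff(Ez, x),
        sp.diff(Ey, x) - sp.diff(Ex, y))
print(curl)  # -> (0, 0, 0), so E is conservative on a simply connected region

# A potential: V = -A*x*y reproduces the field via E = -grad V
V = -A * x * y
print((-sp.diff(V, x), -sp.diff(V, y)))  # -> (A*y, A*x)
```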
|
2021-05-17 19:42:07
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8440797924995422, "perplexity": 779.876538296789}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243992440.69/warc/CC-MAIN-20210517180757-20210517210757-00263.warc.gz"}
|
https://mathoverflow.net/questions/290717/fields-of-definition-of-elliptic-curves
|
# Fields of Definition of Elliptic Curves
I am currently studying the theory of complex multiplication and I find myself confused by the language in a lot of the literature.
In Silverman's Advanced Topics in the Arithmetic of Elliptic Curves, Silverman uses the term model without ever defining it (as far as I can see). What does he mean? F-isomorphism class?
In particular, in the proof of Theorem II.2.3 he states "We take a model for $E$ defined over $H=K(j(E))$". Why can one swap out $E$ for a model defined over $K(j(E))$?
Another example of this is in Diamond, Darmon and Taylor's paper on Fermat's last theorem. In remark 1.3, it states that any elliptic curve $E/\mathbb{C}$ with CM is defined over an abelian extension of $K= \mathrm{End}_{\mathbb{C}}(E)\otimes \mathbb{Q}$.
I assume they are citing the fact that $K(j(E))$ is an abelian extension of $K$, but why is $E$ defined over $K(j(E))$? Is this even the supposed abelian extension $E$ is defined over? If not, which one is it?
• Two elliptic curves in Weierstrass form with $j(E) = j(E')$ are isomorphic over $\overline{\mathbb{Q}}$, and $E_a : y^2 = x^3 - \frac{27 a}{a-1728} x-\frac{27 a}{a-1728}$ has $j(E_a) = a$. If $E_a$ has CM by a subring of $\mathbb{Q}(\sqrt{-d})$ then you'll take $\mathbb{Q}(\sqrt{-d},a)$ as the field of definition of $E_a$ because $\text{End}(E_a)$ is defined over it. – reuns Jan 15 '18 at 16:11
What's usually meant when phrased this way is that within the $\overline K$-isomorphism class of $E$, there is an elliptic curve defined over $K(j(E))$. An indeed, for any elliptic curve $E$ defined over $\overline{\mathbb Q}$, there is an elliptic curve $E'$ defined over $\mathbb Q(j(E))$ that is $\overline{\mathbb Q}$-isomorphic to $E$. So that's the sort of model that one often takes. All this has nothing to do with CM, and is very elementary.
If $E$ has CM with $K=\text{End}_{\mathbb C}(E)\otimes\mathbb Q$, then part of the basic theory of CM is that $K(j(E))$ is an abelian extension of $K$, and indeed if $\text{End}_{\mathbb C}(E)$ is the full ring of integers of $K$, then $K(j(E))$ is the maximal abelian everywhere unramified extension of $K$ (the Hilbert class field of $K$). This is far less elementary, but sufficiently well-known that DDT probably didn't feel it necessary to give a reference. The fact that we can find a model for $E$ over $K(j(E))$ follows from the previous paragraph. We can actually find such a model over $\mathbb Q(j(E))$, but probably they want the endomorphisms to also be defined over the field, and thus need to take the compositum with $K$.
• Slight correction: $K(j(E))$ is the maximal abelian everywhere unramified extension of $K$ if the endomorphism ring ${\mathop{\rm End}}_{\bf C}(E)$ is the full ring of integers of $K$, but not in general (when the extension $K(j(E))/K$ is still abelian but can be ramified). – Noam D. Elkies Jan 14 '18 at 22:57
|
2019-11-14 21:07:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9607922434806824, "perplexity": 109.02143072235671}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668539.45/warc/CC-MAIN-20191114205415-20191114233415-00478.warc.gz"}
|
https://discourse.julialang.org/t/how-to-plot-direction-field-with-4-places-of-decimal-for-the-y-axis/88109
|
# How to Plot Direction Field with 4 places of decimal for the y-axis
Hi all,
I wonder whether Julia plotting can plot y-axis tick labels for numbers with lots of decimals, 0.0001 etc.
It is only plotting 0.2, 0.4, …
For the simple differential equation:
\frac{dq}{dt} = 300\,(0.01 - 10^{-6}q)
this is my code:
using LinearAlgebra, Plots
xs = 0:5:50
ys = -5e-4:5e-5:2e-4  # float literals: 10^(-4) on an Int raises a DomainError in Julia
# dq/dt = f(q): each arrow points along (dt, dq) = (1, 3 - 3e-4*q), normalized
df(x, y) = normalize([1, 3 - 3e-4 * y])
xxs = [x for x in xs for y in ys]
yys = [y for x in xs for y in ys]
Plots.quiver(xxs, yys, quiver=df)
plot!([1e-4], seriestype="hline", linestyle=:dash, color=:green, label="y(t)", legend=:outerright)
Not sure what the question is but if you want to plot yticks with 4 decimals you can do yticks = 0:0.0001:1 (or whatever the range of your function is) but you might end up with loads of ticks.
|
2022-12-02 20:32:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5773215293884277, "perplexity": 5693.687535986264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710916.40/warc/CC-MAIN-20221202183117-20221202213117-00583.warc.gz"}
|
https://blog.pjam.me/posts/direct-upload-to-s3/
|
# Direct Upload to S3 with CORS
## EDIT
Everything detailed in this article has been wrapped up in this gem, you should give it a look !
## Preface
Since the beginning of September, Amazon has added CORS support to S3. As this is quite recent, there is not yet a lot of documentation or many tutorials about how to set everything up and running for your app.
Furthermore, this jQuery plugin is awesome, mainly for the progress bar handling, but sadly the example in the wiki is obsolete.
If somehow you’re working with heroku you might have already faced the 30s limit on each requests. There are some alternatives, such as the extension of the great carrier wave gem, carrierwave direct. I gave it a quick look, but I found it quite crappy, as it forces you to change your carrier wave settings (removing the store_dir method, really ?) and it only works for a single file. So I thought it would be better to handle upload manually for big files, and rely on vanilla carrier_wave for my other small uploads.
I found other interesting examples but they all lacked important things, and none of them worked out of the box, hence this short guide. This tutorial is inspired by that post and that one.
First you’ll need to setup your bucket to enable CORS under certain conditions.
<CORSConfiguration>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
</CORSRule>
</CORSConfiguration>
Of course those settings are only for development purpose, you’ll probably want to restrict the Allowed Origin rule to your domain only. Documentation about those settings is quite good.
In order to send your files to s3, you have to include a set of options as described in the official doc here.
One solution would be to directly write the content of all those variables in the form, so it’s ready to be submitted, but I believe that most of those value should not be written in the DOM. So we’ll create a new route we’ll use to fetch those data.
This example is written with Rails, but writing the same for another framework should be really simple
MyApp::Application.routes.draw do
resources :signed_url, only: :index
end
Now that we have our new route, let’s create the controller which will send back our data to the s3 form
class SignedUrlsController < ApplicationController
  def index
    # the javascript below reads key, policy and signature from this response
    render json: {
      policy: s3_upload_policy_document,
      signature: s3_upload_signature,
      key: "uploads/#{params[:doc][:title]}",
      success_action_redirect: "/"
    }
  end

  private

  # generate the policy document that amazon is expecting.
  def s3_upload_policy_document
    Base64.encode64(
      {
        expiration: 30.minutes.from_now.utc.strftime('%Y-%m-%dT%H:%M:%S.000Z'),
        conditions: [
          { bucket: ENV['S3_BUCKET'] },
          ["starts-with", "$key", "uploads/"],
          { success_action_status: '201' }
        ]
      }.to_json
    ).gsub(/\n|\r/, '')
  end

  # sign our request by Base64 encoding the policy document.
  def s3_upload_signature
    Base64.encode64(
      OpenSSL::HMAC.digest(
        OpenSSL::Digest::Digest.new('sha1'),
        ENV['AWS_SECRET_KEY_ID'],
        s3_upload_policy_document
      )
    ).gsub(/\n/, '')
  end
end

The policy and signature methods are stolen from the linked blog posts above, with one exception: I had to include the "starts-with" constraint, otherwise s3 was yelling 403 at me. Everything else is quite straightforward; there's just a small detail to consider if you set the acl to 'private', but more on that later.

One last detail: the key value is actually the path of your file on your bucket, so set it to whatever you want, but be sure it matches the constraint you set in the policy. Here we're using params[:doc][:title] to read the name of the file we're about to upload. We'll see more about that when setting up the javascript.

That's basically everything we have to do on the server side.

## Add the jQueryFileUpload files

Next you'll have to add the jQueryFileUpload files. The plugin ships with a lot of files, but I found most of them useless, so here is the list:

• vendor/jquery.ui.widget
• jquery.fileupload

## Setup the javascript client side

Now let's set up jQueryFileUpload to send the correct data to s3. Based on what we did on the server, the workflow will be composed of 2 requests: first, it's going to fetch the needed data from our server, then send everything to s3.

Here is the form I'm using; the order of the parameters is important.

%form(action="https://#{ENV['S3_BUCKET']}.s3.amazonaws.com" method="post" enctype="multipart/form-data" class='direct-upload')
  %input{type: :hidden, name: :key}
  %input{type: :hidden, name: "AWSAccessKeyId", value: ENV['AWS_ACCESS_KEY_ID']}
  %input{type: :hidden, name: :acl, value: 'public-read'}
  %input{type: :hidden, name: :policy}
  %input{type: :hidden, name: :signature}
  %input{type: :hidden, name: :success_action_status, value: "201"}
  %input{type: :file, name: :file}
  - # You can recognize some bootstrap markup here :)
  .progress.progress-striped.active
    .bar

$(function() {
  $('.direct-upload').each(function() {
    var form = $(this)

    $(this).fileupload({
      url: form.attr('action'),
      type: 'POST',
      autoUpload: true,
      dataType: 'xml', // This is really important as s3 gives us back the url of the file in a XML document
      add: function (event, data) {
        $.ajax({
          url: "/signed_urls",
          type: 'GET',
          dataType: 'json',
          data: {doc: {title: data.files[0].name}}, // send the file name to the server so it can generate the key param
          async: false,
          success: function(data) {
            // Now that we have our data, we update the form so it contains all
            // the needed data to sign the request
            form.find('input[name=key]').val(data.key)
            form.find('input[name=policy]').val(data.policy)
            form.find('input[name=signature]').val(data.signature)
          }
        })
        data.submit();
      },
      send: function(e, data) {
        $('.progress').fadeIn();
      },
      progress: function(e, data) {
        // This is what makes everything really cool, thanks to that callback
        // you can now update the progress bar based on the upload progress
        var percent = Math.round((e.loaded / e.total) * 100)
        $('.bar').css('width', percent + '%')
      },
      fail: function(e, data) {
        console.log('fail')
      },
      success: function(data) {
        // Here we get the file url on s3 in an xml doc
        var url = $(data).find('Location').text()
        $('#real_file_url').val(url) // Update the real input in the other form
      },
      done: function (event, data) {
        $('.progress').fadeOut(300, function() {
          $('.bar').css('width', 0)
        })
      },
    })
  })
})
So quick explanation about what’s going on here :
The add callback allows us to fetch the missing data before the upload. Once we have the data, we simply insert them in the form
The send and done callbacks are only used for UX purpose, they show and hide the progress bar when needed. The real magic is the progress callback as it gives you the current progress of the upload in the event argument.
In my example, this form sits next to a ‘real’ rails form which is used to save an object which has amongst its attributes a file_url, linked to the “big file” we just uploaded. So once the upload is done I fill the ‘real’ field so my object is correctly created with the good url without having to handle extra things. After submitting the real form my object is saved with the URL of the file uploaded on S3.
If you’re uploading public files, you’re good to go, everything’s perfect. But if you’re uploading private file (this is set with the acl params), you still have a last thing to handle.
Indeed the url itself is not enough, if you try accessing it, you’ll face some ugly xml like that. The solution I used was to use the aws gem which provides a great method : AWS::S3Object#url_for. With that method, you can get an authorized url for the desired duration with your bucket name and the key (the path of your file in the bucket) of your file
So my custom url accessor looked something like this :
def url
parent_url = super
# If the url is nil, there's no need to look in the bucket for it
return nil if parent_url.nil?
# This will give you the last part of the URL, the 'key' params you need
# but it's URL encoded, so you'll need to decode it
object_key = parent_url.split(/\//).last
AWS::S3::S3Object.url_for(
CGI::unescape(object_key),
ENV['S3_BUCKET'],
use_ssl: true)
end
This involves some weird handling with the CGI::unescape, and there’s probably a better way to achieve this, but this is one way to do it, and it works fine.
## Live example
I’ll set up a live example running on heroku, on which you’ll be able to upload files in more than 30s coming soon
### Finally !
The demo is finally here : http://direct-upload.herokuapp.com and the source code can be found here : https://github.com/pjambet/direct-upload
## EDIT
I changed every access to AWS variables (BUCKET, SECRET_KEY and ACCESS_KEY) by using environment variables. By doing so you don’t have to put the variables directly in your files, but you just have to set correctly the variables :
export S3_BUCKET=<YOUR BUCKET>
heroku config:add AWS_ACCESS_KEY_ID=<YOUR KEY> --app <YOUR APP>
|
2020-10-27 17:47:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2123212218284607, "perplexity": 2516.191740015009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894426.63/warc/CC-MAIN-20201027170516-20201027200516-00652.warc.gz"}
|
https://blog.liuchuan.org/?p=62
|
## Complexity Theory Side Note
I am currently reading the book: Computational Complexity: A Modern Approach. I have taken some side notes while reading. I will update the notes here since I also found some other interesting related materials online.
Chapter 1
1. I don’t feel the definition of time constructible function satisfiable. The possible gaps to fill in may be the conditions and properties of such functions. This concept is important for time hierarchical theorem. Question 3.5 is related.
2. Here is a blog post by Dick Lipton about oblivious Turing machine. This definition is later used for proving Cook-Levin Theorem. Exercise 1.5 and 1.6 can help understand the machine.
3. On the philosophy issue, another blog post by Daniel Lemire is interesting.
4. Typos: page 16 line 2 in the definition of time-constructible functions should be: "the function $|x| \to \lfloor T(|x|) \rfloor$ …"; page 24 line 2: "Note that we …" should be "We note that …".
A time line of some important results in first several chapters:
• Space/Time Hierarchy Theorem [SHL65] [HS65]
• Savitch’s Theorem [Sav70] : implies PSPACE = NPSPACE
• Cook-Levin Theorem [Coo71] [Lev73]: SAT is NP-complete
• Nondeterministic Time Hierarchy Theorem [Coo72]
• Ladner’s Theorem [Lad75]: If NP $\neq$ P, exist $L \in NP \setminus P$ that is not NP-complete.
• Immerman-Szelepcsenyi Theorem [Imm88] [Sze87]: $\overline{\text{PATH}} \in \text{NL}$, which implies NL = coNL.
(to be continued…)
### 2 Responses to “Complexity Theory Side Note”
1. rjlipton Says:
Thanks for pointer to my post.
I am curious about your reading their new book. I plan to use it, how is the book overall?
2. liuchuan Says:
Here are some pros and cons that come to mind instantly. Note I only read about 1/3 of the book.
Pros:
1) It covers a broad spectrum of topics.
2) There are a lot of high-level discussions, which give me an aerial viewpoint. I think this is important since I can see connections between different areas and understand what the really important things are.
3) This book is quite up to date.
Cons:
1) Many typos. I think it is because it is quite new.
2) Many typos. I think it is because it is quite new.
2) Some proofs are not "standard", e.g. the Cook-Levin theorem, Ladner's theorem, etc. By standard, I mean the original proof by the discoverer. This is also the case for some definitions. However, some may think a different viewpoint is a merit.
3) Compared with some other proof writers, I found the proofs in the book less easy to follow. The overall quality is good though.
|
2019-04-21 04:42:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.743140697479248, "perplexity": 1499.9539821754063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578530176.6/warc/CC-MAIN-20190421040427-20190421062427-00176.warc.gz"}
|
https://www.freemathhelp.com/forum/threads/my-teacher-did-not-give-an-example-problem-for-this-equation.67766/
|
# My teacher did not give an example problem for this equation
#### angelkaitlyn
##### New member
p=a(b+c), solve for a
I am confused how to even start this problem.
#### Subhotosh Khan
##### Super Moderator
Staff member
Re: My teacher did not give an example problem for this equa
angelkaitlyn said:
p=a(b+c), solve for a
I am confused how to even start this problem.
Can you solve for 'x'
$$\displaystyle 15 = x * (2 +1)$$
#### angelkaitlyn
##### New member
Re: My teacher did not give an example problem for this equa
what???
#### angelkaitlyn
##### New member
Re: My teacher did not give an example problem for this equa
i don't understand. the type of math i'm learning is called solving for a specific value, and in this problem the specific value is "a"
#### Subhotosh Khan
##### Super Moderator
Staff member
Re: My teacher did not give an example problem for this equa
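For reference, p = a(b + c) says a is multiplied by (b + c), so dividing both sides by (b + c), assuming b + c ≠ 0, gives a = p/(b + c). A quick sympy check of that algebra (my sketch, not a post from the thread):

```python
import sympy as sp

p, a, b, c = sp.symbols('p a b c')

# p = a*(b + c); divide both sides by (b + c), assuming b + c != 0
sol = sp.solve(sp.Eq(p, a * (b + c)), a)
print(sol)  # -> [p/(b + c)]
```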
|
2019-03-22 15:08:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4844091236591339, "perplexity": 1904.0170647675827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202671.79/warc/CC-MAIN-20190322135230-20190322161230-00293.warc.gz"}
|
http://physics.stackexchange.com/questions/70438/how-far-would-a-radio-signal-propagate-in-free-space-as-compared-to-earths-atmo/70448
|
# How far would a radio signal propagate in free space as compared to Earth's atmosphere?
A curiosity question.
Radio Propagation within Earth's atmosphere is via atmosphere. Broadly speaking, a signal loses strength between bounces off the ionized layer and absorption by various earthly elements.
If an identical signal were to be produced in free space, how far would it travel as compared to its range within Earth's atmosphere?
EDIT: A commercial FM transmitter on Earth probably has a range of no more than 25 miles where the listener uses a broadcast receiver. For the purpose of elaboration, assume the receiver is placed in static orbit - how far out could the same signal be generated and still be intelligible?
-
I wonder how information is conveyed to satellites.(THIS is an answer to your question) – Nix Jul 9 '13 at 12:12
Another thing to think about: The spacecraft 'Voyager 1' was launched in 1977. It is now at a distance of $1.866 × 10^{10}$km, and it communicates to the earth by means of radio signals. – mikhailcazi Jul 9 '13 at 12:36
The question is ill-posed. You don't specify noise conditions, receiver sensitivity, and initial signal's ERP to get a sensible answer as to the maximum ranges at which both signals will be detected. – Deer Hunter Jul 9 '13 at 12:42
The main problem in radio communications is being able to recover the transmitted signal from the received, noisy signal; and it is mainly a channel coding problem. also, because the distances between spacecrafts (or stations, etc.)in space are very large, the transmission power is not too important and the successful communication depends on error-correction methods used. – Mostafa Jul 9 '13 at 12:47
@mikhailcazi: The Voyagers use the Deep Space Network; apparently using PSK. My query is about the range of a system in space relative to the range of the same system on Earth. – Everyone Jul 10 '13 at 1:28
First, no, "radio propagation" is not "via atmosphere". Different wavelengths get absorbed, reflected, or simply passed by different parts of the atmosphere. There is no one general rule. Many of our radio communications within the atmosphere are pretty much like they would be in free space, for example.
Second, all radio waves propagate infinitely in free space. There is no finite end to the propagation. What does matter in a practical sense is signal to noise ratio. Below some signal to noise ratio for whatever information encoding scheme is used, that information can't be recovered. Or more accurately, the error rate goes up as the signal to noise ratio goes down. At some point the errors in the information make it useless or "unreceivable" in a practical sense.
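To put numbers on that: in free space the received power follows an inverse-square law, and the standard free-space path loss is FSPL = (4πdf/c)², i.e. 20·log10(4πdf/c) in dB (a textbook formula, not from the thread; the distances below are illustrative). Intelligibility then comes down to the link budget: transmitted power minus this loss versus the receiver's noise floor.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f / c)."""
    c = 299_792_458.0  # speed of light in m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# Loss grows by 6 dB per doubling of distance, so range is set by the
# link budget, not by any hard cutoff in the propagation itself.
print(round(fspl_db(1_000.0, 100e6), 1))       # ~72.4 dB for 100 MHz over 1 km
print(round(fspl_db(35_786_000.0, 100e6), 1))  # ~163.5 dB out to geostationary orbit
```

The second figure matches the "receiver in static orbit" scenario from the question's EDIT: the same 100 MHz signal arrives roughly 91 dB weaker than at 25 miles' worth of free space, which is why satellite links rely on high-gain antennas and robust coding rather than raw power.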
https://sinews.siam.org/Details-Page/how-bees-use-physics-to-keep-hives-cool
SIAM News Blog
# How Bees Use Physics to Keep Hives Cool
Have you ever wondered how an organism tries to solve a physiological problem on scales much larger than itself? For instance, humans construct architectures that are tens to hundreds of times bigger than themselves via a combination of systematic design, global planning, and effective communication between individuals.
How does this work for insects such as bees, wasps, termites, or ants, which tend to cohabitate in large colonies? To survive as a colony, social insects must solve some key problems, including maintaining mechanical stability, thermal regulation, and ventilation within the colony. “Here we see a fundamentally different approach to the solution, without a plan or a planner,” said L. Mahadevan, Lola England de Valpine Professor of Applied Mathematics, of Organismic and Evolutionary Biology, and of Physics at Harvard University. “Organisms respond to local rules and harness the environment to communicate information on scales much larger than themselves.” Building on earlier work pertaining to the thermoregulation of bee clusters [3] and active mechanical adaptation of bee swarms [4], Mahadevan’s group recently addressed the way in which a swarm of bees in a congested hive stays cool on hot summer days [5].1
European honeybees (Apis mellifera) live in large, crowded enclosures with a single opening that limits passive ventilation. The colonies comprise more than 10,000 bees living together within tree hollows or other pre-existing cavities. These narrow, confined spaces present continuous survival challenges, including the need to maintain a stable temperature. This is especially true during extreme heat, when nest temperatures can reach a high of over 40° Celsius.
To understand how bees regulate colony temperatures, Jake Peters, now a postdoctoral fellow at Cornell University, monitored a group of artificial beehives housed at Harvard’s Concord Field Station. Figure 1a depicts the experiment, wherein a dense group of honeybees ventilates the hot box. The swarm creates an active fanning flow by moving to one side of the entrance and pulling hot air out of the hive. The other side of the entrance has few to no bees. Figure 1b shows that as the air temperature (red curve) increases along the nest entrance of a hive, the density of the fanning bees increases (black curve), as does the outward velocity of air (blue curve). The spatial patterning of the fanner bees yields a global convective flow that then cools the hive. But how do the bees spontaneously arrange themselves this way without a plan, especially since they start out by being uniformly distributed at the entrance?
Figure 1. Honeybee activity at a hive’s entrance. 1a. Honeybees ventilating at the entrance. There is a dense group of fanning bees at the left of the entrance and a dearth of fanning bees at the right. 1b. The air velocity (blue line), density (black line), and temperature (red line) along a hive’s nest entrance. Negative values indicate inflow while positive values indicate outflow. Figure courtesy of [5].
To explain this phenomenon, Mahadevan’s group hypothesized that the fanning bees use fluid flow to both measure the hive temperature and self-organize at the entrance. To understand the implications of this idea, the researchers created a mathematical model inspired by observations of the temperature, fluid flow, and fanner bee density along the colony’s entrance.
The model starts by characterizing the local fanning response of individual bees to the local air temperature, which is a proxy for the temperature inside the hive. “Indeed, by pulling the air out by fanning, the bees at the entrance can determine whether the nest needs to be cooled,” Mahadevan said. If it does, they recruit additional fanner bees that then work together to pull the hot air out. Mahadevan’s team links bee behavior, air temperature, and airflow by measuring how the distribution of fanning bees $$\rho(x, t)$$, local air temperature $$T(x, t)$$, and local flow velocity $$v(x, t)$$ vary with time $$t$$ along the nest entrance $$x$$. Their model assumes that the local temperature determines the probability of a bee fanning at the entrance. The local density of fanning bees is given by
$\frac{\partial \rho}{\partial t} = k_{\textrm{on}}(T) - k_{\textrm{off}}(T), \:\:\:\:\: \rho(x, t) \in [0, \rho_{\textrm{max}}], \tag1$
where $$k_\textrm{on}$$ and $$k_\textrm{off}$$ describe the rates at which bees begin or stop fanning, respectively. The maximum achievable density is represented by $$\rho_{\textrm{max}}$$, which depends on space availability at the nest’s entrance.
The group modeled the fanning rates to describe the probability that a bee will fan at a given temperature via the following simple, switch-like sigmoidal function:
$k_{\textrm{on}} = k_0 \frac{\tanh\big(m(T-36^{\circ}\textrm{C})\big) + 1}{2} \tag2$
and
$k_{\textrm{off}} = k_0 - k_{\textrm{on}}. \tag3$
The parameter $$m$$ determines the slope of the sigmoidal function that controls the temperature range over which bees ventilate. Since $$m$$ is the model’s only behavioral parameter, Mahadevan and his colleagues hypothesize that natural selection has likely acted on the inter-individual variation in fanning thresholds. Such variation ensures efficient ventilation over the range of temperatures that the bees experience. Mahadevan compares the variable fanning threshold to slightly imperfect artificial temperature sensors in a building. “Having many sensors, each of which operates over a slightly different range of temperatures, collectively smooths out fluctuations in the environment and controls building temperature well,” he said. Similarly, subtle genetic variations—akin to the different sensors—confer different temperature thresholds in individual bees, above which they begin to fan. This mechanism enables bees to better react to temperature variations by responding to their individual local environments. Previous studies have shown that such diversity is essential to maintaining stable fanning behavior for efficient ventilation.
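The switch-like response of Eqs. (2) and (3), and the smoothing effect of inter-individual variation, can be sketched numerically. The spread of thresholds below (a 1 °C standard deviation around 36 °C) is a made-up illustrative value, not the measured bee-to-bee variation:

```python
import numpy as np

k0 = 1.0   # overall fanning rate scale (illustrative value)
m = 2.0    # sigmoid slope, the model's one behavioral parameter (assumed)

def k_on(T, threshold=36.0, m=m, k0=k0):
    """Rate of starting to fan at air temperature T, as in Eq. (2)."""
    return k0 * (np.tanh(m * (T - threshold)) + 1) / 2

T = np.linspace(32, 40, 9)

# A population of identical bees: a sharp, switch-like collective response.
identical = k_on(T)

# A population whose thresholds vary bee-to-bee (hypothetical 1 C spread):
# the *collective* response is smoother, as the article argues.
rng = np.random.default_rng(0)
thresholds = rng.normal(36.0, 1.0, size=5000)
varied = k_on(T[:, None], threshold=thresholds).mean(axis=1)

for t, a, b in zip(T, identical, varied):
    print(f"T={t:4.1f} C  identical={a:.3f}  varied={b:.3f}")
```

For identical bees the collective fanning rate jumps abruptly near 36 °C; averaging over bees with slightly different thresholds yields a gentler, more graded collective response, which is the smoothing effect described above.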
Mahadevan’s model then couples this fanning behavior to a minimal equation for fluid mass conservation across the hive entrance. To characterize the airflow, it assumes that each bee generates an outward airflow with velocity $$v_b$$. Since the nest has only one opening, air that is actively drawn from the entrance must be balanced by air that flows into it to ensure conservation of mass. The following equation characterizes airflow in and out of the nest:
$v(x, t) = l_b v_b \Big[\rho(x, t) - \frac{1}{L} \int_0^L \rho(x,t)\,dx \Big] + D_v \frac{\partial^2 v(x,t)}{\partial x^2}, \tag4$
where $$v_b$$ is the outward air flow generated by each bee. $$D_v$$ indicates the scaled momentum diffusivity, $$L$$ denotes the size of the nest length, and $$l_b$$ represents the characteristic length scale derived from the fanning pressure gradient and fluid friction. The first term in (4) captures the outward airflow due to the actively fanning bees. However, this outflow must be balanced by inflow elsewhere, thus demanding the presence of a global inhibitor (second term) as it reverses flow direction and conserves the hive air volume. The last term represents local fluid friction and penalizes large velocity gradients (e.g., reversals in flow directions).
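A minimal numerical sketch of Eq. (4) shows how volume conservation forces inflow wherever there are no fanners. Given an assumed fanner profile $$\rho$$ clustered on the left half of the entrance, the sketch solves the linear relation $$(1 - D_v \partial_x^2)\,v = l_b v_b(\rho - \bar{\rho})$$ on a grid; all parameter values are made up:

```python
import numpy as np

# Illustrative discretization of Eq. (4): given a fanner density profile rho,
# solve (I - D_v * Lap) v = l_b*v_b*(rho - mean(rho)) for the air velocity v.
# All parameter values below are made up for the sketch, not the paper's.
N, L = 100, 1.0
dx = L / (N - 1)
lb_vb, Dv = 1.0, 1e-3

# 1-D Laplacian with no-flux (zero-gradient) boundaries.
Lap = (np.diag(-2.0 * np.ones(N)) +
       np.diag(np.ones(N - 1), 1) +
       np.diag(np.ones(N - 1), -1)) / dx**2
Lap[0, 0] = Lap[-1, -1] = -1.0 / dx**2

x = np.linspace(0, L, N)
rho = np.where(x < 0.5, 1.0, 0.0)        # fanners clustered on the left half

forcing = lb_vb * (rho - rho.mean())      # active fanning minus its mean
v = np.linalg.solve(np.eye(N) - Dv * Lap, forcing)

print(f"net flux = {v.sum() * dx:+.2e}")                      # ~0: volume conserved
print(f"outflow where fanners sit: v(0.25) = {v[N // 4]:+.3f}")
print(f"inflow elsewhere:          v(0.75) = {v[3 * N // 4]:+.3f}")
```

The net flux through the entrance is numerically zero: active outflow over the fanning cluster is exactly balanced by passive inflow elsewhere, which is the global-inhibition mechanism illustrated in Figure 2.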
Finally, the researchers use a simple heat transfer equation that models temperature variation along the entrance, given by
$\frac{\partial T(x, t)}{\partial t} = -cv(x, t) \Delta T + D_T \frac{\partial^2 T(x,t)}{\partial x^2}. \tag5$
Here, $$\Delta T = \begin{cases} T - T_h, & \text{if } v \ge 0 \\ T_a - T, & \text{if } v < 0, \end{cases}$$ where $$T_h$$ and $$T_a$$ are the hive and ambient temperatures, respectively. Dimensionless forms of the equations reduce the number of parameters to just four: a scaled entrance size, a scaled thermal diffusivity, a scaled fluid friction parameter, and a scaled fanning bee size.
Interestingly, the form of equations $$(1)$$ through $$(5)$$ shows that fluid flow driven by bee recruitment follows a local excitation and global inhibition pattern. Fanning bees tend to excite other local fanners, while fluid conservation leads to long-range inhibition (see Figure 2). This collectively yields the emergent behavior observed in the field.
Figure 2. A schematic demonstrating the mechanisms of self-organization that emerge from the model. The fluid conservation equation is broken down into components: $$A$$ is the direct result of fanning behavior, $$B$$ is conservation of volume, and $$C$$ is friction (or effective diffusion of velocity). The following variables are plotted: the distribution of fanners ($$\rho$$); velocity, calculated based only on fanning behavior ($$v_A$$); velocity, calculated based on fanning and conservation ($$v_B$$); velocity, calculated based on fanning, conservation, and friction ($$v_C$$); and temperature profile ($$T$$). Scenario 1 is a simple example that illustrates how volume conservation contributes to global inhibition of fanning behavior, and how fanning contributes to local activation. In other words, bees are more likely to fan adjacent to other fanning bees due to friction/diffusion. Scenario 2 illustrates a case in which friction/diffusion drives attraction between adjacent fanning groups. As a result, fanners are more likely to fan between fanning groups. Finally, Scenario 3 illustrates the potential for conservation of volume to act as a global inhibitor. Large fanning groups are more likely to grow and smaller groups are more likely to shrink and disappear due to this competition. Figure courtesy of [5].
Mahadevan and his collaborators numerically solved equations $$(1)$$ through $$(5)$$ to show that fanning behavior results in a clustering of the bees along one half of the nest’s entrance. This causes active outward flow of hot air where bees are present, and passive inward flow of cooler air everywhere else. Such behavior gives rise to a global convective flow that cools the hive. “Because of bees’ ability to measure local temperatures and fan to generate air flow—which then feeds back on their recruitment—this solution arises spontaneously,” Mahadevan said. The model’s predictions are qualitatively consistent with observations and show how fanning bees harness the dynamics of the physical environment and bring about self-organized behavior.
Studies like this one demonstrate how complex behavior in social insects can arise on large scales from simple local rules and long-range physical channels for communication, such as those induced by flow or elasticity. This type of work is quite distinct from the better-studied stigmergic mechanisms that rely on strongly localized communication channels, and has the potential for far-reaching implications [1]. For example, one could apply the insect ventilation problem to devise new strategies for passive and active sustainable human architecture [2].
1 Mahadevan co-authored this paper with Orit Peleg, a former postdoctoral fellow at Harvard who is now a faculty member at the University of Colorado, and Jake Peters, a former graduate student at Harvard who is now a postdoctoral fellow at Cornell University. Their work was supported by the National Science Foundation.
Acknowledgments: The author thanks L. Mahadevan for his time during the interview process and for his thorough review, feedback, and edits to this article.
References
[1] Heylighen, F. (2016) Stigmergy as a universal coordination mechanism. Cog. Syst. Res., 38, 4-13.
[2] Holbrook, C., Clark, R.M., Moore, D., Overson, R.P., Penick, C.A., & Smith, A.A. (2010). Social insects inspire human design. Bio. Lett., 6(4), 431-433.
[3] Ocko, S.A., & Mahadevan, L. (2014). Collective thermoregulation in bee clusters. J. Roy. Soc. Inter., 11(91), 20131033.
[4] Peleg, O., Peters, J., Salcedo, K., & Mahadevan, L. (2018). Collective mechanical adaptation of honeybee swarms. Nat. Phys., 14, 1193-98.
[5] Peters, J., Peleg, O., & Mahadevan, L. (2019). Collective ventilation in honeybee nests. J. Roy. Soc. Inter., 16(150), 20180561.
Lakshmi Chandrasekaran received her Ph.D. in mathematical sciences from the New Jersey Institute of Technology. She earned her masters in science journalism from Northwestern University and is a freelance science writer whose work has appeared in several outlets. She can be reached on Twitter at @science_eye.
https://dml.cz/handle/10338.dmlcz/107989
# Article
Keywords:
weak-bases; $cs$-networks; $k$-networks; $g$-first countable spaces; weakly open mappings; msss-mappings.
Summary:
In this paper, spaces with $\sigma$-locally countable weak-bases are characterized as the weakly open msss-images of metric spaces (or $g$-first countable spaces with $\sigma$-locally countable $cs$-networks).
References:
[1] Arhangel’skii A.: Mappings and spaces. Russian Math. Surveys 21 (1966), 115–162. MR 0227950
[2] Liu C., Dai M.: $g$-metrizability and $S_\omega$. Topology Appl. 60 (1994), 185–189. MR 1302472
[3] Michael E.: $\sigma$-locally finite mappings. Proc. Amer. Math. Soc. 65 (1977), 159–164. MR 0442878
[4] Siwiec F.: On defining a space by a weak-base. Pacific J. Math. 52 (1974), 233–245. MR 0350706 | Zbl 0285.54022
[5] Nagata J.: General metric spaces I. in Topics in General Topology, North-Holland, Amsterdam, 1989. MR 1053200
[6] Foged L.: On $g$-metrizability. Pacific J. Math. 98 (1982), 327–332. MR 0650013 | Zbl 0478.54025
[7] Lin S.: Locally countable families, locally finite families and Alexandroff’s problems. Acta Math. Sinica 37 (1994), 491–496. MR 1337096
[8] Lin S.: Generalized metric spaces and mappings. Chinese Sci. Bull., Beijing, 1995. MR 1375020
[9] Lin S.: On Lašnev spaces. Acta Math. Sinica 34 (1991), 222–225. MR 1117082 | Zbl 0760.54023
[10] Lin S., Tanaka Y.: Point-countable $k$-networks, closed maps, and related results. Topology Appl. 59 (1994), 79–86. MR 1293119 | Zbl 0817.54025
[11] Lin S., Li Z., Li J., Liu C.: On $ss$-mappings. Northeast. Math. J. 9 (1993), 521–524. MR 1274005 | Zbl 0817.54024
[12] Xia S.: Characterizations of certain $g$-first countable spaces. Adv. Math. 29 (2000), 61–64. MR 1769127 | Zbl 0999.54010
[13] Tanaka Y., Xia S.: Certain $s$-images of locally separable metric spaces. Questions Answers Gen. Topology 14 (1996), 217–231. MR 1403347 | Zbl 0858.54030
[14] Tanaka Y., Li Z.: Certain covering-maps and $k$-networks, and related matters. Topology Proc. 27 (2003), 317–334. MR 2048941 | Zbl 1075.54010
[15] Li Z., Lin S.: On the weak-open images of metric spaces. Czechoslovak Math. J. 54 (2004), 393–400. MR 2059259 | Zbl 1080.54509
[16] Li Z.: Spaces with a $\sigma$-locally countable base. Far East J. Math. Sci. 13 (2004), 101–108. MR 2069831 | Zbl 0402.54016
https://manual.q-chem.com/latest/sec_FEED-EFP.html
# 11.5.4 Pairwise Fragment Energy Decomposition and Pairwise Fragment Excited-State Energy Decomposition Analysis
(July 14, 2022)
Decomposition of the interaction energy of the QM and EFP regions into energy components, and into the contributions of individual solvent molecules, is available for both the ground and excited states. The ground-state QM/EFP energy is decomposed as:
\begin{aligned}
E_{\text{QM--EF, gr}} &= E_{\text{elec}}^{(1)}+E_{\text{pol-solute}}^{(0)}+E_{\text{pol-solute}}^{(1)}+E_{\text{pol-frag}}+E_{\text{QM--EF}}^{\text{disp}}+E_{\text{QM--EF}}^{\text{ex-rep}}\\
&=\langle\Psi_{\text{gr}}^{0}|\hat{V}^{\text{Coul}}|\Psi_{\text{gr}}^{0}\rangle+\Big[\langle\Psi_{\text{gr}}^{\text{sol}}|\hat{H}^{\text{QM}}|\Psi_{\text{gr}}^{\text{sol}}\rangle-\langle\Psi_{\text{gr}}^{0}|\hat{H}^{\text{QM}}|\Psi_{\text{gr}}^{0}\rangle\Big]\\
&\qquad+\Big[\langle\Psi_{\text{gr}}^{\text{sol}}|\hat{V}^{\text{Coul}}|\Psi_{\text{gr}}^{\text{sol}}\rangle-\langle\Psi_{\text{gr}}^{0}|\hat{V}^{\text{Coul}}|\Psi_{\text{gr}}^{0}\rangle\Big]+\Big[E_{\text{QM--EF, gr}}^{\text{pol}}+\langle\Psi_{\text{gr}}^{\text{sol}}|\hat{V}^{\text{pol}}|\Psi_{\text{gr}}^{\text{sol}}\rangle\Big]\\
&\qquad+E_{\text{QM--EF}}^{\text{disp}}+E_{\text{QM--EF}}^{\text{ex-rep}}
\end{aligned} \qquad (11.79)
where the terms, from left to right, are the first-order electrostatic energy, the solute polarization energy of zeroth and first order, the solvent polarization energy, and the additive dispersion and exchange-repulsion terms. Superscripts “sol” and “0” denote the QM wavefunction optimized in solvent and in the gas phase, respectively. Each of the integrals involving the $\hat{V}^{\text{Coul}}$ and $\hat{V}^{\text{pol}}$ operators can be decomposed into individual fragment contributions, e.g.,
$E_{\text{elec}}^{(1)}=\langle\Psi_{\text{gr}}^{0}|\hat{V}^{\text{Coul}}|\Psi_{\text{gr}}^{0}\rangle=\sum_{A}^{\text{fragments}}\langle\Psi_{\text{gr}}^{0}|\sum_{k\in A}\hat{V}_{k}^{\text{Coul}}|\Psi_{\text{gr}}^{0}\rangle$ (11.80)
and similarly for the other terms. Polarization energy can be approximately decomposed into individual fragment contributions as:
$E_{\text{QM--EF, gr}}^{\text{pol}}=\frac{1}{2}\sum_{A}^{\text{fragments}}\sum_{p\in A}\big(-\mu^{p}F^{\text{ai,nuc},p}+\bar{\mu}^{p}F^{\text{ai,elec},p}\big)$ (11.81)
where the index $p$ runs over polarizability expansion points. Dispersion and exchange-repulsion terms are also pairwise-additive.
The only term that cannot be similarly split into fragment contributions is the zero-order solute polarization energy:
$E_{\text{pol-solute}}^{(0)}=\langle\Psi_{\text{gr}}^{\text{sol}}|\hat{H}^{\text{QM}}|\Psi_{\text{gr}}^{\text{sol}}\rangle-\langle\Psi_{\text{gr}}^{0}|\hat{H}^{\text{QM}}|\Psi_{\text{gr}}^{0}\rangle\;.$ (11.82)
This term is referred to as the "non-separable term" in the output printout. From perturbation theory, this term is expected to be roughly half the magnitude of, and opposite in sign to, the first-order solute polarization term:
$E_{\text{pol-solute}}^{(1)}=\langle\Psi_{\text{gr}}^{\text{sol}}|\hat{V}^{\text{Coul}}|\Psi_{\text{gr}}^{\text{sol}}\rangle-\langle\Psi_{\text{gr}}^{0}|\hat{V}^{\text{Coul}}|\Psi_{\text{gr}}^{0}\rangle\;.$ (11.83)
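The factor-of-two relation between Eqs. (11.82) and (11.83) can be illustrated with a toy quadratic model (a sketch, not Q-Chem's actual energy expressions): distorting the solute by an amount $x$ costs $\frac{1}{2}kx^2$ of zeroth-order energy while gaining $-fx$ of first-order solute-solvent interaction, with $k$ and $f$ made-up constants.

```python
import numpy as np

# Toy quadratic model of solute polarization (not Q-Chem's actual energies):
# distorting the solute wavefunction by an amount x costs (1/2)*k*x**2
# (zeroth-order term) and gains -f*x of solute-solvent Coulomb interaction
# (first-order term). Both k and f are made-up illustrative constants.
k, f = 4.0, 1.5

x = np.linspace(-2, 2, 400001)
E_total = 0.5 * k * x**2 - f * x
x_opt = x[np.argmin(E_total)]            # the self-consistently "solvated" state

E_pol_solute_0 = 0.5 * k * x_opt**2      # analogue of Eq. (11.82)
E_pol_solute_1 = -f * x_opt              # analogue of Eq. (11.83)

print(f"zeroth-order term: {E_pol_solute_0:+.4f}")
print(f"first-order term:  {E_pol_solute_1:+.4f}")
print(f"ratio:             {E_pol_solute_0 / E_pol_solute_1:+.4f}")  # -> -0.5
```

Minimizing the total gives $x^\ast = f/k$, so the zeroth-order cost $f^2/2k$ is exactly half the magnitude of, and opposite in sign to, the first-order gain $-f^2/k$, mirroring the perturbation-theory statement above.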
Application of the energy decomposition analysis to the electronically excited states is described below. The zero-order total solvatochromic shift can be represented as:
$E_{\text{solv}}^{\text{QM/EFP}}=\sum_{A}^{\text{fragments}}\big(\Delta E_{\text{ex/gr}}^{\text{elec(1)},A}+\Delta E_{\text{ex/gr}}^{\text{pol-solute(1)},A}+\Delta E_{\text{ex/gr}}^{\text{pol-frag(1)},A}\big)+\Delta E_{\text{ex/gr}}^{\text{pol-solute(0)},A}\;.$ (11.84)
The various terms are defined as

$\Delta E_{\text{ex/gr}}^{\text{elec(1)},A}=\sum_{k\in A}\big(\langle\Psi_{\text{ex}}^{0}|\hat{V}_{k}^{\text{Coul}}|\Psi_{\text{ex}}^{0}\rangle-\langle\Psi_{\text{gr}}^{0}|\hat{V}_{k}^{\text{Coul}}|\Psi_{\text{gr}}^{0}\rangle\big)$ (11.85)

$\Delta E_{\text{ex/gr}}^{\text{pol-solute(1)},A}=\sum_{k\in A}\big(\langle\Psi_{\text{ex}}^{\text{sol}}|\hat{V}_{k}^{\text{Coul}}|\Psi_{\text{ex}}^{\text{sol}}\rangle-\langle\Psi_{\text{ex}}^{0}|\hat{V}_{k}^{\text{Coul}}|\Psi_{\text{ex}}^{0}\rangle-\langle\Psi_{\text{gr}}^{\text{sol}}|\hat{V}_{k}^{\text{Coul}}|\Psi_{\text{gr}}^{\text{sol}}\rangle+\langle\Psi_{\text{gr}}^{0}|\hat{V}_{k}^{\text{Coul}}|\Psi_{\text{gr}}^{0}\rangle\big)$ (11.86)

$\Delta E_{\text{ex/gr}}^{\text{pol-frag(1)},A}=\sum_{p\in A}\big(\langle\Psi_{\text{ex}}^{\text{sol}}|\hat{V}_{p,\text{gr}}^{\text{pol}}|\Psi_{\text{ex}}^{\text{sol}}\rangle-\langle\Psi_{\text{gr}}^{\text{sol}}|\hat{V}_{p,\text{gr}}^{\text{pol}}|\Psi_{\text{gr}}^{\text{sol}}\rangle\big)$ (11.87)

$\Delta E_{\text{ex/gr}}^{\text{pol-solute(0)},A}=\langle\Psi_{\text{ex}}^{\text{sol}}|\hat{H}_{\text{QM}}|\Psi_{\text{ex}}^{\text{sol}}\rangle-\langle\Psi_{\text{ex}}^{0}|\hat{H}_{\text{QM}}|\Psi_{\text{ex}}^{0}\rangle-\langle\Psi_{\text{gr}}^{\text{sol}}|\hat{H}_{\text{QM}}|\Psi_{\text{gr}}^{\text{sol}}\rangle+\langle\Psi_{\text{gr}}^{0}|\hat{H}_{\text{QM}}|\Psi_{\text{gr}}^{0}\rangle\;.$ (11.88)
Fragment contribution of the perturbative polarization correction to the excited states [Eq. (11.78)] can be obtained as follows:
$\Delta E_{\text{pol},A}=\frac{1}{2}\sum_{p\in A}\Bigl[-(\mu_{\text{ex}}^{p}-\mu_{\text{gr}}^{p})(F^{\text{mult},p}+F^{\text{nuc},p})+(\tilde{\mu}_{\text{ex}}^{p}F_{\text{ex}}^{\text{ai},p}-\tilde{\mu}_{\text{gr}}^{p}F_{\text{gr}}^{\text{ai},p})-(\mu_{\text{ex}}^{p}-\mu_{\text{gr}}^{p}+\tilde{\mu}_{\text{ex}}^{p}-\tilde{\mu}_{\text{gr}}^{p})F_{\text{ex}}^{\text{ai},p}\Bigr]$ (11.89)
where $A$ is a fragment of interest.
The energy is decomposed separately for all computed excited states. The excited state analysis is implemented for CIS/TD-DFT and EOM-CCSD methods both in ccman and ccman2. Energy decomposition analysis is activated by keyword EFP_PAIRWISE. Both ground and excited state energy decompositions are conducted in two steps, controlled by keyword EFP_ORDER. In the first step (EFP_ORDER = 1), the first-order electrostatic energy and the $\langle\Psi_{\text{gr}}^{0}|\hat{H}^{\text{QM}}|\Psi_{\text{gr}}^{0}\rangle$ (or $\langle\Psi_{\text{ex}}^{0}|\hat{H}^{\text{QM}}|\Psi_{\text{ex}}^{0}\rangle$ for the excited states) part of the non-separable term are computed and printed. In the second step (EFP_ORDER = 2), the remaining terms are evaluated. Thus, for a complete analysis, the user is required to run two consecutive calculations with EFP_ORDER set to 1 and 2, respectively. Table 11.8 shows notations used in the output to denote various terms in Eqs. (11.79)–(11.89).
https://www.solvermax.com/blog/model/vaccination-plan-for-hong-kong
#### 3 May 2021 (1,664 words)
In this article we replicate an academic paper's model formulation of a COVID-19 vaccination plan for Hong Kong.
The model represents the supply of, and demand for, vaccine doses as a transportation problem, with doses "transported" from month-to-month given a storage cost rate. The objective is to minimize the total storage cost, while matching monthly supply and demand.
The paper's author solves the model using Excel and Solver. We do the same, though we also use OpenSolver – to see how it behaves differently. To incentivize the model to produce an intuitively better solution, we extend the model to include an escalating cost over time.
We've implemented the paper's formulation as an example optimization model in Excel.
### Situation
A COVID-19 vaccination program needs to be planned for Hong Kong. With a limited supply of vaccine, this case study explores an optimization approach to planning the vaccination program. The author develops a delivery plan based on a "transportation problem" model formulation.
Vaccine doses are scheduled to be supplied over a six month period, as shown in Figure 1. The figure also shows that, starting in the first month that doses are available, vaccinations are scheduled over a seven month period.
#### Allocate supply of doses to meet demand
Mirroring the matrix structure used in the paper, Figure 4 shows how the monthly supply of vaccines could be allocated to meet the monthly demand. For example, we expect to receive supply of 1,500,000 doses in Month 2, of which we plan to use 700,000 doses in Month 2, 200,000 in Month 3, 200,000 in Month 4, and 400,000 doses in Month 5.
Given a plan for the use of vaccine doses, we can calculate the total storage cost.
### Solver model
#### Objective function
The Solver model is shown in Figure 5. Our objective is to minimize total cost of storing vaccine doses from when they arrive until when they are used.
#### Variables
The model has one set of variables:
• vAllocation. The number of vaccine doses supplied in a month allocated to meet demand in that month or later months.
#### Constraints
The constraints are:
• fDemandPerMonth = dDemandForDoses. Ensure that demand is met.
• fSupplyPerMonth = dSupplyOfDoses. Ensure that all supply is used.
#### Solution method
All of the model's relationships are linear, so we can use the Simplex method.
This model is sufficiently small that either Solver or OpenSolver can be used.
### Analysis
#### Optimal solution
The optimal solution found by Solver is shown in Figure 6. The total cost of storing vaccines is \$69,850,000. This is the same allocation and cost found by Solver as described in the paper.
#### Alternative optima
We already know that this problem has alternative optima – the heuristic and Solver find different solutions that have the same objective function value.
In this situation, there are many alternative optima. For example, if we solve the model using OpenSolver, rather than Solver, then we get another alternative optimum, as shown in Figure 7. It is common for Solver and OpenSolver to find different optimal solutions – where alternative optima exist.
We ran the model on the 64-bit version of Excel 365 on Windows 10. If you're running different versions of the software or operating system, or with different hardware, then it is possible that you'll find different optima. This happens because the software and hardware environment can influence the specific steps that the solver takes, which may lead to different solutions.
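The degeneracy is easy to demonstrate numerically. The sketch below builds a small hypothetical instance (supply in months 1-2, demand in months 1-4; the numbers are made up, not the Hong Kong data) and shows that two different feasible allocations have exactly the same flat storage cost:

```python
import numpy as np

# A small hypothetical instance (not the Hong Kong data): supply in months 1-2,
# demand in months 1-4, and a flat storage cost of $1 per dose per month.
supply = np.array([100, 150])
demand = np.array([50, 70, 80, 50])

# Two different feasible plans: plan[i, j] = doses from supply month i+1
# used to meet demand in month j+1.
plan_fifo = np.array([[50, 50, 0, 0],
                      [0, 20, 80, 50]])
plan_alt = np.array([[50, 0, 50, 0],
                     [0, 70, 30, 50]])

# Months each cell spends in storage (cells below the diagonal are never used).
months_stored = np.array([[0, 1, 2, 3],
                          [0, 0, 1, 2]])

for plan in (plan_fifo, plan_alt):
    # Feasibility: row sums match supply, column sums match demand.
    assert (plan.sum(axis=1) == supply).all()
    assert (plan.sum(axis=0) == demand).all()
    print("flat storage cost:", (plan * months_stored).sum())   # -> 230 both times
```

Both plans cost 230 dose-months. With matching supply and demand schedules, the total storage time $\sum_j j\,d_j - \sum_i i\,s_i$ is the same for every feasible plan; under a flat rate, every feasible plan is therefore optimal, which is why so many alternative optima exist.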
#### Extending the model to minimize storage time
In the paper, the author initially used a heuristic to find a solution. The heuristic's solution has an intuitively favorable attribute: supplied doses are used as soon as they can be, given the supply and demand schedules. That is, batches of vaccine doses are stored for the minimum time possible for each batch.
Conversely, the solutions found by both Solver and OpenSolver store vaccines for up to two months, with gaps between when a batch of doses arrives and when some of it is used. Although the solutions are still optimal in terms of our objective of minimizing storage costs, they don't feel like the best solutions.
The solvers store doses because the cost of storing vaccine doses is linear. That is, if we need to store some doses until next month, then the solver is indifferent between storing vaccine doses that we already have versus storing vaccine doses that arrive this month or subsequently. This is the key reason why this model has many alternative optima – there are many ways to arrange the dose use over time, while storing doses for the same number of months overall.
However, we might prefer to use older doses first, perhaps because there is a risk of some doses becoming unusable, or simply because that intuitively seems like a more sensible solution. We can incentivize the solvers to use doses as soon as possible by adding an increasing penalty – such as a compounding percentage – on the storage costs. This is a useful technique for breaking the symmetry implied by having a constant storage cost rate.
If we set the penalty at +1% per month, then the storage costs are as shown in Figure 8.
Note that the model is still linear, even though the storage cost is non-linear, because the cost each month is a constant rather than being a non-linear function of variables. Also note that the numerical value of the penalty rate isn't especially important – any positive rate produces the same solution.
With a positive penalty on storage, such as +1% per month, both Solver and OpenSolver find the same optimal solution shown in Figure 9. Even though the objective function value hasn't changed (if we ignore the artificial penalty factor), this solution is more satisfactory as there are no gaps in the schedule. This is also the solution found by the heuristic, as described in the paper.
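The same idea can be sketched with an off-the-shelf LP solver rather than Excel. The example below builds a small hypothetical transportation instance (two supply months, four demand months; made-up numbers, not the Hong Kong data) and solves it with SciPy's `linprog`, using a compounding storage cost of months × 1.01^months per dose. The escalation makes the cost convex in storage duration, so the unique optimum uses the oldest doses first:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical transportation instance (not the Hong Kong data), solved with
# a compounding +1% monthly penalty on the storage cost.
# Variables: x = [x11, x12, x13, x14, x22, x23, x24], where xij is the number
# of doses from supply month i used to meet demand in month j.
def cost(months):
    return months * 1.01**months   # escalating storage cost per dose

c = np.array([cost(0), cost(1), cost(2), cost(3), cost(0), cost(1), cost(2)])

A_eq = np.array([
    [1, 1, 1, 1, 0, 0, 0],   # supply month 1 = 100
    [0, 0, 0, 0, 1, 1, 1],   # supply month 2 = 150
    [1, 0, 0, 0, 0, 0, 0],   # demand month 1 = 50
    [0, 1, 0, 0, 1, 0, 0],   # demand month 2 = 70
    [0, 0, 1, 0, 0, 1, 0],   # demand month 3 = 80
    [0, 0, 0, 1, 0, 0, 1],   # demand month 4 = 50
])
b_eq = np.array([100, 150, 50, 70, 80, 50])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x)   # -> first-in-first-out plan: [50, 50, 0, 0, 20, 80, 50]
```

Any positive compounding rate yields the same first-in-first-out plan; the rate's numerical value only changes the artificial objective value, not the allocation.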
#### What happens if we use a negative penalty rate?
Interestingly, we can apply a negative penalty rate, such as -1% per month. This reduces the marginal storage cost over time, which creates an incentive to use supplied doses as late as possible (while still meeting the demand schedule).
Applying a negative penalty to the storage cost leads to a very different solution, as shown in Figure 10. The solution defers until Month 7 some of the batch we received in Month 2 – it does this because we told the model that it is better to use doses later. Such a solution doesn't make much sense in this situation, but there are other situations where a declining marginal cost might be a useful adjustment to apply.
### Conclusion
This model is an example of using a transportation model to solve a vaccine scheduling problem in Excel.
It demonstrates that such a model may have many alternative optima. We also introduced a method of applying a non-linear penalty that incentivizes the solver to use resources as soon as possible, leading to an intuitively more satisfactory solution.
https://proxieslive.com/tag/nodes/
## What is needed to secure a docker container that’s running on nodes in an AWS Private Subnet with internet access only via NAT?
I know securing a container is a big deal and a lot needs to be done to secure a default container configuration. But having it in a private subnet should take care of a lot of risks.
So what major things does one need to start with to secure a docker container that’s running on nodes in an AWS Private Subnet with internet access only via NAT?
## Find the common ancestor between two nodes of a tree
``
import unittest
from Tree import *
from list2BST import *


def traverse_DFS(root, target_node_value, hash_route):
    # print('looking at node ' + str(root.value))
    if root.value == target_node_value:
        # print('found node ' + str(target_node_value))
        hash_route[root.value] = 1
        return 1
    else:
        if root.left_child:
            left_result = traverse_DFS(root.left_child, target_node_value, hash_route)
            if left_result == 1:
                hash_route[root.value] = 1
                return 1
        if root.right_child:
            right_result = traverse_DFS(root.right_child, target_node_value, hash_route)
            if right_result == 1:
                hash_route[root.value] = 1
                return 1


common_ancestor = None


def hash_check_DFS(root, target_node_value, hash_route):
    global common_ancestor
    if root.value == target_node_value:
        if root.value in hash_route:
            print('Found a common ancestor ' + str(root.value))
            if common_ancestor is None:
                common_ancestor = root.value
        return 1
    else:
        if root.left_child:
            left_result = hash_check_DFS(root.left_child, target_node_value, hash_route)
            if left_result == 1:
                if root.value in hash_route:
                    if common_ancestor is None:
                        print('Found a common ancestor ' + str(root.value))
                        common_ancestor = root.value
                return 1
        if root.right_child:
            right_result = hash_check_DFS(root.right_child, target_node_value, hash_route)
            if right_result == 1:
                if root.value in hash_route:
                    if common_ancestor is None:
                        print('Found a common ancestor ' + str(root.value))
                        common_ancestor = root.value
                return 1


def find_common_node(Tree, node1, node2):
    global common_ancestor
    print('Running the common ancestry finder')
    # First run DFS v1 with Hash
    hash_route = {}
    print('This value of node1 is ' + str(node1))
    traverse_DFS(Tree.root, node1, hash_route)
    print(hash_route)
    common_ancestor = None
    hash_check_DFS(Tree.root, node2, hash_route)
    if common_ancestor:
        return common_ancestor
    else:
        return None


class Test(unittest.TestCase):
    def test_basic_odd_case(self):
        array = [1, 4, 5, 8, 11, 15, 18]
        result_tree = BinaryTree(insert_list_BST(0, array))
        result_node = find_common_node(result_tree, 1, 18)
        self.assertEqual(result_node, 8)

    def test_basic_even_case(self):
        array = [1, 4, 5, 8, 11, 15, 18, 20]
        result_tree = BinaryTree(insert_list_BST(0, array))
        result_node = find_common_node(result_tree, 1, 8)
        self.assertEqual(result_node, 5)


if __name__ == '__main__':
    unittest.main()
``
Here is my code in python for a program that will find a common ancestor between two nodes in a particular tree. This is a question from Cracking the Coding Interview that I decided to implement on my own. No one has talked about the solution that I implemented above.
Basically, I do a DFS (depth-first search) of the tree for the first node (Time: O(n), Space: O(1)) and have the recursive callbacks add the path to a hashmap (Time: O(log n), Space: O(n)). The second time around, while using DFS for the second node, once I find it I check against the hashmap until a collision occurs, indicating the lowest common ancestor.
My Tree class is here, while my list2BST function is here. I am looking for feedback on a couple of things:
• Performance of code and how it could possibly be improved.
• My coding style and the readability of said code.
## What is a polynomial-time algorithm for determining whether two trees, with colored nodes, are isomorphic or not
Provide any polynomial-time algorithm (even a large degree polynomial) which determines whether two rooted colored trees are isomorphic to each-other or not.
For example, consider the following two trees:
Example trees `T` and `U` are isomorphic.
An isomorphism (bijection) is described in the table below:
``
T          U
1          2
2          4
3          1
4          5
5          3
"white"    "green"
"blue"     "white"
``
Below are some things to know about the problem:
• Nodes are colored.
• Edges are not colored.
• Nodes are free to be any color; adjacent nodes are allowed to be the same color.
• Which node is the root node of each tree cannot be changed.
• Children are un-ordered.
• The tree is not necessarily a binary tree; a node could have 3 children, 4 children, 5, etc.
Formally, a colored tree is a tuple `(VS, ES, root, color_set, color_map)` such that:
• `VS` is the vertex set
• `ES` is the edge set
• `(VS, ES)` is an undirected tree
• `root` is an element of `VS`
• `color_set` is a set of objects called “colors”
• `color_map` is a mapping from `VS` to `color_set`
• every element of `color_set` appears in the range of `color_map` at least once. That is, every color is applied to at least one node.
colored trees `T` and `U` are isomorphic if and only if there exists a bijection, `PHI` from the vertex set of `T`, `VT`, to the vertex set of `U`, `VU` such that:
• the root of one tree is matched to the root of the other tree
• `for all nodes v, w in VT, {v, w} is an edge in tree T if and only if {PHI(v), PHI(w)} is an edge in tree U`
• `for all nodes v, w in VT, v and w are the same color in tree T if and only if PHI(v), PHI(w) are the same color in tree U`
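One standard starting point is AHU-style canonical labeling, which is polynomial time (a naive implementation is roughly O(n² log n)). The sketch below is my own illustration, not from the post, and it handles the simpler variant where colors must match literally; the variant above, where colors may be renamed by a bijection between the color sets, needs an extra layer on top of this.

```python
def canonical(root, children, color):
    """AHU-style canonical form of a rooted colored tree.

    children: dict node -> list of child nodes; color: dict node -> color label.
    Two rooted trees whose colors are compared literally are isomorphic iff
    their canonical forms are equal, since child order is normalized by sorting.
    """
    return (color[root],
            tuple(sorted(canonical(c, children, color) for c in children[root])))

# Hypothetical 3-node trees (not the figure from the post):
t1 = ({0: [1, 2], 1: [], 2: []}, {0: "white", 1: "blue", 2: "white"})
t2 = ({0: [2, 1], 1: [], 2: []}, {0: "white", 1: "white", 2: "blue"})  # same tree, children relabeled
t3 = ({0: [1, 2], 1: [], 2: []}, {0: "blue", 1: "white", 2: "white"})  # different root color

assert canonical(0, *t1) == canonical(0, *t2)
assert canonical(0, *t1) != canonical(0, *t3)
```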
## View escapes node’s text field HTML
I've looked at other posts but been unable to resolve the issue. I have a text field (formatted, long) in a content type. When the node is viewed, the HTML in the field displays correctly, and when I view the field in the database it is stored as raw HTML. However, when I add one of these fields to my view, the output is escaped (`<` becomes `&lt;` and so on) and is therefore displayed in the browser as literal HTML source rather than marking up the text.
I’ve seen a few posts suggesting modification of the twig template, however a) this didn’t seem to work for me and b) I’m looking to do this within the view / module so it is applicable regardless of which theme is in use.
Any suggestions?
## Menu tree from taxonomy terms and nodes which belong to them
I need to create a collapsible menu tree from taxonomy terms and the nodes which belong to them (not nested terms, but content pages). I'm totally stuck on it. I've found solutions for nested taxonomy terms only, but not for terms combined with the content pages from each term. Meaning a menu tree like this:
## Removing taints from Kubernetes nodes does not work

Ubuntu 18.04

Kubernetes on Juju
I tried to remove the taint from the nodes:

``
kubectl get nodes -o json | jq .items[].spec.taints
[
  {
    "effect": "NoSchedule",
    "key": "node.kubernetes.io/unreachable",
    "timeAdded": "2019-06-12T20:38:52Z"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node.kubernetes.io/unreachable",
    "timeAdded": "2019-06-12T20:38:57Z"
  }
]
[
  {
    "effect": "NoSchedule",
    "key": "node.kubernetes.io/unreachable",
    "timeAdded": "2019-06-12T20:39:00Z"
  }
]
``
with this command:
`kubectl patch node juju-06819a-0-lxd-70 -p '{"spec":{"taints":[]}}'`
``
node/juju-06819a-0-lxd-70 patched
rastin@cloudrnd1:~/.kube$ kubectl patch node juju-06819a-0-lxd-71 -p '{"spec":{"taints":[]}}'
node/juju-06819a-0-lxd-71 patched
rastin@cloudrnd1:~/.kube$ kubectl patch node juju-06819a-0-lxd-72 -p '{"spec":{"taints":[]}}'
node/juju-06819a-0-lxd-72 patched
``
Nothing happened; all the taints are still there!
## How do I convert PDF files to CSV to efficiently import into nodes? [on hold]
What is the most efficient way to convert PDF files to a CSV format to be imported to Drupal content type easily without manually copying and pasting the contents?
My semi-solution is to use https://pdftotext.com/ to extract the contents from PDF to TXT first. I then want to add all of the TXT file contents into one CSV file, where the first column is the title (the TXT filename) and the second column is the text. But I am not sure how I can convert all those TXT files at once into one CSV file.
How would you go about extracting the contents on all those TXT files to be added to one CSV file?
Consider that there are about 1,000 PDF files that need to be imported …
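One way to sketch the TXT-to-CSV merge step in Python (illustrative only; the `title`/`body` column names and the helper name `txts_to_csv` are my own assumptions, chosen to match a typical Drupal import):

```python
import csv
import tempfile
from pathlib import Path

def txts_to_csv(txt_dir, out_csv):
    """Merge every .txt file in txt_dir into one CSV:
    first column = filename stem (used as the node title), second = contents."""
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["title", "body"])
        for path in sorted(Path(txt_dir).glob("*.txt")):
            writer.writerow([path.stem, path.read_text(encoding="utf-8")])

# Tiny self-contained demo with two throwaway TXT files.
tmp = Path(tempfile.mkdtemp())
(tmp / "a.txt").write_text("hello", encoding="utf-8")
(tmp / "b.txt").write_text("world", encoding="utf-8")
txts_to_csv(tmp, tmp / "merged.csv")
with open(tmp / "merged.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))
```

The resulting CSV can then be fed to an importer such as Feeds or Migrate, mapping the two columns to the node title and body.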
## Not able to join worker nodes using kubectl with updated aws-auth configmap
I'm setting up an AWS EKS cluster using Terraform from an EC2 instance. Basically the setup includes an EC2 launch configuration and autoscaling for worker nodes. After creating the cluster, I am able to configure kubectl with aws-iam-authenticator. When I did
``kubectl get nodes ``
It returned
No resources found
as the worker nodes were not joined. So I tried updating `aws-auth-cm.yaml` file
``
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
``
with IAM role ARN of the worker node. And did
``kubectl apply -f aws-auth-cm.yaml ``
It returned
ConfigMap/aws-auth created
Then I understood that role ARN configured in `aws-auth-cm.yaml` is the wrong one. So I updated the same file with the exact worker node role ARN.
But this time I got 403 when I did `kubectl apply -f aws-auth-cm.yaml` again.
It returned
``
Error from server (Forbidden): error when retrieving current configuration of:
Resource: "/v1, Resource=configmaps", GroupVersionKind: "/v1, Kind=ConfigMap"
Name: "aws-auth", Namespace: "kube-system"
Object: &{map["apiVersion":"v1" "data":map["mapRoles":"- rolearn: arn:aws:iam::XXXXXXXXX:role/worker-node-role\n  username: system:node:{{EC2PrivateDNSName}}\n  groups:\n  - system:bootstrappers\n  - system:nodes\n"] "kind":"ConfigMap" "metadata":map["name":"aws-auth" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]]]}
from server for: "/home/username/aws-auth-cm.yaml": configmaps "aws-auth" is forbidden: User "system:node:ip-XXX-XX-XX-XX.ec2.internal" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
``
I’m not able to reconfigure the ConfigMap after this step.
I’m getting 403 for commands like
``
kubectl apply
kubectl delete
kubectl edit
``
for configmaps. Any help?
## How to associate tree nodes with other objects before they have unique identifiers
I’ve been thinking about a simple software design problem. Imagine I am writing a web application to edit a tree of objects. Each node of this tree has an ID property that is filled in when the node object is POST’ed to the backend. A user can create a tree hierarchy with multiple nodes before anything is sent to this backend, leaving all nodes with an empty ID field.
Now imagine that whenever I select a node of the tree in this application, the node object is passed to a method of a class that wants to associate different objects with individual nodes of a tree in an internal dictionary. Suppose I, for some reason, cannot depend on reference equality. Is there then a better way to identify unique nodes than giving them a temporary ID that the backend should ignore?
I don't see anything wrong with using temporary IDs that are ignored on the server, but I want to avoid changing our model classes. Another option I can think of in this particular context would be extracting the position of the node in the tree in some representation and using this as a key for the dictionary. While this would work, it is mildly complex to implement and not very efficient.
I’m really curious to hear what you think. Thanks in advance for your input, Joshua
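A minimal sketch of the temporary-ID approach (my own illustration; the `Node` class, `temp_id` field, and dictionary usage are hypothetical names, not from the post): the client assigns a throwaway unique key at construction time, and the backend simply ignores it.

```python
import uuid

class Node:
    """Sketch of a tree node: `id` comes from the backend, `temp_id` is client-only."""
    def __init__(self, value):
        self.id = None                   # assigned by the backend on POST
        self.temp_id = uuid.uuid4().hex  # throwaway key the backend ignores
        self.value = value
        self.children = []

    def key(self):
        """Stable dictionary key that works before the node is persisted."""
        return self.temp_id

# Associate auxiliary data with nodes that have no backend ID yet.
root, child = Node("root"), Node("child")
root.children.append(child)
associations = {root.key(): {"selected": True}}
```

Using a random UUID rather than a counter keeps keys unique even across page reloads, and keying on `temp_id` (rather than the eventual backend `id`) means dictionary lookups stay stable before and after the node is saved.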
## Number of nodes of a complete intersection lie on a plane
Suppose $$X$$ is a general smooth hypersurface of degree $$\ge 6$$ and $$Y$$ is an irreducible hypersurface of degree $$\ge 2$$. Suppose $$X \cap Y$$ has at least $$5$$ nodes. Is it possible that $$4$$ nodes of $$X \cap Y$$ lie on a plane?
I guess not.
The reason I have in mind is the following: if possible, let $$H$$ be a plane containing $$4$$ of the nodes. In this case $$H$$ is itself tangent to $$X$$ at these $$4$$ points, which is a contradiction, as a plane can be tangent at at most $$3$$ points. Please correct me if I am wrong.
https://physics.stackexchange.com/questions/240439/centre-of-gravity-adjusting-position
# Centre of gravity: Adjusting position
If an object is freely suspended from a pivot, why does the centre of gravity fall directly below that pivot? Would this be the same in non-uniform gravitational fields?
• We define "below" by the direction of gravitational attraction. The center of gravity by definition is along that direction. – Lewis Miller Feb 28 '16 at 22:55
F is the force due to gravity. r is the distance between the point of application of the force and the pivot. $$\theta$$ is the angle between the direction of F and r.
The torque $$\tau$$ is then $$\tau = F r \sin\theta.$$ When the centre of gravity is exactly below the pivot, the angle $$\theta=0$$, so $$\tau=0$$. The body stays in equilibrium due to the absence of a net rotational force and a net translational force (the weight of the rod is counteracted by the reaction force of the pivot).
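As a quick numerical illustration of the torque formula (my own addition, with made-up numbers): the restoring torque shrinks as the centre of gravity swings toward the point directly below the pivot and vanishes at $$\theta = 0$$.

```python
import math

def torque(F, r, theta):
    """Torque magnitude about the pivot: tau = F * r * sin(theta)."""
    return F * r * math.sin(theta)

# Illustrative numbers: F = 9.81 N acting at r = 0.5 m from the pivot.
# The torque decreases monotonically as theta -> 0 and vanishes there.
samples = [torque(9.81, 0.5, t) for t in (1.0, 0.5, 0.1, 0.0)]
```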
http://physics.stackexchange.com/tags/quantum-electrodynamics/hot
# Tag Info
11
(I henceforth assume $c= \hbar=1$.) It is forbidden by the four-momentum conservation law. Put yourself in the centre of mass reference frame of the couple of massive particles (electron and positron). There $P_{e\overline{e}} = (2E,\vec{0})$ with $E\geq m_e>0$. Just because four momentum is conserved, this four-momentum must be the same as the one of the ...
3
We already know that QED is incomplete. That's why we needed to develop the standard model that includes the electroweak force and QCD. We know that the standard model is incomplete and that we may need supersymmetry or other similar extensions to make it better, and even then gravity would be missing on the level of current quantum field theory. How to ...
3
We can write the Fourier transform of $\langle 0|\mathcal{T}A_{\nu}(x)\psi(x_1)\bar\psi(x_2)|0\rangle$ as $$S(p) D_{\nu\alpha}(q) \ e\,\Gamma^{\alpha}(p,q,p+q)S(p+q)$$ where $S(p)$ is the full fermion propagator, $D_{\nu\alpha}(q)$ is the full photon propagator, $\Gamma^{\alpha}(p,q,p+q)$ is the proper vertex function, and an overall momentum conservation ...
3
Today we know that Collins is wrong. He appears to be unaware of Newton's finding, and of course, advances made after he wrote his book.
2
Indeed, do not take Feynman diagrams as literal representations of what is happening in a particle picture. Only the external lines of a diagram correspond to real particles - the internal lines, though called virtual particles, are little more than artifacts of the perturbative expansion we do to calculate QFT amplitudes, and there is little reason to ...
2
There is the (non-general) relation between the free energy of interaction of two currents $J^{a}, J^{b}$ and the propagator: $$U = -\frac{1}{2} \int d^{4}xd^{4}y J^{a}(x) D_{ab}(x - y)J^{b}(y).$$ It's not general, but it provides a simple example which can help you to understand how to get the expression for the force. The structure of the field which causes ...
2
The classical Coulomb potential can be recovered in the non-relativistic limit of the tree-level Feynman diagram between two charged particles. Applying the Born approximation to QM scattering, we find that the scattering amplitude for a process with interaction potential $V(x)$ is $$\mathcal{A}(\lvert p \rangle \to \lvert p'\rangle) - 1 = 2\pi \delta(E_p - \ldots)$$

1

I'll give you a draft of the answer to put you on the right track. You should then be able to fill the hole and complete the details. I think you just need to take the non-relativistic limit of a field theory where instead of the tree-level propagator you use the full 1PI propagator $$\frac{1}{p^2-m^2-\Pi(p^2)}$$ where $\Pi(p^2)$ is the self-energy. If the ...

1

At the classical level, the global gauge invariance leads via Noether's theorem to electric charge conservation, cf. e.g. this Phys.SE post. The Ward-Takahashi identity (WTI) can roughly speaking be thought of as a quantum version of this. In particular, we stress that the WTI is intimately tied to electric charge conservation. OP's observation that ...

1

The kinetic term $$\mathcal{L}_\text{fermion} = i\overline{\psi}\gamma^\mu \partial_\mu \psi$$ changes too, by precisely the right quantity to cancel the change in the interaction term. Thus, the total Lagrangian is invariant, and this is what matters.
1
As @RobinEkman mentioned, the kinetic term changes as well. This can be easily computed: $$\begin{split} {\cal L}_D = i {\bar \psi} \gamma^\mu \partial_\mu \psi &\to i {\bar \psi} e^{- i \alpha} \gamma^\mu \partial_\mu \left( e^{i \alpha} \psi \right) \\ &= i {\bar \psi} \gamma^\mu \partial_\mu \psi - \partial_\mu \alpha \left( {\bar \psi} \gamma^\mu \psi \right) \end{split}$$
http://physics.stackexchange.com/questions/79852/why-does-friction-cause-a-car-to-turn
# Why does friction cause a car to turn?
I've had a lot of difficulty conceptually understanding the physics of how a car turns on an unbanked curve, so I'm hoping you could help me out. When a car is moving in uniform circular motion, we know that $|\vec{a}| = \frac{v^2}{r}$, and the acceleration is directed towards the centre of the circle about which the car is moving. Drawing a free body diagram for the car shows that there are only three forces acting on it: gravity $(\vec{F_g})$, the normal force $(\vec{F_n})$, and friction $(\vec{F_f})$. Since gravity and the normal force negate each other, the car isn't accelerating in the $y$ direction. Because it is in uniform circular motion we know it is accelerating in the $x$ direction, and summing up the forces in this direction yields $$\vec{F_{net x}}=m\vec{a}=\vec{F_f}$$ which implies that the centripetal acceleration is due to the frictional force.
What I am having difficulty understanding is why this intuitively makes sense. I've read some other people's answers on this question but I haven't found anything satisfactory. In particular, many people talk about how wheels "are pushing the pavement to the left or right", and this causes the pavement to exert a force on the car wheels by Newton's third law, but this hasn't made sense to me.
Another way of putting this might be that I don't understand why friction should be directed inwards towards the centre of the circle about which one is turning. I would expect that, since the wheels have been turned, friction would be directed opposite to where the car is moving, to prevent the car from continuing to move forward and skidding on the road.
I hope this makes sense, thanks.
How hard is it to push a car (in neutral, no brakes, level ground) in the direction the wheels are pointed, compared to moving it sideways under the same conditions? – DJohnM Oct 6 '13 at 20:39
I had fun trying to make this as intuitive as possible. I hope I've succeeded without doing the physics of the situation much injustice.
When a car is driving straight ahead, the plane in which the wheels are rotating is aligned with the direction of movement. Another way of saying this is that the rotation axis is perpendicular to the momentum vector $\vec{p}=m\vec{v}$ of the car. So the friction merely makes it harder for the car to move, which is part of the reason why you need to put your foot on the gas pedal to maintain a constant speed. At the same time, the friction is what allows you to maintain that constant speed because the rotating tires sort of grab onto the ground, which is the intuitive picture of friction. The tires grab the ground and pull/push it backwards beneath themselves, as you would do when dragging yourself over the floor (if it had handles to grab onto). Those grabbing and pulling/pushing forces are what keeps you going.
Things change when the wheels are turned. The plane in which they are rotating now is at an angle with the direction of motion. Alternatively but equivalently, we could say the rotation axis now makes an angle with the momentum vector of the car. To see how friction then makes the car turn, think again in terms of the wheels grabbing onto the ground. The fact that they now make an angle with the direction of motion, means the force the tires are exerting is also at an angle with the direction of motion - or equivalently, the momentum vector.
Now, a force is a change in momentum$^1$ and so (because the wheels are part of the rigid body that is a car) this force will change the direction of the car's momentum vector until it is aligned with the exerted force. Imagine dragging yourself forward on a straight line of handles on the floor and then suddenly grabbing hold of a handle slightly to one side instead of the one straight ahead. You'll steer yourself away from the original direction in which you were headed.
$^1$ Mathematically: $$\vec{F}=\frac{d\vec{p}}{dt}$$
If we want to keep a body in a circular orbit, a central force is needed. If that is intuitive enough, then when you turn the wheel you are effectively changing the direction of the driving force (if we are travelling at uniform speed this force is still there, though it cancels off exactly with the total frictional forces). Changing the force direction will not change the velocity direction instantly: if you make only one angle change of the driving force, it will take a finite amount of time for the vehicle to adjust to the new angle. So it is inertia that gives this apparent slipping.

A circular orbit is a bit different: you keep on changing the direction of the driving force, and since inertia makes the vehicle slip all the time, the frictional force is active. But the maximum frictional force is limited by mu*R, where R = mg and mu is the coefficient of friction. If the central force needed for our intended orbit is less than mu*R, the vehicle sustains minimal slip; if friction is insufficient, slipping and turning occur successively until we reach a larger orbit with a higher radius of curvature, which makes the required central force smaller (since it is mv^2/r) so that it can be sustained by mu*R. Alternatively, v could be made smaller so that the frictional force could sustain the necessary central force.

The main thing is that there are no two forces, centrifugal and centripetal; that just makes this whole thing confusing. What matters is the central force that is needed for a particular orbit and speed. It has to come from some external means.
P.S. To elaborate further on a sudden direction change for a vehicle travelling at uniform speed: as soon as we turn, the velocity can be resolved into two directions, one along the new direction and one perpendicular to it. The perpendicular component will push the vehicle outward, with a friction force in the direction opposite to the slip; because of that, the 'perpendicular velocity' will decrease. However, the speed of the vehicle in the new direction does not increase back to v, because the driving force is barely sufficient to keep the velocity constant, i.e. there is no acceleration. Because of the turn, kinetic energy is lost. The key here is inertia.
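The mu*R limit above can be turned into a quick numeric sanity check (my own illustration, with made-up values): on an unbanked turn friction must supply $mv^2/r$, and static friction is capped at $\mu m g$, so the maximum speed is $v_{max} = \sqrt{\mu g r}$.

```python
import math

def max_turn_speed(mu, r, g=9.81):
    """Fastest unbanked turn: friction mu*m*g must supply the centripetal
    force m*v**2/r, so v_max = sqrt(mu * g * r)."""
    return math.sqrt(mu * g * r)

# Made-up numbers: dry road (mu ~ 0.7), 50 m radius curve.
v = max_turn_speed(mu=0.7, r=50.0)  # roughly 18.5 m/s (about 67 km/h)
```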
https://www.australiancurriculum.edu.au/f-10-curriculum/mathematics/Glossary/?term=Rounding
# Glossary
The decimal expansion of a real number is rounded when it is approximated by a terminating decimal that has a given number of decimal digits to the right of the decimal point.
Rounding to n decimal places is achieved by removing all decimal digits beyond (to the right of) the $$n^{th}$$ digit to the right of the decimal place, and adjusting the remaining digits where necessary.
If the first digit removed (the $${(n+1)}^{th}$$ digit) is less than 5 the preceding digit is not changed; for example, 4.02749 becomes 4.027 when rounded to 3 decimal places.
If the first digit removed is greater than or equal to 5, then the preceding digit is increased by 1; for example, 6.1234586 becomes 6.12346 when rounded to 5 decimal places.
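In code, this round-half-up rule can be reproduced with Python's `decimal` module (note that Python's built-in `round` uses round-half-to-even instead, so it would not always match the rule described above):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(x, n):
    """Round the decimal string x to n decimal places, increasing the last
    kept digit when the first removed digit is 5 or more (the rule above)."""
    quantum = Decimal(1).scaleb(-n)  # i.e. 10 ** -n
    return Decimal(x).quantize(quantum, rounding=ROUND_HALF_UP)

# The glossary's own examples:
a = str(round_half_up("4.02749", 3))    # "4.027"
b = str(round_half_up("6.1234586", 5))  # "6.12346"
```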
https://mathoverflow.net/questions/324290/different-definitions-of-the-linking-number
# Different definitions of the linking number
Assume that $$\iota_1:\mathbb{S}^k\to\mathbb{R}^n, \quad \iota_2:\mathbb{S}^\ell\to\mathbb{R}^n, \quad k+\ell=n-1,$$ are two embeddings of spheres with disjoint images $$\iota_1(\mathbb{S}^k)\cap\iota_2(\mathbb{S}^\ell)=\emptyset$$. Then, the linking number $$Lk(\iota_1,\iota_2)$$ of the embedded spheres can be defined as the degree of the mapping $$\mathbb{S}^k\times\mathbb{S}^\ell\ni (x,y)\longmapsto \frac{\iota_1(x)-\iota_2(y)}{\Vert \iota_1(x)-\iota_2(y)\Vert} \in \mathbb{S}^{n-1}.$$ There is however, another approach.
By the Alexander duality $$H_k(\mathbb{R}^n\setminus \iota_2(\mathbb{S}^\ell))=\mathbb{Z}$$ and the embedding $$\iota_1$$ induces a homomorphism of the homology groups: $$\iota_{1*}:\underbrace{H_k(\mathbb{S}^k)}_{\mathbb{Z}}\to H_k(\mathbb{R}^n\setminus\iota_2(\mathbb{S}^\ell))=\mathbb{Z}.$$ Thus, the image of a generator of $$H_k(\mathbb{S}^k)$$ is an integer.
It is well known that this integer equals, up to sign, the linking number $$Lk(\iota_1,\iota_2)$$.
While this is a well known result, I could not find any reference for a proof in a book.
Question. Do you know any reference where this result is stated and proved explicitly?
I find it quite surprising that, except for the case of links and knots (dimension one), there are almost no references for the linking number.
The point is that the linking number is often used by researchers in analysis and geometry who have limited knowledge of algebraic topology (speaking for myself), and a straightforward reference would be of great help.
• I don't have the reference here, but I believe it's all spelled out in Bredon's Geometry and Topology textbook. I imagine someone will stop by with a precise reference soon. In the text he build the links between degrees intersection theory, Thom classes, Poincare duality, etc. – Ryan Budney Feb 27 '19 at 22:10
• Is Proposition 3.3 in DeTurck and Gluck's "Linking Integrals in the n-sphere" explicit enough? mat.unb.br/~matcont/34_10.pdf The main focus of the paper is on an integral formula for the linking number though. – j.c. Mar 18 '19 at 16:44
• @j.c. Thank you for the reference. I was not aware of this paper. If you post it as an answer, I will accept it. – Piotr Hajlasz Mar 18 '19 at 16:57
Let $$K^k$$ and $$L^\ell$$ be disjoint closed oriented smooth submanifolds of $$S^n$$ with $$k+\ell=n-1$$ and let $$f:K^k\ast L^\ell \rightarrow S^n$$ be [the map sending the line segment $$\{(\mathbf{x}, \mathbf{y}, u)\mid0\leq u \leq 1\}$$ connecting $$\mathbf{x}$$ and $$\mathbf{y}$$ in $$K\ast L$$ proportionally to the geodesic arc connecting $$\mathbf{x}$$ and $$-\mathbf{y}$$ in $$S^n$$]. Then
$$\deg f = - \operatorname{Lk}(K^k, L^\ell).$$
In this paper, the linking number $$\operatorname{Lk}$$ is defined as an intersection number (Eq. 2.1) (thus, related to the definition you gave using homology).
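For readers coming from analysis, the degree in the first definition can be checked numerically in the classical case $$k=\ell=1$$, $$n=3$$, where it coincides with the Gauss linking integral $$\frac{1}{4\pi}\oint\oint \frac{(r_1-r_2)\cdot(dr_1\times dr_2)}{\Vert r_1-r_2\Vert^3}$$. The sketch below is my own code (not from the question or the papers cited) and estimates the integral for a Hopf link:

```python
import math

def gauss_linking(c1, c2, n=200):
    """Estimate the linking number of two disjoint closed curves in R^3
    via the Gauss linking integral, i.e. the degree of the map
    (x, y) -> (c1(x) - c2(y)) / |c1(x) - c2(y)|.
    c1, c2: parametrizations [0, 2*pi) -> R^3 returning 3-tuples."""
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = i * dt
        p1 = c1(t)
        d1 = [(a - b) / dt for a, b in zip(c1(t + dt), p1)]  # tangent (forward difference)
        for j in range(n):
            s = j * dt
            p2 = c2(s)
            d2 = [(a - b) / dt for a, b in zip(c2(s + dt), p2)]
            r = [a - b for a, b in zip(p1, p2)]
            cross = [d1[1] * d2[2] - d1[2] * d2[1],
                     d1[2] * d2[0] - d1[0] * d2[2],
                     d1[0] * d2[1] - d1[1] * d2[0]]
            dist3 = math.sqrt(sum(x * x for x in r)) ** 3
            total += sum(a * b for a, b in zip(r, cross)) / dist3 * dt * dt
    return total / (4 * math.pi)

# Hopf link: unit circle in the xy-plane and a unit circle in the
# xz-plane centered at (1, 0, 0); the true linking number is +/-1.
circle1 = lambda t: (math.cos(t), math.sin(t), 0.0)
circle2 = lambda t: (1.0 + math.cos(t), 0.0, math.sin(t))
print(gauss_linking(circle1, circle2))  # close to +/-1, sign depending on orientations
```

The two circles stay at distance at least 1 from each other, so the integrand is smooth and a plain Riemann sum converges quickly.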
|
2020-02-20 03:03:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 23, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8935691714286804, "perplexity": 239.9768933962898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144498.68/warc/CC-MAIN-20200220005045-20200220035045-00044.warc.gz"}
|
https://math.stackexchange.com/questions/1974378/change-in-angle-with-varying-height-trigonometry
|
# Change in angle with varying height (trigonometry)
I am working on a microscopy project and keep running into the same roadblock.
Some context: The microscope has a detector base with a variable distance from the source beam (you can move it up and down). There are 56 [0-55] distinct positions that the base can take. I know the angle of the beam across the base for the top and bottom positions but need to find a way to figure it out for all positions (this is dark field electron microscopy if you want to look into it more).
How can I determine what $\theta_3$ and $\theta_4$ are?
My trigonometry is not very strong so I am not sure if there is a relationship between the two which I am missing.
Progress so far: In addition to googling around, I have tried writing down equations for the situation but haven't gotten something I could work with yet. Here is a list of them.
$$\tan(100) = (x+y)/a$$ $$\tan(50) = x/a$$ $$\tan(\theta_3) = (x+y)/(a+b)$$ $$\tan(\theta_4) = x/(a+b)$$ $$\tan(700) = (x+y)/(a+b+c)$$ $$\tan(350) = x/(a+b+c)$$
As you can see this provides me with 7 variables and 6 equations, not enough to solve for anything.
I also looked into seeing if there was a relationship between the angle and step position (e.g. if linear: if you went to the middle, step 27, $\theta_4$ = 200rad and $\theta_3$ = 400rad) but I wasn't able to find anything online or on paper to convince me this would work.
• from the first two equations, you can have $y$ and so on. – hamam_Abdallah Oct 18 '16 at 16:41
• Your image seems to imply that you don't know how far you're moving the base at each step, and that the amount being moved is not consistent from step to step. If that's the case, then you just don't have enough information. If it's not, than some of your variables are actually known values. – Gabriel Burns Oct 18 '16 at 16:56
• @GabrielBurns I don't know how far each step is, but it's safe to assume it's the same distance each time. Using that I was able to figure it out. Thank you! – p.kubik Oct 18 '16 at 18:02
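The resolution hinted at in the last comment can be made concrete. Assuming, as the comments establish, that each of the 55 moves covers the same (unknown) distance, the relation $\tan\theta = x/D$ for a fixed lateral offset $x$ means the base distance $D$ is proportional to $1/\tan\theta$, so $x$ cancels out of a linear interpolation between the two known endpoint angles. The helper below is my own sketch, with angles in radians:

```python
import math

def angle_at_step(theta_top, theta_bottom, step, n_steps=55):
    """Beam angle at an intermediate detector position, assuming the
    base moves the same distance at every step.  For a fixed lateral
    offset x, tan(theta) = x / D, so the distance D is proportional to
    1 / tan(theta) and x cancels out of the interpolation."""
    d_top = 1.0 / math.tan(theta_top)        # distance at step 0, in units of x
    d_bottom = 1.0 / math.tan(theta_bottom)  # distance at step n_steps
    d = d_top + step * (d_bottom - d_top) / n_steps
    return math.atan(1.0 / d)

# Endpoints are reproduced (up to float rounding); intermediate steps interpolate:
print(angle_at_step(0.1, 0.05, 0))   # ~0.1
print(angle_at_step(0.1, 0.05, 55))  # ~0.05
print(angle_at_step(0.1, 0.05, 27))  # between 0.05 and 0.1
```

The same function gives $\theta_3$ or $\theta_4$ depending on which pair of endpoint angles is fed in.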
|
2019-08-17 11:22:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4947466552257538, "perplexity": 203.94116938903153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027312128.3/warc/CC-MAIN-20190817102624-20190817124624-00047.warc.gz"}
|
https://geladinhogourmetoficial.com/australian-sksi/tm4l4.php?ec5650=how-to-find-the-area-of-a-triangle-with-coordinates
|
# How to find the area of a triangle with coordinates

Given the coordinates of the three vertices of a triangle, $$(x_1, y_1)$$, $$(x_2, y_2)$$ and $$(x_3, y_3)$$, its area is

$$A = \frac{1}{2}\left|x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2)\right|$$

If one vertex is the origin, so the vertices are $$(0,0)$$, $$P(a, b)$$ and $$Q(c, d)$$, this reduces to $$A = |ad - bc|/2$$. For an equilateral triangle only the side length $$a$$ is needed: $$A = a^2\sqrt{3}/4$$. If instead the three side lengths $$a$$, $$b$$ and $$c$$ are known, Heron's formula gives $$A = \sqrt{s(s-a)(s-b)(s-c)}$$, where $$s = (a+b+c)/2$$. In Java, the coordinate formula is a one-liner:

    double area = Math.abs(X[0] * (Y[1] - Y[2]) + X[1] * (Y[2] - Y[0]) + X[2] * (Y[0] - Y[1])) / 2;

The same idea extends to any simple polygon through the shoelace formula (also called the surveyor's formula), described by Meister (1724–1788) in 1769 and by Gauss in 1795. For vertices $$(x_1, y_1), \dots, (x_n, y_n)$$ listed in order around the polygon, with the wrap-around convention $$x_{n+1} = x_1$$ and $$y_{n+1} = y_1$$,

$$A = \frac{1}{2}\left|\sum_{i=1}^{n} (x_i y_{i+1} - x_{i+1} y_i)\right|$$

If the vertices are labeled counterclockwise, the sum of the determinants is positive and the absolute value signs can be omitted; if they are labeled clockwise, the sum is negative. The formula can be viewed as a special case of Green's theorem, and it has applications in surveying and forestry. Its name comes from the common method used to evaluate it: write the coordinates in two columns, cross-multiply along diagonal "laces" in each direction, and take the difference of the two sums — with all the slashes drawn, the array loosely resembles a shoe with the laces done up.

For example, for the triangle with vertices (2, 4), (3, −8) and (1, 2), the diagonals down and to the right give (2 × −8) + (3 × 2) + (1 × 4) = −6, and the diagonals down and to the left give (4 × 3) + (−8 × 1) + (2 × 2) = 8. The difference of these two numbers is |(−6) − (8)| = 14, and halving this gives the area of the triangle: 7. A polygon with more sides works the same way; a pentagon such as the one with vertices (3, 4), (5, 11), (12, 8), (9, 5) and (5, 6) simply uses five pairs of coordinates instead of three.

Two other routes to the same answer are worth noting. A simple polygon can always be divided into triangles and the triangle formula applied to each piece; for a quadrilateral with vertices labeled counterclockwise, both triangles are traced counterclockwise, so both areas are positive and their sum is the area of the quadrilateral. Alternatively, draw the minimum-area axis-aligned rectangle around a triangle — at least one vertex of the triangle will be on a corner of the rectangle — and subtract the areas of the three surrounding right triangles from the area of the rectangle.
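The shoelace evaluation walked through on this page is easy to turn into code. The following is a minimal sketch (the function name is mine); it handles a polygon with any number of vertices, listed in order in either direction:

```python
def shoelace_area(points):
    """Area of a simple polygon via the shoelace (surveyor's) formula.

    `points` lists the vertices in order around the polygon, either
    clockwise or counterclockwise; taking the absolute value at the
    end removes the orientation-dependent sign."""
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around: vertex n+1 is vertex 1
        total += x1 * y2 - x2 * y1    # cross-multiplied "laces"
    return abs(total) / 2.0

# Triangle with vertices (2, 4), (3, -8), (1, 2):
print(shoelace_area([(2, 4), (3, -8), (1, 2)]))                   # 7.0
# Pentagon with vertices (3, 4), (5, 11), (12, 8), (9, 5), (5, 6):
print(shoelace_area([(3, 4), (5, 11), (12, 8), (9, 5), (5, 6)]))  # 30.0
```

For a triangle this reduces to the closed-form expression given above, since each term of the sum pairs one down-right product with one down-left product.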
|
2021-10-18 05:18:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7954990267753601, "perplexity": 457.8270639684088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585196.73/warc/CC-MAIN-20211018031901-20211018061901-00480.warc.gz"}
|
https://stacks.math.columbia.edu/tag/0E68
|
Lemma 53.25.2. Let $k$ be an algebraically closed field. Let $X$ be an at-worst-nodal, proper, connected $1$-dimensional scheme over $k$. Let $\nu : X^\nu \to X$ be the normalization. Let $S \subset X^\nu$ be the set of points where $\nu$ is not an isomorphism. Then
$\text{Der}_ k(\mathcal{O}_ X, \mathcal{O}_ X) = \{ D' \in \text{Der}_ k(\mathcal{O}_{X^\nu }, \mathcal{O}_{X^\nu }) \mid D' \text{ fixes every }x^\nu \in S\}$
Proof. Let $x \in X$ be a node. Let $x', x'' \in X^\nu$ be the inverse images of $x$. (Every node is a split node since $k$ is algebraically closed, see Definition 53.19.10 and Lemma 53.19.11.) Let $u \in \mathcal{O}_{X^\nu , x'}$ and $v \in \mathcal{O}_{X^\nu , x''}$ be uniformizers. Observe that we have an exact sequence
$0 \to \mathcal{O}_{X, x} \to \mathcal{O}_{X^\nu , x'} \times \mathcal{O}_{X^\nu , x''} \to k \to 0$
This follows from Lemma 53.16.3. Thus we can view $u$ and $v$ as elements of $\mathcal{O}_{X, x}$ with $uv = 0$.
Let $D \in \text{Der}_ k(\mathcal{O}_ X, \mathcal{O}_ X)$. Then $0 = D(uv) = vD(u) + uD(v)$. Since $(u)$ is annihilator of $v$ in $\mathcal{O}_{X, x}$ and vice versa, we see that $D(u) \in (u)$ and $D(v) \in (v)$. As $\mathcal{O}_{X^\nu , x'} = k + (u)$ we conclude that we can extend $D$ to $\mathcal{O}_{X^\nu , x'}$ and moreover the extension fixes $x'$. This produces a $D'$ in the right hand side of the equality. Conversely, given a $D'$ fixing $x'$ and $x''$ we find that $D'$ preserves the subring $\mathcal{O}_{X, x} \subset \mathcal{O}_{X^\nu , x'} \times \mathcal{O}_{X^\nu , x''}$ and this is how we go from right to left in the equality. $\square$
|
2022-07-03 06:24:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9862070679664612, "perplexity": 104.72272200223117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215790.65/warc/CC-MAIN-20220703043548-20220703073548-00062.warc.gz"}
|
https://www.physicsforums.com/threads/a-magnifying-glass-problem.397943/
|
# A magnifying glass problem
## Homework Statement
A lens having a focal length 50 cm is used as
a simple magnifier.
What is the angular magnification obtained
when the image is formed at the normal near
point (25 cm)?
## Homework Equations
1) m = N/d, where d is distance of object
2) m = 1 + N/f
I don't really get what they want as the answer.
Also, there are two equations; which one do I use?
Thanks! :)
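For the record, the standard simple-magnifier results are: m = 1 + N/f when the image forms at the near point (which is what this problem asks for), and m = N/f for a relaxed eye (image at infinity, where the object sits at the focal point). A quick sketch with f = 50 cm and N = 25 cm:

```python
def magnification_near_point(N_cm, f_cm):
    # image formed at the near point: m = 1 + N/f
    return 1 + N_cm / f_cm

def magnification_relaxed(N_cm, f_cm):
    # image formed at infinity (relaxed eye): m = N/f
    return N_cm / f_cm

print(magnification_near_point(25, 50))  # 1.5
print(magnification_relaxed(25, 50))     # 0.5
```

Note that with f larger than the near-point distance, a "magnifier" set for a relaxed eye actually gives m < 1.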
|
2021-06-21 10:47:27
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8164852857589722, "perplexity": 2046.0641741838135}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488269939.53/warc/CC-MAIN-20210621085922-20210621115922-00312.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-9th-edition/chapter-1-chemical-foundations-exercises-page-37/77
|
# Chapter 1 - Chemical Foundations - Exercises - Page 37: 77
Answer: Third Dimension size = 2.769 cm.
#### Work Step by Step
Given: density = 22.57 $\frac{g}{cm^{3}}$; mass = 1.0 kg = 1000 grams. Since Density = $\frac{Mass}{Volume}$, we have Volume = $\frac{Mass}{density}$. Thus volume = $\frac{1000}{22.57}$ = $44.3066\ cm^{3}$. Since the osmium block is rectangular, its volume is the product of its three dimensions. So 4 $\times$ 4 $\times$ (Third dimension) = 44.3066, and therefore the third dimension = $\frac{44.3066}{16}$ = 2.769 cm.
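The arithmetic in the step above can be sketched directly:

```python
density = 22.57               # g/cm^3 (osmium)
mass = 1000.0                 # g (1.0 kg)
volume = mass / density       # cm^3, from Volume = Mass / density
third_dim = volume / (4 * 4)  # block is 4 cm x 4 cm x (third dimension)
print(round(volume, 4), round(third_dim, 3))  # 44.3066 2.769
```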
|
2018-12-14 22:00:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7796212434768677, "perplexity": 1715.157032207272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826354.54/warc/CC-MAIN-20181214210553-20181214232553-00562.warc.gz"}
|
https://www.tutorialspoint.com/energy-stored-in-a-capacitor-formula-and-examples
|
# Energy Stored in a Capacitor – Formula and Examples
A capacitor is an electronic circuit component that stores electrical energy in the form of electrostatic charge. Thus, a capacitor stores the potential energy in it. This stored electrical energy can be obtained when required. Ideally, a capacitor does not dissipate energy, but stores it.
A typical capacitor consists of two metallic plates separated by an insulating material, called the dielectric. When these two metallic plates of the capacitor are connected to a source of electrical energy, the capacitor starts charging and stores electrical energy in its dielectric. Therefore, it is important to derive the expression for this stored energy so that we can select a suitable capacitor for our circuit design.
## Energy Stored in a Capacitor
As discussed above, a capacitor stores electrical energy in the form of electrostatic charge; thus, a charged capacitor produces an electrostatic field. Consider a capacitor of capacitance C farad connected across a battery of V volts, as shown in Figure-1. In this situation, the entire battery voltage V is applied across the capacitor plates. As a result, plate A of the capacitor becomes positively charged while plate B becomes negatively charged. This potential difference between the two plates establishes an electric field directed from plate A to B through the dielectric material of the capacitor.
Due to this electric field, the end of the dielectric near the positive plate becomes negatively polarized, while the end near the negative plate becomes positively polarized. Consequently, an electrostatic charge (and an electrostatic field) is created within the capacitor. In this condition, the capacitor is said to be charged and stores a finite amount of energy.
Now, let us derive the expression for the energy stored in the capacitor. Let, at any stage of charging, the electric charge stored in the capacitor be q coulombs and the voltage across the plates of the capacitor be v volts. Then,
$$\mathrm{q\propto v}$$
$$\mathrm{\Rightarrow q=C v}$$
By the definition of voltage, work of v joules must be done to store a charge of 1 coulomb in the capacitor. Hence, for storing an additional charge of dq coulombs in the capacitor, the work done is,
$$\mathrm{dW=v\, dq}$$
$$\mathrm{\Rightarrow dW=v\, d\left ( Cv \right )}$$
$$\mathrm{\therefore dW=Cv\, dv}$$
Integrating both sides gives the total work done in raising the voltage of the initially uncharged capacitor to V volts:
$$\mathrm{W=C\int_{0}^{V}v\, dv=C\left [ \frac{v^{2}}{2} \right ]_{0}^{V}}$$
$$\mathrm{\therefore W=\frac{1}{2}CV^{2}}$$
This work done will be stored in the capacitor in the form of potential energy (electrostatic field).
Also,
$$\mathrm{C=\frac{Q}{V}\: and\: V=\frac{Q}{C}}$$
Thus, the energy stored in the capacitor can also be given by,
$$\mathrm{W=\frac{1}{2}QV=\frac{1}{2}\frac{Q^{2}}{C}}$$
The energy stored in the capacitor will be expressed in joules if the charge Q is given in coulombs, C in farad, and V in volts.
From the equations for the energy stored in a capacitor, it is clear that the stored energy does not depend on the current through the capacitor.
Note − A pure or ideal capacitor does not dissipate energy, instead, it stores energy and returns the stored energy when delivering power to the circuit.
## Numerical Example (1)
A capacitor of capacitance 0.5 μF is connected across a battery of 120 V. Determine the energy stored in the capacitor.
### Solution
Given data,
• 𝐶 = 0.5 μF = 0.5 × 10−6F
• 𝑉 = 120 V
The energy stored in the capacitor will be,
$$\mathrm{W=\frac{1}{2}CV^{2}=\frac{1}{2}\times \left ( 0.5\times 10^{-6} \right )\times \left ( 120 \right )^{2}}$$
$$\mathrm{\therefore W=3.6\times 10^{-3}\, J=3.6\, mJ}$$
## Numerical Example (2)
When a capacitor is connected to a source of 240 V, it stores a charge of 50 mC. Calculate the energy stored in the capacitor.
### Solution
Given data,
• Voltage, 𝑉 = 240 V
• Charge, 𝑄 = 50 mC = 50 × 10−3 C
The energy stored in the capacitor is given by,
$$\mathrm{W=\frac{1}{2}QV=\frac{1}{2}\times \left ( 50\times 10^{-3} \right )\times 240}$$
$$\mathrm{\therefore W=6\, Joules}$$
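Both stored-energy formulas are easy to check numerically (a sketch in Python; the function names are ours, not from any library):

```python
def energy_cv(C, V):
    # W = (1/2) C V^2, in joules when C is in farads and V in volts
    return 0.5 * C * V ** 2

def energy_qv(Q, V):
    # W = (1/2) Q V, in joules when Q is in coulombs and V in volts
    return 0.5 * Q * V

print(energy_cv(0.5e-6, 120))  # ~3.6e-3 J (3.6 mJ) for Example 1's data
print(energy_qv(50e-3, 240))   # ~6.0 J, matching Example 2
```

Note that evaluating ½CV² for Example 1's data (C = 0.5 μF, V = 120 V) gives 3.6 mJ; the squaring of V must not be skipped.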
## Conclusion
From the above discussion, it is clear that a capacitor stores electrical energy in the form of electrostatic field, and this stored energy is referred to as potential energy because it is due to the difference of potential.
From the expression for the stored energy in a capacitor, it is clear that the energy stored is directly proportional to the capacitance, which means a capacitor of higher capacitance can store more energy at the same voltage, and vice versa.
Due to their energy-storing property, capacitors are used in several electrical and electronic circuits such as chargers, capacitor banks, computer circuits, etc.
|
2023-03-23 21:17:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5462139844894409, "perplexity": 424.34047507456364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945183.40/warc/CC-MAIN-20230323194025-20230323224025-00614.warc.gz"}
|
https://www.physicsforums.com/threads/acceleration-of-a-falling-ball.275462/
|
# Acceleration of a falling Ball
1. Nov 27, 2008
### ahrog
1. The problem statement, all variables and given/known data
A flash photograph of two tennis balls released simultaneously is taken. One is projected horizontally, and the other one is dropped from rest. The light flashes occur every 0.2 seconds, and each box represents a distance of 0.4 m (they gave a graph... I can't get it on the computer, but I can get all the other details)
For the vertical ball:
Time (s) | vertical distance (m)
0 | 0
0.2 | 0.1
0.4 | 0.7
0.6 | 1.5
0.8 | 2.6
1.0 | 4.0
What is the vertical acceleration of the tennis ball?
2. Relevant equations
$$y = v_y t + \frac{1}{2}gt^2$$
(an equation my textbook gave me)
$$v = u + at$$ (an equation someone on this forum gave me)
possibly $$a = (v_f - v_i)/t$$
3. The attempt at a solution
I'm not quite sure how to go about this. Any tips would be appreciated! I was thinking that maybe I have to find the velocity using the first equation, then put it into the acceleration formula. If I did that, I would end up with -0.9m/s2.
Is this right?
2. Nov 28, 2008
### Mentallic
Use the formula $$s=ut+\frac{1}{2}at^2$$
thus $$a=\frac{2(s-ut)}{t^2}$$
where:
s = displacement (vertical position)
u = initial velocity
t = time
a = acceleration (due to gravity)
Using the data for the 1 second flash (longest time from drop) will most likely give you the answer closest to the real value of the gravitational acceleration. Testing every interval of data will increase your reliability.
3. Nov 28, 2008
### ahrog
So, I would end up with the answer being 8 m/s², right? C:
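Applying Mentallic's formula a = 2(s − ut)/t² with u = 0 to every row of the data table gives (a sketch):

```python
# Data from the table: (time in s, vertical distance fallen in m)
data = [(0.2, 0.1), (0.4, 0.7), (0.6, 1.5), (0.8, 2.6), (1.0, 4.0)]

# a = 2(s - u t) / t^2, with initial vertical velocity u = 0 (dropped from rest)
for t, s in data:
    a = 2 * s / t ** 2
    print(f"t = {t:.1f} s  ->  a = {a:.2f} m/s^2")
```

The longest interval (t = 1.0 s, s = 4.0 m) gives a = 8.0 m/s²; the scatter across the other rows (roughly 5.0 to 8.8 m/s²) reflects the coarse distance readings from the photograph.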
|
2017-06-24 00:36:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7562007308006287, "perplexity": 1516.196937997123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320206.44/warc/CC-MAIN-20170623235306-20170624015306-00475.warc.gz"}
|
https://tex.stackexchange.com/questions/350877/add-gaps-to-lines-with-tikz/350886
|
# Add gaps to lines with TikZ
I am drawing some block diagrams with TikZ. Sometimes, I need to combine several blocks together to form one large group by adding a border around the corresponding blocks. However, there are some arrows which cross the border, as shown in this picture.
The arrow "to ALC loop" and the arrow just below it look ugly. What I would like to do is the following (note the small gaps around the arrows where they cross the thick border):
How is that possible with TikZ? The arrows are just ordinary arrows drawn with the \draw[->] (from) -- (to); macro, and the thick border is also just an ordinary line.
• See double option in section "15.3.4 Graphic Parameters: Double Lines and Bordered Lines", p.169, pgfmanual (v3.0.1a). – Paul Gaborit Jan 28 '17 at 14:57
• @Paul Gaborit - the disadvantage of the solution from your tip is that the shorten option can't be used as an input, because it is already applied in the problem solution: link -that is your solution here. Nevertheless your idea covers the case of overlapping arrow tip and external box border (in the picture in question). – forrest Jan 28 '17 at 15:15
Using @AboAmmar MWE, preaction can be used in the simple case:
\documentclass[border=2pt]{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[> = latex]
\node [draw, thick, minimum size=5em] (rec) {};
\node [draw] (div) {$\div$};
\draw [preaction={draw, line width=3pt, white}][<->] (div) -- ++(5em,0);
\end{tikzpicture}
\end{document}
EDIT: there is a problem nevertheless - the arrow tip changes the path's bending depending on the size of the arrow tip. So the idea is not a good solution.
\documentclass[border=2pt]{standalone}
\usepackage{tikz}
\tikzset{
outlined arrow/.style={
preaction={{}-{},draw,line width=3pt,yellow}
}
}
\begin{document}
\begin{tikzpicture}[> = latex]
\node [draw,thick,minimum size=5em] (rec) {};
\node [draw] (div) {$\div$};
\draw [outlined arrow][<->] (div) -- ++(5em,0);
\draw [outlined arrow][<->,shorten <=2pt] (div) .. controls +(-90:15mm) and +(180:15mm) .. ++(5em,-5em);
\end{tikzpicture}
\end{document}
EDIT 2: In the above case the black arrow's bent line does not run along the middle of the yellow line, depending on the arrow size. I found that @cfr's answer (arrow tip size independent of the line width) can be a bit useful here. The code below works only when the arrow tip setup my arrow is passed through an optional argument.
\documentclass[border=2pt]{standalone}
\usepackage{tikz}
\usetikzlibrary{arrows.meta}
\begin{document}
\begin{tikzpicture}[
outlined arrow/.style={preaction={double=yellow,double distance=2pt,draw=red}},
my arrow/.style={>={LaTeX[length=2mm]}},
yscale=0.6
]
\node [draw,thick,minimum size=5em] (rec) {};
\node [draw] (div) {$\div$};
\draw [outlined arrow][<->,my arrow] (div) -- ++(5em,0);
\draw [outlined arrow][<->,shorten <=2pt,my arrow]
(div) .. controls +(-90:15mm) and +(180:15mm) .. ++(5em,-5em);
\end{tikzpicture}
\end{document}
I also considered using @Qrrbrbirlbel's solution (save a path and call it for stroking), but the shorten option didn't work. @Paul Gaborit's solution (surrounded arrow) also excludes the shorten option (?).
• this is perfect! – T. Pluess Jan 28 '17 at 20:15
These gaps at line crossings can be achieved with a thick white-colored line drawn the same way as your crossing line. Here is an example:
\documentclass[border=2pt]{standalone}
\usepackage{tikz}
\begin{document}
\begin{tikzpicture}[> = latex]
\node [draw, thick, minimum size=5em] (rec) {};
\node [draw] (div) {$\div$};
\draw [<->, line width=3pt, white](div) -- ++(5em,0);
\draw [<->] (div) -- ++(5em,0);
\end{tikzpicture}
\end{document}
• Thank you. I already had that idea too, but it has the disadvantage that the corresponding arrows have to be drawn twice. I wonder whether there is an 'automatic' way? – T. Pluess Jan 28 '17 at 14:10
|
2021-04-20 15:58:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9040805101394653, "perplexity": 5772.29546477403}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039476006.77/warc/CC-MAIN-20210420152755-20210420182755-00131.warc.gz"}
|
http://projects.qi-hardware.com/index.php/p/openwrt-xburst/source/tree/release_2010-12-14/docs/bugs.tex
|
# OpenWrt XBurst Git Source Tree
## Root/docs/bugs.tex
OpenWrt as an open source software opens its development to the community by having a publicly browseable Subversion repository. The Trac software, which comes along with a Subversion frontend, a Wiki and a ticket reporting system, is used as an interface between developers, users and contributors in order to make the whole development process much easier and more efficient.

We make a distinction between two kinds of people within the Trac system:

\begin{itemize}
\item developers, able to report, close and fix tickets
\item reporters, able to add a comment, patch, or request ticket status
\end{itemize}

\subsubsection{Opening a ticket}

A reporter might want to open a ticket for the following reasons:

\begin{itemize}
\item a bug affects a specific hardware and/or software and needs to be fixed
\item a specific software package would be seen as part of the official OpenWrt repository
\item a feature should be added or removed from OpenWrt
\end{itemize}

Depending on the kind of ticket that is opened, a patch is welcome in these cases:

\begin{itemize}
\item new package to be included in OpenWrt
\item fix for a bug that works for the reporter and has no known side effect
\item new features that can be added by modifying existing OpenWrt files
\end{itemize}

Once the ticket is open, a developer will take care of it; if so, the ticket is marked as "accepted" with the developer name. You can add comments to the ticket at any time, even when it is closed.

\subsubsection{Closing a ticket}

A ticket might be closed by a developer because:

\begin{itemize}
\item the problem is already fixed (fixed)
\item the problem described is not judged as valid, and comes along with an explanation why (invalid)
\item the developers know that this bug will be fixed upstream (wontfix)
\item the problem is very similar to something that has already been reported (duplicate)
\item the problem cannot be reproduced by the developers (worksforme)
\end{itemize}

At the same time, the reporter may want to get the ticket closed since he is no longer able to trigger the bug, or found it invalid by himself.

When a ticket is closed by a developer and marked as "fixed", the comment contains the subversion changeset which corrects the bug.
|
2019-09-16 00:13:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9760206937789917, "perplexity": 1735.7185930978628}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572439.21/warc/CC-MAIN-20190915235555-20190916021555-00230.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=62968
|
# Angle of projection above an incline?
by Buck268
Tags: angle, incline, projection
P: 333 Just apply $$s = ut + \frac{1}{2}at^2$$ perpendicular to the surface and along the surface. Use the fact that perpendicular to the surface s = 0. A clear diagram with angles and directions is always useful for quickly solving this kind of problem.
P: 9 OK, well I've never seen that form, but I'm using R(t) = do + Vox*t - .5Axt^2. Looks to me like s = r(t), u = Vox, and of course Do = 0... Anyway, the problem I'm running into is simplifying to a differentiable form, since I'm trying to find the maximum gamma (probably should solve for gamma first, too?). The equation I've worked out seems to be correct, but I have trig functions of both gamma and theta. I'll see if I have AutoCAD lying around; if so I'll make a little drawing right quick, but let me show you the equation I have... $$s = VoCos\gamma [\frac{2VoSin\gamma}{g} * Sec \theta] - \frac{gSin \theta}{2} * [\frac{2VoSin\gamma}{g} * Sec \theta]^2$$ Which is in the form $$s = ut + \frac{1}{2}at^2$$ Like I said, I'm not sure how/if this is differentiable or solvable for gamma... I suppose this equation is the equation for the range of the projectile along the inclined surface, but I would still have to zero out the derivative with respect to gamma in order to find the gamma which provides for max range, correct? Those "g" terms (gravity... 9.8 m/s^2) came from the formula I used for time, which worked out to be $$t = \frac{2Voy}{g}*Sec\theta$$ where naturally $$Voy = VoSin\gamma$$...
P: 9 Angle of projection above an incline? The equation for the range (r(t), or s if you prefer) seems to work out to: $$r(t) = \frac{2Vo^2 * Sin\gamma * Cos\gamma}{g*Cos\theta} - \frac{2Vo^2 * Sin^2 \gamma}{gCos\theta}*Tan\theta$$ hmmmm...
P: 9 Ahhh, well, I've come up with $$\gamma = \frac{1}{2}ArcCot(Tan \theta)$$
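As a numerical sanity check (not from the thread): scanning launch angles for a sample incline of θ = 30° shows the range along the slope is maximized at γ = ½ arccot(tan θ) = π/4 − θ/2 = 30°, with γ measured from the incline surface:

```python
import math

def incline_range(gamma, theta, V0=10.0, g=9.8):
    # Range along the incline; gamma is measured from the incline surface.
    return (2 * V0**2 * math.sin(gamma) * math.cos(gamma)
            - 2 * V0**2 * math.sin(gamma)**2 * math.tan(theta)) / (g * math.cos(theta))

theta = math.radians(30)
# Scan launch angles in [0, pi/2) and pick the maximizer.
gammas = [i * 1e-4 for i in range(int(math.pi / 2 * 1e4))]
best = max(gammas, key=lambda gm: incline_range(gm, theta))
print(round(math.degrees(best), 2))  # ~30.0, matching pi/4 - theta/2
```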
|
2014-09-17 19:43:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7677517533302307, "perplexity": 831.4248364430582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657124356.76/warc/CC-MAIN-20140914011204-00333-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
|
https://bison.inl.gov/Documentation/source/materials/ComputeStrainIncrementBasedStress.aspx
|
Compute Strain Increment Based Stress
Compute stress after subtracting inelastic strain increments
Description
This stress calculator finds the value of the stress as a function of the elastic strain increment when a series of inelastic strains are specified in the input file. The stress is updated as
$$\sigma = \sigma_{old} + C : \Delta\epsilon_{el} \quad (1)$$
where $\sigma$ is the stress and $C$ is the elasticity tensor of the material. The elastic strain increment, $\Delta\epsilon_{el}$, is found by subtracting the sum of the inelastic strain increments from the mechanical strain increment:
$$\Delta\epsilon_{el} = \Delta\epsilon_{mech} - \sum_i \Delta\epsilon_{inel}^{(i)} \quad (2)$$
where $\Delta\epsilon_{mech}$ is the mechanical strain increment and the $\Delta\epsilon_{inel}^{(i)}$ are the inelastic strain increments. In the tensor mechanics module, mechanical strain is defined as the sum of the elastic and inelastic (e.g. creep and/or plasticity) strains.
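In one dimension, with a scalar modulus E standing in for the elasticity tensor, the update reduces to the sketch below (hypothetical names, not the actual MOOSE C++ API):

```python
def update_stress(stress_old, E, d_strain_mech, d_strains_inel):
    # Elastic strain increment = mechanical increment minus all inelastic increments,
    # then a linear-elastic stress update.
    d_strain_el = d_strain_mech - sum(d_strains_inel)
    return stress_old + E * d_strain_el

# E = 200 GPa, 0.1% mechanical strain step, of which 0.02% is creep strain
print(update_stress(0.0, 200e9, 1e-3, [2e-4]))  # ~1.6e8 Pa (160 MPa)
```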
Example Input File
[./stress]
type = ComputeStrainIncrementBasedStress
[../]
(moose/modules/tensor_mechanics/test/tests/plane_stress/weak_plane_stress_incremental.i)
Input Parameters
• store_stress_old — Default: False; C++ Type: bool. Indicates whether the old stress state, required for the HHT time integration scheme and Rayleigh damping, needs to be stored.
• compute — Default: True; C++ Type: bool. When false, MOOSE will not call compute methods on this material. The user must call computeProperties() after retrieving the Material via MaterialPropertyInterface::getMaterial(). Non-computed Materials are not sorted for dependencies.
• base_name — C++ Type: std::string. Optional parameter that allows the user to define multiple mechanics material systems on the same block, i.e. for multiple phases.
• inelastic_strain_names — C++ Type: std::vector. Names of inelastic strain properties.
• boundary — C++ Type: std::vector. The list of boundary IDs from the mesh where this boundary condition applies.
• block — C++ Type: std::vector. The list of block ids (SubdomainID) to which this object will be applied.
Optional Parameters
• control_tags — C++ Type: std::vector. Adds user-defined labels for accessing object parameters via control logic.
• enable — Default: True; C++ Type: bool. Set the enabled status of the MooseObject.
• seed — Default: 0; C++ Type: unsigned int. The seed for the master random number generator.
• implicit — Default: True; C++ Type: bool. Determines whether this object is calculated using an implicit or explicit form.
• constant_on — Default: NONE; C++ Type: MooseEnum. When ELEMENT, MOOSE will only call computeQpProperties() for the 0th quadrature point and then copy that value to the other qps. When SUBDOMAIN, MOOSE will only call computeSubdomainProperties() for the 0th quadrature point and then copy that value to the other qps; evaluations on element qps will be skipped.
• output_properties — C++ Type: std::vector. List of material properties, from this material, to output (outputs must also be defined to an output type).
• outputs — Default: none; C++ Type: std::vector. Vector of output names where you would like to restrict the output of variable(s) associated with this object.
https://physics.stackexchange.com/questions/234932/expectation-value-position-of-sine-wave-in-infinite-square-well
# Expectation value position of sine wave in infinite square well
Given a neutron (mass$\approx$939 MeV/c$^2$) in an infinite square well of size $a$, the value of the expectation value for position should be in the range $[0,a]$. I know that the general form of the expectation value for position is $$\langle X\rangle=\int_{-\infty}^{\infty}\psi^*x\psi dx=\int^a_0\psi^*x\psi dx \, ,$$ My wave function is given by: $$\psi[x]=\sqrt[]{\frac{2}{7a}}\sin{\frac{x\pi}{a}}+\sqrt[]{\frac{4}{7a}}\sin{\frac{2x\pi}{a}}+\sqrt[]{\frac{8}{7a}}\sin{\frac{3x\pi}{a}}\, ,$$ which is a superposition of wavefunctions of the form $\sin(n\pi x / a)$. Because all of these $\sin$ functions are orthogonal, the expectation value can be written: $$\sum_n[p_n\int^a_0\phi_n^*x\phi_ndx]$$ For the probabilities of each $n$ state given by $p_n$. However, for all $n$, the above integrals all evaluate to $\frac{a^2}{4}$, which not only has the wrong units, but for $a>4$ gives a magnitude larger than the size of the well. How can this be?
EDIT: this was poorly worded, and under-explained due to a mixture of pressure and lack of sleep (not that anyone on stack exchange cares). I think I've fixed it.
• There is an error in your result as you haven't normalized the wavefunction before doing the integral. But even then, the expectation value can be more than one. It doesn't give the probability but the average value of position. The average value can take any value from $0$ to $a$ – biryani Feb 10 '16 at 7:33
• Apart from the issues already mentioned, your integral should be only from $-a$ to $a$. – Prahar Feb 10 '16 at 14:21
• So, the problem I was actually having I'll cover in an answer that I'll post (it boils down to biryani being right, I didn't normalize), but with regard to the points you brought up: The expectation value for the position of a particle in an infinite square well should be within that well. Apart from the magnitude of $\frac{a^2}{2}$ being larger than $a$ for $a>4$, the units are of length$^2$, which is incorrect. Furthermore, I've defined the well to have the origin at one side, which is why the integral goes from $0$ to $a$. – ocket8888 Feb 11 '16 at 2:12
• Wow, seriously downvoted me again. This is why I hate this place. – ocket8888 Feb 11 '16 at 2:40
• @ocket8888 Nothing forces you to post on this site. But if you do decide to, please make sure you write clear, well worded questions that amount to more than just "here is what I did, and I get the wrong answer: What did I do wrong?" – Danu Feb 11 '16 at 22:54
The problem I was having is that the integral must be taken of the normalized wavefunction, not the orthonormal wavefunction. It's incorrect to multiply by $p_n$, that's not what the expectation value entails. Specifically, for this case: $$\Psi[x]=\sqrt[]{\frac{4}{7a}}\sin[\frac{\pi x}{a}]+\sqrt[]{\frac{2}{7a}}\sin[\frac{2\pi x}{a}]+\sqrt[]{\frac{8}{7a}}\sin[\frac{3\pi x}{a}]\, .$$ Note that the coefficients are neither the probabilities of each state, nor their square, they are simply the normalization coefficients. Because $$\Psi\in\mathbb{R}\, ,$$ The expectation value evaluates to: $$\langle X\rangle=\int_0^ax\Psi^2dx\\ ~\\ =\frac{4}{7}a\left(\frac{7}{8}-\frac{8(54+25\sqrt[]{2})}{255\pi^2}\right)\\ ~\\ \approx0.316054\space a\, ,$$ within the box, more or less what you'd expect.
• You've forgotten the cross product terms like $\int \phi_1 x \phi_2 dx$, etc. those aren't all zero. – Bill N Feb 11 '16 at 3:51
• Actually, Mathematica disagrees with you. $\int_0^b x \sin[x\pi/b]\sin[2x\pi/b]\mathrm{d}x = \frac{-8b^2}{9\pi^2}.$ – Bill N Feb 11 '16 at 18:26
• $\int_0^b x \sin [2x\pi/b]\sin [3x\pi/b]\,\mathrm{d}x = \frac{-24b^2}{25\pi^2}.$ – Bill N Feb 11 '16 at 18:30
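A quick numerical cross-check of the accepted value is easy to run. This is a sketch: it uses the answer's coefficients (note they differ from the question's), a midpoint-rule integral, and an arbitrary well width $a=1$. The cross terms pointed out in the comments are included automatically because the full $\Psi^2$ is integrated.

```python
import numpy as np

a = 1.0                          # well width; <x> scales linearly with a
n = 200000
dx = a / n
x = (np.arange(n) + 0.5) * dx    # midpoint grid on [0, a]

# Wavefunction as written in the accepted answer (coefficients 4/7a, 2/7a, 8/7a)
psi = (np.sqrt(4/(7*a)) * np.sin(np.pi*x/a)
       + np.sqrt(2/(7*a)) * np.sin(2*np.pi*x/a)
       + np.sqrt(8/(7*a)) * np.sin(3*np.pi*x/a))

norm = np.sum(psi**2) * dx           # should be 1: the state is normalized
expect_x = np.sum(x * psi**2) * dx   # <x>, cross terms included automatically

print(round(norm, 6), round(expect_x, 6))  # → 1.0 0.316054
```

Swapping in the question's coefficients ($\sqrt{2/7a}$, $\sqrt{4/7a}$, $\sqrt{8/7a}$) changes only the $2\leftrightarrow3$ cross term and gives a different, but still in-well, value.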
http://www.scholarpedia.org/article/Wulff_shape_of_crystals
Wulff shape of crystals
The shape of an equilibrium crystal is obtained, according to the Gibbs thermodynamic principle, by minimizing the total surface free energy associated with the crystal-medium interface. The study of the solution to this problem, known as the Wulff construction, is the object of this article.
Introduction
When a fluid is in contact with another fluid, or with a gas, a portion of the total free energy of the system is proportional to the area of the surface of contact, and to a coefficient, the surface tension, which is specific for each pair of substances. Equilibrium will accordingly be obtained when the free energy of the surfaces in contact is a minimum.
When one of the substances involved is anisotropic, such as a crystal, the contribution to the total free energy of each element of area depends on its orientation. The minimum surface free energy for a drop of a given volume determines then the ideal form of the crystal in equilibrium.
The principle of minimum free energy appears in the fundamental work of J.W. Gibbs, On the equilibrium of heterogeneous substances (1875-1878) where, in particular, he shows the role of the anisotropic surface tension for the determination of the shape of a crystal in equilibrium and discusses the formation of facets. He also points out the complexities of the actual crystal growth and suggests that only very small crystals can have an ideal equilibrium form.
G. Wulff, who made classical experiments on crystal growth, reported his results in the paper Zur Frage der Geschwindigkeit des Wachstums und der Auflösung der Kristallflächen (1901), first published in Russian in 1895. His principal conclusions include the celebrated Wulff's theorem:
'The minimum surface energy for a given volume of a polyhedron will be achieved if the distances of its faces from one given point are proportional to their capillary constants'.
The term capillary constants was used for the surface tension. Wulff himself supported his principle mainly by its consequences and his attempt at a general proof was incorrect. Complete proofs were later presented by M. von Laue, C. Herring, and others. An anthology, with comments, of this early work (since Gibbs) is presented in the book by Schneer (1970).
Since the surface tensions depend upon the geometric distribution of the particles making up the crystal, the Wulff theorem establishes a relation between the forms and the structure of crystals, and could be used for this study before the discovery of X-ray diffraction. Although it is not easy to find equilibrium crystals in nature one may think that an average of the ensemble of forms of a mineral species approaches equilibrium.
It is only in recent times that equilibrium crystals have been produced in the laboratory. Most crystals grow under non-equilibrium conditions, as predicted by Gibbs, and it is a subsequent relaxation of the macroscopic crystal that restores the equilibrium. This requires transport of material over long distances and the time scales can be very long, even for very small crystals. One has been able to study, however, metal crystals in equilibrium of size 1-10 micron. Equilibration times of a few days were observed. A schematic representation of a cubic equilibrium crystal is shown in Figure 1.
Figure 1: Octant of a cubic equilibrium crystal shown in a projection along the (1,1,1) direction. The three regions 1, 2 and 3 indicate the facets and the remaining area represents a curved part of the crystal surface.
A very interesting phenomenon that can be observed on equilibrium crystals is the roughening transition. This transition is characterized by the disappearance of the facets of a given orientation from the equilibrium crystal, when the temperature attains a certain particular value. Roughening transitions have been found experimentally, first, in negative crystals (i.e., vapor bubbles included in a crystal) of organic substances. The best observations have been made on helium crystals in equilibrium with superfluid helium, since then the transport of matter and heat is extremely fast. Crystals grow to a size of 1-5 mm and relaxation times vary from milliseconds to minutes. Roughening transitions for three different types of facets have been observed (see, for instance, Wolf et al. 1983).
The next Section of this article is devoted to the macroscopic theory of the equilibrium crystal shape, and the two Sections after it to the microscopic approach, the key example being the phase coexistence in the Ising ferromagnet. Section 3 deals with interfaces, the associated free energies, and their rigidity or roughness; Section 4 with exact solutions for the Wulff shape and rigorous results on the facet shape in three dimensions.
Thermodynamics of equilibrium crystals
The variational problem
The problem is to find, at equilibrium, the shape of a droplet of a phase $$\textstyle c$$, the crystal, inside a phase $$\textstyle m$$, called the medium.
Let $$n$$ be a unit vector in $$R^d$$ and consider the situation in which the phases $$c$$ and $$m$$ coexist over a plane perpendicular to $$n$$. Let $$\tau(n)$$ be the surface tension, or free energy per unit area, of such an interface. If $$B$$ is the set of $$R^d$$ occupied by the phase $$c$$, and $$\partial B$$ the boundary of $$B$$, the total surface free energy of the crystal is given by
$\tau(\partial B)=\int_{\xi\in\partial B}\tau(n(\xi)) ds_\xi$
Here $$\textstyle n (\xi)$$ is the exterior unit normal to $$\textstyle \partial B$$ at $$\textstyle \xi$$ and $$\textstyle ds_\xi$$ is the element of area at this point. One has to minimize this expression under the constraint that the total (Lebesgue) volume $$\textstyle \vert B\vert$$ occupied by the phase $$\textstyle c$$ is fixed. Given a set $$\textstyle {\cal W}$$, we say that the crystal $$\textstyle B$$ has shape $$\textstyle {\cal W}$$ if after a translation and a dilation it equals $$\textstyle {\cal W}$$.
The solution of this variational problem, known under the name of Wulff construction, is given by
${\cal W}=\{x\in R^d : x\cdot n\le\tau(n)\hbox{ for every } n\in S^{d-1}\}$
Theorem 1 Let $$\textstyle {\cal W}$$ be the just defined Wulff shape for the surface tension function $$\textstyle \tau(n)$$. Let $$\textstyle B\subset R^d$$ be any other region with sufficiently smooth boundary and the same (Lebesgue) volume as $$\textstyle {\cal W}$$. Then
$\tau(\partial B)\ge\tau(\partial{\cal W})$
with equality if and only if $$\textstyle B$$ and $$\textstyle {\cal W}$$ have the same shape.
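Since $\textstyle {\cal W}$ is an intersection of half-spaces, it can be built numerically by discretizing the normal directions. A minimal two-dimensional sketch (the grid size, direction count, and the isotropic test case $\tau\equiv1$ are all illustrative choices):

```python
import numpy as np

def wulff_mask(tau, L=1.5, N=400, M=720):
    """Boolean mask of the 2-D Wulff shape W = {x : x.n <= tau(n) for all n}
    on an N x N grid over [-L, L]^2, using M sampled normal directions."""
    xs = np.linspace(-L, L, N)
    X, Y = np.meshgrid(xs, xs)
    inside = np.ones_like(X, dtype=bool)
    for th in np.linspace(0.0, 2*np.pi, M, endpoint=False):
        inside &= (X*np.cos(th) + Y*np.sin(th) <= tau(th))
    return inside, (2*L/(N-1))**2   # mask and area of one grid cell

# Isotropic surface tension tau == 1: the Wulff shape is the unit disk.
mask, cell = wulff_mask(lambda th: 1.0)
area = mask.sum() * cell
print(round(area, 2))   # close to pi
```

An anisotropic `tau` (e.g. `lambda th: abs(np.cos(th)) + abs(np.sin(th))`) produces the corresponding polygonal or faceted shape by the same intersection.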
Proof of Wulff's theorem
This section is devoted to the proof of Theorem 1. The proof presented here will be based, following Taylor (1987), on geometrical inequalities. First, one notices that being defined as the intersection of closed half-spaces, the Wulff shape $$\textstyle {\cal W}$$ is a closed, bounded convex set, i.e., a convex body. Among the functions $$\textstyle \tau(n)$$, which through the above formula define the same shape $$\textstyle {\cal W}$$, there is a unique function having the property that all planes $$\textstyle \{x\in R^d: x\cdot n= \tau(n)\}$$, associated to all different unit vectors $$\textstyle n$$, are tangent to the convex set $$\textstyle {\cal W}$$. This function is given by
$\tau_{\cal W}(n)= \sup_{x\in{\cal W}}(x\cdot n)$
and is called the support function of the convex body $$\textstyle {\cal W}$$. For an arbitrary function $$\textstyle \tau(n)$$ defining the same Wulff shape $$\textstyle {\cal W}$$, it can happen that some of these planes do not touch the set $$\textstyle {\cal W}$$ (think, for instance, of the case where $$\textstyle d=2$$ and $$\textstyle {\cal W}$$ is a square). Thus, the function $$\textstyle \tau_{\cal W}(n)$$ is the smallest function on the unit sphere which gives the Wulff shape $$\textstyle {\cal W}$$.
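For a polytope the supremum defining the support function is attained at a vertex, which makes $\tau_{\cal W}$ easy to evaluate. A small sketch for the square ${\cal W}=[-1,1]^2$, whose support function is $|n_1|+|n_2|$:

```python
import numpy as np

# Support function of the square W = [-1,1]^2, evaluated by maximizing x.n
# over the vertices (for a polytope the sup is attained at a vertex).
verts = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)

def tau_W(theta):
    n = np.array([np.cos(theta), np.sin(theta)])
    return float(np.max(verts @ n))

# Closed form for this square: tau_W(n) = |n1| + |n2|
for th in np.linspace(0.0, 2*np.pi, 97):
    assert abs(tau_W(th) - (abs(np.cos(th)) + abs(np.sin(th)))) < 1e-12
print("tau_W(n) = |n1| + |n2| on all sampled directions")
```

Any function on the sphere that is pointwise at least `tau_W` and agrees with it on the four face normals defines the same square, illustrating that the support function is the smallest such choice.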
Two further geometric inequalities are the following two theorems. Theorem 2 is an extension of the isoperimetric property and is shown below to be a consequence of Theorem 3. The Wulff theorem will be a consequence of Theorem 2 and of the further lemma below.
Theorem 2 Let $$\textstyle {\cal W}\subset R^d$$ be a convex body and $$\textstyle \tau_{\cal W}(n)$$ the corresponding support function. For any set $$\textstyle B\subset R^d$$, with sufficiently smooth boundary, we have
$\tau_{\cal W}(\partial B)\ge d \vert{\cal W}\vert^{1/d}\vert B\vert^{(d-1)/d}$
where $$\textstyle \vert{\cal W}\vert$$, $$\textstyle \vert B\vert$$, denote the (Lebesgue) volumes of $$\textstyle {\cal W}$$, $$\textstyle B$$, respectively, and $$\textstyle \tau_{\cal W}(\partial B)$$ is the surface free energy of $$\textstyle \partial B$$. The equality occurs only when $$\textstyle B$$ and $$\textstyle {\cal W}$$ have the same shape.
If $$\textstyle \cal D$$ is the closed circle of unit radius with center at the origin, then the corresponding support function $$\textstyle \tau_{\cal D}(n)$$ is equal to the constant 1, and the Theorem reduces to the isoperimetric property: the area $$\textstyle F$$ and the boundary length $$\textstyle L$$ of any plane domain satisfy the inequality $$\textstyle L^2\ge 4\pi F$$.
Theorem 2 is a consequence of the following Brunn-Minkowski inequality:
Theorem 3 For non empty compact sets $$\textstyle A,B\subset R^d$$,
$\vert A+B\vert^{1/d} \ge \vert A\vert^{1/d}+\vert B\vert^{1/d}$
Moreover, the equality sign holds only when $$\textstyle A$$ and $$\textstyle B$$ are two convex bodies with the same shape (or one of the sets consists of a single point).
Given two non-empty sets $$\textstyle A,B\subset R^d$$ their vector Minkowski sum is defined by $$\textstyle A+B=\{a+b : a\in A,b\in B\}$$. A proof of Theorem 3 can be found in the book by Burago and Zalgaller (1988). First one proves by direct computation that the inequality holds in the case in which $$\textstyle A$$ and $$\textstyle B$$ are parallelepipeds with sides parallel to the coordinate axis. The validity of the inequality is then extended by induction to all finite unions of such parallelepipeds and, finally, to all compact sets by an appropriate limit process.
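The first step of that proof, the case of parallelepipeds with sides parallel to the coordinate axes, can be checked directly: the Minkowski sum of two such boxes is again a box whose sides add, so the inequality reduces to an exact computation. A sketch (dimension and random side ranges are arbitrary choices):

```python
import math
import random

random.seed(0)
d = 3
# Brunn-Minkowski for axis-aligned boxes: A + B is the box whose sides are
# the sums of the sides, so |A+B|^(1/d) >= |A|^(1/d) + |B|^(1/d) can be
# verified without any geometric discretization.
for _ in range(1000):
    a = [random.uniform(0.1, 5.0) for _ in range(d)]   # sides of box A
    b = [random.uniform(0.1, 5.0) for _ in range(d)]   # sides of box B
    lhs = math.prod(x + y for x, y in zip(a, b)) ** (1/d)
    rhs = math.prod(a) ** (1/d) + math.prod(b) ** (1/d)
    assert lhs >= rhs - 1e-12
print("Brunn-Minkowski verified on 1000 random box pairs")
```

Equality occurs exactly when the two boxes are homothetic (proportional sides), consistent with the equality condition of Theorem 3.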
Lemma Let $$\textstyle {\cal W}$$ be a convex body in $$\textstyle R^d$$. Given any set $$\textstyle B\subset R^d$$, with sufficiently smooth boundary, we can express the functional $$\textstyle \tau_{\cal W}(\partial B)$$ as
$\tau_{\cal W}(\partial B) =\lim_{\lambda\to0}{{\vert B+\lambda{\cal W}\vert -\vert B\vert}\over\lambda}$
where $$\textstyle \lambda{\cal W}$$ denotes the homothetic set $$\textstyle \{\lambda x : x \in{\cal W}\}$$. If $$\textstyle B\subset R^d$$ is bounded, the functional $$\textstyle \tau_{\cal W}$$ is well defined. In particular, this last formula shows that
$\tau_{\cal W}(\partial{\cal W})=d\,\vert{\cal W}\vert$
Proof. To prove this lemma, observe that if $$\textstyle H(n)$$ denotes the half space below a plane orthogonal to $$\textstyle n$$, and 'dist' denotes the distance between two sets (two parallel planes), then, from the definition of $$\textstyle \tau_{\cal W}$$,
$\tau_{\cal W}(n)={\rm dist}\big(\partial H(n), \partial(H(n)+{\cal W})\big)$
and one can then write \begin{eqnarray*} \tau_{\cal W}(\partial B) &=&\int_{\xi\in\partial B}{\rm dist}\big(\partial H(n(\xi)), \partial(H(n(\xi))+{\cal W})\big)ds_\xi \\ &=&\int_{\xi\in\partial B}\lim_{\lambda\to0}(1/\lambda)\, {\rm dist}\big(\xi,\partial(B+\lambda{\cal W})\big)ds_\xi \end{eqnarray*} This last expression coincides with the formula for $$\textstyle \tau_{\cal W}(\partial B)$$ given in the Lemma, and proves its validity.
Proof of Theorem 2. The inequality in Theorem 2 follows by applying the Brunn-Minkowski inequality to $$\textstyle \vert B+\lambda{\cal W}\vert$$. This gives
$\vert B+\lambda{\cal W}\vert-\vert B\vert\ge \big(\vert B\vert^{1/d}+\lambda\vert{\cal W}\vert^{1/d} \big)^d-\vert B\vert\ge d\,\lambda\,\vert B\vert^{(d-1)/d}\vert{\cal W}\vert^{1/d}$
which, after dividing by $$\textstyle \lambda$$ and letting $$\textstyle \lambda\to0$$, gives the inequality of Theorem 2 by the lemma, ending the proof.
Proof of Theorem 1. Let $$\textstyle {\cal W}$$ be the Wulff shape corresponding to the function $$\textstyle \tau$$. Then
$\tau(\partial B)\ge\tau_{\cal W}(\partial B) \ge d \vert{\cal W}\vert^{1/d}\vert B\vert^{(d-1)/d}$
where the first inequality holds because $$\textstyle \tau_{\cal W}$$ is the smallest function defining the Wulff shape $$\textstyle {\cal W}$$, and the second is the isoperimetric inequality (Theorem 2). But, when $$\textstyle B={\cal W}$$, we have
$\tau(\partial{\cal W})= \tau_{\cal W}(\partial{\cal W})=d \vert{\cal W}\vert$
Here, the first equality follows from the fact that $$\textstyle \tau_{\cal W}(n)\ne\tau(n)$$ only for the unit vectors $$\textstyle n$$ for which the planes $$\textstyle x\cdot n=\tau(n)$$ are not tangent to the convex set $$\textstyle {\cal W}$$. The second equality follows from the lemma. Therefore
$\tau(\partial B)\ge \tau(\partial{\cal W})\, (\vert B\vert/\vert{\cal W}\vert)^{(d-1)/d}$
which, when $$\textstyle \vert B\vert=\vert{\cal W}\vert$$, gives the inequality stated in Theorem 1. The equality in Theorem 1 corresponds to the equality in Theorem 2. This ends the proof of Theorem 1.
Remark Given the surface tension, consider its extension by positive homogeneity, $$\textstyle f(x)=\vert x\vert\, \tau(x/ \vert x\vert)$$. It turns out that if $$\textstyle f(x)$$ is a convex function on $$\textstyle R^d$$, then $$\textstyle \tau (n)$$ is the support function of the convex body $$\textstyle {\cal W}$$, the associated Wulff shape. This convexity condition is also equivalent to the fact that the surface tension $$\textstyle \tau$$ satisfies a thermodynamic stability condition called the pyramidal inequality, see Messager et al. (1992) for a proof.
The pyramidal inequality refers to the following condition. Let $$\textstyle A_0,\dots,A_d$$ be any set of $$\textstyle d+1$$ points in $$\textstyle R^d$$ and, for $$\textstyle i=0,\dots,d$$, let $$\textstyle \Delta_i$$ be the $$\textstyle (d-1)$$-dimensional simplex defined by all these points except $$\textstyle A_i$$. Denote by $$\textstyle \vert\Delta_i\vert$$ the $$\textstyle (d-1)$$-dimensional area of $$\textstyle \Delta_i$$ and by $$\textstyle n_i$$ the unit vector orthogonal to $$\textstyle \Delta_i$$. The first vector, $$\textstyle n_0$$, is oriented toward the exterior of the simplex $$\textstyle A_0,\dots,A_d$$, while the others $$\textstyle n_i$$, $$\textstyle i\ge1$$, are oriented inside. We say that $$\textstyle \tau(n)$$ satisfies the pyramidal inequality if, for any set of $$\textstyle d+1$$ points,
$\vert\Delta_0\vert \, \tau(n_0) \le \sum^d_{i=1}\vert\Delta_i\vert\, \tau(n_i)$
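In the isotropic case $\tau(n)\equiv1$ the pyramidal inequality reduces to the statement that each face of a simplex has area at most the sum of the areas of the other faces (each face is covered by the projections of the others onto its plane). A quick check for random tetrahedra in $d=3$:

```python
import numpy as np

rng = np.random.default_rng(1)

def face_areas(P):
    """Areas of the four triangular faces of the tetrahedron with vertices P."""
    idx = [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]
    return [0.5 * np.linalg.norm(np.cross(P[b] - P[a], P[c] - P[a]))
            for (a, b, c) in idx]

# Isotropic tau == 1: the pyramidal inequality says each face area is at
# most the sum of the other three (true for any non-degenerate simplex).
for _ in range(200):
    areas = face_areas(rng.standard_normal((4, 3)))
    for k in range(4):
        assert areas[k] <= (sum(areas) - areas[k]) + 1e-9
print("pyramidal inequality (isotropic case) holds for 200 random tetrahedra")
```

For an anisotropic $\tau$ the same loop would weight each area by $\tau(n_i)$ with the stated orientation convention for the normals.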
Facets in the equilibrium crystal
The appearance of a plane facet in the equilibrium crystal shape is related to the existence of a discontinuity in the derivative of the surface tension with respect to the orientation.
More precisely, let the surface tension $$\textstyle \tau(n)=\tau(\theta,\phi)$$, for $$\textstyle d=3$$, be expressed in terms of the spherical coordinates of $$\textstyle n$$, the vector $$\textstyle n_0$$ being taken as the $$\textstyle x_3$$ axis with $$\textstyle \theta$$ the azimuth angle, and assume that it satisfies the stability condition mentioned above. Then, $$\textstyle \tau(n)$$ is the support function of the Wulff shape, and as a natural consequence of this fact (see Miracle-Sole 1995), it follows:
Theorem 4 A facet orthogonal to $$\textstyle n_0$$ appears in the Wulff shape if, and only if, the derivative $$\textstyle \partial\tau(\theta,\phi)/\partial\theta$$ is discontinuous at the point $$\textstyle \theta=0$$, for all $$\textstyle \phi$$. The facet $$\textstyle {\cal F}\subset\partial{\cal W}$$ consists of the points $$\textstyle x\in R^3$$ belonging to the plane $$\textstyle x_3=\tau(n_0)$$ and such that, for all $$\textstyle \phi$$ between $$\textstyle 0$$ and $$\textstyle 2\pi$$,
$x_1\cos\phi+x_2\sin\phi\le \partial\tau(\theta,\phi)/\partial\theta\,\vert\,_{\theta=0^+}$
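A toy illustration of Theorem 4, with an assumed model form for the surface tension (not derived from any microscopic system): take $\tau(\theta,\phi)=\cos\theta+\sin\theta$ for small $\theta\ge0$, independent of $\phi$. The one-sided derivative $\partial\tau/\partial\theta\,\vert_{\theta=0^+}$ equals $1$ for every $\phi$, so the predicted facet is the unit disk $x_1^2+x_2^2\le1$:

```python
import numpy as np

# Assumed toy surface tension near the (0,0,1) orientation:
# tau(theta, phi) = cos(theta) + sin(theta), independent of phi.
def tau(theta, phi):
    return np.cos(theta) + np.sin(theta)

h = 1e-6
slopes = [(tau(h, phi) - tau(0.0, phi)) / h      # one-sided derivative at 0+
          for phi in np.linspace(0.0, 2*np.pi, 25)]

# Theorem 4: the facet is {x : x1*cos(phi) + x2*sin(phi) <= slope(phi)},
# and a constant slope of 1 in every direction cuts out the unit disk.
assert all(abs(s - 1.0) < 1e-5 for s in slopes)
print("facet half-width = 1 in every azimuthal direction")
```

A $\phi$-dependent slope $g(\phi)$ would instead produce the facet as the two-dimensional Wulff shape built from $g$.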
Interfaces in statistical mechanics
The Ising model
In a first approximation one can model the interatomic forces in a crystal by a lattice gas. In a typical two-phase equilibrium state there is, in these systems, a dense component, which can be identified as the crystal phase, and a dilute component, which can be identified as the vapor phase. The underlying lattice structure implies that the crystal phase is anisotropic, while this assumption, though unrealistic for the vapor phase, should be immaterial for the description of the crystal-vapor interface. As an illustrative example of such systems, the ferromagnetic Ising model will be considered.
The Ising model is defined on the $$\textstyle d$$-dimensional cubic lattice $$\textstyle {\cal L}=Z^d$$, with configuration space $$\textstyle \Omega = \{-1,1\}^{\cal L}$$. The value $$\textstyle \sigma(i)=\pm1$$ is the spin at the site $$\textstyle i\in{\cal L}$$. The occupation numbers, $$\textstyle n(i)=(1/2)(\sigma(i)+1)$$ $$\textstyle =0$$ or $$\textstyle 1$$, give the lattice gas version of this model. The energy of a configuration $$\textstyle \sigma_{\Lambda} = \{\sigma(i),i\in\Lambda\}$$, in a finite box $$\textstyle \Lambda\subset{\cal L}$$, under the boundary conditions $$\textstyle {\bar\sigma}\in\Omega$$, is
$H_{\Lambda}(\sigma_{\Lambda}\mid{\bar\sigma}) = - J \sum_{\langle i,j\rangle\cap\Lambda\not=\emptyset} \sigma(i)\sigma(j)$
where $$\textstyle J>0$$, $$\textstyle \langle i,j \rangle$$ are pairs of nearest neighbor sites, and $$\textstyle \sigma(i) = {\bar\sigma}(i)$$ when $$\textstyle i\not\in\Lambda$$. The partition function, at the inverse temperature $$\textstyle \beta=1/kT$$, is given by
$Z^{\bar\sigma}(\Lambda) =\sum_{\sigma_{\Lambda}}\exp \big(-\beta H_{\Lambda}(\sigma_{\Lambda}\mid{\bar\sigma})\big)$
This model presents, at low temperatures $$\textstyle T<T_c$$, where $$\textstyle T_c$$ is the critical temperature, two distinct thermodynamic pure phases. This means two extremal translation invariant Gibbs states, which correspond to the limits, when $$\textstyle \Lambda\to\infty$$, of the finite volume Gibbs measures
$Z^{\bar{\sigma}}(\Lambda) ^{-1} \exp \big( -\beta H_{\Lambda}(\sigma_{\Lambda}\mid\bar{\sigma})\big)$
with boundary conditions $$\textstyle {\bar\sigma}$$ respectively equal to the ground configurations $$\textstyle (+)$$ and $$\textstyle (-)$$ (respectively, $$\textstyle {\bar\sigma}(i) = 1$$ and $$\textstyle {\bar\sigma}(i) = -1$$, for all $$\textstyle i\in{\cal L}$$). Moreover, they are the unique extremal translation invariant Gibbs states of the system (Bodineau, 2006). On the other hand, if $$\textstyle T\ge T_c$$, the Gibbs state is unique.
Each configuration inside $$\textstyle \Lambda$$ can be described in a geometric way by specifying the set of Peierls contours which indicate the boundaries between the regions of spin $$\textstyle 1$$ and the regions of spin $$\textstyle -1$$. Unit square faces are placed midway between the pairs of nearest-neighbor sites $$\textstyle i$$ and $$\textstyle j$$, perpendicularly to these bonds, whenever $$\textstyle \sigma(i)\sigma(j)=-1$$. The connected components of this set of faces are the Peierls contours. Under the boundary conditions $$\textstyle (+)$$ and $$\textstyle (-)$$, the contours form a set of closed surfaces. They describe the defects of the considered configuration with respect to the ground configurations, and are a basic tool for the investigation of the model at low temperatures.
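The finite-volume objects above can be written down directly for a tiny box. A brute-force sketch of the partition function (the $3\times3$ box size and the values of $J$ and $\beta$ are illustrative; the global spin-flip symmetry forces $Z^{(+)}=Z^{(-)}$):

```python
import itertools
import math

J, beta, L = 1.0, 0.5, 3                       # illustrative values
interior = [(i, j) for i in range(L) for j in range(L)]

# All nearest-neighbour bonds <i,j> with at least one endpoint in the box
bonds = set()
for (i, j) in interior:
    for (di, dj) in ((1, 0), (0, 1), (-1, 0), (0, -1)):
        bonds.add(frozenset({(i, j), (i + di, j + dj)}))

def Z(boundary_spin):
    """Partition function with every exterior spin fixed to boundary_spin."""
    total = 0.0
    for conf in itertools.product((-1, +1), repeat=len(interior)):
        spins = dict(zip(interior, conf))
        s = lambda p: spins.get(p, boundary_spin)
        E = -J * sum(s(p) * s(q) for b in bonds for (p, q) in [tuple(b)])
        total += math.exp(-beta * E)
    return total

# Flipping every interior spin maps the (+) ensemble onto the (-) ensemble,
# so the two partition functions coincide (the two pure phases are symmetric).
print(abs(Z(+1) - Z(-1)) < 1e-9 * Z(+1))   # → True
```

The same enumeration with the mixed $(\pm,n)$ boundary conditions of the next subsection would force an interface through the box, at an obvious exponential cost in box size.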
Interfaces and surface tension
In order to study the interface between the two pure phases one needs to construct a state describing the coexistence of these phases. To simplify the exposition it will be assumed that $$\textstyle d=3$$. Let $$\textstyle \Lambda$$ be a parallelepiped of sides $$\textstyle L_1,L_2,L_3$$, parallel to the axes, and centered at the origin of $$\textstyle {\cal L}$$, and let $$\textstyle n=(n_1,n_2,n_3)$$ be a unit vector in $$\textstyle R^3$$, such that $$\textstyle n_3\ne 0$$. Introduce the mixed boundary conditions $$\textstyle (\pm,n)$$, for which
$\begin{array}{lcll} {\bar\sigma}(i) &=& 1, &\hbox{ if } i\cdot n\geq 0 \\ &=& -1, &\hbox{ if } i\cdot n< 0 \end{array}$
These boundary conditions force the system to produce a defect going transversally through the box $$\textstyle \Lambda$$, a big Peierls contour that can be interpreted as the microscopic interface. The other defects that appear above and below the interface can be described by closed contours inside the pure phases.
The free energy, per unit area, due to the presence of the interface, is the surface tension. It is defined by
$\tau(n)= \lim_{L_1,L_2\to\infty}\, \lim_{L_3 \to\infty} \, -{{n_3}\over{\beta L_1L_2}} \ln\, {Z^{(\pm,n)}(\Lambda)\over Z^{(+)}(\Lambda)}$
In this expression the volume contributions proportional to the free energy of the coexisting phases, as well as the boundary effects, cancel, and only the contributions to the free energy due to the interface are left.
Theorem 5 The thermodynamic limit $$\textstyle \tau (n)$$, of the interfacial free energy per unit area, exists, and is a non negative bounded function of $$\textstyle n$$. Its extension by positive homogeneity, $$\textstyle f(x)=\vert x \vert\, \tau(x/ \vert x\vert)$$ is a convex function on $$\textstyle R^3$$.
A proof of these statements has been given by Messager et al. (1992) using correlation inequalities and, in fact, the Theorem holds for a large class of lattice systems. Moreover, for the Ising model we know, from Bricmont et al. (1980), Lebowitz and Pfister (1981), and the convexity condition, that $$\textstyle \tau (n)$$ is strictly positive for $$\textstyle T<T_c$$ and that it vanishes if $$\textstyle T\ge T_c$$.
Rigid interfaces
Consider now the microscopic interface orthogonal to the direction $$\textstyle n_0 = (0,0,1)$$. At low temperatures $$\textstyle T>0$$, we expect this interface, which at $$\textstyle T=0$$ coincides with the plane $$\textstyle i_3=-1/2$$, to be modified by deformations. It can be described by means of its defects, with respect to the interface at $$\textstyle T=0$$. These defects, called walls, form the boundaries between the smooth plane portions of the interface. In this way the interface structure may then be interpreted as a 'gas of walls' on a two-dimensional lattice.
Dobrushin (1972) proved the dilute character of this gas at low temperatures, which means that the interface is essentially flat (or rigid). The considered boundary conditions indeed yield a non translation invariant Gibbs state. This is also known to be the case for all $$\textstyle T$$ less than $$\textstyle T_c^{d=2}$$, the critical temperature of the two-dimensional Ising model, from correlation inequalities (van Beijeren 1975). Notice that this critical temperature is known exactly: $$\textstyle J/kT_c^{d=2}=\frac12\ln(1+\sqrt2)\approx0.44$$. It will be seen, in the next Section, that the rigidity of the interface is related to the formation of a plane facet in the equilibrium crystal.
The same analysis applied to the two-dimensional model shows a different behavior at low temperatures. In this case Gallavotti (1972) proved that the microscopic interface undergoes large fluctuations and does not survive in the thermodynamic limit. The interface is rough and the corresponding Gibbs state is translation invariant. Indeed, all Gibbs states are translation invariant in the two-dimensional Ising model (Russo 1979, Aizenman 1980 and Higuchi 1981).
Coming back to the three-dimensional Ising model, where the interface orthogonal to a lattice axis is known to be rigid at low temperatures, the following question arises: At higher temperatures, do the fluctuations of this interface become unbounded, in the thermodynamic limit, so that the corresponding Gibbs state is translation invariant?
One says then that the interface is rough. It is believed that the interface effectively becomes rough when the temperature is raised, undergoing a roughening transition at a temperature $$\textstyle T_R$$ strictly smaller than the critical temperature $$\textstyle T_c$$, above which there is only one phase. Indeed, approximate methods give some evidence for the existence of such a $$\textstyle T_R$$ and suggest a value near $$\textstyle J/kT_R=0.41$$. The estimated critical temperature $$\textstyle T_c^{d=3}$$ in three dimensions is higher, corresponding to $$\textstyle J/kT_c^{d=3}\approx0.22$$.
Remark
In recent years it has become possible to justify the Wulff construction directly from a microscopic theory.
The first mathematically rigorous proof of the validity of the Wulff construction, in the case of the two-dimensional Ising model at low temperatures, is due to Dobrushin et al. (1992). See also Pfister (1991), for another version of the proof, and Miracle-Sole and Ruiz (1994) for a simpler approach in the case of an interface model. Their results show that, using the canonical ensemble, where the total number of particles (or the total magnetization in the language of spin systems) is fixed, if the configurations of the system are properly rescaled in the thermodynamic limit, then a (unique) droplet of the dense phase, immersed in the dilute phase, is formed. Its shape coincides with the Wulff shape. This fact was later extended to all temperatures below the critical temperature.
More recently, such a study has also been carried out in the case of three or more dimensions by Bodineau (1999) and by Cerf and Pisztora (2000).
Wulff shape in statistical mechanics
Solvable models
Consider the surface tension in the Ising model, between the positive and negative phases, defined as in Theorem 5. In the two-dimensional case, this function $$\textstyle \tau(n)$$ has (as shown by Abraham) an exact expression in terms of one of Onsager's functions. It follows (as explained in Miracle-Sole 1999) that the Wulff shape $$\textstyle {\cal W}$$, in the plane $$\textstyle (x_1,x_2)$$, is given by
$\cosh\beta x_1+\cosh\beta x_2\le\cosh^2 2\beta J/\sinh2\beta J$
This shape reduces to the empty set for $$\textstyle \beta\le\beta_c$$, since the critical $$\textstyle \beta_c$$ satisfies $$\textstyle \sinh 2J\beta_c=1$$. For $$\textstyle \beta>\beta_c$$, it is a strictly convex set with smooth boundary, as shown in Figure 2. More rounded figures correspond to higher temperatures (smaller $$\textstyle \beta$$).
Figure 2: Equilibrium shape for the two-dimensional Ising model, for $$J=1$$ and $$\beta=1, 2, 3$$.
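A small numerical sketch of this exact shape, with $J=1$: the half-width of the crystal along a lattice axis solves $\cosh\beta w+1=\cosh^2 2\beta/\sinh 2\beta$, and it shrinks to zero at the critical point $\sinh 2\beta_c=1$:

```python
import numpy as np

def half_width(beta, J=1.0):
    """Half-width along the x1-axis of the exact 2-D Ising Wulff shape:
    solve cosh(beta*x1) + cosh(0) = cosh(2*beta*J)**2 / sinh(2*beta*J)."""
    rhs = np.cosh(2*beta*J)**2 / np.sinh(2*beta*J)
    if rhs <= 2.0 + 1e-12:       # at (and above) T_c the set degenerates
        return 0.0               # (small tolerance for round-off at beta_c)
    return float(np.arccosh(rhs - 1.0)) / beta

beta_c = 0.5 * np.log(1.0 + np.sqrt(2.0))   # sinh(2*J*beta_c) = 1
print(half_width(beta_c))         # → 0.0  (crystal disappears at T_c)
print(round(half_width(1.0), 4))  # → 1.7277, a finite crystal below T_c
```

The value at $\beta=1$ agrees with the axis surface tension $\tau=2J+\beta^{-1}\ln\tanh\beta J$, as it must, since the shape's intercept along a normal direction equals the support function there.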
In the three-dimensional case, only certain interface models can be exactly solved. Consider the Ising model at zero temperature, with boundary condition $$\textstyle (\pm, n)$$. Then the ground configurations have only one defect, the microscopic interface $$\textstyle \lambda$$, imposed by this condition, and
$\tau(n)=\lim_{L_1,L_2\to\infty}{{n_3}\over{L_1L_2}} \,\big(E_\Lambda(n)-\beta^{-1}\ln N_\Lambda(n)\big),$
where $$\textstyle E_\Lambda=2J|\lambda|$$ is the energy (all $$\textstyle \lambda$$ have the same minimal area) and $$\textstyle N_\Lambda$$ the number of the ground states. Every such $$\textstyle \lambda$$ has the property of being cut only once by all straight lines orthogonal to the diagonal plane $$\textstyle i_1+i_2+i_3=0$$, provided that $$\textstyle n_k>0$$, for $$\textstyle k=1,2,3$$. Each $$\textstyle \lambda$$ can then be described by an integer function defined on a triangular plane lattice, the projection of the cubic lattice $$\textstyle {\cal L}$$ on the diagonal plane. The model defined by this set of admissible microscopic interfaces is precisely the TISOS model, introduced by Nienhuis et al. (1989). A similar definition can be given for the BCSOS model that describes the ground configurations on the body-centered cubic lattice (see van Beijeren 1977 and Kotecky, Miracle-Sole 1987).
From a macroscopic point of view, the roughness or the rigidity of an interface should be apparent when considering the shape of the equilibrium crystal associated with the system. A typical equilibrium crystal at low temperatures has smooth plane facets linked by rounded edges and corners. The area of a particular facet decreases as the temperature is raised and the facet finally disappears at a temperature characteristic of its orientation. It can be argued that the disappearance of the facet corresponds to the roughening transition of the interface whose orientation is the same as that of the considered facet.
The exactly solvable interface models mentioned above, for which the function $$\textstyle \tau(n)$$ has been exactly computed, are interesting examples of this behavior, and provide valuable information on several aspects of the roughening transition. This subject has been reviewed in Kotecky (1989) and Miracle-Sole (1999). For example, Figure 1 shows the shape predicted by the TISOS model.
The step free energy
For the three-dimensional Ising model at positive temperatures, the description of the microscopic interface, for any orientation $$\textstyle n$$, appears as a very difficult problem. It has been possible, however, to analyze the interfaces which are very near to the particular orientations $$\textstyle n_0$$ discussed in the preceding section. This analysis can be used to determine, in a rigorous way, the shape of the facets orthogonal to the coordinate axes.
The step free energy plays a role in the facet formation, as shown in Theorems 6 and 7, below. It is defined as the free energy, per unit length, associated with the introduction of a step of height 1 on the interface, and can be regarded as an order parameter for the roughening transition. Let $$\textstyle \Lambda$$ be, as above, a parallelepiped of sides $$\textstyle L_1,L_2,L_3$$, parallel to the axes, centered at the origin, and introduce the $$\textstyle ({\rm step},m)$$ boundary conditions, associated to the unit vectors $$\textstyle m = (\cos\phi,\sin\phi)\in R^2$$, by
$\begin{array}{lcll} {\bar\sigma}(i) &=& 1, &\hbox{ if } i_3>0 \hbox{ or if } i_3=0 \hbox{ and } i_1m_1+i_2m_2\ge0, \\ &=& -1, &\hbox{ otherwise. } \end{array}$
Then, the step free energy, per unit length, for a step orthogonal to $$\textstyle m$$ (with $$\textstyle m_2>0$$) on the horizontal interface, is
$\tau^{\rm step}(\phi) = \lim_{L_1\to\infty}\lim_{L_2\to\infty}\lim_{L_3\to\infty} - {{\cos\phi}\over{\beta L_1}} \ln {{Z^{{\rm step},m}(\Lambda)}\over {Z^{\pm,n_0}(\Lambda)}}$
Facets in the Wulff shape
A first result concerning this point was obtained by Bricmont et al. (1986), by proving a correlation inequality which establishes $$\textstyle \tau^{\rm step}(0)$$ as a lower bound, strictly positive for all $$\textstyle T<T_c^{d=2}$$, to the one-sided derivative $$\textstyle \partial\tau(\theta,0)/\partial\theta$$ at $$\textstyle \theta=0^+$$ (the inequality extends to $$\textstyle \phi\ne0$$). Thus, when $$\textstyle \tau^{\rm step}>0$$, a facet is expected, according to Theorem 4.
Using the perturbation theory of the horizontal interface, it is possible to study also the microscopic interfaces associated with the $$\textstyle ({\rm step},m)$$ boundary conditions. When considering these configurations, the step may be viewed as an additional defect on the rigid interface described in Section 2. It is, in fact, a long wall going from one side to the other side of the box $$\textstyle \Lambda$$. The step structure at low temperatures can then be analyzed with the help of a new cluster expansion. As a consequence of this analysis we have the following theorem.
Theorem 6 If the temperature is low enough, i.e., if $$\textstyle \beta J\ge c_0$$, where $$\textstyle c_0$$ is a suitable constant, then the step free energy, $$\textstyle \tau^{\rm step} (\phi)$$, exists, is strictly positive, and extends by positive homogeneity to a strictly convex function. Moreover, $$\textstyle \beta\tau^{\rm step}(\phi)$$ is an analytic function of $$\textstyle \zeta=e^{-2J\beta}$$, for which an explicit convergent series expansion can be found.
Using the above results on the step structure, similar methods allow us to evaluate the increment in surface tension of an interface tilted by a very small angle $$\textstyle \theta$$ with respect to the rigid horizontal interface. This increment can be expressed in terms of the step free energy, and one obtains the following relation.
Theorem 7 For $$\textstyle \beta J\ge c_0$$, we have
$\partial\tau(\theta,\phi)/\partial\theta\,\vert\,_{\theta=0^+} = \tau^{\rm step}(\phi).$
This relation, together with Theorem 4, implies that one obtains the shape of the facet by means of the two-dimensional Wulff construction applied to the step free energy. The reader will find a detailed discussion on these points, as well as the proofs of Theorems 6 and 7, in Miracle-Sole (1995).
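In formulas (a standard restatement of the two-dimensional Wulff construction, added here for convenience and not quoted from the cited works), the facet is, up to a scale factor, the convex set

```latex
\mathcal{F} \;=\; \bigl\{\, x\in\mathbb{R}^2 \;:\; x\cdot m \,\le\, \tau^{\rm step}(m)
\ \hbox{for all unit vectors } m \,\bigr\}.
```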
From the properties of $$\textstyle \tau^{\rm step}$$ stated in Theorem 6 it follows that the Wulff equilibrium crystal presents well defined boundary lines, smooth and without straight segments, between a rounded part of the crystal surface and the plane facets parallel to the three main lattice planes.
It is expected, but not proved, that at a higher temperature, but before reaching the critical temperature, the facets associated with the Ising model undergo a roughening transition. It is then natural to believe that the equality in Theorem 7 is true for any $$\textstyle T$$ lower than $$\textstyle T_R$$, and that for $$\textstyle T$$ higher than $$\textstyle T_R$$ both sides of this equality vanish, implying the disappearance of the facet. However, the condition that the temperature is low enough is needed in the proofs of Theorems 6 and 7.
References
Aizenman M. (1980): Translation invariance and instability of phase coexistence in the two-dimensional Ising system, Comm. Math. Phys. 73, 83-94.
Beijeren, H. van (1975): Interface sharpness in the Ising model. Commun. Math. Phys. 40, 1-6.
Beijeren, H. van (1977): Exactly solvable model for the roughening transition of a crystal surface, Phys. Rev. Lett. 38, 993-996.
Bodineau, T. (1999): The Wulff construction in three and more dimensions, Commun. Math. Phys. 207, 197-229.
Bodineau, T. (2006): Translation invariant Gibbs states for the Ising model, Probab. Theory Related Fields 135, 153-168.
Bricmont, J., Lebowitz, J.L., Pfister, C.E. (1980): On the surface tension of lattice systems, Ann. N.Y. Acad. Sci. 337, 214-223.
Bricmont, J., El Mellouki, A., Fröhlich, J. (1986): Random surfaces in statistical mechanics: roughening, rounding, wetting,... J. Stat. Phys. 42, 743-798.
Burago, Yu.D., Zalgaller, V.A. (1988): Geometric inequalities, Grundlehren der math. Wissenschaften, vol. 43, Springer, Berlin.
Cerf, R., Pisztora, A. (2000): On the Wulff crystal in the Ising model, Ann. Probab. 28, 945-1015.
Dobrushin, R.L. (1972): Gibbs state describing the coexistence of phases for a three dimensional Ising model, Theory Probab. Appl. 17, 582-600.
Dobrushin, R.L., Kotecky, R., Shlosman, S.B. (1992): The Wulff Construction: a Global Shape from Local Interactions, American Mathematical Society, Providence.
Gallavotti, G. (1972): The phase separation line in the two-dimensional Ising model, Commun. Math. Phys. 27, 103-136.
Higuchi Y. (1981): On the absence of non-translation invariant Gibbs states for the two-dimensional Ising model, in Colloq. Math. Soc. Janos Bolyai vol. 27, pp. 517-534, North-Holland, Amsterdam.
Kotecky, R. (1989): Statistical mechanics of interfaces and equilibrium crystal shapes. In: IX International Congress of Mathematical Physics, Simon, B. et al., eds., pp. 148-163, Adam Hilger, Bristol.
Kotecky, R., Miracle-Sole, S. (1987): Roughening transition for the Ising model on a bcc lattice. A case in the theory of ground states, J. Stat. Phys. 47, 773-799.
Lebowitz, J.L., Pfister, C.E. (1981): Surface tension and phase coexistence, Phys. Rev. Lett. 46, 1031-1033.
Messager, A., Miracle-Sole, S., Ruiz, J. (1992): Convexity properties of the surface tension and equilibrium crystals, J. Stat. Phys. 67, 449-470.
Miracle-Sole, S., Ruiz, J. (1994): On the Wulff construction as a problem of equivalence of ensembles, On Three Levels, M. Fannes, A. Verbeure (Eds.), pp. 295-302, Plenum Press, New York.
Miracle-Sole, S. (1995): Surface tension, step free energy and facets in the equilibrium crystal shape, J. Stat. Phys. 79, 183-214.
Miracle-Sole, S. (1999): Facet shapes in a Wulff crystal. In: Mathematical Results in Statistical Mechanics, Miracle-Sole S., Ruiz J., Zagrebnov, V., eds., pp. 83-101, World Scientific, Singapore.
Nienhuis, B., Hilhorst, H.J., Blöte, H.B. (1984): Triangular SOS models and cubic crystal shapes, J. Phys. A: Math. Gen. 17, 3559-3581.
Pfister, C.E. (1991): Large deviations and phase separation in the two-dimensional Ising model, Helv. Phys. Acta 64, 953-1054.
Russo L. (1979): The infinite cluster method in the two-dimensional Ising model, Comm. Math. Phys. 67, 251-266.
Schneer, C.J. (1977): Crystal Form and Structure, Benchmark Papers in Geology, vol. 34, Dowden, Hutchinson and Ross, Stroudsburg (USA).
Taylor, J.E. (1987): Some crystalline variational techniques and results, Asterisque, vol. 154, pp. 307-320.
Wolf, P.E., Balibar, S., Gallet, F. (1983): Experimental observations of a third roughening transition in hcp $$\textstyle ^4$$He crystals, Phys. Rev. Lett. 51, 1366-1369.
https://wumbo.net/symbol/beta/
# Beta Symbol
The symbol β is the lower-case Greek letter beta. In mathematics, the symbol is commonly used as a variable.
| Symbol | Format | Data |
|--------|---------|--------|
| β | Unicode | 946 |
| β | TeX | \beta |
| β | SVG | |
### Usage
Variable | Concept
A variable is a core concept in algebra where a symbol, usually a lower-case Latin letter, is used as a placeholder in a math expression.
Greek Alphabet | Concept
The Greek Alphabet is used throughout math to represent variables, constants, and coefficients within math expressions, formulas, and equations.
### Related Symbols
Capital Beta | Symbol
The capital Greek letter (Β).
http://mathoverflow.net/questions/59588/quantum-mathematics?sort=newest
# Quantum mathematics?
"Quantum" as a term/prefix used to be genuinely physical: what was supposed to be physically continuous turned out to be physically quantized.
What sense does this distinction make inside mathematics?
Especially: Is "quantum algebra" a well-chosen name? (According to Wikipedia, it's one of the top-level mathematics categories used by the arXiv, but it's not explained any further.)
There is a very clear and important mathematical distinction between classical probability theory and quantum (non-commutative) probability. – Gil Kalai Mar 25 '11 at 18:09
Actually, it is explained further, like all the arXiv categories: front.math.ucdavis.edu/categories/math – Ben Webster Mar 25 '11 at 18:10
What does "well-chosen name" mean? Quantum is catchy, historically motivated, sufficiently distinct from other terms that one usually knows what it is intended to mean when one sees is, considerably much better that "basic", and so on.... – Mariano Suárez-Alvarez Mar 25 '11 at 18:17
@Ben: "Quantum groups, skein theories, operadic and diagrammatic algebra, quantum field theory" isn't very much of an explanation. – Hans Stricker Mar 25 '11 at 18:21
This is not to disagree with Gil's comment; but I think Connes has taken issue (informally, in casual interviews) with some uses of "quantum X" as a more fundable synonym of "noncommutative X". My memory is that he politely points out that he can't see what is actually being "quantized" in some of these cases. – Yemon Choi Mar 25 '11 at 18:57
I think that the basic intuition relating quantum algebra and quantum physics is something like:
quantum stuff = classical stuff + $\hbar$ (something complicated)
where $\hbar$ is a "small" formal variable. In other words, the point is to consider that the mathematical objects everybody knows are only approximations of more complicated objects. Hence, quantum mathematics has something to do with perturbation theory, because most of the interesting objects in quantum mathematics are perturbations of trivial solutions of some problems/equations. Here, perturbation means that these objects are formal power series in $\hbar$ whose constant term is a trivial solution (eg: 1 :) ) of some equation (eg: the Yang Baxter equation).
Hence, as John pointed out, quantum algebra involves the study of objects for which classical properties (eg: commutativity) are "almost" true (ie: true modulo $\hbar$).
Personally, I like your answer very much: "quantum" involves a small parameter like $\hbar$ or a $q$. But I think that many people working in e.g. noncommutative geometry would disagree: they usually neglect the deformation aspect and study "hard" quantum algebras where no classical limit is in sight. So one probably should be open also for this point of view. In NCG the analogy with ordinary (=classical) geometry is more important than the deformation aspect. – Stefan Waldmann Mar 26 '11 at 10:31
Working in "quantum mathematics" myself, I should tend to defend this teminology a bit ;) The term is clearly motivated by the usage in physics and, nowadays, is typically used in situations where you have a "classical" mathematical object (ring, algebra, group, whatever) which traditionally is viewed in a commutative context. Then the "quantum" version means to transfer things into a noncommutative context and see what happens.
Of course, this is all very vague, but why do you call groups "groups" and fields "fields"? I guess, it is the intuition which makes this notion useful for the community. The intuition from physics is the transition from commutative to noncommutative, and I think that is really what people usually think if they hear from some "quantum blablabla" in math. So I guess, it is not a completely irritating notion :)
On first reading I (mis?)read this as saying that 'traditionally ring, algebra, group are viewed in a commutative context'. Inserting 'a certain' in front of ring could avoid this. – quid Mar 25 '11 at 19:11
@unknown (goolge) Of course, noncommutative rings etc have been studied before any sort of "quantum". I should have been a bit more specific here: it is the analogy (perhaps even a deformation) with a commutative object which makes the noncommutative object "quantum". Clearly not a mathematical definition :) – Stefan Waldmann Mar 26 '11 at 10:33
Thank you for the clarification. – quid Mar 26 '11 at 10:46
I would hold that the term non-commutative algebra is usually used to refer to the study of general noncommutative algebras. Quantum algebra involves the study of certain types of non-commutative algebras, not all non-commutative algebras. It's not black and white, but a reasonably well-defined subfamily. The algebras quite often involve a parameter $q$ such that when $q=1$ or $0$ the algebra is commutative - take for example Drinfeld--Jimbo algebras. The parallels with quantum theory here are obvious.
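As a toy illustration of this pattern (my own example, not from the answer above): the $q$-integer $[n]_q = 1 + q + \cdots + q^{n-1}$ is a deformation of the ordinary integer $n$ that recovers it in the "classical" limit $q \to 1$:

```python
def q_integer(n, q):
    # [n]_q = 1 + q + q**2 + ... + q**(n-1); at q = 1 this is just n
    return sum(q ** k for k in range(n))
```

For instance `q_integer(3, 2)` gives 7, while `q_integer(3, 1)` gives back the classical value 3.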
https://codegolf.stackexchange.com/questions/129523/it-was-just-a-bug?page=3&tab=votes
# It was just a bug
Inspired by the bugged output in @Carcigenicate's Clojure answer for the Print this diamond challenge.
## Print this exact text:
1        2        3        4        5        6        7        8        9        0
1       2       3       4       5       6       7       8       9       0
1      2      3      4      5      6      7      8      9      0
1     2     3     4     5     6     7     8     9     0
1    2    3    4    5    6    7    8    9    0
1   2   3   4   5   6   7   8   9   0
1  2  3  4  5  6  7  8  9  0
1 2 3 4 5 6 7 8 9 0
1234567890
1 2 3 4 5 6 7 8 9 0
1  2  3  4  5  6  7  8  9  0
1   2   3   4   5   6   7   8   9   0
1    2    3    4    5    6    7    8    9    0
1     2     3     4     5     6     7     8     9     0
1      2      3      4      5      6      7      8      9      0
1       2       3       4       5       6       7       8       9       0
1        2        3        4        5        6        7        8        9        0
(From the middle outward in both directions, each digit is separated by one more space than the previous line.)
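For reference, a straightforward (ungolfed) Python rendering of this pattern — not a submission, just to pin down the spec:

```python
# Build the 17 lines: abs(i) spaces between digits, with i running from -8 to 8
lines = [(" " * abs(i)).join("1234567890") for i in range(-8, 9)]
print("\n".join(lines))
```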
## Challenge rules:
• There will be no input (or an empty unused input).
• Trailing spaces are optional.
• A single trailing new-line is optional.
• Leading spaces or new-lines are not allowed.
• Returning a string-array isn't allowed. You should either output the text, or have a function which returns a single string with correct result.
## General rules:
• This is code-golf, so shortest answer in bytes wins.
Don't let code-golf languages discourage you from posting answers with non-codegolfing languages. Try to come up with an as short as possible answer for 'any' programming language.
• Standard rules apply for your answer, so you are allowed to use STDIN/STDOUT, functions/method with the proper parameters and return-type, full programs. Your call.
• Default Loopholes are forbidden.
• Is outputting an array of strings - 1 string per line - allowed? Jul 3 '17 at 9:37
• @Shaggy Sorry, in this case it should either return a single string with new-lines, or output the result. I've added this as rule to the challenge. Jul 3 '17 at 11:06
• No worries, Kevin; was just chancing my arm to see if I could save myself a couple of bytes. Jul 3 '17 at 11:09
• @Shaggy Hehe. What other reason would we have to ask question in a code-golf challenge, besides having the purpose of saving those few bytes? ;) Jul 3 '17 at 11:23
• Ha, that's awesome. I was wondering why that answer suddenly got so much attention. Thanks! Jul 3 '17 at 19:41
# Java 7, 137 132 bytes
Saved 3 bytes thanks to Kevin Cruijssen and 2 more thanks to TheLethalCoder
void a(){int i,j,k;for(i=-8;i<9;i++){String a="";for(j=1;j<11;j++){a+=j%10;for(k=0;k<(i<0?-i:i);k++)a+=" ";}System.out.println(a);}}
void a(){for(int i=-8;i<9;i++){String a="";for(int j=1;j<11;j++){a+=j%10;for(int k=0;k<Math.abs(i);k++){a+=" ";}}System.out.println(a);}}
Ungolfed:
void a(){
int i,j,k;
for(i=-8;i<9;i++){
String a="";
for(j=1;j<11;j++){
a+=j%10;
for(k=0;k<(i<0?-i:i);k++)a+=" ";
}
System.out.println(a);
}
}
• Declare i, j, and k in one place to save bytes (I'm sure you go do this in Java). Use Java 8 and compile to an anonymous method. I think int i=-9;++i<9;i should work and similar for j and k. I'm not sure if Math needs qualifying but I don't know Java well enough to say for sure. I think you can remove the braces from the k loop as well. Jul 5 '17 at 8:29
• Also it might be cheaper, especially when compiling to a lambda, to return the results as a string. Jul 5 '17 at 8:31
• Welcome to PPCG! There is already a shorter Java answer posted, but that doesn't matter because it uses a different approach. But, you can still shorten your code a bit like this: void a(){for(int i=-8,j,k;i<9;i++){String a="";for(j=1;j<11;)for(a+=j++%10,k=0;k++<(i<0?-i:i);)a+=" ";System.out.println(a);}}. (I've put some things inside the for-loop to get rid of the brackets; I've removed the two int by adding ,j,k in the first loop; and I've changed Math.abs(i) to (i<0?-i:i). It might also be useful to read Tips for golfing in Java. :) Enjoy your stay. Jul 5 '17 at 8:33
# C#, 131 bytes
My first participation
for(int i=8;i>-9;i--){for(int n=1;n<=10;n++){Console.Write((n==10?"0":n.ToString())+new String(' ',Math.Abs(i))+(n==10?"\n":""));}}
for (int i = 8; i > -9; i--)
{
for (int n = 1; n <= 10; n++)
{
Console.Write((n==10 ? "0" : n.ToString()) + new String(' ', Math.Abs(i)) + (n == 10 ? "\n" : ""));
}
}
• Welcome to PPCG.SE! A nice first post though by default, you should supply a full program or function, not just a snippet like you have here. See this meta post for more. I would also recommend you have a look at Try it Online which lets you run your code online and share it with others. It can also format your answers for you to post here. Jul 6 '17 at 10:47
• Welcome to PPCG! In addition to what @Notts90 said, I can also recommend reading Tips for golfing in C# and Tips for golfing in <all languages>. Some things you can golf is removing the brackets and int; change the condition to remove the =; and change the Math.abs, like this: o=>{for(int i=8,n;i>-9;i--)for(n=1;n<11;n++)Console.Write((n>9?0:n)+new String(' ',i<0?-i:i)+(n>9?"\n":""));} (where o=> is an unused parameter). Try it here. Again welcome, and enjoy your stay. :) Jul 6 '17 at 11:03
• Oh, and one more thing after my previous golfed version: o=>{for(int i=8,n;i>-9;i--)for(n=1;n<11;)Console.Write((n>9?0:n)+new String(' ',i<0?-i:i)+(n++>9?"\n":""));} (the n++ is removed and ++ is added to the last n. Try it here. So 108 bytes in total. Jul 6 '17 at 11:47
• Thanks for the comments Notts90 and Kevin, I will look at your code this evening :) Jul 6 '17 at 12:21
# tcl, 112
Very naïve:
proc x n\ s {time {puts [join {1 2 3 4 5 6 7 8 9 0} [format %[incr n $s]s \ ]]} 9}
x 10 -1
puts 1234567890
x 0 1
# demo

# /// (Slashes), 377 374 bytes

/a/1_//b/2_//c/3_//d/4_//e/5_//f/6_//g/7_//h/8_//i/9_//z/0 //./ //-/. //_/../a_b_c_d_e_f_g_h_i_za-b-c-d-e-f-g-h-i-za.b.c.d.e.f.g.h.i.za b c d e f g h i zabcdefghiz1-2-3-4-5-6-7-8-9-z1.2.3.4.5.6.7.8.9.z1 2 3 4 5 6 7 8 9 z123456789z1 2 3 4 5 6 7 8 9 z1.2.3.4.5.6.7.8.9.z1-2-3-4-5-6-7-8-9-zabcdefghiza b c d e f g h i za.b.c.d.e.f.g.h.i.za-b-c-d-e-f-g-h-i-za_b_c_d_e_f_g_h_i_0

Try it online!

## Explanation

The original output was initially compressed by doing the following replacements:

• (4 spaces)_
• (3 spaces)-
• (2 spaces).

In a second step, 1_ was replaced by a, 2_ by b and so on. Finally, those mappings were inverted and prepended to the code.

Additional step (thx to CalculatorFeline): /./ //-/ //_/..//-/. //./ /

• /./ //-/ //_/ / can be /_/..//-/. //./ /. Imagine the spaces being correct because Markdown doesn't like multiple spaces in inline code blocks >:( Jul 9 '17 at 21:17
• @CalculatorFeline Thanks, missed that one. Btw, you can fake the spaces with "figure space" (U+2007), markdown doesn't seem to collapse those (it's what I did in the post as well). Jul 10 '17 at 7:07

# Sink, 102 100 bytes

var v=range 0,17
var p=list.push(range 0,10),0
for var s:v
say list.join p,str.pad"",num.abs s-8
end

Try it online! (sort of, it's just a repl so you'll need to type it in)

-2 bytes thanks to @Kevin Cruijssen

• Welcome to PPCG! Nice first answer, +1 from me. I've never even heard of Sink before, so it's always great to see a new programming language. Btw, I tried some things in the REPL, and if I'm not mistaken you can golf two spaces on this line: say list.join p,str.pad"",num.abs s-8. Again welcome, and enjoy your stay. :) Jul 8 '17 at 7:44

# Lua, 106 85 bytes

-21 bytes thanks to user202729

n="1L2L3L4L5L6L7L8L9L0"for i=-8,8,1 do print((n:gsub("L",(" "):rep(math.abs(i)))))end

Output has a trailing newline

Ungolfed

n="1L2L3L4L5L6L7L8L9L0"
for i=-8,8,1 do
print((n:gsub("L",(" "):rep(math.abs(i)))))
end

Try It Online

• Hi there. Would you mind adding a TIO-link with test code? Lua should be available on TIO as well. Apr 5 '18 at 7:36
• You can save 5 bytes by using (" "):rep. Apr 5 '18 at 12:37
• You can also get it down to 82 bytes with pattern. Hint: %1. Apr 5 '18 at 12:43

# brainfuck, 209 bytes

++++++++[->+>+>+>++++>++++++>+>>+<<<<<<<<]>++>+>+>>>>>+<<<<<<-[->>>>>>[-<<<+.>[-<<.>>>+<]>[-<+>]>>+<]>[-<+<<<->>>>]<<<-<.<<<<.>]>[->>>>>[-<<<+.>>>>>[->+<<<<<<<.>>>>>>]>[-<+>]<<+<]>[-<+<<<->>>>]>+<<<<<.<<<<.>>]

Try it online!

I noticed there wasn’t a brainfuck submission yet, so I gave it my best shot. Definitely not the most golfed solution possible, but I’m not upset with the result.

## Canvas, 20 15 bytes

9{9∔ ×9R0∔∑;*]─

Try it here!

Explanation:

9{9∔ ×9R0∔∑;*]─ | (Full code)
                | Print (implicit)
9∔ ×            | 9-i spaces
9R0∔∑;*         | Inserted between the characters of the string "1234567890"
9{              | For each i in [1-9]
]               | Separated by newlines
─               | Palindromized vertically with 1 line of overlap

# QBasic, 66 bytes

An undeclared subroutine that takes no input and outputs to STDOUT

FOR i=-8TO 8
FOR j=49TO 57
?CHR$(j)SPC(ABS(i));
NEXT
?"0
NEXT
# Elixir, 126 bytes
Enum.each 8..-8,fn n->IO.puts Enum.map Enum.into(1..10,[],&(rem &1,10)),&(Integer.to_string(&1)<>String.duplicate" ",abs n)end
Try it online!
Enum.each 8..-8,
fn n->
IO.puts
Enum.map
Enum.into(1..10, [], &(rem &1,10)),
&(Integer.to_string(&1) <> String.duplicate" ",abs n)
end
For each number in the range 8 to -8, do this anonymous function with the number as n. The function (inner to outer) sets the range 1 to 10 into a list after applying the remainder mod 10 (so we have the list 1,2,3,4,5,6,7,8,9,0). Then maps that list of numbers into strings, and concatenates the absolute value of n spaces onto the end. After getting each string we print out the whole list of strings... and Elixir doesn't put any delimiter on that by default.
Output:
1        2        3        4        5        6        7        8        9        0
1       2       3       4       5       6       7       8       9       0
1      2      3      4      5      6      7      8      9      0
1     2     3     4     5     6     7     8     9     0
1    2    3    4    5    6    7    8    9    0
1   2   3   4   5   6   7   8   9   0
1  2  3  4  5  6  7  8  9  0
1 2 3 4 5 6 7 8 9 0
1234567890
1 2 3 4 5 6 7 8 9 0
1  2  3  4  5  6  7  8  9  0
1   2   3   4   5   6   7   8   9   0
1    2    3    4    5    6    7    8    9    0
1     2     3     4     5     6     7     8     9     0
1      2      3      4      5      6      7      8      9      0
1       2       3       4       5       6       7       8       9       0
1        2        3        4        5        6        7        8        9        0
# Sisi, 239 bytes
0set s8
1set d0-1
2set o""
3set i1
4set o o+i
5jumpif i7
6jump17
7set x s
8jump11
9set o o+" "
10set x x-1
11jumpif x9
12set c9>i
13jumpif c15
14set i0-1
15set i1+i
16jump4
17print o
18set s s+d
19jumpif s21
20set d1
21set q9>s
22jumpif q2
Try it online!
The basic strategy is to run a loop from 8 down to 0 and back up to 8, constructing each line accordingly and printing it. I'm not going to write a detailed explanation, but if you're trying to follow the algorithm, the following key to the variable names may be helpful:
• s: number of spaces to print between digits
• d: direction of change in s (-1 or 1)
• o: the current line of output
• i: the current digit
• x: loop counter while printing spaces
• c: conditional that tests whether i is less than 9 (if not, we set it to -1 so it gets incremented to 0 instead of 10)
• q: conditional that tests whether s is less than 9 (if so, we continue looping)
• Hmm, never seen or heard about the programming language Sisi. Weird how that leading line numbering is mandatory. Nice answer though! Sep 8 '18 at 9:28
• @KevinCruijssen Not surprised that you haven't heard of it--it's a toy language I wrote for an old popularity contest here. :) Sep 8 '18 at 17:38
# ///, 244 bytes
/*/\/\///X/0
1*j/\/X*x/ *S/x *s/SS*-/S *_/s *(/- *)/_ *A/)2)3)4)5)6)7)8)9)*Bj_2_3_4_5_6_7_8_9_*Cjs2s3s4s5s6s7s8s9s*Dj(2(3(4(5(6(7(8(9(*Ej-2-3-4-5-6-7-8-9-*FjS2S3S4S5S6S7S8S9S*Gjx2x3x4x5x6x7x8x9x*Hj 2 3 4 5 6 7 8 9 /1ABCDEFGHX23456789HGFEDCBXA0
Try it online!
# Tcl, 99 bytes
time {incr i;time {puts -nonewline [incr j][string repe " " [expr abs(9-$i)]]} 9;puts 0;unset j} 17

Try it online!

# Pushy, 27 bytes

Not the most elegant solution, but it does the job.

N9X@wL:Ocv:32;ZT:hT%#^"v;T'

Try it online!

Explanation:

N      \ Set printing delimiter to empty string
9X     \ Push range(9) to stack: [0, 1, 2, 3, 4, 5, 6, 7, 8]
@      \ Reverse: [8, 7, 6, 5, 4, 3, 2, 1, 0]
w      \ Mirror: [8, 7, 6... 1, 0, 1... 6, 7, 8]
L:     \ Length (17) times do:
Oc     \   Go to (cleared) auxiliary stack
v:     \   Pop top of main stack, and that many times:
32;    \     Push character 32 (space)
Z      \   Push 0
T      \   Ten times do:
:hT%   \     counter = (counter + 1) mod 10
#      \     Print the counter
^"v;   \   Print the rest of the stack (spaces)
T'     \   Print a newline

• Would you mind adding an explanation? Dec 16 '18 at 19:05

# Rockstar, 127 126 124 bytes

-1 byte thanks to Kevin

F takes X
cut 1234567890 into L
join L with " "*X
say L

X's9
while X
let X be-1
F taking X

while X-8
let X be+1
F taking X

Try it here (Code will need to be pasted in)

• Based on your other Rockstar answer the space can be removed at be-1, right? Sep 11 '20 at 12:21
• Thanks, @KevinCruijssen. Dunno how that snuck in there; omitting that space is one of my favourite wee byte-saving tricks in Rockstar. Sep 11 '20 at 13:13

# Stax, 12 bytes

ü¡↑├☼Rσ┬q▄└m

Run and debug it

can probably tie mathgolf. Port of Emigna's answer.

# Pip -l, 18 16 bytes

(\,t)%tJsXPZRV,9

Try it online!

### Explanation

t is 10, s is " " (implicit)

             ,9   Range(9): [0 1 2 3 4 5 6 7 8]
           RV     Reverse: [8 7 6 5 4 3 2 1 0]
         PZ       Palindromize: [8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8]
       sX         For each element of the above list, a string of that many spaces
 \,t              Inclusive-range(10): [1 2 3 4 5 6 7 8 9 10]
(   )%t           Mod 10: [1 2 3 4 5 6 7 8 9 0]
     J            For each string of spaces, join the above list on that string
                  Output the list of results one per line (-l flag)

The PZ operator was added more than a year after this challenge was posted, so here's a 17-byte answer that would have worked at that time:

(\,t)%tJsXAB:-8,9

Try it online!

# BRASCA, 72 bytes

8[a09Ir,m[oA:a[Eo{]x]xloA{]x09Ir,m[o]lo8[9$-a09Ir,m[oA:a[Eo{]x]xloA}9\$-]

Try it online!

# Deadfish~
ioi{iii}cccccccc{ddd}o{iii}ccccccccd{ii}ci{dd}cccccccc{ii}c{dd}cccccccci{ii}cd{dd}ccccccccii{ii}cdd{dd}ccccccccd{{{i}}}odsddddcccccccc{{{i}}}oddsddddcccccccci{ddd}soiii{ii}cccccccc{{d}}o{i}ci{d}oi{iii}ccccccc{ddd}o{iii}cccccccd{ii}ci{dd}ccccccc{ii}c{dd}ccccccci{ii}cd{dd}cccccccii{ii}cdd{dd}cccccccd{{{i}}}odsddddccccccc{{{i}}}oddsddddccccccci{ddd}soiii{ii}ccccccc{{d}}o{i}ci{d}oi{iii}cccccc{ddd}o{iii}ccccccd{ii}ci{dd}cccccc{ii}c{dd}cccccci{ii}cd{dd}ccccccii{ii}cdd{dd}ccccccd{{{i}}}odsddddcccccc{{{i}}}oddsddddcccccci{ddd}soiii{ii}cccccc{{d}}o{i}ci{d}oi{iii}ccccc{ddd}o{iii}cccccd{ii}ci{dd}ccccc{ii}c{dd}ccccci{ii}cd{dd}cccccii{ii}cdd{dd}cccccd{{{i}}}odsddddccccc{{{i}}}oddsddddccccci{ddd}soiii{ii}ccccc{{d}}o{i}ci{d}oi{iii}cccc{ddd}o{iii}ccccd{ii}ci{dd}cccc{ii}c{dd}cccci{ii}cd{dd}ccccii{ii}cdd{dd}ccccd{{{i}}}odsddddcccc{{{i}}}oddsddddcccci{ddd}soiii{ii}cccc{{d}}o{i}ci{d}oi{iii}ccc{ddd}o{iii}cccd{ii}ci{dd}ccc{ii}c{dd}ccci{ii}cd{dd}cccii{ii}cdd{dd}cccd{{{i}}}odsddddccc{{{i}}}oddsddddccci{ddd}soiii{ii}ccc{{d}}o{i}ci{d}oi{iii}cc{ddd}o{iii}ccd{ii}ci{dd}cc{ii}c{dd}cci{ii}cd{dd}ccii{ii}cdd{dd}ccd{{{i}}}odsddddcc{{{i}}}oddsddddcci{ddd}soiii{ii}cc{{d}}o{i}ci{d}oi{iii}c{ddd}o{iii}cd{ii}ci{dd}c{ii}c{dd}ci{ii}cd{dd}cii{ii}cdd{dd}cd{{{i}}}odsddddc{{{i}}}oddsddddci{ddd}soiii{ii}c{{d}}o{i}ciio{d}ioioioioioioioi{d}o{i}ci{d}oi{iii}c{ddd}o{iii}cd{ii}ci{dd}c{ii}c{dd}ci{ii}cd{dd}cii{ii}cdd{dd}cd{{{i}}}odsddddc{{{i}}}oddsddddci{ddd}soiii{ii}c{{d}}o{i}ci{d}oi{iii}cc{ddd}o{iii}ccd{ii}ci{dd}cc{ii}c{dd}cci{ii}cd{dd}ccii{ii}cdd{dd}ccd{{{i}}}odsddddcc{{{i}}}oddsddddcci{ddd}soiii{ii}cc{{d}}o{i}ci{d}oi{iii}ccc{ddd}o{iii}cccd{ii}ci{dd}ccc{ii}c{dd}ccci{ii}cd{dd}cccii{ii}cdd{dd}cccd{{{i}}}odsddddccc{{{i}}}oddsddddccci{ddd}soiii{ii}ccc{{d}}o{i}ci{d}oi{iii}cccc{ddd}o{iii}ccccd{ii}ci{dd}cccc{ii}c{dd}cccci{ii}cd{dd}ccccii{ii}cdd{dd}ccccd{{{i}}}odsddddcccc{{{i}}}oddsddddcccci{ddd}soiii{ii}cccc{{d}}o{i}ci{d}oi{iii}ccccc{ddd}o{iii}cccccd{ii}ci{dd}ccccc{ii}c{dd}ccccci{ii}cd{dd}cccccii{ii}cdd{dd}cccccd{{{i}}}odsd
dddccccc{{{i}}}oddsddddccccci{ddd}soiii{ii}ccccc{{d}}o{i}ci{d}oi{iii}cccccc{ddd}o{iii}ccccccd{ii}ci{dd}cccccc{ii}c{dd}cccccci{ii}cd{dd}ccccccii{ii}cdd{dd}ccccccd{{{i}}}odsddddcccccc{{{i}}}oddsddddcccccci{ddd}soiii{ii}cccccc{{d}}o{i}ci{d}oi{iii}ccccccc{ddd}o{iii}cccccccd{ii}ci{dd}ccccccc{ii}c{dd}ccccccci{ii}cd{dd}cccccccii{ii}cdd{dd}cccccccd{{{i}}}odsddddccccccc{{{i}}}oddsddddccccccci{ddd}soiii{ii}ccccccc{{d}}o{i}ci{d}oi{iii}cccccccc{ddd}o{iii}ccccccccd{ii}ci{dd}cccccccc{ii}c{dd}cccccccci{ii}cd{dd}ccccccccii{ii}cdd{dd}ccccccccd{{{i}}}odsddddcccccccc{{{i}}}oddsddddcccccccci{ddd}soiii{ii}cccccccc{{d}}o
Try it online!
Deadfish~ has two output commands, o to print the accumulator as an integer and c to print a character. All whitespace is printed with c, and 84 of the 190 digits are printed with c, determined by an exhaustive search. If you're interested, here is the semi-golfed and fully unreadable generator: Try it online!
# Excel, 74 bytes
=LET(x,MOD(COLUMN(A:J),10),CONCAT(x&IF(x,REPT(" ",ABS(ROW(1:17)-9)),"
")))
The version without LET that would have been valid when the question was asked is only 2 bytes longer.
## Without LET, 76 bytes
=CONCAT(RIGHT(COLUMN(A:J))&IF(COLUMN(A:J)>9,"
",REPT(" ",ABS(ROW(1:17)-9))))
# Python 2, 72 bytes
for i in range(-8,9):print(('%-'+'%ds'%-~abs(i))*10)%tuple('1234567890')
Try it online!
# Vim, 53 bytes
i1234567890^[^qq8a ^[lq8@qqeYpPqqqlxbequ^qw9@q@eq7@wdj
Where ^[ is the ESC Key.
i1234567890^[^ puts 1-9+0 in the buffer and returns the cursor to the beginning
qq8a ^[lq8@q puts 8 spaces between each of the numbers
qeYpPq records @e to duplicate the line twice and stay on the middle one
qqlxbequ^ records @q to delete one space between numbers and undoes it
qw9@q@eq records @w to do @q 9 times, then @e
7@wdj does @w 7 more times, then deletes two lines
# Python, 84 bytes
for x in range(-8,9):
 z=""
 for y in range(1,11):z+=str(y%10)+" "*abs(x)
 print(z)
Try it online
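For reference, here is a readable, ungolfed sketch of the pattern these answers print, inferred from the golfed solutions (17 rows of the digits 1-9 and 0, with abs(i) spaces between digits on row i for i from -8 to 8; trailing spaces are omitted here):

```python
# Assumption (inferred from the golfed answers): the target output is
# 17 rows of "1234567890", with abs(i) spaces between adjacent digits
# on row i, for i = -8..8. Trailing spaces are omitted.
def rows():
    return [(" " * abs(i)).join("1234567890") for i in range(-8, 9)]

for line in rows():
    print(line)
```

The middle row (i = 0) is the bare digit string, and the spacing grows symmetrically toward the top and bottom rows.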
# VBA, 58 bytes
An immediate window function that takes no input and outputs to the console.
For i=-8To 8:For j=1To 9:?j &""Spc(Abs(i));:Next:?"0":Next
### Ungolfed and Commented
For i=-8To 8 '' Iterate i from -8 to 8
For j=1To 9 '' Iterate j from 1 to 9
?j &""Spc(Abs(i)); '' Print j and i spaces
Next '' Loop j
?"0" '' Print 0, with a newline
Next '' Loop i
# JavaScript, 71 bytes
f=(i=8,q=[..."1234567890"].join(" ".repeat(i)))=>i?q+`
`+f(i-1)+`
`+q:q
# Java (JDK), 105 103 bytes
a->{for(int j,i=-9;i++<8;){var s="";for(j=0;j++<9;)s+=j+" ".repeat(i<0?-i:i);System.out.println(s+0);}}
Try it online!
Thanks to ceilingcat for -2
https://www.physicsforums.com/threads/trig-limit-question.175469/
# Trig limit question
## Homework Statement
evaluate the limit as x goes to 0 of:
tan(4x^2)^2/(3x^4)
(that's supposed to say the tangent squared of 4 x squared all over 3 x to the 4th)
## Homework Equations
n/a
## The Attempt at a Solution
The only trig limit stuff I know is the limit as x goes to 0 of sin x / x = 1
and
the limit as x goes to 0 of (1 - cos x)/x = 0.
Is there a tangent one I should know? What about tangent squared? I also considered changing it to sin squared over cos squared, but I don't know what I could do with that.
If anyone could give me a hint to get me started, I would appreciate it. Thanks.
Hmmm, actually I think I have an idea (strange how just typing it out can make your mind think in different ways).
After changing it to sin^2 over cos^2, I can then rewrite it like this:
(1/3)(sin4x^2/x^2)(sin4x^2/x^2)(1/cos4x^2)(1/cos4x^2)
Then multiply the two sin parts by 4/4, which, if I then apply the limit, will give me:
(1/3)(4/1)(4/1)(1/1)(1/1)
which is 16/3.
Is that correct?
berkeman
Mentor
tan(4x^2)^2/(3x^4)
(that's supposed to say the tangent squared of 4 x squared all over 3 x to the 4th)
Hmmm, actually I think I have an idea (strange how just typing it out can make your mind think in different ways).
After changing it to sin^2 over cos^2, I can then rewrite it like this:
(1/3)(sin4x^2/x^2)(sin4x^2/x^2)(1/cos4x^2)(1/cos4x^2)
Then multiply the two sin parts by 4/4, which, if I then apply the limit, will give me:
(1/3)(4/1)(4/1)(1/1)(1/1)
which is 16/3.
Is that correct?
In LaTeX:
$$\frac{tan^2 (4x^2)}{3x^4}$$
or do you mean:
$$\frac{tan^2 (4x)^2}{(3x)^4}$$
Or something else? If you quote my reply, you will see the format of the LaTeX inside... It would help to see your equations in LaTeX to eliminate the confusion. There's also a new LaTeX editing feature in the reply dialog -- in the upper right corner of the entry box, there is a $$\Sigma$$ symbol to click to get the LaTeX editor.
Last edited:
It's this one:
$$\frac{tan^2 (4x^2)}{3x^4}$$
berkeman
Mentor
It's this one:
$$\frac{tan^2 (4x^2)}{3x^4}$$
Okay. Now can you show your steps to the solution using LaTeX to make them easier to check? Thanks.
BTW, the LaTeX preview feature now works here on the PF (as of a day or two ago). When you're in the Advanced Reply window, just click on "Preview Post" to make sure you're happy with the appearance of the LaTeX.
Dick
Homework Helper
You've got two ways to go here. You can blindly apply L'Hopital's rule through 4 differentiations or you can look at an expansion of tan(x) for small x and go more or less directly to the answer.
Or you can write it as $$\left(\frac{\tan 4x^2}{4x^2}\right)^2 \cdot \frac{16}{3}$$. Can you find the limit now?
Last edited:
VietDao29
Homework Helper
Hmmm, actually I think I have an idea (strange how just typing it out can make your mind think in different ways).
After changing it to sin^2 over cos^2, I can then rewrite it like this:
(1/3)(sin4x^2/x^2)(sin4x^2/x^2)(1/cos4x^2)(1/cos4x^2)
Then multiply the two sin parts by 4/4, which, if I then apply the limit, will give me:
(1/3)(4/1)(4/1)(1/1)(1/1)
which is 16/3.
Is that correct?
Yes, that's correct.
When seeing tangent function, you can just split it into sin, and cos function, and simply go from there.
Or you can write it as $$\left(\frac{\tan 4x^2}{4x^2}\right)^2 \cdot \frac{16}{3}$$. Can you find the limit now?
Interesting. Since I know the answer is 16/3, can I assume the limit of $$\left(\frac{\tan 4x^2}{4x^2}\right)^2$$ when approaching 0 is 1? I've never heard of that limit rule for tangent before.
VietDao29
Homework Helper
Interesting. Since I know the answer is 16/3, can I assume the limit of $$\left(\frac{\tan 4x^2}{4x^2}\right)^2$$ when approaching 0 is 1? I've never heard of that limit rule for tangent before.
Well, actually, you can derive it from the well-known limit:
$$\lim_{x \rightarrow 0} \frac{\sin x}{x} = 1$$
Just split tan into sin, and cos, like this:
$$\lim_{x \rightarrow 0} \frac{\tan x}{x} = \lim_{x \rightarrow 0} \frac{\sin x}{x} \frac{1}{\cos x} = 1 \times \frac{1}{1} = 1$$
Last edited:
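The same split works for any argument tending to 0. For a fixed constant a:
$$\lim_{x \rightarrow 0} \frac{\tan ax}{ax} = \lim_{x \rightarrow 0} \frac{\sin ax}{ax} \cdot \frac{1}{\cos ax} = 1 \times \frac{1}{1} = 1$$
and since 4x^2 goes to 0 as x goes to 0, the same reasoning applies with ax replaced by 4x^2, which is exactly the limit used in the 16/3 computation.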
Yeah... that's it.
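As a quick numerical sanity check, evaluating the expression at small x shows it settling toward 16/3 ≈ 5.3333:

```python
import math

def f(x):
    # tan^2(4x^2) / (3x^4)
    return math.tan(4 * x**2) ** 2 / (3 * x**4)

for x in [0.1, 0.01, 0.001]:
    print(x, f(x))  # values approach 16/3 = 5.333...
```

This agrees with the small-angle behavior tan(4x^2) ≈ 4x^2, which gives 16x^4 / 3x^4 = 16/3.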